Recent discussion about the merits (or otherwise) of the ‘British phenomenon’ of ring final circuits has reminded me of a point I’ve often thought of raising.
Particularly in relation to the testing of ring finals, a lot is said and written about the ‘problem’ of bridged rings, and some of the more tedious aspects of testing such a circuit are undertaken at least partially to detect such a ‘problem’.
However, is this really a ‘problem’? As far as I can see, in the absence of any faults, the effect of one or more ‘bridges’ is, if anything, positive – it helps to equalise currents throughout the ring, and the redundancy it introduces even provides a degree of ‘protection’ against the effects of some possible faults (notably ‘breaks’ at some places in the circuit).
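To put some rough numbers on the 'equalising' point, here's a quick sketch of what I mean (entirely my own simplified model – the socket count, segment resistance and load figure are just assumed for illustration – solved by ordinary nodal analysis):

```python
import numpy as np

# Purely illustrative figures (my assumptions, not measurements):
N_SOCKETS = 10        # sockets spaced evenly around the ring
R_SEGMENT = 0.05      # ohms of line conductor between adjacent points on the ring
LOAD_CURRENT = 26.0   # amps drawn at a single socket near one end

def segment_currents(bridge=None, load_node=2):
    """Current in each cable segment for a single load at `load_node`.

    Node 0 is the consumer unit; nodes 1..N_SOCKETS are the sockets, and the
    ring closes back to node 0.  `bridge` is an optional (a, b) pair of socket
    nodes joined by an extra length of the same cable.
    """
    n = N_SOCKETS + 1
    edges = [(i, (i + 1) % n, R_SEGMENT) for i in range(n)]  # the ring itself
    if bridge:
        edges.append((bridge[0], bridge[1], R_SEGMENT))       # the bridge

    # Nodal analysis: build the conductance matrix, ground node 0, solve.
    G = np.zeros((n, n))
    for a, b, r in edges:
        g = 1.0 / r
        G[a, a] += g; G[b, b] += g
        G[a, b] -= g; G[b, a] -= g
    I = np.zeros(n)
    I[load_node] = -LOAD_CURRENT          # current drawn out at the loaded socket
    V = np.zeros(n)
    V[1:] = np.linalg.solve(G[1:, 1:], I[1:])

    return [abs((V[a] - V[b]) / r) for a, b, r in edges]

print(f"worst segment current, plain ring:   {max(segment_currents()):.1f} A")
print(f"worst segment current, bridged ring: {max(segment_currents(bridge=(3, 8))):.1f} A")
```

With a single heavy load near one end of the ring, the nearer leg carries most of the current; in this toy example the extra parallel path provided by the bridge brings the worst-case segment current down appreciably, which is the 'equalisation' I had in mind.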
The standard argument appears to be that a break in the circuit downstream of a bridge can leave one with two or more sockets effectively on unfused spurs, hence supplied with under-rated cable. That's very true, but why is it any different from (or worse than) the situation in which a break occurs in a ring final which does not have any bridges? Indeed, in some senses the latter is a worse situation, since it usually leaves all sockets on the circuit supplied with an under-rated cable, whereas a break downstream of a bridge will have that effect only on some sockets. Nor is there any difference between the two situations as regards 'identifiability' of a fault – whether or not there are bridges, the user (e.g. householder) will be unaware of a single break, since all sockets will continue to function normally.
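To illustrate the 'only on some sockets' point, here's another little sketch (again purely illustrative – it uses networkx, and the positions of the break and the bridge are just assumed) of which sockets end up effectively fed over a single cable run after one break:

```python
import networkx as nx

N = 10  # sockets, numbered 1..N; node 0 is the consumer unit (assumed)

def ring(bridge=None):
    """The ring as a graph; `bridge` is an optional pair of interconnected sockets."""
    G = nx.Graph()
    nodes = list(range(N + 1))
    G.add_edges_from(zip(nodes, nodes[1:] + [0]))   # CU -> 1 -> 2 -> ... -> N -> CU
    if bridge:
        G.add_edge(*bridge)
    return G

def spur_fed_sockets(G, broken_edge):
    """Sockets left without two independent cable paths back to the CU."""
    H = G.copy()
    H.remove_edge(*broken_edge)
    return [s for s in range(1, N + 1) if nx.edge_connectivity(H, 0, s) < 2]

break_at = (5, 6)  # a break between sockets 5 and 6 (assumed position)
print("plain ring, spur-fed sockets:      ", spur_fed_sockets(ring(), break_at))
print("ring bridged 3-8, spur-fed sockets:", spur_fed_sockets(ring(bridge=(3, 8)), break_at))
```

In this toy example, the break in the plain ring leaves every socket spur-fed, whereas the same break in the bridged ring leaves only the four sockets between the bridge points in that state.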
So, I wonder if there really is a logical reason (which I am missing) for us to regard bridged ring finals as any more of 'a problem' than a ring final without bridges. If bridges were regarded as acceptable, then there would have to be some changes in testing practices – but that, in itself, would not be a good reason for not accepting them – and, as above, it would even be possible to argue that bridges represented an advantage.
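For completeness, the 'changes in testing practices' I have in mind relate to the usual cross-connection ('figure of eight') test: on a healthy, un-bridged ring, with the line and neutral legs cross-connected at the board, the L-N reading at every socket comes out substantially the same, at (r1+rn)/4, and it is precisely that uniformity which a bridge disturbs. A rough sketch of the arithmetic (my own model, with assumed figures throughout):

```python
import numpy as np

N = 10        # sockets on the ring (assumed)
R_SEG = 0.05  # ohms per cable segment, taken equal for L and N (assumed)

def reading_at_each_socket(bridge=None):
    """Simulated L-N reading at each socket with the legs cross-connected.

    Node 0 and node 1 are the two cross-connection points at the board; the
    line run goes 0 -> sockets -> 1 and the neutral run goes 1 -> sockets -> 0,
    which is the usual 'figure of eight'.  `bridge` is an optional (i, j) pair
    of sockets joined by an extra cable carrying both an L and an N conductor.
    """
    n_nodes = 2 * N + 2
    line = [0] + [2 + k for k in range(N)] + [1]
    neut = [1] + [N + 2 + k for k in range(N)] + [0]
    edges = []
    for run in (line, neut):
        edges += [(run[k], run[k + 1], R_SEG) for k in range(len(run) - 1)]
    if bridge:
        i, j = bridge
        edges.append((line[i], line[j], R_SEG))   # bridge, line conductor
        edges.append((neut[i], neut[j], R_SEG))   # bridge, neutral conductor

    # Effective resistance between any two nodes from the pseudo-inverse of
    # the network's conductance (Laplacian) matrix.
    L = np.zeros((n_nodes, n_nodes))
    for a, b, r in edges:
        g = 1.0 / r
        L[a, a] += g; L[b, b] += g
        L[a, b] -= g; L[b, a] -= g
    Lp = np.linalg.pinv(L)
    r_eff = lambda a, b: Lp[a, a] + Lp[b, b] - 2 * Lp[a, b]

    return [r_eff(line[k], neut[k]) for k in range(1, N + 1)]

print("healthy ring: ", ", ".join(f"{r:.3f}" for r in reading_at_each_socket()))
print("bridged 3-8:  ", ", ".join(f"{r:.3f}" for r in reading_at_each_socket(bridge=(3, 8))))
```

If bridges were regarded as acceptable, one could obviously no longer treat a departure from that uniform reading as evidence of a fault – hence the testing changes I alluded to.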
Comments?
Kind Regards, John.