Features

The visible damage caused by lightning can be spectacular, but the damage caused to sensitive electronic systems can have a far more profound effect on operations and profits, says Andy Malinski of Omega Red Group

Most modern companies place heavy reliance on the uninterrupted functioning of electrical systems, used to power everything from sophisticated IT networks and telecoms to lighting and heating systems. Yet many buildings, including those only a couple of decades old, were simply not designed with surge protection in mind, even though a strong surge can completely disable the electrical system in place. Protecting electronic equipment from the consequences of surge or lightning activity is essential, and not just because of the immediate disruption it can cause.
Big increases in insurance claims for surge-related damage have led some insurance companies to increase premiums for companies heavily reliant on electronic technology, and to impose exclusions of cover until the problem is addressed. Correctly following BS 6651:1999 Annex C, the general advice on protection against lightning of electronic equipment within or on structures, should ensure that any site has adequate surge protection and will help to secure the relevant insurance for the site.

The problem
During lightning activity and switching operations, transient surge voltages are generated. Surges are short-duration voltage spikes appearing on a mains power or low voltage signal line, such as a computer or telephone line.
The amount of energy contained within the surge is dependent on the magnitude and duration of the event, but values of up to several thousand volts lasting for microseconds can be generated. The two main causes of transient over-voltages are:
• Switching. This results from an electrical load being switched on or off, with typical loads including motors, transformers, welders, photocopiers etc. These types of surge happen many times every day, are short in duration and low in magnitude, and do not usually cause major problems.
• Lightning and atmospheric disturbances. Surges generated by lightning activity tend to be of a much higher level and are consequently much more dangerous. They can be generated by either direct or nearby (up to 1.5km away) lightning strikes, though most damage tends to come from nearby strikes, as the magnitude of discharge current is more concentrated.

How does it get in?
Copper conductors used for electrical, mains, data, computer and telephone wiring are prime routes for transient surges to enter buildings, and there are three main ways in which these transient voltages affect wiring systems.
• Resistive coupling – a ‘cloud to ground’ lightning strike injects a massive current into the ground. This raises the ground potential in the area of impact to a high level and for the current to dissipate it will seek the path of least resistance to earth. Cables running between buildings are usually connected to different earthing systems at each end and a cable connected to an earth of a lower value forms an ideal route for the induced current to follow.
• Inductive coupling – a lightning discharge causes a huge current to flow, which in turn sets up a massive magnetic field. Any conductor passing through this magnetic field will have a surge voltage induced on it; this is the same principle used by a transformer, and it can happen either above or below ground (a rough magnitude estimate follows this list).
• Capacitive coupling – atmospheric disturbance causes high voltages to be generated. A low voltage conductor in the area of influence of these voltages can be charged to that same voltage; the effect is the same as charging a capacitor.
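To put some indicative numbers on inductive coupling, the voltage induced on a cable is the mutual inductance multiplied by the rate of change of the lightning current. The sketch below is a rough Python estimate; the mutual inductance and the current rise rate are illustrative assumptions, not figures from this article.

```python
# Rough estimate of an inductively coupled surge: V = M * dI/dt.
M = 1e-6             # mutual inductance between strike path and cable, henries (assumed)
di_dt = 10e3 / 1e-6  # lightning current rising at 10kA per microsecond, A/s (assumed)

v_induced = M * di_dt
print(f"Induced surge voltage: {v_induced:,.0f} V")  # ~10,000 V on the cable
```

Even with a modest assumed coupling, the induced voltage reaches thousands of volts, consistent with the surge magnitudes described above.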
Dealing with a surge
The British Standard addresses lightning protection for both external structural strikes and internal surges.
Annex C of the standard looks at internal protection of electrical and electronic systems. More recently, the new European standard EN 62305-4, covering protection of electrical and electronic systems within structures, has been released to work in tandem with the British standard, but within a few years the national standard will be withdrawn in favour of the new European standard.
For both external and internal protection the first step is to undertake a risk assessment. This is a comprehensive, complex assessment as many factors need to be considered. Internal surge protection takes into account all factors affecting the electronic equipment and systems within the building.
A risk assessment looks at many factors including the number, length and types of cables entering and leaving the building, equipment types, exposure and risk levels, recommended levels of protection, and cable routing, amongst others.
Results taken from the assessment will determine if protection is required and the correct protection methods to use. This can take the form of surge voltage protection devices, the repositioning of cabling, a more effective earthing system or by other means.
Zones of protection are defined, with the corresponding levels of protection required, and co-ordinated protection devices are fitted at zone interfaces. By using a co-ordinated system the high surge current present at the outer zone will be dissipated and the attenuated surge then handled by devices fitted at subsequent zone interfaces – limiting possible damage.

Surge protection in practice
All electronic equipment has a transient safety level, a maximum surge voltage value that can be applied to the equipment without causing damage.
The protection device must reduce the surge voltage to below this value with any excess voltage shunted to earth in the quickest time possible. The let-through voltage of the protection device needs to be as near as possible to the nominal voltage of the line being protected.
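As a minimal illustration of that selection rule, the sketch below checks a candidate device against the equipment it protects; all three voltage figures are assumptions for illustration, not values from the article or any particular product.

```python
# Minimal SPD selection check: the device's let-through voltage must sit
# below the equipment's transient safety level, and as close as practical
# to the nominal voltage of the protected line.
nominal_voltage = 230       # V, nominal line voltage (assumed)
equipment_withstand = 1500  # V, transient safety level of the equipment (assumed)
spd_let_through = 900       # V, let-through voltage of the candidate SPD (assumed)

if spd_let_through < equipment_withstand:
    print(f"SPD acceptable: {equipment_withstand - spd_let_through} V of protective margin")
else:
    print("SPD let-through too high: equipment remains at risk")
```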

Mains power protection
When applying surge protection to a site, the first system to protect is the mains power, as the large diameter of the cable will allow surges to pass into the system with minimum attenuation. Of greater concern, the mains is common to all other systems, so a surge entering via this route will quickly spread into them.
Protection at the main Low Voltage (LV) incoming supply is necessary to control large transients before they enter the distribution system.
Protection should also be installed locally, at important equipment or at sub-distribution boards feeding outside equipment.
This is to guard against both internally and externally generated transients, which may be injected back into the distribution system.
Low voltage telephone signal protection devices fit in series with the cables being protected. On older termination systems the Surge Protection Device (SPD) is wired directly into the circuit.
Plug-in protection devices are available for LSA Plus (registered trade mark of Krone) termination strips; these come in either single or ten-way configurations.

Building Management System
Typically, Building Management Systems (BMS) consist of a network of separate stand-alone slave controller panels.
Across the site, an RS485 backbone cable interconnects all the panels, with a drop cable making the network connection to each panel.
On most systems the drop cable is a single twisted pair running the RS485 protocol. It is recommended to fit a surge protection device in series with each drop cable, as near to the interface card as possible.
The input/output (I/O) from each slave controller is usually internal to the building being monitored and controlled. Any I/O external to the building requires an SPD to be fitted.

Fire and Intruder Alarm Systems
These systems tend to be wired using the same type of backbone network as for the BMS.
The main difference is that they will have more I/O cabling connected to outside sensors and alarms. SPDs are required on any cable connected to an outside device.
The positioning and routing of the cables either side of the SPD is of the utmost importance. Incoming and outgoing cables either touching, or closely running in parallel with one another, can cause surge voltages to be induced ‘across’ cables.

Closed Circuit Television (CCTV)
The two most popular types of camera are:
• Pan, Tilt and Zoom (PTZ) cameras
• Fixed cameras
Pan, Tilt and Zoom (PTZ) cameras are controlled from a central location, with the vertical and horizontal positioning of the camera directed by signals sent over the RS485 data loop. As the central processor consists of expensive monitoring and control equipment, SPDs would usually be fitted to this end of the system. It is also possible to protect the camera by fitting surge protection devices within the camera itself.
The level of protection required depends on the vulnerability of the camera. Is it located within the zone of protection of the building? Does the building have a lightning protection system fitted? Is the mounting pole correctly earthed?
Fixed cameras would usually be monitored from a central point as well, the main difference being that motion control of the camera is not possible. Protection requirements, and the methods of achieving them, are the same as for PTZ cameras.
The initial surge assessment will identify which ends of each system require protection.
As a surge can travel both ways in a cable it is important to protect both ends of the system if necessary.

Professional Contractors
Of course, none of the above holds true unless a competent contractor is used to survey, design, specify, install and maintain the surge protection system.
If a surge protection system is to work effectively to prevent equipment failures then a number of other factors need to be taken into account in addition to the technical expertise that is required.
Look for evidence of a proven track record, particularly for major or technically challenging projects.
There are a lot of suppliers in the market but the best will be able to provide customer references attesting to the professionalism of their work and thoroughness of approach.
Make sure the organisation has the resources needed to do your job. Can they handle large-scale, multi-site operations across the country using their own staff, or will the job be largely subcontracted to organisations that may not have the same quality standards or professionally trained staff?
By combining technical and service factors, the maximum level of protection from induced surges and over-voltages is delivered to the electronic systems within the building; this protects productivity and, ultimately, profits.

While attention has been paid to the Climate Change Levy and the tax allowances for fitting energy saving equipment such as variable speed drives, Richard Walley of Schneider Electric Building Systems and Solutions argues a strong case for examining power factor correction first when seeking to cut electricity consumption

Energy consumption was brought into focus for industry when the Climate Change Levy (CCL) was introduced. Essentially, the CCL is an additional tax on energy usage by industry and commerce, but the UK Government tried to soften the blow by introducing Enhanced Capital Allowances for capital investment in certain energy saving technologies. What this means is that the full value of the installation can be offset against taxable profits in the first year. Although this does provide a small amount of relief for such investments, the allowable equipment only extends to the likes of efficient boiler systems and variable speed drives used to control electric motors. In practice, the CCL has not had anything like the impact first imagined, and neither has the ECA seen anything like the take-up expected.
Why power factor correction equipment was not considered in the tax allowances introduced along with the Climate Change Levy is a mystery, since its primary function is to reduce energy losses. By adopting power factor correction measures it is possible to substantially reduce the current taken from the electricity supply. There are major kW losses on the network, governed by the I²R law, whereby the square of the current is multiplied by the resistance of the cables, transformers and overhead lines that form the national electricity distribution system. These distribution losses vary between Distribution Network Operators (DNOs) and are typically quoted at up to 11%, and as high as 19% in one instance. These figures do not include the additional losses occurring on the National Grid system.
The national system electrical energy losses represent an enormous CO2 component emitted from the generation of the wasted power. Although these losses cannot be completely eliminated, there is a very strong argument that their reduction should be encouraged.
The effect of power factor correction on system losses can be dramatic. In one case, a 6MW supply with a power factor of 0.68 improved to 0.95 once correction equipment was installed. This improvement of power factor reduced the system losses, for which this user is responsible, by 46%.
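That figure can be sanity-checked from first principles. For a fixed real power and voltage, the supply current scales with 1/power factor, so I²R losses scale with the square of that ratio. The Python sketch below applies this simplification to the power factors quoted above; treating the losses as purely I²R is an assumption, which is why it lands slightly above the 46% measured on site.

```python
# Relative I^2 R losses before and after power factor correction.
# Current ~ 1/pf at constant kW and voltage, so losses ~ (1/pf)^2.
pf_before, pf_after = 0.68, 0.95

loss_ratio = (pf_before / pf_after) ** 2
print(f"Loss reduction: {1 - loss_ratio:.0%}")  # ~49%, close to the quoted 46%
```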
Apart from the CCL, most of the DNOs apply a penalty for poor power factor, either in the form of a reactive energy charge and/or a supply capacity charge based upon kVA. These charges are part of the ‘Use of System’ charges and therefore depend not on the energy supplier but on the host network operator.
There might be no special tax relief provided for installing power factor correction equipment, but it remains one of the best ways to reduce both electricity costs and the resultant pollution caused by the generation of subsequently wasted power. To compensate for the increases in energy charges both from the CCL and from general utility price increases, as well as the reactive energy and supply capacity penalties already imposed, users should first examine their power factor. This is the area where real energy cost savings can be made without switching anything off or disturbing production. It is also one of the measures that will benefit the environment.
The energy supplied to industrial consumers is divided into two components: kilowatts, the energy used to perform work; and kVAR, the reactive energy used to energise magnetic fields. A combination of both types of energy is taken from the supply network and both contribute to system losses. This combined energy or power taken from the system is referred to as kVA (kilovolt-amperes). Power factor is defined as the ratio of useful power (in kW) to the total power taken from the system (in kVA). Inductive-reactive energy is negated by the installation of appropriately sized power factor correction equipment.
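Sizing that correction equipment follows from the standard power triangle: the reactive power to be cancelled is Q = P(tan φ1 − tan φ2), where φ is the phase angle corresponding to each power factor. The sketch below applies this textbook formula; the 500kW load is an assumed figure for illustration only.

```python
import math

def correction_kvar(p_kw: float, pf_from: float, pf_to: float) -> float:
    """kVAR of capacitance needed to raise power factor from pf_from to pf_to."""
    return p_kw * (math.tan(math.acos(pf_from)) - math.tan(math.acos(pf_to)))

# Illustrative 500kW load corrected from 0.68 to 0.95:
print(f"{correction_kvar(500, 0.68, 0.95):.0f} kVAR of correction required")  # ~375 kVAR
```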

Misunderstood
Power factor is one of the most misunderstood areas of electrical engineering, yet it is really very simple. Plant and equipment most likely to contribute to poor power factor are those requiring the creation of a magnetic field to operate, such as electric motors, induction heaters and fluorescent lighting. All these types of devices draw current that is said to lag behind the voltage, thus producing a lagging power factor.
Capacitors, used in most power factor correction equipment, draw current that is said to lead the voltage – hence producing a leading power factor. If capacitors are connected to a circuit that operates at a nominally lagging power factor, the extent that the circuit lags is reduced proportionately. Circuits having no resultant leading or lagging component are described as operating at unity power factor and therefore the total energy used is equal to the useful energy.
So, let us consider the effect of reactive energy on the system. Reactive energy substantially increases the energy losses on the local and national supply networks, including the users’ own installation. This increased loss also applies to the users’ own transformers if they are high voltage consumers. Reactive energy also has the undesirable effect of reducing the capacity of the network and transformers.
From the environmental point of view – and remember, that is what has driven the Climate Change Levy – the additional losses and the provision of the reactive energy itself, require an unnecessary increase in output from the power stations. This results in higher carbon dioxide (CO2) emissions.

Increased costs
The inefficient use of energy ultimately means increased costs for everyone. Many consumers already have power factor correction equipment installed, but some of this inevitably does not function correctly. Now that reactive charges apply, it is worthwhile getting existing equipment checked, maintained and tested to ensure it is adequately sized to meet the penalty levels now being imposed.
The benefits of installing power factor correction equipment, irrespective of the lack of Enhanced Capital Allowances (ECAs), are very clear. Electricity costs are reduced, sometimes by thousands of pounds each year. Reduced power system losses mean a reduction in the emission of greenhouse gases, and slower depletion of fossil fuels in the case of coal-fired stations. The reduced electrical burden on cables and electrical components leads to increased service life. Finally, by using power factor correction equipment, additional capacity is created in the users’ systems for other loads to be connected.
In short, despite the seeming short-sightedness of the Climate Change Levy and the limitations of the ECA provisions, the installation of power factor correction equipment can bring users bigger cash savings in the short, medium and long term. The environment will benefit too.

Think of a primary substation and you’ll probably envisage a vast outdoor compound with massive transformers connected to overhead power lines and bank after bank of circuit breakers and disconnectors. This is not always the case, says Stephen Trotter, ABB’s director of power systems projects for the UK

Traditional substations have provided excellent service over many years, and many are still being constructed today. However, when it comes to planning substations in urban areas there is an ever increasing demand from utility customers to minimise the space required, not just because of the cost and availability of land, but also to reduce their visual impact on the local environment. New high voltage technology offers the ideal solution in the form of gas insulated switchgear (GIS) which enables substations to be ‘shrunk’ into about 20 per cent of the space required by a traditional design, and housed indoors or even buried underground.

GIS advantages
Until the 1970s, air insulated switchgear (AIS) was the type most commonly in use for substations. AIS requires large distances between earth and phase conductors and therefore a good deal of space. This means that for higher voltages – typically above 36kV - this type of installation is only feasible outdoors.
The situation changed when SF6 (sulphur hexafluoride) became available as an insulating medium in switchgear enclosures in order to reduce phase to earth distances. The advantages of GIS compared to AIS are as follows:
• Reduced space requirements, especially in congested city areas, saving on land costs and civil works
• Low visibility buildings can be designed to blend in with local surroundings
• Less sensitivity to pollution, salt, sand or even large amounts of snow
• Increased availability and reduced maintenance costs
• Higher personnel safety due to enclosed high voltage equipment and insignificant electromagnetic (EM) fields.
A direct comparison of the component investment for identical switchgear configurations will suggest that the GIS variant is more costly than the AIS solution. However, this does not necessarily show the true story. The capability to install a GIS substation within a significantly smaller site – typically up to 80 per cent smaller - enables it to be located close to the load centres, providing a far more efficient network structure at both the HV (high voltage) and MV (medium voltage) levels. As a result, both the investment and operating costs are reduced.
Sites large enough for new AIS substations are seldom available, and when they are their cost is usually extremely high. But it is not just the smaller size of the site that can make GIS the lower-cost option: GIS is also the more economic alternative when expanding or replacing existing substations. An inner city site that has been used previously for an AIS installation could be sold or rented out and the income used to finance the new substation. The compact nature of GIS enables an HV transformer substation to be fully integrated in an existing building, which may only have to be increased in height or have a basement added.

Port Ham shrinks from view
Central Networks’ £12 million replacement Port Ham switching station, which has recently been constructed by an ABB and Balfour Beatty consortium on the banks of the River Severn, near Gloucester, provides an ideal example of the advantages of the GIS approach.
Port Ham is a grid supply point. It takes electricity at 132kV from the National Grid substation, a few miles away at Walham, and feeds it into the Central Networks distribution network. Through a network of primary and secondary substations, this network feeds over 240,000 customers in Gloucestershire, Herefordshire and much of south and east Worcestershire.
The original outdoor station, built in the early 1950s, had experienced above average load growth, to a peak load of 672MVA. The AIS equipment had reached the end of its useful life, so in 2002 Central Networks decided to completely rebuild the facility to ensure continued reliability of supply, as well as providing scope for further load growth.
Initially, the project was tendered in the expectation that the AIS would be replaced on a like-for-like basis. However, in consultation with the ABB and Balfour Beatty consortium, Central Networks decided building a new indoor GIS switching station would offer a number of important advantages, at around the same overall cost. A key benefit was that ABB’s state of the art compact ELK-04 (GIS) switchgear solution could be condensed into just one-fifth of the space used by the existing station. Port Ham is in an important nature conservation area. So the smaller switchgear allowed Central Networks to meet planning concerns by housing the station in a low-profile building designed to blend in with the local environment.
In addition to saving space, GIS also offered two further advantages. Firstly, circuit downtime could be reduced, as the new GIS circuits were constructed with the existing units still in service. Downtime was limited to the rerouting of the network connections. This was a crucial factor, because of the critical position of Port Ham in the supply network. Secondly, the GIS was constructed outside the existing live compound, considerably reducing health and safety risks to personnel working on site.
One of the major project challenges was the soft ground – on the flood plain of the River Severn – which required major foundation work before construction could begin. In just over 10 days, some 120 cast concrete piles were driven down 15 metres to the bedrock. The building itself has been raised on stilts to ensure that the switchgear is at least one metre above the predicted once-in-100-years flood level.
The new indoor switching station comprises 20 bays of GIS switchgear: 12 feeder circuits; four National Grid incomers; two bus couplers; and two bus sections. The size of the investment and the strategic importance of Port Ham made it a flagship project for Central Networks.

NEDL’s Norton substation
A similar approach was adopted when NEDL needed to replace its 132kV substation at Norton, near Stockton on Tees, that interconnects the National Grid and NEDL’s distribution network.
The new indoor GIS substation, completed in 2005, occupies just one sixth of the space of the old AIS substation. It is rated at 540MVA, and features 20 bays of switchgear (four of which have been transferred to National Grid) with four incoming circuits fed by Supergrid transformers and 14 outgoing circuits, two of which feed local grid transformers.

Going underground
The GIS switchgear concept has been taken to its logical conclusion in ABB’s Barbana 132kV/20kV transformer substation in the centre of Orense, Spain. The 132kV switchyard, comprising two cable feeder bays and one transformer bay, has been constructed entirely underground and concealed beneath a park. This design requires forced cooling, which inevitably entails unwanted fan noise. But damping features or low-noise fans can be expensive. Instead a waterfall has been created. This acts as a heat exchanger to dissipate the heat created by the transformer while the sound of the falling water also drowns out the noise from the fans.

Google may have made headlines when it stated energy costs outweigh server costs in its data centres but, according to Rob Potts at APC, the sobering thought is that only a third of datacentre energy is actually used for computing – up to 70% may be taken up by power, cooling and inefficiency losses

It is estimated that, worldwide, datacentres consume some 40,000,000,000 kWh of electricity annually•. Because of the need to provide high levels of redundancy in order to maximise uptime and reduce downtime – the goal of most facility operators – a degree of electrical inefficiency in the sector seems to be an acceptable fact of life. However, by increasing electrical efficiency there is also an opportunity to reduce energy use and therefore operating expenses.

How efficient is your physical layer?
For any device or system, efficiency is simply defined as the fraction of its input (ie. the fuel that makes it ‘go’) converted into the desired useful result, in this case computing. If all datacentres were 100% efficient, then all power supplied to the data centre would be utilised by IT equipment. However, energy is consumed by devices other than the IT load because of the practical requirements of keeping it properly housed, powered, cooled and protected. The devices that comprise network-critical physical infrastructure (NCPI) include those in series with the IT load (such as UPS and transformers) and those in parallel with the load (such as lighting and fans).
In simplistic terms, the more energy can be expended on computing and reduced on non-IT devices, the more efficient the facility.
Can data centre efficiency be improved?
Virtually all of the electricity feeding a datacentre will end up as heat emission. From a facilities point of view, efficiency can be improved in a number of ways including:
• Improve the design of NCPI devices so that they consume less power
• Rightsize NCPI components to the IT load
• Develop new technologies which reduce the power consumed by non-IT devices
On the face of it, option two provides the most immediate solution to current data centre challenges. At the same time, better power efficiency of servers is being achieved through the introduction of multi-core processor architectures, and improved utilisation of the IT layer is being brought about through virtualisation.

Real world options
Before setting out to realise available power savings, some common misconceptions need to be corrected:
• Firstly, the efficiency of a facility is not a constant, for example air conditioning units and UPS are far less efficient at low loads (and, conversely, far more efficient at higher loads)
• Secondly, the typical IT load tends to be significantly less than the design capacity of the NCPI components (due, in part, to conservative ‘nameplate’ rating by IT manufacturers)
• Thirdly, the heat output of the power and cooling NCPI components themselves creates a significant energy burden for the whole system and should be included when analysing overall facility efficiency.
An additional factor affecting the efficiency of facilities is that the IT load itself is not constant but dynamic, both operationally and through inventory changes. For instance, as computing throughput increases, electrical consumption is also increased. Also, over the lifetime of facilities, IT inventory is in a constant state of flux as new generations of equipment replace old. Until recently, every increase in server performance has come complete with an increase in electrical demand.

Efficiency is dynamic
Finding an improved model for data centre efficiency depends on how accurately the individual components are modelled. The use of a single efficiency value is inadequate for real data centres, however, as the efficiency of components such as the UPS is a function of the IT load.
When the UPS operates with a light load, its efficiency drops off substantially. The losses that occur along this efficiency curve fall into three categories: no-load loss, proportional loss, and square-law loss.
No-load losses can represent more than 40% of all losses in a UPS and are by far the largest opportunity for improving UPS efficiency. These losses are independent of load and result from the need to power components like transformers, capacitors and communication cards.
Proportional losses increase as load increases, as a larger amount of power must be ‘processed’ by the various components in the power path. As the load on the UPS increases, the electrical current running through its components increases, causing losses that rise with the square of the current – sometimes referred to as ‘I-squared R’ losses or square-law losses. Square-law losses become significant (1 to 4%) at higher UPS loads.
The efficiency of a device can be effectively modelled using these three parameters, and a graphical output of efficiency can then be created for any component, as a function of load – understanding that typical datacentres operate well below their design capacity.
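A minimal sketch of that three-parameter model is given below. The loss coefficients are illustrative assumptions chosen to show the shape of the curve, not APC’s published figures.

```python
def ups_efficiency(load_fraction: float,
                   no_load=0.04,       # fixed loss, as a fraction of rated power (assumed)
                   proportional=0.05,  # loss that scales with load (assumed)
                   square_law=0.01):   # 'I-squared R' loss coefficient (assumed)
    """Efficiency of a UPS operating at a given fraction of its design capacity."""
    losses = no_load + proportional * load_fraction + square_law * load_fraction ** 2
    return load_fraction / (load_fraction + losses)

for load in (0.1, 0.3, 0.5, 1.0):
    print(f"{load:>4.0%} load -> {ups_efficiency(load):.1%} efficient")
# The no-load term dominates at light load, dragging efficiency down sharply.
```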

Effects of under-loading
If the efficiency of NCPI components such as UPS and cooling equipment decreases significantly at low loads, any analysis of data centre efficiency must properly represent load as a fraction of design capacity. It is a fact that in the average data centre, power and cooling equipment is routinely operated below rated capacity. There are four reasons for this:
• The data centre load is simply less than the system design capacity, in fact, research shows the average facility operates at 65% below its design value.
• Components have been purposely oversized to provide a safety margin – in order to provide high availability, ‘derating’ components by 10% - 20% is common design practice.
• Components operate with other similar components in an N+1 or 2N configuration to improve reliability or facilitate concurrent maintenance of hot components. However, such configurations have an impact on physical layer components, for example in a 2N system the loading on any single component is at best half of its design capacity.
• Components are oversized to handle load diversity; for example, PDUs are routinely oversized by between 30% and 100%, because imbalance between PDU loads prevents their capacity being fully utilised.

Effects of power and cooling equipment
Heat generated by power and cooling equipment in the data centre is no different to heat generated by the IT load, and must also be removed by the cooling system. This creates additional work for the cooling system, causing it to be over sized, which in turn creates additional efficiency losses.

An improved model for datacentre efficiency
Armed with this knowledge it is possible to create an improved model and therefore make improved estimates of data centre efficiency. Using typical values for equipment losses, derating, load diversity, oversizing and redundancy, an efficiency curve can be developed.
Efficiency is dramatically decreased at lower loads where many data centres operate, e.g., if a facility only reaches 10% of its design capacity, only 10% of the power delivered to the data centre reaches the IT load. A staggering 90% is lost through inefficiencies in the NCPI layer.
Another way to look at this analysis is to consider its financial implications: at 30% capacity utilisation, over 70% of the total electricity cost is caused by NCPI inefficiencies in power and cooling equipment. The primary contributors to data centre electrical costs are the no-load losses of infrastructure components, which typically exceed IT load power consumption. Many of these losses are avoidable, and analysis using the model can help identify and prioritise opportunities for increasing efficiency. Based on this, and the need to gain a quick return, the best solution is to right-size facilities using an adaptable and modular architecture.

• Figures quoted from White Paper #113, “Electrical Efficiency Modelling for Data Centres”.

The electrical industry was out in force to celebrate the Electrical Industry Awards and the NICEIC’s 50th Anniversary Gala Dinner at London’s Grosvenor House Hotel on 20 September. The evening brought together over 800 electrical engineers, manufacturers, wholesalers, contractors and industry association members, to celebrate the very best of the industry.
Radio 2 and BBC Newsnight presenter, Jeremy Vine, hosted a memorable evening providing lively commentary as the winners stepped up to claim their well deserved awards.
Feedback from those who attended was very positive. Jack McDavid from the HVCA wrote: “I enjoyed the experience very much indeed, and agreed wholeheartedly with the many fellow guests who pronounced the evening a success.”
The standard of entries was extremely high, but after much deliberation, the judges announced the following winners:

Commercial Electrical Contractor of the Year (sponsored by Schneider Electric)
Winner: Unidata

Best Electrical Health & Safety Initiative (sponsored by NICEIC Insurance Service)
Winner: The Dodd Group

Best Wholesaling Initiative (sponsored by Basec)
Winner: BDC

Customer Service in Wholesaling (sponsored by ABB)
Winner: WF Electrical

Test & Measurement Product of the Year (sponsored by Electrical Times)
Winner: Kew Technik

Best Lighting Initiative (sponsored by WF Electrical)
Winner: Thorn

Best Product Innovation (sponsored by Electrium)
Winner: Thorn

Wholesaler of the Year (sponsored by Unitrunk)
Winner: Edmundson Electrical

Energy Efficiency Product of the Year (sponsored by ABB)
Winner: Kirklees Metropolitan Council

Outstanding Communications in the Electrical Industry (sponsored by Yell.com)
Winner: Voltimum UK and Ireland

Best Registered Training Provider (sponsored by EDF Energy)
Winner: Electrical Test Services

Automation Project of the Year (sponsored by Electrical Review)
Winner: Schneider Electric

Power Product of the Year (sponsored by Amps)
Winner: Terasaki

Best Electrical Product of the Last 50 Years (sponsored by Professional Electrician Magazine)
This category asked voters to choose from one of five products that attracted the most requests for information from Professional Electrician Magazine. The winner was Super Rod for its cable rods.

Best Environmental Initiative of the Year (sponsored by Edmundson Electrical)
Winner: ABB

Best Practice in Energy Efficiency (sponsored by Kew Technik)
Winner: The Lowe Group

Domestic Electrical Contractor of the Year (sponsored by Rexel Senate)
Winner: Owen Bowness & Son

Electrical Skills for the Future (sponsored by Mr Electric)
Winner: Clarkson Evans

Best Customer Service Provider for Domestic Installations (sponsored by Domestic & General)
Winner: TBS Adaptations

Outstanding Contribution to Electrical Excellence (sponsored by Megger)
The winner of this special award was Peter Lawson-Smith for his dedication to promoting electrical safety.

Spotlight on Automation Project of the Year

Schneider Electric provided the winning entry in this category, with a £3m product packaging line for a leading pharmaceuticals manufacturer, relying on the expertise of the system designer PES Technology and Schneider Electric products.
The packaging line in question handles diagnostic treatments for individual patients, treatments with a useful life measured in hours. The flasks containing the treatments are despatched all over the world. Product must be at a hospital within 36 hours and used within a further 72 hours. If a production delay occurs, the customer must scrap each order in-process – a truly mission-critical system. Drives, servos, motion controls, sensors, automation controllers, HMIs and software from Telemecanique, a brand of Schneider Electric, were all integrated within the system.
A full review of the Electrical Industry Awards can be found in the Book of the Night Souvenir Issue accompanying this magazine.

Steve Landau from Philips Lumileds Lighting Company looks at the different approach needed by lighting engineers when working with high-power LEDs.

As the lighting and architectural communities continue to embrace high-power LEDs, there remains a challenge to recognise the new engineering paradigm that comes with solid-state lighting. Many lighting designers and engineers continue to consider the LED as the equivalent of a conventional light bulb, perhaps because LED manufacturers have failed to adequately educate the market about the different approach required to specify LED-based lighting products.
Anyone who works with LEDs must understand four key concepts:
• LED performance is not defined by wattage
• Maximising drive current will maximise light output
• Efficiency must be measured by the total system, not by light source
• Total usable light differs from part number to part number and from manufacturer to manufacturer and from application to application
A Question of Wattage?
First, consider the wattage question. While the light output from conventional bulbs is commonly expressed in wattage (as well as lumens), it is misleading to use wattage as a yardstick when selecting an LED because the actual wattage drawn is affected by multiple factors. By contrast, at 220 volts, a 60-watt light bulb will use 60 watts of power regardless of the socket it is attached to.
The power or watts that an LED uses is primarily dependent on the current (usually expressed in mA) that is applied to the LED and the voltage. The current will vary from application to application and the voltage will depend in part on the LED and in part on the source, such as a battery or an electronic driver. A single high-power LED, such as a Luxeon K2 that can be driven from 350mA to 1500mA, uses anywhere from one to seven watts of power in a given luminaire, depending on the drive current as well as other factors such as voltage, thermal conditions and the nature of the LED itself. Wattage is not defined by the LED but instead by the system. To describe an LED as a “one watt LED” only describes the LED given a very narrow set of parameters and does not indicate the light output of the LED. Failing to understand this can result in the wrong LEDs being used and lead to luminaire performance that does not meet expectations.
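The point is easy to see numerically. In the sketch below, power is simply forward voltage multiplied by drive current; the forward voltages are illustrative assumptions for a high-power emitter, not datasheet values.

```python
# LED power draw: P = Vf * I. Forward voltage rises with drive current.
drive_points = {   # drive current (A) -> assumed forward voltage (V)
    0.350: 3.4,
    0.700: 3.7,
    1.000: 4.0,
    1.500: 4.6,
}
for current, vf in drive_points.items():
    print(f"{current * 1000:.0f} mA -> {current * vf:.1f} W")
# Roughly 1.2W to 6.9W: the same LED is a 'one watt' or a 'seven watt'
# device depending entirely on how the system drives it.
```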
Increasing light output
Until recently, high-power LEDs typically could not be driven at more than 350mA without potentially reducing the effective life of the device. With many LED applications needing far more light output than is possible at this drive current, LED usage was constrained.
Even incremental gains in light output and efficacy at this level are not enough to meet the growing demands of architects and lighting designers for more available lumens for their applications.
The ability to get more light from fewer LEDs enables entirely new types of general lighting luminaires and applications.
It is this demand that has driven LED manufacturers to develop their technologies further to provide a higher drive current, making significant advancements in die and packaging to manage the additional heat. For example, the latest Luxeon K2 LEDs are the first to be tested and binned at 1000mA with specified minimum performance and no sacrifice in lumen maintenance. The ability to drive the LED at higher currents not only delivers more light, a lower cost per lumen, and more thermal flexibility but also provides the ability to incorporate the same LED in a range of applications to minimise manufacturing costs.
The Bigger Picture
It is essential to consider overall system efficiency rather than the efficacy of the light source alone in order to compare solid-state with conventional technologies. Take the example of an under-cabinet luminaire. Today, the best high-power LEDs achieve efficacies of 50 to 60lm per watt, compared to a fluorescent lamp that may be rated at 70lm per watt. When deployed in luminaires, however, both solutions provide the same illuminance on the work surface. This stems both from the nature of LEDs as a directed light source and from the efficiencies of the power, optical, thermal and electrical components. If the desired illuminance is achieved using fewer watts of power with the LED solution than the fluorescent solution, then the LED solution delivers better system efficiency. When considered as a system, the high-power LED solution can offer superior performance and lifecycle costs.
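A back-of-envelope comparison makes the system argument concrete. The fixture efficiencies below are assumptions chosen to illustrate the directionality point, not measured values for any product.

```python
# Delivered light = source efficacy x fraction of lumens reaching the task.
led_lm_per_w, led_fixture_eff = 55, 0.90      # directed LED source (90% assumed)
fluor_lm_per_w, fluor_fixture_eff = 70, 0.60  # omnidirectional lamp (60% assumed)

print(f"LED delivered:         {led_lm_per_w * led_fixture_eff:.0f} lm/W")
print(f"Fluorescent delivered: {fluor_lm_per_w * fluor_fixture_eff:.0f} lm/W")
# ~50 vs ~42 lm/W at the work surface, despite the LED's lower source efficacy.
```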
Real Results
Finally, one must look beyond the raw luminous flux claims in LED datasheets before determining which LED to use. There is no industry standard for the numbers shown on datasheets, so minimum, typical and maximum lumen values may not share the same definitions, nor have been derived under similar conditions.
In fact, LEDs from different manufacturers will deliver different light output when placed into identical lighting fixtures.
In order to make an apples-to-apples comparison of LEDs, one must consider the maximum allowable drive current, how hot the LED can get without sacrificing lumen maintenance, the thermal management system, the voltage, LED binning and other system parameters.
Embracing the differences between LEDs and conventional lamps, and recognising the difference between LEDs from different manufacturers, will go a long way toward improving the design of LED lighting products and delivering superior products to consumers as the market continues to grow and develop.
Everyone knows the size, longevity and other benefits of solid-state lighting, but now we have LEDs that open up a new realm of possibilities, far beyond the previously standard 350mA LEDs.
Designers and engineers using effective system designs and knowledgeable LED selection will maximise the usable light from LEDs and the market opportunities for their products.

Flexible conduit is often used in very arduous environments and if products don’t come up to scratch there can be disastrous consequences for the end user. Here Ian Gibson, chairman of the combined IEC & Cenelec Committee for Conduits and technical director of Flexicon, looks at the issues surrounding conduit quality and the importance of picking the right product for the job.

Good quality in the flexible conduit market is sometimes overlooked but low-cost inferior products often have inherent weaknesses that mean they can be problematic and costly to install, or can lead to their failure during service. Using cheaper, poorer quality products can be a false economy as higher installation costs combined with the potential cost of remedial work and associated wasted labour time on reworks can more than outweigh initial outlay.
No longer a threat
Low-cost flexible conduit imports have been a threat to the UK market for some time but the tide is turning and there is recognition by many wholesalers, specifiers and end-users that quality conduit systems are a worthwhile investment. Manufacturers understand the importance of margin and profit for wholesalers and in turn installers, but if quality is being compromised in order to achieve this then it can only be a matter of time until there is a disastrous result.
Flexible conduit plays a key role in cable management and is responsible for protecting some of the most vulnerable and potentially dangerous materials in a building environment. It needs to shield important cabling from external damage while protecting personnel and property from dangerous electrical exposure.
The conduit will often be subjected to harsh treatment in very poor conditions, including extremes of temperature, physical damage, chemical corrosion and moisture, so it is essential these products can meet high standards of strength, safety, durability and performance and provide reliable cable protection.
New requirements
Following the devastating King’s Cross disaster of 1987, where fumes from melting fixtures and cables contributed significantly to the number of fatalities, new requirements for zero-halogen (halogen-free, HF) and low fire hazard (LFH) products have been introduced.
LFH conduit systems are increasingly specified in many cabling applications in order to protect both staff and the general public in the event of a fire. Public buildings, retail outlets, high rise office blocks, hospitals and transport installations are all likely to require LFH conduit and, in certain instances, fire services and even insurers are demanding that these products be specified.
High-risk environments such as the London Underground have even introduced their own stringent regulations for suppliers. The ‘Section 12’ requirements of LUL standard 2-01001-002 pertain to the fire safety performance of materials, ensuring conduits can be safely used by OEMs and contractors supplying and working in the London Underground system. A wide range of Flexicon’s products have documented compliance with this standard, and it is advisable that customers operating in other sectors of the market where public safety is a key criterion should look out for this type of certification.
Potential problems
At Flexicon we define a LFH product as being ‘halogen free’ (no halogen acid gas emission which can destroy computer equipment and damage building structure), ‘highly flame retardant’ (products will prevent a fire or limit its development if one does start), ‘low smoke emission’ (personnel will be able to see their way to escape in the event of a fire) and ‘low toxic fume’ (personnel will not be overcome by dangerous fumes during their escape).
Some conduits – in particular cheaper versions – claim to be LFH but don’t offer all these properties which means they can be dangerous in high-risk environments where safety is crucial.
Metallic conduit is typically manufactured from stainless or galvanised steel. One of the potential problems with metallic conduit arises from the use of inferior quality steel, which in turn results in structural and performance issues. It can be prone to kinking when bent, which weakens the product, or have other defects such as poor welds that fail under pressure or sharp edges inside the conduit which can damage the cables inside.
Another issue is differing standards in the galvanising process. Higher specification galvanised steel conduit is hot-dipped after manufacture, which means it boasts heavy protection both internally and externally and provides the best anti-corrosive properties.
Meanwhile, pre-galvanised products – which are usually lower in cost – tend to corrode after a relatively short period when exposed to damp or moist conditions due to the absence of zinc protection on the internal walls.
Another concern stems from the application and thickness of the galvanised layer which may look ‘shiny’ but can begin to flake after only a short time in use. This may leave the product open to the elements causing it to corrode quickly as its protective layer will have been compromised.
Non-metallic conduit, typically manufactured from nylon or polypropylene mixes, also carries risks of failure if sub-standard products are selected.
Thin walls can cause conduit to snap, exposing the cabling inside, while poor quality material may be resistant to much smaller temperature ranges, making it prone to damage in extreme cold or heat. It is also common to experience difficulties with fittings, due to poor or inaccurate manufacture and non-complementary product ranges.
As so many problems can arise from the use of poor materials it’s worth thinking about the origination of the conduit you’re purchasing.
Benefits of UK manufactured conduit
UK production gives total control over quality while buying from abroad means sources of materials may not be well documented. UK manufacturers have onsite expertise in place to oversee manufacturing facilities. Production can be monitored to ensure the correct use of materials and processes which ultimately ensures that top quality conduit is produced.
However, the only way to ensure conduit is of a high quality is to make sure it has been independently tested to the relevant BS standard.
This means the BS standard is not only printed on the packaging but stamped on the products themselves, along with the name or trademark of the manufacturer.
The testing carried out is rigorous and designed to ensure compliant products have the correct construction and mechanical properties to make them safe and resistant to damage.
Look out for BS EN 61386, a worldwide standard developed by the IEC (International Electrotechnical Commission) and Cenelec (European Committee for Electrotechnical Standardisation), which replaces the previous BS EN 50086 and aligns with IEC 61386.
Meanwhile, ISO 9001 compliance looks at quality management systems within business processes, and the European RoHS (Restriction of Hazardous Substances) directive bans the placing on the EU market of new electrical equipment containing more than agreed levels of various hazardous substances.
The regulations may just seem like annoying hurdles but all legislation is there to improve quality within the industry. Flexicon products comply with all key standards and the company is also a member of Beama (British Electrotechnical and Allied Manufacturers' Association), which helps write the standards and implement the legislation.
It is important to demonstrate a commitment to safety and product quality in order to instil trust in customers and give them peace of mind.
Ultimately, high-quality products eliminate risk, both for yourself and your customers, and ensure the UK electrical sector remains respected in the global marketplace.

The old saying ‘biggest is best’ may be true in some instances but, applied to the selection of modern industrial drives, it is not necessarily the case, says Jonathan Smith from Rockwell Automation.

Back in the early days of industrial automation, customers would often purchase an over-specified industrial drive, so that they could be sure that it would be rugged enough to handle every need. But there was a premium to pay for this excess capacity and, in today’s lean manufacturing environment, paying extra for something you don’t need is a luxury few can afford.
By studying the application and asking a few pertinent questions, the knowledgeable user can precisely match the performance of the drive to the demands of the application. This ensures costs are controlled but performance is not compromised.
How To Analyse Your Application
The sizing of modern drives, like the Allen-Bradley PowerFlex range from Rockwell Automation, has been greatly simplified by having both a normal-duty and a heavy-duty rating. By seeking answers to a few basic questions about the application, the user can quickly determine whether a normal-duty, heavy-duty or even a larger drive is required. By abandoning the old ‘one-size-fits-all’ approach, users can save money and reduce panel space requirements.
Selection starts with knowing the load requirements and choosing a drive with enough current capacity to meet that need. Whether you are moving coal, air or boxes, all loads have varying torque demands, and their characteristics determine the most appropriate drive solution.
The first question to consider with any application is, “Do I know the torque requirements that will make the process work?” Since this information is required to select an appropriately rated motor, it should be available to properly size the drive.
The Importance Of Load Data
Most loads can be broken into one of three categories:
• Variable torque
Almost all variable torque loads are either centrifugal fans or pumps; these make up nearly 70% of global motor applications. If the application is variable torque, then it almost always requires a normal-duty rated drive. These drives can supply rated torque with some overload (approximately 110%) for up to one minute, providing enough capacity for these types of load, as the sketch below illustrates.
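The sizing check this implies can be sketched using the affinity laws for centrifugal loads (torque rises with the square of speed, power with the cube). All ratings below are illustrative assumptions, not PowerFlex catalogue values.

```python
# Variable-torque sizing check for a centrifugal fan on a normal-duty drive.
rated_current = 100.0   # A, normal-duty drive rating (assumed)
overload_limit = 1.10   # 110% of rating for up to one minute (as noted above)

def fan_current(speed_fraction: float, full_speed_current: float = 95.0) -> float:
    """Approximate drive current for a fan; current tracks torque ~ speed^2."""
    return full_speed_current * speed_fraction ** 2

peak = fan_current(1.0)
capability = rated_current * overload_limit
print(f"Full-speed demand {peak:.0f} A vs {capability:.0f} A one-minute capability: "
      f"{'OK' if peak <= capability else 'undersized'}")
```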

Standby power is accepted as an essential part of an organisation’s armament in the battle for success, and the introduction of fuel cell standby power increases the options available. Karen Sperrey of UPS Systems asks which solution will be right for you.

It is an undisputed fact that many businesses will lose money if they suffer an interruption to their prime power source, even for just a few seconds. It is not just the loss of an ability to actually work or receive customer calls. These are bad enough, but many service providers will lose revenue when systems based on automatic processes fail; web-hosting, data centres, on-line catalogues and call centres are just a few examples. If customers can’t access a product or service, someone will lose out. Businesses deriving income from web-based information services are especially vulnerable. For core facilities such as emergency services, hospitals, research establishments and education, standby power is vital and downtime is not an option that these or other businesses can tolerate. In some cases, non-stop power 24/7 must be guaranteed. Users and providers now talk about zero failure and power up-time in terms of six nines: systems must be reliable for 99.9999% of the time.
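Six nines is a demanding budget, as a quick calculation shows:

```python
# Downtime allowed by 99.9999% availability over one year.
availability = 0.999999
seconds_per_year = 365 * 24 * 3600

downtime = (1 - availability) * seconds_per_year
print(f"Allowed downtime: {downtime:.1f} seconds per year")  # ~31.5 s
```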
Achieving this high power availability is a constant challenge for the standby power industry, where a ‘one-size fits all’ approach is not the solution. As we know, when mains power goes off, the standby power system has to operate either by generating power from fuel or by using stored power. Most often, generated power comes from a diesel generator and stored power from a UPS with lead-acid batteries; usually the two work together to ensure that even the momentary gap between the power failing and the generator becoming operational (a gap which is harmful to electronic equipment) is filled.
Which Option to Choose?
There are many factors that may need to be taken into account when deciding which option(s) to pursue. Some will be more important in certain situations than others. Some specifiers will always choose the cheapest option, but it is rare that all factors are equal when comparing various options and so an element of criteria weighting will be inevitable. These are some of the factors that should be considered:-
Location – Are there constraints about the floor area or ceiling height available? Does it need to be situated indoors or out? Will the existing floors carry the required weight of the unit? What is the lift load-carrying capacity? Are there awkward access arrangements for delivery and installation? Are there any restrictions in the building lease?
Rack mountable – Some users, especially in the IT environment prefer this approach. Can the unit(s) be mounted in a conventional rack?
Sizing – What size unit in terms of power capacity is needed? Can some of the load be segmented as non-critical and therefore switched off after a successful shut down? Is the capacity likely to be expanded in the future?
Automatic change over options – Are there always going to be personnel on site to deal with any power off situations? How will any automatic change over option work? How will the system notify people what is happening?
Maintenance – Who will look after the system? How often will scheduled maintenance be necessary? Will the system have to be shut down while it is carried out? What security problems may be encountered if this is to be done out of normal working hours?
Planning permission – If the unit is deployed outside, will this be necessary? How long will the process take? Will the landlord need to be involved?
Exhaust emissions and noise – This is often linked to planning permission but not always so. Is the system to be installed in or near a residential area where these issues may be more important?
Incorporating existing standby power products and upgradeability – No one likes to under-utilise an existing investment. Can it be integrated in some way? Can the unit be upgraded as required?
Ensuring an evenly matched Genny and UPS – The generator must be large enough to support the load initially carried by the UPS and to provide sufficient extra capacity to recharge the UPS batteries. The installation must also comply with the current Engineering Recommendation G5/3 guidelines on harmonics. This is a specialist area in itself; a rough sizing sketch follows this list.
Types of fuel – Is there an obvious fuel source already installed? If not, does the user have a preference? Can a new source be installed?
Refuelling – How often will this be necessary? Are the access arrangements awkward? Are personnel in attendance to organise this even in the event of a prolonged mains failure during the night? Will an automatic monitoring facility be required for low fuel or any other occurrence?
Availability – How soon can the unit be installed? Is there a long lead-time before delivery can be made?
Price – What does this include? What warranty is included? How much will delivery cost? How much will the installation cost?
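On the question of matching the generator to the UPS raised above, the underlying sizing arithmetic can be sketched quite simply. Every figure below (load, efficiency, power factor, margins) is an assumption chosen for illustration, not a recommendation; final ratings are a specialist matter and should be confirmed by the supplier.

    # Illustrative generator sizing for a UPS-supported load.
    # All input figures are assumptions for the example, not recommendations.
    critical_load_kw = 40.0    # load carried by the UPS
    ups_efficiency = 0.90      # assumed UPS efficiency
    power_factor = 0.8         # assumed power factor
    recharge_margin = 0.25     # assumed extra capacity to recharge the batteries
    headroom = 0.20            # assumed margin for step loads, harmonics etc.

    ups_input_kw = critical_load_kw / ups_efficiency
    with_recharge_kw = ups_input_kw * (1 + recharge_margin)
    genset_kva = with_recharge_kw / power_factor * (1 + headroom)

    print(f"UPS input demand:      {ups_input_kw:.1f} kW")
    print(f"With battery recharge: {with_recharge_kw:.1f} kW")
    print(f"Suggested genset size: {genset_kva:.0f} kVA")

The point of the margins is that a generator sized only for the steady load will be overloaded once the UPS starts recharging its batteries on top of supporting the load.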
If the decision is taken that extended auxiliary power is required, a choice needs to be made between installing either large banks of batteries to keep the UPS powered for longer or a generator running on a separate fuel source. There are advantages and disadvantages to each, although it is generally accepted that, all other factors being equal, a generator has the financial advantage when the extended run following a power loss needs to exceed 4-8 hours. But there are many situations where a generator may not be a viable option, and for those organisations another solution is needed.
The latest alternative is fuel cell standby power. Fuel cells can be regarded as generators, but whereas conventional generators use internal combustion engines to rotate an alternator, fuel cells generate electricity electrochemically, with no moving parts. As a result, they have the potential to be very efficient and reliable. Moreover, they are comparatively quiet and, other than electricity and heat, they produce only water vapour, which makes them ideal for indoor use. With the maturing market in fuel cell technology and increasing awareness of environmental issues, a standby power solution incorporating a fuel cell is now a genuine third alternative.
Why fuel cells now?
UPS Systems has been monitoring fuel cell developments for several years and we are convinced that fuel cells now offer a viable alternative in certain situations. There are areas where the current offerings are not suitable, e.g. where system requirements are in excess of 60kW. However, in the 10-60kW application range, which covers a lot of smaller IT departments, the option can now be considered.
Firstly, let’s consider why a UPS-only solution might be put forward:-
• The systems to be supported are not mission critical or require only limited auxiliary power before a safe shutdown is effected
• The power draw of the mission critical system is low enough that a single UPS can support it for many hours
• There is no room for anything else, either inside or outside
• There are environmental issues if the unit is placed outside. Councils often place stringent restrictions on working equipment and require detailed planning permission
• There may be a company policy biased against the noise or pollution caused by generated power
• There may be personal expertise or preference to stay with one manufacturer
• Internal politics or budgets may prefer several smaller departmental units rather than a single large one
• Security and access policies may prevent equipment being sited outside
Then consider why a UPS and generated power solution might be put forward (whether conventional or fuel cell):-
• Where the size of the batteries needed is excessive
• Where there is more room available outside for a generator than inside for batteries
• Where 24/7 protection is required and a refuelling contract is needed
• Where shared resource is more economical between several departments or different companies operating within a whole building, especially if a landlord is contracted to provide support
• Where there is a requirement for air conditioning which is better protected by a generator
Now let’s look at the advantages of using a fuel cell as opposed to a diesel generator:-
• Standard hydrogen bottles offer a green alternative to conventional fuel and/or batteries
• Unlimited runtime - simply increase the number of bottles
• Low audible noise – no fans and pumps – suitable for indoor installations
• Only heat and water by-products so safer for the environment
• In larger systems the waste heat from a fuel cell can be used to provide hot water or space heating
• Easy indoor installation – no major planning permission required
• Modular rack integrated design – easy to add more power
• Few moving parts, so less need for maintenance
• Politically reduces dependence on oil
• Some factories and plants may already have hydrogen installations or can utilise hydrogen produced by existing processes
• Certainly lighter than batteries and lighter than many conventional generators
• More energy efficient in power terms than either a battery or generator
• Generated power close to or inside the computer room
And the advantages of a conventional generator:-
• Tried and tested technology
• More expertise around if there is a problem
• Established supply chain for fuel
• No ‘Hindenburg’ fear factor
• Currently cheaper than fuel cells
• Can deal with much larger loads in kVA terms
Making the right choice on your own will not be easy and we would always recommend working with an independent and impartial supplier that will remove the complexity from the purchasing cycle and help to match one of the three standby power alternatives to your individual requirements.

The Energy Review has brought concerns about the growth in the amount of standby power in domestic electronic equipment to the forefront of the industry. But just what is being done to hit the maximum 1W levels? George Warren from the environmental engineering department at Nottingham University investigates.

The Government’s Energy Review draws particular attention to increasing concerns about the growth in the amount of standby power in domestic electronic equipment. It commits us to working towards all electronic devices using a maximum of 1W for standby purposes.
Standby power consumption accounts for 1% of the world’s CO2 emissions. If all appliances used only 1W or less in standby, it would cut the CO2 emissions currently attributable to standby by 80% – a reduction of at least 54 million tonnes of CO2 each year.
The average electricity use for standby power in the European Union is 400 to 500kWh per household a year – at least 60TWh a year for the EU as a whole. Happily, that can be reduced significantly with minimal effort, but only with maximum policy co-ordination.
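Those figures are easy to cross-check. In the sketch below, the household count is an assumption (of the right order of magnitude for the EU); everything else comes from the numbers quoted above.

    # Cross-check of the quoted EU standby figures (household count assumed).
    kwh_per_household_year = 450   # midpoint of the 400-500kWh range quoted
    eu_households = 140e6          # assumed order of magnitude

    total_twh = kwh_per_household_year * eu_households / 1e9   # kWh -> TWh
    average_watts = kwh_per_household_year * 1000 / 8760       # spread over the year

    print(f"EU-wide standby consumption: ~{total_twh:.0f} TWh a year")
    print(f"Average continuous draw per household: ~{average_watts:.0f} W")

An average continuous draw of around 50W per household shows why a per-appliance 1W target matters.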
Standby power, formerly known as ‘leaking electricity’, traditionally referred to the electricity consumed by electrical equipment when supposedly switched off or not performing its main function. But definitions vary: many now take it to mean the power consumed by an appliance in its lowest possible electricity-consuming mode. Because different studies have used one definition or the other, measurements are not always comparable. Although the International Energy Agency (IEA) officially defined standby power in 1999 as referring to the appliance’s “lowest possible consuming mode”, several commentators still use the older definition – causing continuing confusion.
The relevant appliances range from televisions and set-top boxes to microwave ovens and cordless telephones. Each uses a varying amount of electricity while in standby – to maintain a digital display, to remain alert for remote control commands, or simply wasted through poor efficiency.
The actual power draw in standby mode is relatively small, typically 0.5-3.0W. Although at first glance such losses may seem trivial, the problem assumes significance because these products are consuming 24 hours a day: the cumulative total soon adds up. For example, a New Zealand study into the power consumption of microwave ovens found that 40% of them consumed more electricity over the course of a year in standby mode, merely powering the clock and keypad, than in cooking food.
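The microwave finding is easy to believe with quite ordinary figures. The assumptions below (standby draw, cooking power, minutes of use per day) are invented for illustration.

    # Standby vs cooking energy for a microwave oven over a year (illustrative).
    standby_watts = 4.0            # clock and keypad draw (assumed)
    cooking_watts = 1100.0         # input power while cooking (assumed)
    minutes_cooking_per_day = 5    # light usage (assumed)

    standby_kwh = standby_watts * 24 * 365 / 1000
    cooking_kwh = cooking_watts * (minutes_cooking_per_day / 60) * 365 / 1000

    print(f"Standby: {standby_kwh:.1f} kWh/year")  # ~35 kWh
    print(f"Cooking: {cooking_kwh:.1f} kWh/year")  # ~33 kWh

For a lightly used oven, the clock and keypad really can consume more over the year than the cooking.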
As new electronic equipment continues to proliferate, and more and more goods incorporate a standby function, the energy attributable to standby power will grow too.
The standby function can aid the efficiency of a device, since it undoubtedly uses less power than normal operation. But the degree to which devices continue using energy is of concern, because the technology to reduce standby power consumption is available to manufacturers.
Nevertheless, the problems will remain while people still use older models never designed with energy efficiency in mind. What is really needed is an incentive for the public to change either their habits, by turning equipment off at the mains, or their devices, by swapping them for new, more efficient ones.
Scale of the problem
Standby power can be measured directly using a high-resolution watt meter. Unfortunately, these meters are expensive and difficult to find, so it is unrealistic to expect customers to test every appliance themselves. Instead, there are three ways in which standby power can be quantified.
• Whole-house measurements – Measuring the standby power consumption of every electrical appliance in the home that consumes standby power.
• Bottom-up estimates – Estimates of either the average standby power consumption per home or the national standby power consumption, normally based on measurements of individual appliances multiplied by the number of devices in the area concerned; they combine field measurements with known appliance saturation. Although usually accurate for larger appliances, this is not the case for minor ones, whose saturation is poorly known, so bottom-up estimates probably underestimate standby power usage. (A sketch of the approach follows this list.)
• New product measurements – Measurements made in stores, so that the standby power of all new products can be measured at one time. Quick and accurate, but the results omit the performance of older products.
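As promised above, here is a minimal sketch of a bottom-up estimate. Every appliance figure and the household count are invented placeholders; a real estimate would substitute field measurements and survey-based saturation data.

    # Minimal bottom-up standby estimate: per-appliance draw x saturation x households.
    # All figures are invented placeholders.
    appliances = [
        # (name, standby watts, units per household)
        ("television",     3.0, 1.5),
        ("set-top box",    9.0, 0.6),
        ("microwave oven", 3.0, 0.9),
        ("phone charger",  0.5, 2.0),
    ]
    households = 25e6   # assumed household count for the region

    watts_per_household = sum(w * sat for _, w, sat in appliances)
    total_twh = watts_per_household * households * 8760 / 1e12   # Wh -> TWh

    print(f"Standby draw per household: {watts_per_household:.1f} W")
    print(f"Estimated standby consumption: ~{total_twh:.1f} TWh a year")

Note how the result leans on the saturation figures, which, as noted above, are least reliable for smaller appliances.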
Three strategies
There are three principal research strategies for reducing standby energy consumption:
• Improve (or modify) technologies outside the device and change user behaviour
• Improve the efficiency of the components
• Improve software to help equipment operation better match functional needs
An example of this is simple power switch design. Pictured are three different power supply set-ups:
Fig. 1: Design A places the switch between the power source and the power supply. This is the ideal arrangement for eliminating standby consumption, as no current reaches any energy-consuming component.
Fig. 2: Design B places the switch between the power supply and the appliance, as is the case in a significant proportion of conventional electrical equipment. Even when the switch is turned off, or no appliance is connected, the power supply continues to draw current, and the energy is usually dissipated as heat. This is a prime example of manufacturers not adapting their designs to prevent the loss of energy.
Fig. 3: Design C shows a secondary load, for example a clock, which requires energy even when the main load is switched off. This can be achieved by:
• Adding an extra power supply for use at low-power levels.
• Using a power supply with two operating ranges (‘on’ and ‘standby’).
• Incorporating a separate source such as a small battery or a photovoltaic cell to power the secondary load.
The other alternative is, of course, to teach the public to switch their appliances off at the mains when not in use. A classic example is the phone charger. Many people wrongly assume that once a phone has finished charging it ceases to draw electricity from the supply. In fact, even when the phone has been disconnected, a charger left plugged in continues to consume energy, dissipating it as heat.
Educating the public to switch any accessible electrical equipment off at the mains would go a long way towards solving the standby power problem. Of course, turning each appliance off at the mains is not always practical. In these instances, manufacturers need to improve the efficiency of their components.
Two approaches to this are to improve the efficiency of existing standby-mode components, or to use new or different components that require less power.
• Power supplies – There are two main types of power supply: linear and switch-mode. Switch-mode is the more efficient of the two, and choosing it instead of a linear supply could have a major impact on standby power consumption. Standby power can also be reduced by adding a separate, secondary power supply that consumes less energy than the supply used when the appliance is in active mode, or by incorporating a small battery or photovoltaic cell.
• Voltage Regulators – Voltage regulators tend to dissipate a large percentage of the power supplied to them as heat. This heat also shortens the life of the appliance. It is possible to reduce overall standby power by using a more efficient voltage regulator. Another option is to reduce voltage levels, so that fewer voltage regulators are needed.
• Visual Displays – Changing the type or size of visual displays can save power. An LCD screen is the most efficient, although new lower-power LEDs are becoming available and are often more practical.
Software improvements enable the device to move more quickly from active to sleep, and then from sleep to off. Although the device still draws some power in these low-power modes, overall energy use is reduced.
Technical solutions already exist that can reduce the standby power consumed by up to 90% – but they are seldom exploited. The additional capital cost of reducing the standby power consumption of most appliances is surprisingly modest and will almost always result in lower costs or new benefits elsewhere. Although the new products may cost a little more, they will easily repay the difference through the electricity they do not use, compared with older, less efficient models. As one of the world’s experts, Alan Meier of the US Lawrence Berkeley National Laboratory, says: “Most technical solutions are cost-effective given the current price of electricity.”
Nine years ago, the European Commission negotiated an agreement with the European Association of Consumer Electronics Manufacturers (EACEM) setting targets of less than 6W for the standby consumption of TVs and VCRs. Since then, new targets have been negotiated for various appliances. This has been seen as an effective way of achieving suitable efficiency levels, and has been preferred to setting mandatory efficiency requirements.
Standby levels are not yet as low as 1W, but in 2000 the average standby consumption for new products was 3.7W for TVs and 3.8W for VCRs. Negotiations with EACEM continue on other appliances, and no doubt will do until all reach the 1W mark. Set-top boxes (STBs), the digital television boxes used to receive satellite television channels, seem to be the biggest problem in Europe, as they consume large amounts of power while in standby mode. The problem is that service providers require such devices to remain constantly on, to permit remote access for downloading new software and updates.
In addition, STBs are produced to the specifications of the service provider, so there is little incentive to reduce the power consumed in standby mode. In 2001, the European Commission issued the EU Code of Conduct for Digital TV Services, which set the maximum standby power consumption at 9W. This is not as low as the 1W at which everything is aimed, but it is a step in the right direction.
Power supplies have also been seen as a problem, often consuming power even when the appliance is switched off. In July 2000, the European Commission issued a Code of Conduct on Efficiency of External Power Supplies in order to reduce the power consumed in standby. The EC then promoted this within the IEA standby initiative for adoption worldwide, as many of these power supplies are traded globally and many of the manufacturers are not based within the EU. It has still to be accepted internationally.
Elsewhere, the Australian government is actively considering a penalty labelling system, publication of all standby statistics, and perhaps even a mandatory energy performance standard under which the least efficient models on the market would be removed. These measures are yet to be implemented but are being seriously considered in order to improve the situation.
In America, Executive Order 13221 requires that every government agency, “when it purchases commercially available, off-the-shelf products that use external standby power devices, or that contain an internal standby power function, shall purchase products that use no more than 1W in their standby power consuming mode.”
A Japanese study in 2000 found that standby represented 9.4% of Japanese residential electricity use. This led several household appliance industry associations to announce, at the 2001 Subcommittee for Energy Conservation of the Advisory Committee for Energy and Resources of the Japanese Ministry of Economy, Trade and Industry, that they intended to reduce standby power consumption to less than 1W, or eliminate it entirely, by the end of 2003. Unfortunately, no information could be found to confirm whether this target was reached.
Evidently, many national and regional initiatives on regulating standby power use exist. However, it is imperative to co-ordinate efforts internationally to facilitate participation by industry. This is one of the aims of the IEA initiative. To date, it has failed to generate a joint solution for the most common standby power consuming devices, or worldwide agreement on requirements for digital TV equipment. But it has succeeded in getting standby power included in energy test protocols and energy efficiency policies for all products that consume significant amounts of it.
Although success has been achieved in some cases, the standby power problem still presents uncertainties. As the UK government has identified, considerable energy savings can be made by reducing standby power in new electronic equipment. It is therefore appropriate to make some recommendations to guide future policy:
• Increase public awareness
• Develop worldwide implementation of regulations
• Improve electrical equipment design
• Develop guidelines for lowering standby power use in appliances not currently covered by any programme
• Establish an international network of accreditation organisations
• Include standby power information on existing appliance energy labels
Internationally coordinated efforts would reduce the burden placed on manufacturers of globally marketed electrical goods. This is the most effective way to achieve an increase in global penetration of these technologies.
Certainly, the anticipated growth in standby power is slowing as multinational companies come to understand the need to reduce the standby power consumed by their products. This progress is encouraging, but there remains an urgent need for government intervention to stimulate and reinforce these achievements.

The 2012 Olympics is set to boost the UK economy, create hundreds of jobs in the construction industry and improve the transport infrastructure of the South East. However, it is a very ambitious project and one of the biggest challenges the UK electrical industry has ever faced. Mark Gledhill from the Electrical Markets Division of 3M explains.

The Olympics is one of the biggest and most exciting construction projects the UK has seen for many decades. It is set to be good news for the economy too: according to figures from the Centre for Economics and Business Research (CEBR), the total amount the construction industry will contribute to GDP by 2012 will be £14bn.
The Olympics project also creates massive new job opportunities: forecasts by ConstructionSkills estimate that the 2012 Olympics will create 33,500 additional jobs in the construction sector alone, peaking in 2010. Last but by no means least, let us not forget the likely boost the games will provide to the tourist industry. Yet long before the 2012 Games were awarded to London, there were concerns about the scale of the construction task. David Higgins, head of the Olympic Delivery Authority (ODA), has himself recently been quoted as saying: “The timetable is extremely tight.”
While achievable, and currently several months ahead of schedule, this is a truly ambitious construction project. Moreover, we are not just talking bricks and mortar: the Olympics are going to require many miles of new power cables to be laid, both permanently and temporarily, as well as the installation of hundreds of power sub-stations. For the electrical power industry, the Olympics are both an opportunity and a challenge.
Overview of the project
On July 25 2006, the ODA outlined the key timelines for the project. The organisation has allocated the first two years – 2006 and 2007 – for planning, design and procurement, four years to build the venues and infrastructure, and then one year for test events. That means the bulk of the construction work is scheduled to take place between 2008 and 2010.
While that may feel a long way off, and the main construction work cannot start until all the necessary land has been acquired by the ODA, the reality is that the contracts and detailed planning for electrical power projects are going to start from autumn 2006 onwards. In fact, some of the preparatory work has already begun: 50 electricity pylons have been removed from the main site.
Power will be fed to the Olympic site at 132kV, supplying 11kV connections within the village. This involves taking the existing overhead lines and re-installing them underground. But the power requirements created by the Olympics extend beyond the main site in the Lea Valley: virtually everything to do with the Olympics is going to require a light being switched on, a computer powered or a phone call made. One of the biggest focus areas of the project has been the need to enhance the transport infrastructure in the South East.
Transport infrastructure
The Channel Tunnel high-speed rail link is perhaps the most publicised, but there are also extensions to several underground lines, including the Jubilee, Northern and Central. The Docklands Light Railway sees what is perhaps the biggest change: its capacity will increase by 50%. New roads, including a new road bridge over the Thames, are proposed, and designated existing roads may need upgrading in terms of lighting and illuminated signage. There is also going to be a raft of new buildings that will need power, including new hotels, train stations and construction sites.
Beyond the main Olympic village, there are other places in Greater London and the South East that will be involved, such as Eton Dorney in Berkshire for rowing and Waltham Forest in Essex for hockey, cycling and tennis. While some of these facilities exist, others will need to be built from scratch.
Clearly, the pressure on the UK’s already over-stretched power supplies will reach new levels. So, given the scale of the task in hand, where are the challenges and how can they be overcome?
In greenfield environments, installation of new power lines should be relatively straightforward. However, wherever construction work is taking place in areas being regenerated, the situation is quite different. Let us remind ourselves of exactly what is beneath our feet – a mix of different kinds of cable, some polymeric, some paper, other long-forgotten types and, in some cases, cable installed over a century ago. Record keeping in years gone by was not as thorough as it is today, meaning the companies tasked with extending existing cable links or making repairs may not know exactly what they are dealing with until the excavations have been made.
Many of these power projects are going to take place beneath roads and other busy areas. Legislation such as the Traffic Management Act means utilities and contractors have precious little time in which to carry out construction work in urban areas. Add to the mix the fact that these power cables often share trenches with many other utilities, and the picture becomes even more complicated, because the risk of inadvertent third-party damage rises. So how can the electrical power industry address these issues?
Addressing the challenges
Clearly, planning and co-operation are going to be key. Apart from the winning construction consortium, a whole variety of utility companies – not just power, but water and other services – are going to need to collaborate to ensure work is carried out quickly and efficiently. For example, it would make a lot of sense if, once a trench is opened, all the installation work by the various utilities were carried out before it is closed, rather than the trench being closed and re-opened perhaps several times. And by working in tandem, the risk of utilities and their contractors inadvertently damaging each other’s cables or pipes would be reduced.
For many years, there have been calls for utilities to work together in this way. In fact, the power utilities tend to communicate closely with one another already, albeit largely on an informal basis. Perhaps there is a need for a more structured approach.
Using the right equipment can make all the difference too. The choice of product can affect how quickly installation takes place, as well as future reliability. One example is jointing and terminating. Considering the many miles of cables that will need to be installed throughout the South East to support various aspects of the Olympics, directly or indirectly, electrical installers are going to be looking at creating thousands of cable joints and terminations between them.
It is widely appreciated that jointing and terminating can have a huge impact on the smooth running of any cable installation project. Areas to watch include the need to connect different kinds of cable, such as new polymeric cable to ageing paper-insulated (PILC) cable, or to connect different cross-sectional areas and voltage classes. Creating joints like these can be tricky and, depending on the techniques used, can take anywhere from a few hours to a day to complete. The problem is that the installer may well not know what he is dealing with until the joint bay has been excavated.
Cold-applied technique
Using the cold-applied jointing and terminating technique, pioneered by 3M several decades ago and increasingly the preferred method of UK utilities, helps. This is because ranges such as 3M Cold Shrink enable virtually any two cables to be connected together using standard products, and bespoke products are available for the rare eventualities not covered in the main range. The cold-applied approach has other benefits, such as speed of installation and also very small joint bays.
To give an example, creating a medium voltage cold shrink joint, involving connecting paper and polymeric cabling, can take just a couple of hours. As a result, the installer can move on to the next project more quickly, speeding up the overall completion time of the Olympic project. Furthermore, the design of the joint and termination kits ensures a consistent level of installation quality, drastically reducing room for human error, meaning that the chances of faults in the future are lowered. Cold-applied cable jointing and terminating also have health and safety benefits: as no heat is required, gas bottles do not need to be used on site.
The importance of training
While using products such as cold-applied jointing and terminating helps to eliminate mistakes, there is no substitute for proper training. Collectively, the electrical power industry – utilities, manufacturers, installers – needs to focus more on ensuring everyone is trained to a high standard, and refresher training is carried out. This isn’t just for the Olympics, but for the future of the industry as a whole.
The 2012 Olympic Games are a massive opportunity for the UK, including the electrical and construction industries.
As Tessa Jowell was recently quoted as saying, “This is about much more than 29 days of sport in the summer of 2012. This huge and impressive power lines project shows our determination to leave a lasting legacy for generations to come, improving lives and changing the face of London forever. The games are a chance to showcase what we can all do and to forge a new standard in UK power installation.”

The benefits offered by using high-voltage insulation testing as a diagnostic tool are all too often neglected, says Mark Palmer of Megger. He explains why it is needed, what it has to offer and how best to carry it out.
Effective and reliable insulation is essential for the correct and safe operation of virtually every item of electrical equipment. Even in low-voltage systems, regular insulation checks are highly desirable and, in some cases, such as portable appliance testing (PAT), they are a legal requirement.
In medium-voltage systems, insulation testing is even more important, because insulation is often under greater electrical stress and failures are likely to be more costly and, potentially, more dangerous.
Equipment manufacturers, of course, routinely perform insulation tests on their products before supplying them to the end user. Why, then, are further tests needed? The first part of the answer is that it is by no means unknown for insulation damage to occur while equipment is being installed or serviced. More important, however, is that even the best insulation degrades over time.
While this is unavoidable – all insulation starts to deteriorate from the moment it’s put into service – well designed equipment, operated within its ratings, should give many years of reliable service. Nevertheless, the ability to predict accurately when that period of reliable service is coming to an end is an invaluable aid to avoiding costly downtime and unplanned maintenance. Testing is the key but it can only be a reliable indicator if the tests are properly performed, using appropriate equipment.
Dependable results
The first factor to be considered is the test voltage. For dependable results, this needs to be high enough to measure the insulation resistance effectively, but not so high as to overstress the insulation during the test. Some standards relating to the insulation testing of specific equipment, such as IEEE 43:2000, recommend the use of voltages greater than 5kV for an insulation resistance test.
For medium-voltage installations, this means that the usual choice is an insulation tester with a nominal operating voltage of 5kV or 10kV. There is a little more to this choice, however, than simply reading the headline voltage from the tester’s data sheet, as we shall see.
First, we need to examine briefly what happens during an insulation test and how the test equipment reacts.
When the test voltage is applied to a piece of insulation, a current flows. This current is made up of four major components. The first is the capacitive charging current, which is initially large but decays exponentially to a value close to zero in a very short time, provided that the instrument has sufficient current capability to fully charge the capacitance of the item under test.
The second component is the absorption or polarisation current, which is the result of three effects – a general drift of free electrons through the insulator as a response to the applied electric field, molecular distortion caused by the field and alignment of polarised molecules. This current also decays toward zero but over a very much longer timescale than the capacitive current.
Surface leakage
The third current component is surface leakage, which is present because the surface of the insulation is invariably contaminated to a greater or lesser extent. This current is constant with time but is highly dependent on temperature.
The final current component is conduction current, which is the current that would flow through the insulation if it were fully charged, and full absorption had taken place. This current also remains constant with time. Accurately measuring the conduction current, which is often measured together with the surface leakage current, is the prime objective of insulation testing.
The surface leakage current may, however, be excluded from the measurement if the instrument features a guard connection.
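A simple numerical model makes the behaviour of these four components easier to visualise. All magnitudes and time constants below are invented for illustration, and the polarisation current is approximated by a slow exponential decay; real values depend entirely on the insulation under test.

    # Illustrative model of the current during a DC insulation test.
    # Magnitudes and time constants are invented for the example.
    import math

    def test_current_ua(t: float) -> float:
        """Total current in microamps, t seconds after the voltage is applied."""
        capacitive = 500.0 * math.exp(-t / 2.0)    # charging current: fast decay
        absorption = 50.0 * math.exp(-t / 120.0)   # polarisation: much slower decay
        surface_leakage = 2.0                      # constant (temperature dependent)
        conduction = 1.0                           # constant; the quantity of interest
        return capacitive + absorption + surface_leakage + conduction

    for t in (1, 10, 60, 600):
        print(f"t = {t:4d}s: {test_current_ua(t):8.2f} uA")

By ten minutes the transient terms have all but vanished and the reading reflects only leakage and conduction – which is why diagnostic readings are taken after the transients have decayed.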
Now let’s examine how an insulation tester behaves when a test is initiated. At first, it must supply the relatively large capacitive charging and polarisation currents.
Portable testers
Portable insulation testers, however, often use high-impedance voltage sources, both for safety and to make the size and weight of instruments manageable. This means that their output voltage falls to a fraction of its nominal value while these currents are flowing.
As we have seen, the charging and polarisation currents reduce with time and, as they do so, the output voltage of the tester will rise. Unfortunately, not all testers are created equal. For those with a good load curve, the output voltage rises quickly as the current decreases and soon reaches a plateau close to the nominal voltage rating of the instrument.
For instruments with a poor load curve, the rate at which the voltage rises as the current decreases is much slower, and the start of the plateau region is poorly defined. In many cases, this means that the desired test voltage is never achieved.
As a result, users of such instruments may be led to believe that the test has reached a steady-state condition long before the voltage has actually stabilised. The test may, therefore, indicate acceptable insulation performance when, in fact, that performance has only been checked at a voltage much lower than the one intended.
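The effect can be illustrated with a simple calculation. The nominal voltage and effective source impedance below are invented for the example; a tester with a poorer load curve behaves like one with a higher effective source impedance.

    # Why a high-impedance source sags while charging current flows (illustrative).
    v_nominal = 5000.0        # nominal test voltage, volts (assumed)
    source_impedance = 1e6    # effective source impedance, ohms (assumed)

    for load_ua in (500.0, 50.0, 5.0, 0.5):   # charging current decaying with time
        v_out = v_nominal - (load_ua * 1e-6) * source_impedance
        print(f"load {load_ua:6.1f} uA -> output ~{v_out:6.0f} V")

As the charging current decays, the output climbs back towards the nominal 5kV; with a higher source impedance the sag is deeper and the plateau arrives later, if at all.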
Maximum value
When selecting a high-voltage insulation tester, it is, therefore, beneficial to examine the instrument’s load curve, but this is by no means the only consideration. Another important factor, particularly if diagnostic testing is intended, is the maximum value of insulation resistance that can be measured.
This is often a source of confusion, as many insulation testers will give a “greater than” indication when their measurement range is exceeded. Bear in mind, however, that “greater than” is not a measurement; it’s merely an indication that the result is beyond the measuring range of the instrument. In go/no-go testing, it may be sufficient to know that the insulation resistance is greater than 1 gigohm (1,000 megohm), but the situation is very different with diagnostic testing.
Consider, for example, an item of equipment where the insulation resistance has been recorded over a period of years as relatively steady at around 100 gigohm. A new measurement, however, indicates that this has fallen to 40 gigohm and a further measurement taken a period of time later shows a fall to 10 gigohm.
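Trend records of this kind lend themselves to a very simple automated check. The readings and the 50% threshold below are illustrative.

    # Flag any test where insulation resistance has fallen to less than half
    # its previous value. Readings (gigohms) and threshold are illustrative.
    readings = [100.0, 98.0, 101.0, 40.0, 10.0]

    for previous, current in zip(readings, readings[1:]):
        if current < 0.5 * previous:
            print(f"WARNING: fell from {previous:.0f} to {current:.0f} gigohm - investigate")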
Potential problems
Clearly, these changes show that there is a potential problem that needs investigation before it can develop into a serious fault. If the same tests had been carried out with an instrument that simply indicated infinity for all resistance values above 1 gigohm, no change at all would have been detected and the insulation would have been given a clean bill of health.
This example illustrates not only the importance of choosing instruments with extended measuring ranges but also of performing insulation tests on a regular basis and recording the results. The database so produced is a powerful tool for initiating preventative maintenance and eliminating the cost and inconvenience of breakdowns.
In this respect, it is worth noting that many of the latest insulation testers have internal facilities for storing test results. These can then be downloaded to a PC for analysis and archiving, a procedure that not only saves time but also eliminates the risk of transcription errors.
One final factor that should be considered when choosing a high-voltage insulation tester is whether a selectable breakdown detector is needed.
With such a detector enabled, the test can be terminated immediately, before a breakdown of insulation and possible damage. If, however, it is an advantage to allow a breakdown to occur – for example, to assist in the location of the weakness in the insulation – the ability to disable the detector is an advantage, as is an instrument with a good short-circuit current capacity.
High-voltage insulation testing, particularly when it is carried out on a regular basis, is an invaluable diagnostic tool and an important aid to predicting and preventing equipment failures.
Maximum benefits can only be achieved, however, if the insulation tester is well chosen. Essentials are a good load curve, an extended measuring range and, for true diagnostic capability, a full battery of pre-programmed tests, including polarisation index, dielectric absorption ratio, step voltage and dielectric discharge. Instruments, like those in the Megger range, which meet all of these requirements, will quickly repay their comparatively modest purchase price.
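For completeness, two of the diagnostic tests mentioned – polarisation index (PI) and dielectric absorption ratio (DAR) – are simple quotients of timed resistance readings. The sample readings below are invented, and the pass guidelines quoted are common rules of thumb rather than limits from any particular standard.

    # Polarisation index and dielectric absorption ratio from timed readings.
    # Sample readings (gigohms) are illustrative.
    r_30s, r_60s, r_600s = 2.0, 3.0, 9.0

    pi = r_600s / r_60s    # 10-minute reading over 1-minute reading
    dar = r_60s / r_30s    # 60-second reading over 30-second reading

    print(f"PI  = {pi:.2f}  (above ~2 is generally taken as healthy)")
    print(f"DAR = {dar:.2f}  (above ~1.25 is generally taken as acceptable)")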