Features

High availability is one of the most important issues in computing today. Understanding how to achieve the highest possible availability of systems has been a critical issue in mainframe computing for many years, and it is now just as important for IT and networking managers in distributed processing environments.

A certain amount of mystery surrounds the topic of power availability, but consideration of just a few important points leads to a metric IT managers can use to increase the availability of their systems and applications and to make a rational price/performance purchase decision.
The importance of high systems availability
Availability is a measure of how much time per year a system is up and available. Usually, companies measure application availability because this is a direct measure of their employees' productivity. With critical applications, or parts of critical applications, physically distributed throughout the enterprise, and even to customer and supplier locations, IT managers need to take the necessary steps to achieve high applications availability throughout the enterprise.
Power availability is the largest single component of systems availability and is a measure of how much time per year a computer system has acceptable power. Without power, the system, and most likely the application, will not work. Since power problems are the largest single cause of computer downtime, increasing power availability is the most effective way for IT managers to increase their overall systems availability. Power availability, like both systems and applications availability, has two components: mean time between failures (MTBF) and mean time to repair (MTTR). The two most important issues in increasing power availability are therefore increasing the MTBF and decreasing the MTTR of the power protection system.
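As a rough illustration of how these two figures combine (using the standard reliability formula and assumed example figures, not numbers taken from this article), steady-state availability is MTBF divided by MTBF plus MTTR:

def availability(mtbf_hours, mttr_hours):
    # Fraction of time the system is up: MTBF / (MTBF + MTTR)
    return mtbf_hours / (mtbf_hours + mttr_hours)

a = availability(200_000, 24)        # assumed figures: 200,000 h MTBF, 24 h MTTR
print(f"{a:.5%}")                    # ~99.988% uptime
print(f"{(1 - a) * 8760:.1f} h/yr")  # ~1.1 hours of downtime per year

Even a high MTBF therefore translates into some annual downtime unless MTTR is also attacked, which is the argument developed below.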
Increasing MTBF
MTBF is the average number of hours between failures of the power protection system. The MTBF of the system can be increased in two ways: by increasing the reliability of every component in the system, or by ensuring that the system remains available even during the failure of an individual component. There is a finite limit to how reliable individual components can be made, even at increased cost. Today, typical power protection systems that rely only on high component reliability achieve an MTBF of between 50,000 and 200,000 hours.
By adding a level of redundancy to the system it is possible to achieve a three- to six-fold improvement in MTBF for power protection devices. Redundancy means a single component of a power protection system can fail and the overall system will remain available and protect the critical load.
Of course, component reliability is a requirement of any system. However, Fig. 1 shows the diminishing returns of increasing component reliability. Line 1 shows the plateau that occurs when MTBF is increased by using more reliable (and therefore more costly) components. Line 2 shows how redundancy, in addition to component reliability, can raise MTBF to the next plateau.
Decreasing MTTR
One way that systems downtime can occur is when both the power protection system and the utility power fail. A shorter MTTR decreases the risk that both of these events will occur at the same time. By driving the MTTR towards zero, it is possible to essentially eliminate this failure mode.
Adding hot-swappability to a power protection system is the most effective way of decreasing MTTR. Hot-swappability means that if a single component fails, it can be removed and replaced by the user while the system is up and running. When hot-swappability is used in conjunction with a redundant system, MTTR is driven close to zero, since the device is repaired when there is a component failure but before there is a systems failure.
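The combined effect of redundancy and rapid repair can be sketched with a textbook reliability approximation (an assumption for illustration, not a figure from the article): for a duplicated, repairable system, mean time to system failure is roughly MTBF squared divided by twice the MTTR, because both units must fail within the same repair window.

def redundant_pair_mtbf(unit_mtbf, mttr):
    # Both units must be down at once, so system MTBF ~ MTBF^2 / (2 * MTTR)
    return unit_mtbf ** 2 / (2 * mttr)

unit = 200_000                               # hours, assumed unit MTBF
print(redundant_pair_mtbf(unit, mttr=24))    # ~8.3e8 hours with a 24-hour repair
print(redundant_pair_mtbf(unit, mttr=2))     # ~1.0e10 hours with a 2-hour hot-swap

Under these assumptions, shrinking MTTR through hot-swap repair multiplies system MTBF many times over, which is why the two measures work best together.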
The Power Availability (PA) Chart
The relationship between power availability, redundancy, and hot-swappability is easily explained by using the PA Chart, which categorises power protection systems in quadrants according to how well they meet the requirements of high power availability – redundancy and hot-swappability. As more components in a system become hot-swappable, the system moves from the bottom to the top of the graph (Fig. 2), and as more components become redundant, it moves from the left to the right of the graph. IT managers can choose the solution that is right for them, depending on the need for high availability and the amount of money they want to spend.
The PA Chart corresponds to the types of power protection systems available today as shown in Fig. 3. The standalone UPS is neither hot swappable nor redundant. As shown in the table, a standalone UPS provides normal power availability because uptime is dependent on the reliability of the UPS itself.
The fault tolerant UPS is sometimes described as providing affordable redundancy. Systems of this type have redundant components but not all of the major components are hot-swappable. This type of system offers high power availability because the power protection system will continue to protect the load when a component fails. But because a failed component often means the entire UPS must be replaced, this type of system can have serious drawbacks, including expensive and time-consuming repair, systems downtime and major inconvenience for IT managers. Fault tolerant UPS systems may have some hot-swappable components, such as batteries and a subset of the power electronics, but in most cases many critical components, such as the processor electronics, will not be hot-swappable. The more components that are not hot-swappable, the lower the power availability.
Like fault-tolerant UPS, modular UPS offer high power availability. Modular UPS have multiple hot-swappable components and are typically used for multiple servers and critical applications equipment. Many modular UPS also have redundant batteries. Their main advantage over fault-tolerant UPS is that all of the main components which can potentially fail can be hot-swapped, eliminating planned downtime due to a service call.
Highest levels available
The PowerWAVE range of modular UPS offers the highest level of power protection currently available in the UPS market. In a PowerWAVE modular UPS the power electronics, batteries, and processor electronics are both redundant and hot-swappable. This system provides very high power availability and the highest level of protection for IT managers’ critical loads. A PowerWAVE modular UPS may cost a little more than a similarly-rated standalone UPS, but the increased system reliability and availability are invaluable to the IT manager.
The different types of power protection systems in the PA Chart can be measured linearly with the PA Index, according to how much power availability they provide. The PA Index serves as a tool to explain the difference between power protection systems. Fig. 4 shows each of the quadrants from the PA Chart mapped into a level of the PA Index. Fig. 5 shows the relative power availability provided by each type of system. The PA Index maps directly into the PA Chart and makes the different characteristics of high availability power protection systems clear.
In conclusion, IT managers can use the PA Chart and the PA Index to help them choose the right power protection system for their high availability applications. The standalone UPS, the modular UPS, and the PowerWAVE 9000 Series modular UPS all offer real benefits in terms of power availability versus cost. Although fault-tolerant UPS offer high power availability – and are marketed as such – they introduce serious drawbacks including a high MTTR and potentially significant inconveniences for IT managers.

Ensuring worker safety in and around industrial processes is a vital consideration for manufacturers and OEMs. Balancing the needs of safety with commercial considerations becomes ever more complex as safety standards evolve and new technologies become available. But, as Paul Davies of Rockwell Automation explains, by understanding the principles underpinning an effective safety strategy, designers can ensure the needs of both are satisfied.

Any safety programme should start with a thorough risk assessment that will help identify the areas of risk within a facility or machine, and point to the right technology to reduce that risk. Rather than aiming to remove risk altogether, a risk assessment aims to establish acceptable levels of risk. This analysis proves invaluable in helping to identify the kind of safety products that might be required in any particular application to achieve the most effective – and practical – solution. In a manufacturing environment, the assessment process can help to chart a course for an effective machine-guarding strategy, itself forming part of an overall safety strategy designed to protect the company’s investment in both machinery and personnel.
Design-out potential hazards
The best way to reduce a potential hazard is to remove it at the design stage. A careful review of the risk assessment and risk reduction at the earliest stages of design can highlight potential trouble spots, such as pinch-points or sharp edges, helping companies take the necessary steps to design-out these features long before they require guarding. The removal of risk areas in this way can result in more efficient machines since, with fewer potentially hazardous areas, there is less risk of unplanned stoppages occurring.
Consider the options for machine guarding
Where a hazard cannot be removed entirely through design, reducing risk by physically guarding the hazardous area is the next best option. There is a huge range of machine guarding systems and components available, including safety mats and safety interlock switches, that can be used to protect workers around specific areas of a machine or industrial process. Devices such as light curtains can be used to guard areas – enabling exclusion zones to be created for maximum worker protection. Systems frequently combine elements of both to achieve the most effective solution.
As part of the analysis of the most effective strategy to adopt, careful consideration must be given to how frequently a machine or process will need to be accessed. This analysis will help refine the list of possible machine guarding solutions, allowing designers to arrive at a strategy that balances the commercial needs of the operation with the need to ensure risk levels are reduced to an acceptable level. Naturally, it’s also important to ensure that the solution chosen doesn’t itself cause another hazard!
Add advanced controls
As well as applying the appropriate machine guarding devices, engineered solutions can be implemented to further reduce potential risks. Electromechanical safety relays have formed the backbone of safety control design for many years. Today’s devices offer a wealth of advanced features that allow sophisticated safety schemes to be implemented without adding unnecessary expense or complexity. Even more advanced protection can be provided by safety-rated controllers. Using these dedicated safety control architectures, extremely sophisticated solutions can be developed, employing the full range of inputs (such as light curtains, E-stop buttons and safety mats) and outputs (such as guard-locking solenoids and alarms). Clever design, such as the manual release function found on high-end safety interlocks, can enhance safety functionality still further at very little extra cost.
Promote awareness
Encouraging safety awareness helps reduce levels of potential risk in any workplace, but particularly so in industry. Effective signage and the use of visual/audible warnings can all help reduce the risk of accidents. Careful thought should be given to positioning, to ensure that signage and warning devices are placed where they will best serve their intended purpose. Consideration must also be given to which products would be most appropriate in each given circumstance. For example, an audible alarm would need to be clearly distinguishable above the normal operational noise of a machine or process. Once again, a comprehensive range of warning beacons and audible alarms are available on the market, enabling the designer to choose the most appropriate device for use in each application.
Training
Providing effective training that allows workers to understand the hazards likely to be encountered in the workplace and how to reduce the potential risk is the cornerstone of any safety strategy. The majority of workplace accidents are caused through ignorance and/or failure to follow correct safety procedures. While it is the company’s responsibility to provide such training and equipment as is necessary to reduce risk, it is the employee’s responsibility to ensure that this equipment is used and these procedures are applied in the workplace. While an important element in any safety training programme is to ensure that all employees understand that safety is everybody’s responsibility, choosing safety products which incorporate tamper-resistant features also helps to ensure the overall integrity of the safety strategy.
Follow-up assessments
After installing physical safeguards and establishing safety procedures, it is vital that follow-up assessments are made to ensure that risk has been reduced to an acceptable level. It is also vital that periodic assessments are made to ensure that these measures remain effective. But as well as confirming the effectiveness of the safety measures adopted, such assessments should be made on a commercial basis too: the twin aims of any safety strategy should be safety and productivity, and these two aims are not mutually exclusive. A careful follow-up assessment might reveal ways in which a process could be made more efficient without compromising safety levels. New products and technologies, such as the Safe-Off facility in Allen-Bradley PowerFlex drives, could offer just such an opportunity by delivering both enhanced safety and improved efficiency.
Rely on experience
When embarking on any safety programme, the single most valuable asset a company can have is an experienced partner, well versed in both current legislation and the latest safety techniques and technologies. When choosing their safety partner, designers should consider carefully not just the ability to supply products, but also the expertise available to be able to understand the issues and to make the right recommendations to balance safety effectiveness and cost effectiveness.

The visible damage caused by lightning can be spectacular, but the damage caused to sensitive electronic systems can have a far more profound effect on operations and profits, says Andy Malinski of Omega Red Group

Most modern companies place heavy reliance on the uninterrupted functioning of electrical systems used to power everything from sophisticated IT networks and telecoms to lighting and heating systems. Yet many buildings, including those only a couple of decades old, were simply not designed with surge protection in mind even though a strong surge can completely disable the electrical system in place. Protecting electronic equipment from the consequences of surge or lightning activity is essential and not just because of the immediate disruption it can cause.
Big increases in insurance claims for surge-related damage have led some insurance companies to increase premiums for companies heavily reliant on electronic technology, and to impose exclusions of cover until the problem is addressed. Correctly following BS 6651:1999 Annex C – general advice on protection against lightning of electronic equipment within or on structures – should ensure that any site has adequate surge protection and will help to secure the relevant insurance for the site.

The problem
During lightning activity and switching operations, transient surge voltages are generated. Surges are short-duration voltage spikes appearing on a mains power or low voltage signal line, such as a computer or telephone line.
The amount of energy contained within a surge depends on the magnitude and duration of the event, but values of up to several thousand volts, lasting for microseconds, can be generated. The two main causes of transient over-voltages are:
• Switching. This results from an electrical load being switched on or off, with typical loads including motors, transformers, welders and photocopiers. These surges happen many times every day, are short in duration and low in magnitude, and do not usually cause major problems.
• Lightning and atmospheric disturbances. Surges generated by lightning activity tend to be of a much higher level and are consequently much more dangerous. They can be generated either by direct or nearby (up to 1.5km away) lightning strikes, though most damage tends to result from nearby strikes, as the discharge current is more concentrated.

How does it get in?
Copper conductors used for electrical, mains, data, computer and telephone wiring are prime routes for transient surges to enter buildings, and there are three main ways in which these transient voltages affect wiring systems.
• Resistive coupling – a ‘cloud to ground’ lightning strike injects a massive current into the ground. This raises the ground potential in the area of impact to a high level and, for the current to dissipate, it will seek the path of least resistance to earth. Cables running between buildings are usually connected to different earthing systems at each end, and a cable connected to an earth of a lower value forms an ideal route for the induced current to follow.
• Inductive coupling – a lightning discharge causes a huge current to flow, which in turn sets up a massive magnetic field. Any conductor passing through this magnetic field will have a surge voltage induced on it – the same principle as used by a transformer – and it can happen either above or below ground (see the rough calculation after this list).
• Capacitive coupling – atmospheric disturbance causes high voltages to be generated. A low voltage conductor in the area of influence of these voltages can be charged to the same voltage – the same effect as charging a capacitor.
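To give a rough sense of scale for inductive coupling, using assumed textbook values rather than figures from the article, the induced voltage is the mutual inductance multiplied by the rate of change of current:

M_per_metre = 1e-6    # assumed mutual inductance, ~1 microhenry per metre of run
di_dt = 10e3 / 1e-6   # assumed stroke: 10kA rising in one microsecond

v_per_metre = M_per_metre * di_dt
print(f"{v_per_metre:,.0f} V induced per metre of cable")  # ~10,000 V/m

Under these assumptions, even a short cable run parallel to a lightning current path can pick up a damaging transient.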
Dealing with a surge
The British Standard addresses lightning protection for both external structural strikes and internal surges.
Appendix C of the standard looks at internal protection of electrical and electronic systems. More recently, the new European standard EN 62305-4, Electrical and Electronic Systems, covering surge protection, has been released to work in tandem with the British standard, but within a few years the national standard will be withdrawn in favour of the new European standard.
For both external and internal protection the first step is to undertake a risk assessment. This is a comprehensive, complex assessment as many factors need to be considered. Internal surge protection takes into account all factors affecting the electronic equipment and systems within the building.
A risk assessment looks at many factors including the number, length and types of cables entering and leaving the building, equipment types, exposure and risk levels, recommended levels of protection, and cable routing, amongst others.
Results taken from the assessment will determine if protection is required and the correct protection methods to use. This can take the form of surge voltage protection devices, the repositioning of cabling, a more effective earthing system or by other means.
Zones of protection are defined, with the corresponding levels of protection required, and co-ordinated protection devices are fitted at zone interfaces. By using a co-ordinated system the high surge current present at the outer zone will be dissipated and the attenuated surge then handled by devices fitted at subsequent zone interfaces – limiting possible damage.

Surge protection in practice
All electronic equipment has a transient safety level, a maximum surge voltage value that can be applied to the equipment without causing damage.
The protection device must reduce the surge voltage to below this value with any excess voltage shunted to earth in the quickest time possible. The let-through voltage of the protection device needs to be as near as possible to the nominal voltage of the line being protected.

Mains power protection
When applying surge protection to a site, the first system to protect is the mains power, as the large diameter of the cable will allow surges to pass into the system with minimal attenuation. Of greater concern, the mains is common to all other systems, and a surge entering via this route will quickly spread into them.
Protection at the main Low Voltage (LV) incoming supply is necessary to control large transients before they enter the distribution system.
Additionally, protection should also be installed locally to important equipment or sub-distribution boards feeding outside equipment.
This is to guard against both internally and externally generated transients, which may be injected back into the distribution system.
Low voltage telephone signal protection devices fit in series with the cables being protected. On older termination systems the Surge Protection Device (SPD) is wired directly into the circuit.
Plug-in protection devices are available for LSA Plus (registered trade mark of Krone) termination strips; these come in either single or ten-way configurations.

Building Management System
Typically, building management systems (BMS) consist of a network of separate stand-alone slave controller panels.
Across the site, an RS485 backbone cable interconnects all the panels, with a drop cable making the network connection to each panel.
On most systems the drop cable is a single twin twisted pair using the RS485 protocol. It is recommended to fit a surge protection device in series with each drop cable, as near to the interface card as possible.
The input/output (I/O) from each slave controller is usually internal to the building being monitored and controlled. Any I/O external to the building requires an SPD to be fitted.

Fire and Intruder Alarm Systems
These systems tend to be wired using the same type of backbone network as for the BMS.
The main difference is that they will have more I/O cabling connected to outside sensors and alarms. SPDs are required on any cable connected to an outside device.
The positioning and routing of the cables either side of the SPD is of the utmost importance. Incoming and outgoing cables either touching, or closely running in parallel with one another, can cause surge voltages to be induced ‘across’ cables.

Closed Circuit Television (CCTV)
The two most popular types of camera are:
• Pan, Tilt and Zoom (PTZ) cameras
• Fixed cameras
Pan, Tilt and Zoom (PTZ) cameras are controlled from a central location, with the vertical and horizontal positioning of the camera handled via signals sent over the RS485 data loop. As the central processor consists of expensive monitoring and control equipment, SPDs would usually be fitted at this end of the system. It is also possible to protect the camera itself by fitting surge protection devices within it.
The level of protection required depends on the vulnerability of the camera. Is it located within the zone of protection of the building? Does the building have a lightning protection system fitted? Is the mounting pole correctly earthed?
Fixed cameras would usually be monitored from the same central point, the main difference being that motion control of the camera is not possible. Protection requirements and the methods of achieving them are the same as for PTZ cameras.
The initial surge assessment will identify which ends of each system require protection.
As a surge can travel both ways in a cable it is important to protect both ends of the system if necessary.

Professional Contractors
Of course, none of the above holds true unless a competent contractor is used to survey, design, specify, install and maintain the surge protection system.
If a surge protection system is to work effectively to prevent equipment failures then a number of other factors need to be taken into account in addition to the technical expertise that is required.
Look for evidence of a proven track record, particularly for major or technically challenging projects.
There are a lot of suppliers in the market but the best will be able to provide customer references attesting to the professionalism of their work and thoroughness of approach.
Make sure the organisation has the resources needed to do your job – can they handle large scale, multi-site operations across the country using their own staff, or will the job be largely subcontracted to organisations that may not have the same quality standards or professionally trained staff?
By combining technical and service factors, the maximum level of protection from induced surges and over-voltages is delivered to the electronic systems within the building. This protects productivity and, ultimately, profits.

While attention has been paid to the Climate Change Levy and the tax allowances for fitting energy saving equipment such as variable speed drives, Richard Walley of Schneider Electric Building Systems and Solutions argues a strong case for examining power factor correction first when looking to cut electricity consumption

Energy consumption was brought into focus for industry when the Climate Change Levy (CCL) was introduced. Essentially, the CCL is an additional tax on energy usage by industry and commerce, but the UK Government tried to soften the blow by introducing Enhanced Capital Allowances for capital investment in certain energy saving technologies. What this means is that the full value of the installation can be offset against income tax in the first year. Although this does provide a small amount of relief for such investments, the allowable equipment only extends to the likes of efficient boiler systems and variable speed drives used to control electric motors. In practice, the CCL has not had as great an impact as first imagined, and neither has the ECA had anything like the take-up expected.
Why power factor correction equipment was not considered in the tax allowances introduced along with the Climate Change Levy is a mystery, since its primary function is to reduce energy losses. By adopting power factor correction measures it is possible to substantially reduce the current taken from the electricity supply. There are major kW losses on the network, defined by the I²R law, whereby the square of the current is multiplied by the resistance of the cables, transformers and overhead lines that form the national electricity distribution system. These distribution losses vary between Network Distribution Operators (NDOs) and are typically quoted as high as 11%, and up to 19% in one instance. These figures do not include the additional losses occurring on the National Grid system.
The national system electrical energy losses represent an enormous CO2 component emitted from the generation of the wasted power. Although these losses cannot be completely eliminated, there is a very strong argument that their reduction should be encouraged.
The effect of power factor correction on system losses can be dramatic. In one case, a 6MW supply with a power factor of 0.68 improved to 0.95 once correction equipment was installed. This improvement of power factor reduced the system losses, for which this user is responsible, by 46%.
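A back-of-envelope check of that case study, assuming a constant kW load so that line current scales with 1/pf and I²R losses with the square of that, comes out close to the quoted figure:

def loss_reduction(pf_before, pf_after):
    # I^2R losses scale with the square of current, and current with 1/pf
    return 1 - (pf_before / pf_after) ** 2

print(f"{loss_reduction(0.68, 0.95):.0%}")  # ~49%, close to the quoted 46%

The exact figure depends on the load profile over time, but the square-law effect is why a modest power factor improvement yields a large loss reduction.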
Apart from the CCL, most of the NDOs apply a penalty for poor power factor – either in the form of a reactive energy charge and/or a supply capacity charge based upon kVA. These charges are part of the ‘Use of System’ charges and therefore depend not on the energy supplier but on the host network operator.
There might be no special tax relief provided for installing power factor correction equipment, but it remains one of the best ways to reduce both electricity costs and the resultant pollution caused by the generation of subsequently wasted power. To compensate for the increases in energy charges both from the CCL and from general utility price increases, as well as the reactive energy and supply capacity penalties already imposed, users should first examine their power factor. This is the area where real energy cost savings can be made without switching anything off or disturbing production. It is also one of the measures that will benefit the environment.
The energy supplied to industrial consumers is divided into two components: kilowatts (kW), the energy used to perform work; and kVAR, the reactive energy used to energise magnetic fields. A combination of both types of energy is taken from the supply network and both contribute to system losses. This combined power taken from the system is referred to as kVA (kilovolt-amperes). Power factor is defined as the ratio of useful power (in kW) to the total power taken from the system (in kVA). Inductive-reactive energy is negated by the installation of appropriately sized power factor correction equipment.
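Sizing that correction equipment follows from the same relationships. As a minimal sketch using the standard power-triangle method (assumed here, not taken from the article), the capacitor bank must supply the difference in reactive power between the existing and the target power factor:

import math

def capacitor_kvar(kw, pf_before, pf_after):
    # Qc = P * (tan(phi1) - tan(phi2)), where phi = arccos(power factor)
    phi1, phi2 = math.acos(pf_before), math.acos(pf_after)
    return kw * (math.tan(phi1) - math.tan(phi2))

# Example with assumed figures: a 4,000kW load corrected from 0.68 to 0.95
print(f"{capacitor_kvar(4000, 0.68, 0.95):,.0f} kVAR")  # ~3,000 kVAR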

Misunderstood
Power factor is one of the most misunderstood areas of electrical engineering, yet it is really very simple. Plant and equipment most likely to contribute to poor power factor are those requiring the creation of a magnetic field to operate, such as electric motors, induction heaters and fluorescent lighting. All these types of devices draw current that is said to lag behind the voltage, thus producing a lagging power factor.
Capacitors, used in most power factor correction equipment, draw current that is said to lead the voltage – hence producing a leading power factor. If capacitors are connected to a circuit that operates at a nominally lagging power factor, the extent that the circuit lags is reduced proportionately. Circuits having no resultant leading or lagging component are described as operating at unity power factor and therefore the total energy used is equal to the useful energy.
So, let us consider the effect of reactive energy on the system. Reactive energy substantially increases the energy losses on the local and national supply networks, including the users’ own installation. This increased loss also applies to the users’ own transformers if they are high voltage consumers. Reactive energy also has the undesirable effect of reducing the capacity of the network and transformers.
From the environmental point of view – and remember, that is what has driven the Climate Change Levy – the additional losses and the provision of the reactive energy itself, require an unnecessary increase in output from the power stations. This results in higher carbon dioxide (CO2) emissions.

Increased costs
The inefficient use of energy ultimately means increased costs for everyone. Many consumers already have power factor correction equipment installed, but some of this inevitably does not function correctly. Now that reactive charges apply, it is worthwhile getting existing equipment checked, maintained and tested to ensure it is adequately sized to meet the penalty levels now being imposed.
The benefits of installing power factor correction equipment, irrespective of the lack of Enhanced Capital Allowances (ECAs), are very clear. Electricity costs are reduced, sometimes by thousands of pounds each year. Reduced power system losses mean a reduction in the emission of greenhouse gases and in the depletion of fossil fuels in the case of coal-fired stations. The reduced electrical burden on cables and electrical components leads to increased service life. Finally, by using power factor correction equipment, additional capacity is created in the users’ systems for other loads to be connected.
In short, despite the seeming short-sightedness of the Climate Change Levy and the limitations of the provision of ECAs, the installation of power factor correction equipment can bring users bigger cash savings in the short, medium and long term. The environment will benefit too.

Think of a primary substation and you’ll probably envisage a vast outdoor compound with massive transformers connected to overhead powerlines and bank after bank of circuit breakers and disconnectors. This is not always the case, says Stephen Trotter, ABB’s director of power systems projects for the UK

Traditional substations have provided excellent service over many years, and many are still being constructed today. However, when it comes to planning substations in urban areas there is an ever increasing demand from utility customers to minimise the space required, not just because of the cost and availability of land, but also to reduce their visual impact on the local environment. New high voltage technology offers the ideal solution in the form of gas insulated switchgear (GIS) which enables substations to be ‘shrunk’ into about 20 per cent of the space required by a traditional design, and housed indoors or even buried underground.

GIS advantages
Until the 1970s, air insulated switchgear (AIS) was the type most commonly used for substations. AIS requires large distances between earth and phase conductors, and therefore a good deal of space. This means that for higher voltages – typically above 36kV – this type of installation is only feasible outdoors.
The situation changed when SF6 (sulphur hexafluoride) became available as an insulating medium in switchgear enclosures, allowing phase-to-earth distances to be reduced. The advantages of GIS compared to AIS are as follows:
• Lower space requirements, especially in congested city areas, saving on land costs and civil works
• Low visibility buildings can be designed to blend in with local surroundings
• Less sensitivity to pollution, as well as salt, sand or even large amounts of snow
• Increased availability and reduced maintenance costs
• Higher personnel safety due to enclosed high voltage equipment and insignificant electromagnetic (EM) fields.
A direct comparison of the component investment for identical switchgear configurations will suggest that the GIS variant is more costly than the AIS solution. However, this does not tell the whole story. The capability to install a GIS substation on a significantly smaller site – typically up to 80 per cent smaller – enables it to be located close to the load centres, providing a far more efficient network structure at both the HV (high voltage) and MV (medium voltage) levels. As a result, both the investment and operating costs are reduced.
Sites large enough for new AIS substations are seldom available, and when they are their cost is usually extremely high. But it is not just the smaller size of the site that can make GIS the lower-cost option: GIS is also the more economic alternative when expanding or replacing existing substations. An inner city site that has been used previously for an AIS installation could be sold or rented out and the income used to finance the new substation. The compact nature of GIS enables an HV transformer substation to be fully integrated in an existing building, which may only have to be increased in height or have a basement added.

Port Ham shrinks from view
Central Networks’ £12 million replacement Port Ham switching station, which has recently been constructed by an ABB and Balfour Beatty consortium on the banks of the River Severn, near Gloucester, provides an ideal example of the advantages of the GIS approach.
Port Ham is a grid supply point. It takes electricity at 132kV from the National Grid substation, a few miles away at Walham, and feeds it into the Central Networks distribution network. Through a network of primary and secondary substations, this network feeds over 240,000 customers in Gloucestershire, Herefordshire and much of south and east Worcestershire.
The original outdoor station, built in the early 1950s, had experienced above average load growth, to a peak load of 672MVA. The AIS equipment had reached the end of its useful life, so in 2002 Central Networks decided to completely rebuild the facility to ensure continued reliability of supply, as well as providing scope for further load growth.
Initially, the project was tendered in the expectation that the AIS would be replaced on a like-for-like basis. However, in consultation with the ABB and Balfour Beatty consortium, Central Networks decided that building a new indoor GIS switching station would offer a number of important advantages, at around the same overall cost. A key benefit was that ABB’s state-of-the-art compact ELK-04 GIS switchgear could be condensed into just one-fifth of the space used by the existing station. Port Ham is in an important nature conservation area, so the smaller switchgear allowed Central Networks to meet planning concerns by housing the station in a low-profile building designed to blend in with the local environment.
In addition to saving space, GIS also offered two further advantages. Firstly, circuit downtime could be reduced, as the new GIS circuits were constructed with the existing units still in service. Downtime was limited to the rerouting of the network connections. This was a crucial factor, because of the critical position of Port Ham in the supply network. Secondly, the GIS was constructed outside the existing live compound, considerably reducing health and safety risks to personnel working on site.
One of the major project challenges was the soft ground – on the flood plain of the River Severn – which required major foundation work before construction could begin. In just over 10 days some 120 cast concrete piles were driven down 15 metres to the bedrock. The building itself has been raised on stilts to ensure the switchgear is at least one metre above the predicted once-in-100-years flood level.
The new indoor switching station comprises 20 bays of GIS switchgear: 12 feeder circuits; four National Grid incomers; two bus couplers; and two bus sections. The size of the investment and the strategic importance of Port Ham made it a flagship project for Central Networks.

NEDL’s Norton substation
A similar approach was adopted when NEDL needed to replace its 132kV substation at Norton, near Stockton on Tees, that interconnects the National Grid and NEDL’s distribution network.
The new indoor GIS substation, completed in 2005, occupies just one sixth of the space of the old AIS substation. It is rated at 540MVA, and features 20 bays of switchgear (four of which have been transferred to National Grid) with four incoming circuits fed by Supergrid transformers and 14 outgoing circuits, two of which feed local grid transformers.

Going underground
The GIS switchgear concept has been taken to its logical conclusion in ABB’s Barbana 132kV/20kV transformer substation in the centre of Orense, Spain. The 132kV switchyard, comprising two cable feeder bays and one transformer bay, has been constructed entirely underground and concealed beneath a park. This design requires forced cooling, which inevitably entails unwanted fan noise. But damping features or low-noise fans can be expensive. Instead a waterfall has been created. This acts as a heat exchanger to dissipate the heat created by the transformer while the sound of the falling water also drowns out the noise from the fans.

Google may have made headlines when it stated energy costs outweigh server costs in its data centres, but a sobering thought, according to Rob Potts at APC, is that only a third of datacentre energy is actually used for computing – up to 70% may be taken up by power, cooling and inefficiency losses

It is estimated that, worldwide, datacentres consume some 40,000,000,000 kWh of electricity annually*. Because of the need to provide high levels of redundancy in order to maximise uptime and reduce downtime – the goal of most facility operators – a degree of electrical inefficiency in the sector seems to be an acceptable fact of life. However, by increasing electrical efficiency there is also an opportunity to reduce energy use and therefore operating expenses.

How efficient is your physical layer?
For any device or system, efficiency is simply defined as the fraction of its input (i.e. the fuel that makes it ‘go’) converted into the desired useful result – in this case, computing. If all datacentres were 100% efficient, then all power supplied to the data centre would be utilised by the IT equipment. However, energy is consumed by devices other than the IT load because of the practical requirements of keeping it properly housed, powered, cooled and protected. The devices that comprise network-critical physical infrastructure (NCPI) include those in series with the IT load (such as UPS and transformers) and those in parallel with the load (such as lighting and fans).
In simple terms, the more energy expended on computing rather than on non-IT devices, the more efficient the facility.
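Expressed as a sketch with assumed figures (the one-third ratio echoes the estimate quoted in the introduction):

def facility_efficiency(it_load_kw, total_input_kw):
    # Fraction of incoming power that actually reaches the IT load
    return it_load_kw / total_input_kw

print(f"{facility_efficiency(330, 1000):.0%}")  # 33%: roughly a third, as quoted above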
Can data centre efficiency be improved?
Virtually all of the electricity feeding a datacentre will end up as heat emission. From a facilities point of view, efficiency can be improved in a number of ways including:
• Improve the design of NCPI devices so that they consume less power
• Rightsize NCPI components to the IT load
• Develop new technologies which reduce the power consumed by non-IT devices
On the face of it, option two provides the most immediate solution to current data centre challenges. At the same time, better power efficiency of servers is being achieved through the introduction of multi-core processor architectures, and improved utilisation of the IT layer is being brought about through virtualisation.

Real world options
Before setting out to realise available power savings, some common misconceptions need to be corrected:
• Firstly, the efficiency of a facility is not a constant, for example air conditioning units and UPS are far less efficient at low loads (and, conversely, far more efficient at higher loads)
• Secondly, the typical IT load tends to be significantly less than the design capacity of the NCPI components (due, in part, to conservative ‘nameplate’ rating by IT manufacturers)
• Thirdly, the heat output of the power and cooling NCPI components themselves creates a significant energy burden for the whole system and should be included when analysing overall facility efficiency.
An additional factor affecting the efficiency of facilities is that the IT load itself is not constant but dynamic, both operationally and through inventory changes. For instance, as computing throughput increases, electrical consumption is also increased. Also, over the lifetime of facilities, IT inventory is in a constant state of flux as new generations of equipment replace old. Until recently, every increase in server performance has come complete with an increase in electrical demand.

Efficiency is dynamic
Finding an improved model for data centre efficiency depends on how accurately individual components are modelled. The use of a single efficiency value is inadequate for real data centres, as the efficiency of components such as the UPS is a function of the IT load: when the UPS operates with a light load, efficiency drops off substantially. The losses that occur along this efficiency curve fall into three categories: no-load loss, proportional loss, and square-law loss.
No-load losses can represent more than 40% of all losses in a UPS and are by far the largest opportunity for improving UPS efficiency. These losses are independent of load and result from the need to power components like transformers, capacitors, and communication cards.
Proportional losses increase as the load increases, as a larger amount of power must be ‘processed’ by the various components in the power path. As the load on the UPS increases, the electrical current running through its components increases. This causes losses that rise with the square of the current, sometimes referred to as ‘I-squared R’ or square-law losses. Square-law losses become significant (1 to 4%) at higher UPS loads.
The efficiency of a device can be effectively modelled using these three parameters, and a graphical output of efficiency can then be created for any component, as a function of load – understanding that typical datacentres operate well below their design capacity.
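A minimal sketch of that three-parameter model, using assumed loss coefficients rather than APC’s published figures, makes the load dependence concrete:

def ups_efficiency(load, no_load=0.04, proportional=0.05, square_law=0.01):
    # Losses as fractions of rated power: fixed + proportional + square-law terms
    losses = no_load + proportional * load + square_law * load ** 2
    return load / (load + losses)

for load in (0.1, 0.3, 0.5, 1.0):
    print(f"{load:.0%} load -> {ups_efficiency(load):.1%} efficient")
# 10% load -> ~69%; 100% load -> ~91%: efficiency collapses at light load

Because the no-load term is paid regardless of load, a lightly loaded UPS spends a far larger share of its input on losses.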

Effects of under-loading
If the efficiency of NCPI components such as UPS and cooling equipment decreases significantly at low loads, any analysis of data centre efficiency must properly represent load as a fraction of design capacity. It is a fact that in the average data centre, power and cooling equipment is routinely operated below rated capacity. There are four reasons for this:
• The data centre load is simply less than the system design capacity – in fact, research shows the average facility operates at 65% below its design value.
• Components have been purposely oversized to provide a safety margin – in order to provide high availability, ‘derating’ components by 10% to 20% is common design practice.
• Components operate with other similar components in an N+1 or 2N configuration to improve reliability or facilitate concurrent maintenance of hot components. However, such configurations have an impact on physical layer components, for example in a 2N system the loading on any single component is at best half of its design capacity.
• Components are oversized to handle load diversity, for example PDUs are routinely oversized between 30% and 100% in order to utilise capacity and overcome issues caused by imbalance between PDU loads.

Effects of power and cooling equipment
Heat generated by power and cooling equipment in the data centre is no different to heat generated by the IT load, and must also be removed by the cooling system. This creates additional work for the cooling system, causing it to be over sized, which in turn creates additional efficiency losses.

An improved model for datacentre efficiency
Armed with this knowledge it is possible to create an improved model and therefore make improved estimates of data centre efficiency. Using typical values for equipment losses, derating, load diversity, oversizing and redundancy, an efficiency curve can be developed.
Efficiency is dramatically decreased at lower loads where many data centres operate, e.g., if a facility only reaches 10% of its design capacity, only 10% of the power delivered to the data centre reaches the IT load. A staggering 90% is lost through inefficiencies in the NCPI layer.
Another way to look at this analysis is to consider its financial implications: at 30% capacity utilisation, over 70% of the total electricity cost is caused by NCPI inefficiencies in power and cooling equipment. The primary contributor to data centre electrical costs is the no-load loss of infrastructure components, which typically exceeds IT load power consumption. Many of the losses are avoidable, and analysis using the model can help identify and prioritise opportunities for increasing efficiency. Based on this, and the need to gain a quick return, the best solution is to right-size facilities using an adaptable and modular architecture.

*Figures quoted from White Paper #113, “Electrical Efficiency Modelling for Data Centres”.

The electrical industry was out in force to celebrate the Electrical Industry Awards and the NICEIC’s 50th Anniversary Gala Dinner at London’s Grosvenor House Hotel on 20 September. The evening brought together over 800 electrical engineers, manufacturers, wholesalers, contractors and industry association members, to celebrate the very best of the industry.
Radio 2 and BBC Newsnight presenter Jeremy Vine hosted a memorable evening, providing lively commentary as the winners stepped up to claim their well-deserved awards.
Feedback from those who attended was very positive. Jack McDavid from the HVCA wrote: “I enjoyed the experience very much indeed, and agreed wholeheartedly with the many fellow guests who pronounced the evening a success.”
The standard of entries was extremely high, but after much deliberation, the judges announced the following winners:

Commercial Electrical Contractor of the Year (sponsored by Schneider Electric)
Winner: Unidata

Best Electrical Health & Safety Initiative (sponsored by NICEIC Insurance Service)
Winner: The Dodd Group

Best Wholesaling Initiative (sponsored by Basec)
Winner: BDC

Customer Service in Wholesaling (sponsored by ABB)
Winner: WF Electrical

Test & Measurement Product of the Year (sponsored by Electrical Times)
Winner: Kew Technik

Best Lighting Initiative (sponsored by WF Electrical)
Winner: Thorn

Best Product Innovation (sponsored by Electrium)
Winner: Thorn

Wholesaler of the Year (sponsored by Unitrunk)
Winner: Edmundson Electrical

Energy Efficiency Product of the Year (sponsored by ABB)
Winner: Kirklees Metropolitan Council

Outstanding Communications in the Electrical Industry (sponsored by Yell.com)
Winner: Voltimum UK and Ireland

Best Registered Training Provider (sponsored by EDF Energy)
Winner: Electrical Test Services

Automation Project of the Year (sponsored by Electrical Review)
Winner: Schneider Electric

Power Product of the Year (sponsored by Amps)
Winner: Terasaki

Best Electrical Product of the Last 50 Years (sponsored by Professional Electrician Magazine)
This category asked voters to choose from one of five products that attracted the most requests for information from Professional Electrician Magazine. The winner was Super Rod for its cable rods.

Best Environmental Initiative of the Year (sponsored by Edmundson Electrical)
Winner: ABB

Best Practice in Energy Efficiency (sponsored by Kew Technik)
Winner: The Lowe Group

Domestic Electrical Contractor of the Year (sponsored by Rexel Senate)
Winner: Owen Bowness & Son

Electrical Skills for the Future (sponsored by Mr Electric)
Winner: Clarkson Evans

Best Customer Service Provider for Domestic Installations (sponsored by Domestic & General)
Winner: TBS Adaptations

Outstanding Contribution to Electrical Excellence (sponsored by Megger)
The winner of this special award was Peter Lawson-Smith for his dedication to promoting electrical safety.

Spotlight on Automation Project of the Year

Schneider Electric provided the winning entry in this category, with a £3m product packaging line for a leading pharmaceuticals manufacturer, relying on the expertise of the system designer PES Technology and Schneider Electric products.
The packaging line in question handles diagnostic treatments for individual patients, treatments with a useful life measured in hours. The flasks containing the treatments are despatched all over the world. Product must be at a hospital within 36 hours and used within a further 72 hours. If a production delay occurs, the customer must scrap each order in-process – a truly mission-critical system. Drives, servos, motion controls, sensors, automation controllers, HMIs and software from Telemecanique, a brand of Schneider Electric, were all integrated within the system.
A full review of the Electrical Industry Awards can be found in the Book of the Night Souvenir Issue accompanying this magazine.

Steve Landau from Philips Lumileds Lighting Company looks at the different approach needed by lighting engineers when working with high-power LEDs.

As the lighting and architectural communities continue to embrace high-power LEDs, there remains a challenge to recognise the new engineering paradigm that comes with solid-state lighting. Many lighting designers and engineers continue to consider the LED as the equivalent of a conventional light bulb, perhaps because LED manufacturers have failed to adequately educate the market about the different approach required to specify LED-based lighting products.
Anyone who works with LEDs must understand four key concepts:
• LED performance is not defined by wattage
• Maximising drive current will maximise light output
• Efficiency must be measured by the total system, not by light source
• Total usable light differs from part number to part number, from manufacturer to manufacturer, and from application to application
A Question of Wattage?
First, consider the wattage question. While the light output from conventional bulbs is commonly expressed in wattage (as well as lumens), it is misleading to use wattage as a yardstick when selecting an LED, because an LED’s actual wattage is affected by multiple factors. A conventional bulb is different: at 220 volts, a 60-watt light bulb will use 60 watts of power regardless of the socket it is attached to.
The power or watts that an LED uses is primarily dependent on the current (usually expressed in mA) that is applied to the LED and the voltage. The current will vary from application to application and the voltage will depend in part on the LED and in part on the source, such as a battery or an electronic driver. A single high-power LED, such as a Luxeon K2 that can be driven from 350mA to 1500mA, uses anywhere from one to seven watts of power in a given luminaire, depending on the drive current as well as other factors such as voltage, thermal conditions and the nature of the LED itself. Wattage is not defined by the LED but instead by the system. To describe an LED as a “one watt LED” only describes the LED given a very narrow set of parameters and does not indicate the light output of the LED. Failing to understand this can result in the wrong LEDs being used and lead to luminaire performance that does not meet expectations.
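As a simple illustration (the forward voltages below are assumed typical values, not datasheet figures), the power an LED draws is its forward voltage multiplied by its drive current:

def led_power(forward_voltage, current_ma):
    # P = Vf x I; the forward voltage itself rises with drive current
    return forward_voltage * current_ma / 1000  # watts

print(f"{led_power(3.4, 350):.1f} W at 350mA")    # ~1.2 W
print(f"{led_power(4.0, 1500):.1f} W at 1500mA")  # ~6.0 W

The same emitter can therefore legitimately be described as anything from a one-watt to a several-watt device, depending on how the system drives it.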
Increasing light output
Until recently, high-power LEDs typically could not be driven at more than 350mA without potentially reducing the effective life of the device. With many LED applications needing far more light output than is possible at this drive current, LED usage was constrained.
Even incremental gains in light output and efficacy at this level are not enough to meet the growing demands of architects and lighting designers for more available lumens for their applications.
The ability to get more light from fewer LEDs enables entirely new types of general lighting luminaires and applications.
It is this demand that has driven LED manufacturers to develop their technologies further to provide a higher drive current, making significant advancements in die and packaging to manage the additional heat. For example, the latest Luxeon K2 LEDs are the first to be tested and binned at 1000mA with specified minimum performance and no sacrifice in lumen maintenance. The ability to drive the LED at higher currents not only delivers more light, a lower cost per lumen, and more thermal flexibility but also provides the ability to incorporate the same LED in a range of applications to minimise manufacturing costs.
The Bigger Picture
It is essential to consider overall system efficiency rather than the efficacy of the light source alone in order to compare solid-state to conventional technologies. Take the example of an under-cabinet luminaire. Today, the best high-power LEDs achieve efficacies of 50 to 60lm per watt, compared to a fluorescent lamp that may be rated at 70lm per watt. When deployed in luminaires, however, both solutions provide the same illuminance on the work surface. This stems both from the nature of LEDs as a directed light source and from the efficiencies of the power, optical, thermal and electrical components. If, in fact, the desired illuminance is achieved using fewer watts of power with the LED solution than with the fluorescent solution, then the LED solution delivers better system efficiency. When considered as a system, the high-power LED solution can offer superior performance and lifecycle costs.
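A hedged sketch of that under-cabinet comparison (all figures below are illustrative assumptions, not measured values) shows how a less efficacious but well-directed source can match the delivered light using fewer watts:

def delivered_lumens(watts, lm_per_watt, utilisation):
    # Lumens actually landing on the work surface
    return watts * lm_per_watt * utilisation

led = delivered_lumens(watts=9, lm_per_watt=55, utilisation=0.85)    # directed source
fluo = delivered_lumens(watts=10, lm_per_watt=70, utilisation=0.60)  # diffuse source
print(f"LED: {led:.0f} lm on task, fluorescent: {fluo:.0f} lm on task")  # ~421 vs ~420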
Real Results
Finally, one must look beyond the raw luminous flux claims in LED datasheets before determining which LED to use. There is no industry standard for the numbers shown on datasheets, so minimum, typical and maximum lumen values may not have the same definition, or may not have been derived under similar conditions.
In fact, LEDs from different manufacturers will deliver different light output when placed into identical lighting fixtures.
In order to make an apples-to-apples comparison of LEDs, one must consider the maximum allowable drive current, how hot the LED can get without sacrificing lumen maintenance, the thermal management system, the voltage, LED binning and other system parameters.
Embracing the differences between LEDs and conventional lamps, and recognising the difference between LEDs from different manufacturers, will go a long way toward improving the design of LED lighting products and delivering superior products to consumers as the market continues to grow and develop.
Everyone knows the size, longevity and other benefits of solid-state lighting, but now we have LEDs that open up a new realm of possibilities, far beyond the previously standard 350mA LEDs.
Designers and engineers using effective system designs and knowledgeable LED selection will maximise the usable light from LEDs and the market opportunities for their products.

Flexible conduit is often used in very arduous environments and if products don’t come up to scratch there can be disastrous consequences for the end user. Here Ian Gibson, chairman of the combined IEC & Cenelec Committee for Conduits and technical director of Flexicon, looks at the issues surrounding conduit quality and the importance of picking the right product for the job.

Good quality in the flexible conduit market is sometimes overlooked but low-cost inferior products often have inherent weaknesses that mean they can be problematic and costly to install, or can lead to their failure during service. Using cheaper, poorer quality products can be a false economy as higher installation costs combined with the potential cost of remedial work and associated wasted labour time on reworks can more than outweigh initial outlay.
No longer a threat
Low-cost flexible conduit imports have been a threat to the UK market for some time but the tide is turning and there is recognition by many wholesalers, specifiers and end-users that quality conduit systems are a worthwhile investment. Manufacturers understand the importance of margin and profit for wholesalers and in turn installers, but if quality is being compromised in order to achieve this then it can only be a matter of time until there is a disastrous result.
Flexible conduit plays a key role in cable management and is responsible for protecting some of the most vulnerable and potentially dangerous materials in a building environment. It needs to shield important cabling from external damage while protecting personnel and property from dangerous electrical exposure.
The conduit will often be subjected to harsh treatment in very poor conditions, including extremes of temperature, physical damage, chemical corrosion and moisture, so it is essential these products meet high standards of strength, safety, durability and performance and provide reliable cable protection.
New requirements
Following the devastating King's Cross disaster of 1987, where fumes from melting fixtures and cables contributed significantly to the number of fatalities, new requirements for halogen-free (HF) and low fire hazard (LFH) products have been introduced.
LFH conduit systems are increasingly specified in many cabling applications in order to protect both staff and the general public in the event of a fire. Public buildings, retail outlets, high rise office blocks, hospitals and transport installations are all likely to require LFH conduit and, in certain instances, fire services and even insurers are demanding that these products be specified.
High-risk environments such as the London Underground have even introduced their own stringent regulations for suppliers. The 'Section 12' requirements of LUL standard 2-01001-002 pertain to the fire safety performance of materials, ensuring conduits can be safely used by OEMs and contractors supplying and working in the London Underground system. A wide range of Flexicon's products have documented compliance with this standard, and customers operating in other sectors of the market where public safety is a key criterion would be well advised to look for this type of certification.
Potential problems
At Flexicon we define a LFH product as being ‘halogen free’ (no halogen acid gas emission which can destroy computer equipment and damage building structure), ‘highly flame retardant’ (products will prevent a fire or limit its development if one does start), ‘low smoke emission’ (personnel will be able to see their way to escape in the event of a fire) and ‘low toxic fume’ (personnel will not be overcome by dangerous fumes during their escape).
Some conduits – in particular cheaper versions – claim to be LFH but don’t offer all these properties which means they can be dangerous in high-risk environments where safety is crucial.
Metallic conduit is typically manufactured from stainless or galvanised steel. One of the potential problems with metallic conduit arises from the use of inferior quality steel, which in turn results in structural and performance issues. It can be prone to kinking when bent, which weakens the product, or have other defects such as poor welds that fail under pressure or sharp edges inside the conduit which can damage the cables inside.
Another issue is differing standards in the galvanising process. Higher specification galvanised steel conduit is hot-dipped after manufacture, which means it boasts heavy protection both internally and externally and provides the best anti-corrosive properties.
Meanwhile, pre-galvanised products – which are usually lower in cost – tend to corrode after a relatively short period when exposed to damp or moist conditions due to the absence of zinc protection on the internal walls.
Another concern stems from the application and thickness of the galvanised layer which may look ‘shiny’ but can begin to flake after only a short time in use. This may leave the product open to the elements causing it to corrode quickly as its protective layer will have been compromised.
Non-metallic conduit, typically manufactured from nylon or polypropylene mixes, also carries risks of failure if sub-standard products are selected.
Thin walls can cause conduit to snap, exposing the cabling inside, while poor quality material may tolerate a much narrower temperature range, making it prone to damage in extreme cold or heat. It is also common to experience difficulties with fittings, due to poor or inaccurate manufacture and non-complementary product ranges.
As so many problems can arise from the use of poor materials, it's worth thinking about the origin of the conduit you're purchasing.
Benefits of UK manufactured conduit
UK production gives total control over quality while buying from abroad means sources of materials may not be well documented. UK manufacturers have onsite expertise in place to oversee manufacturing facilities. Production can be monitored to ensure the correct use of materials and processes which ultimately ensures that top quality conduit is produced.
However, the only way to ensure conduit is of a high quality is to make sure it has been independently tested to the relevant BS standard.
This means the BS standard is not only printed on the packaging but stamped on the products themselves, along with the name or trademark of the manufacturer.
The testing carried out is rigorous and designed to ensure compliant products have the correct construction and mechanical properties to make them safe and resistant to damage.
Look out for BS EN 61386, a worldwide standard developed by the IEC (International Electrotechnical Commission) and Cenelec (European Committee for Electrotechnical Standardisation), which replaces the previous BS EN 50086 series.
Meanwhile, ISO 9001 compliance looks at quality management systems within business processes and the European RoHS (Restriction of Hazardous Substances) directive bans the placing on the EU market of new electrical equipment containing more than agreed levels of various hazardous substances.
The regulations may just seem like annoying hurdles but all legislation is there to improve quality within the industry. Flexicon products comply with all key standards and the company is also a member of Beama (British Electrotechnical and Allied Manufacturers' Association), which helps write the standards and implement the legislation.
It is important to demonstrate a commitment to safety and product quality in order to instil trust in customers and give them peace of mind.
Ultimately, high-quality products eliminate risk, both for yourself and your customers, and ensure the UK electrical sector remains respected in the global marketplace.

The old saying 'biggest is best' may be true in some instances, but applied to the selection of modern industrial drives it is not necessarily the case, says Jonathan Smith from Rockwell Automation.

Back in the early days of industrial automation, customers would often purchase an over-specified industrial drive, so that they could be sure that it would be rugged enough to handle every need. But there was a premium to pay for this excess capacity and, in today’s lean manufacturing environment, paying extra for something you don’t need is a luxury few can afford.
By studying the application and asking a few pertinent questions, the knowledgeable user can precisely match the performance of the drive to the demands of the application. This ensures costs are controlled but performance is not compromised.
How To Analyse Your Application
The sizing of modern drives, like the Allen-Bradley PowerFlex range from Rockwell Automation, has been greatly simplified by having both a normal-duty and a heavy-duty rating. By seeking answers to a few basic questions about the application, the user can quickly determine whether a normal-duty, heavy-duty or even a larger drive is required. By abandoning the old 'one-size-fits-all' approach, users can save money and reduce panel space requirements.
Selection starts with knowing the load requirements and choosing a drive with enough current to meet that need. Whether you are moving coal, air or boxes, all loads have varying torque demands, and their characteristics determine the most appropriate drive solution.
The first question to consider with any application is, “Do I know the torque requirements that will make the process work?” Since this information is required to select an appropriately rated motor, it should be available to properly size the drive.
The Importance Of Load Data
Most loads can be broken into one of three categories:
• Variable torque
Almost all variable torque loads are either centrifugal fans or pumps. These make up nearly 70% of global motor applications. If the application is variable torque, then it almost always requires a normal-duty rated drive. These drives can supply rated torque with some overload (approximately 110%) for up to one minute, providing enough capacity for these types of load.
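A minimal sketch of that normal-duty check is below, using the approximate 110% overload figure just quoted. This is illustrative selection logic only, not Rockwell's published sizing procedure, and the fan figures are hypothetical.

```python
# Rough normal-duty suitability check for a variable-torque load.
# The 110% one-minute overload figure is the approximate rating quoted
# above; always confirm against the drive's actual datasheet.

def normal_duty_ok(continuous_a: float, peak_a: float,
                   drive_rated_a: float, overload: float = 1.10) -> bool:
    """True if the load fits within the drive's continuous and overload ratings."""
    return continuous_a <= drive_rated_a and peak_a <= drive_rated_a * overload

# Hypothetical centrifugal fan: 38A continuous, brief 43A peaks, 40A drive.
print(normal_duty_ok(38, 43, 40))  # True: within rating and overload margin
```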

Standby power is accepted as an essential part of an organisation's armoury in the battle for success, and the introduction of fuel cell standby power increases the options available. Karen Sperrey of UPS Systems asks which solution will be right for you.

It is an undisputed fact that many businesses will lose money if they suffer an interruption to their prime power source, even for just a few seconds. It is not just the loss of the ability to work or receive customer calls. These are bad enough, but many service providers will lose revenue when systems based on automatic processes fail; web hosting, data centres, online catalogues and call centres are just a few examples. If customers can't access a product or service, someone will lose out. Businesses deriving income from web-based information services are especially vulnerable. For core facilities such as emergency services, hospitals, research establishments and education, standby power is vital and downtime is not an option that these or other businesses can tolerate. In some cases, non-stop power 24/7 must be guaranteed. Users and providers now talk about zero failure and power uptime in terms of six nines: systems must be available for 99.9999% of the time.
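To put 'six nines' into perspective, the quick calculation below converts availability levels into the downtime each permits per year:

```python
# Downtime permitted per year at a given number of 'nines' of availability.
SECONDS_PER_YEAR = 365 * 24 * 3600

for nines in range(3, 7):
    unavailability = 10 ** -nines
    downtime = unavailability * SECONDS_PER_YEAR
    print(f"{nines} nines ({1 - unavailability:.6f}) -> {downtime:,.1f} s/year")
```

Three nines allows nearly nine hours of downtime a year; six nines allows about 31.5 seconds.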
Achieving this high power availability is a constant challenge for the standby power industry, where a 'one-size-fits-all' approach is not the solution. When mains power goes off, the standby power system has to operate either by generating power from fuel or by using stored power. Most often, generated power comes from a diesel generator and stored power from a UPS with lead-acid batteries, and usually the two work together so that even the momentary gap (which is harmful to electronic equipment) between the power failing and the generator becoming operational is filled.
Which Option to Choose?
There are many factors that may need to be taken into account when deciding which option(s) to pursue. Some will be more important in certain situations than others. Some specifiers will always choose the cheapest option, but it is rare that all factors are equal when comparing various options and so an element of criteria weighting will be inevitable. These are some of the factors that should be considered:-
Location – Are there constraints about the floor area or ceiling height available? Does it need to be situated indoors or out? Will the existing floors carry the required weight of the unit? What is the lift load-carrying capacity? Are there awkward access arrangements for delivery and installation? Are there any restrictions in the building lease?
Rack mountable – Some users, especially in the IT environment, prefer this approach. Can the unit(s) be mounted in a conventional rack?
Sizing – What size unit in terms of power capacity is needed? Can some of the load be segmented as non-critical and therefore switched off after a successful shut down? Is the capacity likely to be expanded in the future?
Automatic changeover options – Are there always going to be personnel on site to deal with any power-off situations? How will any automatic changeover option work? How will the system notify people what is happening?
Maintenance – Who will look after the system? How often will scheduled maintenance be necessary? Will the system have to be shut down while it is carried out? What security problems may be encountered if this is to be done out of normal working hours?
Planning permission – If the unit is deployed outside, will this be necessary? How long will the process take? Will the landlord need to be involved?
Exhaust emissions and noise – This is often linked to planning permission but not always so. Is the system to be installed in or near a residential area where these issues may be more important?
Incorporating existing standby power products and upgradeability – No one likes to under-utilise an existing investment. Can it be integrated in some way? Can the unit be upgraded as required?
Ensuring an evenly matched generator and UPS – The generator must be large enough to support the load initially carried by the UPS and to ensure there is sufficient extra capacity to recharge the UPS batteries (a rough sizing sketch follows this list). The installation must also comply with the current IEE G5/3 guidelines on harmonics. This is a specialist area in itself.
Types of fuel – Is there an obvious fuel source already installed? If not, does the user have a preference? Can a new source be installed?
Refuelling – How often will this be necessary? Are the access arrangements awkward? Are personnel in attendance to organise this even in the event of a prolonged mains failure during the night? Will an automatic monitoring facility be required for low fuel or any other occurrence?
Availability – How soon can the unit be installed? Is there a long lead-time before delivery can be made?
Price - What does this include? What warranty is included? How much will delivery cost? How much will the installation cost?
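The sketch below illustrates the generator/UPS matching point from the list above. The 25% battery-recharge allowance and the derating for the UPS rectifier's non-linear loading are assumed round figures for illustration only; a real installation needs a specialist study.

```python
# Illustrative generator sizing for a UPS-supported load. The recharge
# allowance and derating factor are assumptions, not a sizing standard.

def min_generator_kva(ups_load_kva: float,
                      recharge_allowance: float = 0.25,
                      derating: float = 0.80) -> float:
    """Smallest generator carrying the UPS load plus battery recharge,
    after derating the set for the UPS rectifier's non-linear loading."""
    return ups_load_kva * (1 + recharge_allowance) / derating

print(f"{min_generator_kva(60):.0f} kVA")  # hypothetical 60kVA UPS -> ~94 kVA set
```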
If the decision is taken that extended auxiliary power is required, a choice needs to be made between installing either large banks of batteries to keep the UPS powered for longer or a generator running on a separate fuel source. There are advantages and disadvantages to each solution, although it is generally accepted that, all other factors being equal, a generator has the financial advantage if the extended run following a power loss needs to exceed 4-8 hours. But there are many situations where a generator may not be a viable option, and for those organisations another solution is needed.
The latest alternative is fuel cell standby power. Fuel cells can be regarded as generators but whereas conventional generators use internal combustion engines to rotate an alternator, fuel cells generate power by producing electrons directly, with no moving parts. As a result, they have the potential to be very efficient and reliable. Moreover, they are comparatively quiet and, other than electricity and heat, they produce only water vapour. This makes them ideal for indoor use. With the maturing market in fuel cell technology and increasing awareness of environmental issues, a standby power solution incorporating a fuel cell is now the third alternative.
Why fuel cells now?
UPS Systems has been monitoring fuel cell developments for several years and we are convinced that fuel cells now offer a viable alternative in certain situations. There are areas where the current offerings aren't suitable, e.g. where system requirements are in excess of 60kW. However, in the 10-60kW application range, which covers many smaller IT departments, the option can now be considered.
Firstly, let's consider why a UPS-only solution might be put forward:-
• The systems to be supported are not mission critical or require only limited auxiliary power before a safe shutdown is effected
• The power draw of the mission critical system is low enough that a single UPS can support it for many hours
• There is no room for anything else, either inside or outside
• There are environmental issues if the unit is placed outside. Councils often place stringent restrictions on working equipment and require detailed planning permission
• There may be a company policy, biased against noise or pollution caused by generated power
• There may be personal expertise or preference to stay with one manufacturer
• Internal politics or budgets may prefer several smaller departmental units rather than a single large one
• Security and access policies may prevent equipment being sited outside
Then consider why a UPS and generated power solution might be put forward (whether conventional or fuel cell):-
• Where the size of the batteries needed is excessive
• Where there is more room available outside for a generator than inside for batteries
• 24/7 protection is required and a refuelling contract option needed
• Where shared resource is more economical between several departments or different companies operating within a whole building, especially if a landlord is contracted to provide support
• Where there is a requirement for air conditioning which is better protected by a generator
Now let’s look at the advantages of using a fuel cell as opposed to diesel generator:-
• Standard hydrogen bottles offer a green alternative to conventional fuel and/or batteries
• Unlimited runtime - simply increase the number of bottles
• Low audible noise – no fans and pumps – suitable for indoor installations
• Only heat and water by-products so safer for the environment
• In larger systems the waste heat from a fuel cell can be used to provide hot water or space heating
• Easy indoor installation – no major planning permission required
• Modular rack integrated design – easy to add more power
• Few moving parts, so less need for maintenance
• Politically reduces dependence on oil
• Some factories and plants may already have hydrogen installations or can utilise hydrogen produced by existing processes
• Certainly lighter than batteries and lighter than many conventional generators
• More energy efficient in power terms than either a battery or generator
• Generated power close to or inside the computer room
And the advantages of a conventional generator:-
• Tried and tested technology
• More expertise around if there is a problem
• Established supply chain for fuel
• No ‘Hindenburg’ fear factor
• Currently cheaper than fuel cells
• Can deal with much larger loads in kVA terms
Making the right choice on your own will not be easy and we would always recommend working with an independent and impartial supplier that will remove any complexity from the purchasing cycle and help to match one of the three standby power alternatives to your individual requirements.

The Energy Review has brought concerns about the growth in the amount of standby power in domestic electronic equipment to the forefront of the industry. But just what is being done to hit the maximum 1W levels? George Warren from the environmental engineering department at Nottingham University investigates.

The Government’s Energy Review draws particular attention to increasing concerns about the growth in the amount of standby power in domestic electronic equipment. It commits us to working towards all electronic devices using a maximum of 1W for standby purposes.
Standby power consumption accounts for 1% of the world's CO2 emissions. If all appliances used only 1W or less in standby, it would result in an 80% reduction in the current CO2 emissions related to standby, a reduction of at least 54 million tonnes of CO2 each year.
The average electricity use for standby power in the European Union is 400 to 500kWh per household, at least 60TWh a year for the EU as a whole. Happily that can be reduced significantly with minimal effort – but with maximum policy co-ordination.
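The aggregation behind those figures is straightforward. In the sketch below, the household count is an assumed round number of the right order for the EU, used purely to show how the per-household and EU-wide figures relate:

```python
# Order-of-magnitude check: per-household standby use scaled to the EU.
# The household count is an assumed round figure, not a census number.
kwh_per_household = 400        # lower end of the 400-500kWh range quoted
eu_households = 150_000_000    # assumed order of magnitude

total_twh = kwh_per_household * eu_households / 1e9  # 1 TWh = 1e9 kWh
print(f"~{total_twh:.0f} TWh per year")  # ~60 TWh, matching 'at least 60TWh'
```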
Standby power, formerly known as 'leaking electricity', traditionally referred to the electricity consumed by electrical equipment when supposedly switched off or not performing its main function. But definitions vary: many now take it to mean the power consumed by an appliance in its lowest possible electricity-consuming mode. Because people have used one or other of these definitions in the past, measurements are not always comparable. Although the International Energy Agency (IEA) officially defined standby power in 1999 as the appliance's "lowest possible consuming mode", several commentators still use the older definition, causing continuing confusion.
The relevant appliances range from televisions and set-top boxes to microwave ovens and cordless telephones. Each product uses a varying amount of electricity whilst in standby. This may be to maintain a digital display, to listen for remote control commands, or simply electricity wasted through poor efficiency.
The actual power draw in standby mode is relatively small, typically 0.5-3.0W. Although at first glance such losses may seem trivial, the problem assumes significance because these products are drawing power 24 hours a day: the cumulative total soon adds up. For example, a New Zealand study into the power consumption of microwave ovens found that 40% of them consumed more electricity over the course of a year in standby mode, merely powering the clock and keypad, than in cooking food.
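A one-line calculation shows how the quoted 0.5-3.0W draw accumulates over a year of round-the-clock operation:

```python
# A small continuous draw accumulates over 8,760 hours of operation a year.
HOURS_PER_YEAR = 24 * 365

for standby_w in (0.5, 3.0):
    kwh = standby_w * HOURS_PER_YEAR / 1000  # Wh -> kWh
    print(f"{standby_w} W standby -> {kwh:.1f} kWh/year")
```

Even the modest 3W case amounts to some 26kWh per appliance per year.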
As new electronic equipment continues to proliferate at increasing rates, so too will the energy used attributable to standby power, with more and more goods utilising a standby function.
The standby function can aid the efficiency of a device, as it will use less power than if the device remained in its normal operating mode. But the degree to which it continues using energy is of concern, as technology is available to manufacturers to reduce the standby power consumption of their products.
Nevertheless, the problems will remain while people still use older models that were never designed with energy efficiency in mind. What is really needed is an incentive for the public to change either their habits, by turning equipment off at the mains, or their existing devices, by swapping them for new, more efficient ones.
Scale of the problem
Standby power can be measured directly using a high-resolution watt meter. Unfortunately these can be very expensive and difficult to find, so it is unlikely to be a realistic option for customers to test all appliances themselves. Instead there are three ways in which standby power can be quantified.
• Whole-house measurements – measuring the standby power consumption of every electrical appliance in the home that consumes standby power.
• Bottom-up estimates – estimates of either the average standby power consumption per home or the national standby power consumption. These are normally based on measurements of individual appliances, multiplied by the number of devices in the area concerned, combining field measurements with known appliance saturation (see the sketch after this list). Although usually accurate for larger appliances, this is not the case for minor ones, as little is known about their saturation, so bottom-up estimates probably underestimate standby power usage.
• New product measurements – measurements made in stores or at factories, so the standby power of all new products can be measured at once. This is quick and accurate, but the results omit the performance of older products.
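A minimal sketch of a bottom-up estimate as described above; every figure in it is invented purely to show the arithmetic.

```python
# Bottom-up estimate: measured standby draw per appliance type, scaled by
# the number in service. All figures below are invented examples.
HOURS_PER_YEAR = 24 * 365

appliances = {
    # name: (average standby watts, units in service)
    "television":  (2.5, 20_000_000),
    "set-top box": (8.0, 10_000_000),
    "microwave":   (3.0, 15_000_000),
}

total_w = sum(watts * count for watts, count in appliances.values())
total_twh = total_w * HOURS_PER_YEAR / 1e12  # Wh -> TWh
print(f"Estimated national standby consumption: {total_twh:.1f} TWh/year")
```

The weakness noted above is visible in the structure: any appliance type missing from the dictionary, typically the minor ones, is silently excluded from the total.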
Three strategies
There are three principal research strategies for reducing standby energy consumption:
• Improve (or modify) technologies outside the device and change user behaviour
• Improve the efficiency of the components
• Improve software to help equipment operation better match functional needs
An example of this is the simple power switch design. Pictured are three different power supply set-ups:
Fig. 1: Design A places the switch between the power source and the power supply. This is the perfect solution for eliminating power consumption, as no current reaches any energy-consuming components.
Fig. 2: Design B has a switch placed in between the power supply and the appliance, which is the case in a significant amount of conventional electrical equipment. Even when the switch is turned off or no appliance is connected to it, the power supply will continue to receive a flow of current and the energy is then usually dissipated as heat. This is a prime example of the manufacturers not adapting their designs to prevent loss of energy.
Fig. 3: Design C shows a secondary load, for example a clock, which requires energy even when the main load is switched off. This can be achieved by:
• Adding an extra power supply for use at low-power levels.
• Using a power supply with two operating ranges (‘on’ and ‘standby’).
• Incorporating a separate source such as a small battery or a photovoltaic cell to power the secondary load.
The other alternative is of course to teach the public to switch their appliances off at the mains when they are not using them. A classic example of this is the phone charger. Many people wrongly assume that when a phone has finished charging, the charger will cease to draw electricity from the supply. In fact, even when the phone has been disconnected from the charger, if the charger is still plugged into the power source it will continue to consume energy, dissipated as heat.
Educating the public to switch any accessible electrical equipment off at the mains would solve much of the standby power consumption problem. Of course, turning each appliance off at the mains is not always practical. In these instances, manufacturers need to improve the efficiency of their components.
Two approaches to this are either to improve the efficiency of existing standby-mode components, or to use new or different components that require less power.
• Power supplies - There are two main types of power supplies, linear and switch-mode. Switch-mode is the more efficient of the two, and could have a major impact on reducing standby power consumption if chosen instead of linear power supplies. Standby power can also be reduced by the addition of a separate/secondary power supply that consumes less energy than the supply used when the appliance is in active mode. An alternative may also be to incorporate a small battery or photovoltaic cell.
• Voltage Regulators - Voltage regulators tend to dissipate a large percentage of the power supplied to them as heat. This increase in heat also shortens the life of the appliances. It is possible to reduce overall standby power by using a more efficient voltage regulator. Another option is to reduce voltage levels, so that fewer voltage regulators are needed.
• Visual Displays - Changing the type or size of visual displays can save power. An LCD screen is the most efficient, although new lower power LEDs are becoming available and are often more practical.
Software improvements enable a device to match its operating mode to its functional needs, moving from active to sleep and then from sleep to off. Although some power is still consumed in these low-power modes, overall energy use is reduced.
There are technical solutions that exist today with the ability to reduce the standby power consumed by up to 90% - but they are seldom exploited. The additional capital costs of reducing the standby power consumption of most appliances are surprisingly modest and will almost always result in lower costs or new benefits elsewhere. The simple fact is that although the new products may cost a small amount more, they will easily recoup this through the electricity they do not use, compared to older, less efficient models. As one of the world's experts, Alan Meier of the US Lawrence Berkeley National Laboratory, says: "Most technical solutions are cost-effective given the current price of electricity."
Nine years ago, the European Commission negotiated an agreement with the European Association of Consumer Electronics Manufacturers (EACEM) setting targets of less than 6W for the standby consumption of TVs and VCRs. Since then, new targets have been negotiated for various appliances. This has been seen as an effective way of achieving suitable efficiency levels, preferred to setting mandatory efficiency requirements.
Standby levels are not yet as low as 1W, but in 2000 the average standby consumption of new products was 3.7W for TVs and 3.8W for VCRs. Negotiations with EACEM continue with regard to other appliances, and no doubt will do until all reach the 1W mark. Set-top boxes (STBs), the digital television boxes used to receive satellite television channels, seem to be the biggest problem in Europe, as they consume large amounts of power whilst in standby mode. The problem is that service providers require such devices to remain constantly on, to permit remote access for downloading new software and updates.
In addition, STBs are produced to the specifications of the service provider, so there is no incentive to reduce the power consumed in standby mode. In 2001, the European Commission issued the EU Code of Conduct for Digital TV Services, which set the maximum standby power consumption at 9W. This is unfortunately not as low as the 1W at which everything is aimed, but it is a step in the right direction.
Power supplies have also been seen as a problem, often consuming power even when the appliance is switched off. In July 2000, the European Commission issued a Code of Conduct on the Efficiency of External Power Supplies in order to reduce the power consumed in standby. The EC then promoted this within the IEA standby initiative for adoption worldwide, as many of these power supplies are traded globally and many of the manufacturers are not based within the EU. This is still to be accepted internationally.
Elsewhere, the Australian government is actively considering a penalty labelling system, publication of all standby statistics, and perhaps even a mandatory energy performance standard under which the least efficient models on the market would be removed. These are yet to be implemented but are being seriously considered in order to improve the situation.
In America, Executive Order 13221 requires that every government agency, "when it purchases commercially available, off-the-shelf products that use external standby power devices, or that contain an internal standby power function, shall purchase products that use no more than 1W in their standby power consuming mode."
A Japanese study in 2000 found that standby represented 9.4% of Japanese residential electricity use. This led several household appliance associations to announce, at the Subcommittee for Energy Conservation of the Advisory Committee for Energy and Resources of the Japanese Ministry of Economy, Trade and Industry in 2001, that they intended to reduce standby power consumption to less than 1W, or eliminate it entirely, by the end of 2003. Unfortunately, no information could be found to confirm whether this target was ever reached.
Evidently many national and regional initiatives on regulating standby power use exist. However, it is imperative to coordinate efforts internationally to facilitate participation by industry. This is one of the aims of the IEA initiative. To date, it has failed to generate a joint solution for the most common standby power consuming devices, or worldwide agreement on requirements for digital TV equipment. But it has achieved the feat of having standby power included in energy test protocols and energy efficiency policies for all products that consume significant standby power.
Although success has been achieved in some cases, the standby power problem still presents uncertainties. As the UK government has identified, considerable potential energy savings can be associated with the reduction of standby power in new electronic equipment. It is therefore appropriate to make some recommendations to guide future policy:
• Increase public awareness
• Develop worldwide implementation of regulations
• Improve electrical equipment design
• Develop guidelines for lowering standby power use in appliances not currently covered by any programme
• Establish an international network of accreditation organisations
• Include standby power information on existing appliance energy labels
Internationally coordinated efforts would reduce the burden placed on manufacturers of globally marketed electrical goods. This is the most effective way to achieve an increase in global penetration of these technologies.
Certainly, the anticipated growth in standby power is slowing, as multinational companies come to understand the need to reduce the standby power consumed by their products. This progress is encouraging, but there remains an urgent need for government intervention in order to stimulate and reinforce these achievements.