The effects of a lightning strike can be dramatic and highly visible. However, it is the impact on business-critical systems such as computer or telecoms networks, the effects you cannot see, that can do far greater damage to a business, says Mike Forsey of Omega Red Group...

For many years the British Standard BS6651 Code of Practice for Protection of Structures Against Lightning has been the accepted text when looking to apply protection to structures and electronic contents.
This will be superseded in August 2008 when BS EN 62305 becomes the new Lightning Protection Standard. Published in August 2006, it will run in parallel with the current standard until August 2008 when the old standard will be withdrawn. Unlike its predecessor the new standard is made up of four documents:
• General principles
• Risk management
• Physical damage to structures and life hazard
• Electrical and electronic systems within structures

We are living in interesting times in the lighting industry. Today we are seeing more exciting innovations - in both technology and lighting concepts - than at any time in the last 30 years. And it is interesting to note that many of these new products and ideas relate more to comfort, health and wellbeing than to simply getting more light out of existing light sources. These developments are also presenting new challenges to us in the lighting controls business, says John Aston of Philips...

For a long time the inclusion of lighting controls in the design of an office building has been pretty much a must-have. The same is by no means true in either the healthcare (hospitals) or education sectors. But some of the new lighting concepts are specifically targeted at these environments and are hence driving the use of new technologies like LEDs – solid state lighting – and even dynamic fluorescent lighting. What is bringing about this change? The most important influence has been the recent realisation that certain wavelengths of light can have more than visual effects, and these have now been demonstrated to be of significant benefit in a number of applications. Over the past 10 years Philips Lighting has worked closely with researchers to try to understand the impact of artificial light on humans in our present intense 24/7 society.
Most of these studies have focussed on the impact of fluorescent lighting in the workplace. The initial commercial result was the launch of Dynamic Lighting, both in a ‘personal’ form and in a wider context as a concept we call Dynamic Ambience. This approach to lighting interiors uses a colour-changing luminaire capable of providing light with a colour temperature anywhere between 2700 K and 6500 K. The thinking behind this related to the fact that natural light changes throughout the day – so why not do the same with artificial light? But this thinking is already developing further, with a much cooler lamp being introduced that has a proven beneficial effect even when used on its own: a 17,000 K lamp called ActiViva Active.
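As a rough illustration of the idea, a daily colour-temperature schedule can be sketched in a few lines. The shape of the curve below is a hypothetical choice for illustration only, not Philips’ actual Dynamic Lighting algorithm:

```python
import math

# Illustrative daily colour-temperature profile: warm white (2700 K) at the
# start and end of the working day, coolest (6500 K) around midday.
# The sinusoidal shape is an assumption, not Philips' algorithm.
def colour_temperature(hour, warm_k=2700.0, cool_k=6500.0):
    # Weight rises from 0 at 07:00 to 1 at 13:00 and back to 0 at 19:00;
    # outside the working day the light stays at its warmest setting.
    weight = max(0.0, math.sin(math.pi * (hour - 7) / 12))
    return warm_k + (cool_k - warm_k) * weight

midday = colour_temperature(13)   # coolest light at midday
evening = colour_temperature(20)  # warm light outside working hours
```

A real installation would, of course, drive the luminaires from a controls network rather than a clock function, but the principle of tracking the natural daylight cycle is the same.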
The very positive results from the initial research projects into both Dynamic Lighting and the ActiViva product have shown real improvements in both workplace performance and people’s perception of their health and well being.
But before we discuss the impact of these fluorescent lighting innovations on controls let us turn to the other big news in lighting – LEDs. This technology is getting a lot of press and it is not necessary to dwell on how it works or where we are up to with efficacy here. Instead it is the impact this technology is having on control systems that is worth examining. Suddenly we have been presented with an instant light source that is available in a variety of colours that can be deployed both internally and externally to give very creative effects – and delivering this is the present challenge. It is good to know virtually all solid state lighting installations are specified with a controls system. In fact it is assumed the controls are, in effect, the delivery system for this technology.
Interestingly, many of the initial, and current, lighting controls employed to control LEDs are based on the entertainment industry’s DMX (digital multiplex) protocol. This has been a new learning experience for electrical installers in the construction industry. And, of course, many of these installations have been almost theatrical, particularly in external architectural projects. But LEDs may already be able to offer us rather more than just glitzy effects, and this is where we return to comfort, health and well-being and a longer look at healthcare and education applications.
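For readers unfamiliar with DMX, the data format is simple at heart: a DMX512 packet is a start-code byte followed by up to 512 eight-bit channel levels. The sketch below illustrates that structure; the fixture address and channel layout are hypothetical:

```python
# Minimal sketch of a DMX512 data packet: slot 0 carries the start code
# (0x00 for standard dimmer/level data), slots 1-512 carry channel levels.
def dmx_frame(channels):
    """channels: dict mapping a 1-based DMX address to a level (0-255)."""
    frame = bytearray(513)  # start code plus 512 channel slots, all zeroed
    for addr, level in channels.items():
        if not 1 <= addr <= 512:
            raise ValueError("DMX addresses run from 1 to 512")
        frame[addr] = max(0, min(255, level))
    return bytes(frame)

# Drive a hypothetical RGB LED fixture patched at address 10 to full red.
frame = dmx_frame({10: 255, 11: 0, 12: 0})
```

In a live installation the frame would be transmitted continuously over an RS-485 line (with a break and mark-after-break preceding each packet), which is the part installers from the construction industry have had to learn.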
As mentioned above, we are now realising that light has both non-visual and biological effects. Currently there are research projects and trials investigating the use of colour changing in both hospitals and schools. The early results (in the former) showed improved concentration and more accurate working at a nursing station during the difficult night shift. The nurses also reported better sleeping patterns. So if this sort of controlled delivery of light – both in colour and intensity – can help the healthy to work better, what can it do for the patients? Presently we do not know, and this area needs investigation. But we do know light can be helpful in some important clinical procedures.
Ask someone to undergo an MRI scan and they are normally confronted with a dramatic, daunting machine set in cold, unappealing surroundings. A nerve-racking situation in itself, quite apart from the patient’s not unnatural concern about the reason for the scan in the first place. Philips has brought together its lighting and medical divisions to address this issue by creating a package that provides both the scanner and a complete, controlled lighting installation that allows patients to choose the colour and intensity of the room lighting. Early experience of these solutions is showing calmer patients and quicker throughput, giving real improvements in operational efficiency. You could even argue that the patients’ carbon footprint is being reduced.
These MRI room packages rely on LED technology to offer a full palette of colours for the patients – as well as the clinical staff, who can even use the lights to signal to the patient! The lighting can even be integrated into a broader audio visual experience that involves projected images – truly an Ambient Experience. Philips calls this Ambiscene.
Of course lighting controls have already been used in both hospitals and schools, but the reasons have primarily been related to cost of ownership and particularly energy consumption. We know lighting can be turned off or down when there is adequate daylight or when no-one is there; we’ve seen evidence of 30% to 50% savings in these circumstances. Clearly any health trust or school authority will be considering any measure capable of reliably reducing their carbon emissions. Another good reason to adopt a networked lighting management system is its ability to monitor aspects of the installation and meet obligations like the testing and logging of the emergency lighting. Philips has successfully combined all these ‘cost of ownership’ functions in its LightMaster Modular System; a networked solution that readily involves all users by allowing the use of local controls.
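As a back-of-envelope sketch of why savings of this order are plausible, daylight dimming and occupancy switching combine roughly multiplicatively. The percentages below are illustrative assumptions, not measured figures for any particular building:

```python
# Rough model: daylight dimming and occupancy detection each remove a
# fraction of the lighting energy, and the savings compound because each
# acts on the energy the other leaves behind. Inputs are illustrative.
def combined_saving(daylight_saving, occupancy_saving):
    remaining = (1 - daylight_saving) * (1 - occupancy_saving)
    return 1 - remaining

# e.g. 25% from daylight dimming plus 20% from occupancy switching:
saving = combined_saving(0.25, 0.20)  # a 40% overall reduction
```

A 40% result from these assumed inputs sits comfortably inside the 30% to 50% band observed in practice.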
The next generation of Philips lighting controls will address all the usual functions and add the capabilities now being identified in colour-changing and effect lighting. The present independent solutions will evolve into extensions of the overall lighting management system, adopting the same interactive (user-oriented) approach already provided for many office workers. Indeed, the knowledge that the affected users must be able to interact with the lighting is a key factor in any product definition. But how do you involve all the users when the application is, say, a hospital? Well, most hospital beds already provide the patient with some control of their local lighting – at least the reading light. However, when the doctors do their rounds they may need a different level, or even quality, of lighting to carry out their examination. This facility is already a practical proposition.
In conclusion, then, new lighting is today challenging us to keep up, and to deliver it in the right quantity, quality and colour while minimising the cost of ownership and carbon emissions. Lighting consumes nearly 20% of global electricity generation, so we cannot ignore its cost; but we must also recognise that we are now providing much more than just enough light to work by. Lighting is now being employed to offer a better learning environment as well as measured improvements in productivity in the workplace. More importantly we are beginning to understand that where we cannot use daylight we may be able to effectively help patients in their treatment and their speed of recovery. But without effective, easily understood, lighting controls we will struggle to deliver all these benefits. And at last we can say that lighting controls not only put the right light, in the right place, at the right time but also in the right colour.

Power quality is an unfamiliar issue for many in UK industry, but it is likely to become very well known in the near future. Power quality covers a vast range of issues, from voltage excursions, frequency variations and supply imbalance to harmonic distortion. Uncorrected power quality issues can bring a host of problems, from unnecessary power losses that can disrupt production through to catastrophic equipment failure. Steve Barker, energy and power quality manager at Siemens Automation & Drives, outlines the extent of the power quality problem in the UK and offers some solutions for business...

British industry of all sizes tends to take its electricity supply for granted. A simple flick of a switch and a plant or production facility has access to the supply network which will give a business as much electricity as it requires at any time of the day, no questions asked. A whole range of equipment from machinery and computers through to lighting and electric motors rely on the mains supply network with little thought given thereafter, other than paying the bill.
However, the increasing use of electronic equipment by business is causing a phenomenon called “harmonic distortion” on the UK electricity supply network. Harmonic distortion is caused by non-linear loads on the electricity supply system, such as personal computers, lighting systems, switch-mode power supplies and variable speed drives.
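Harmonic distortion is usually quantified as total harmonic distortion (THD): the RMS sum of the harmonic components relative to the fundamental. A minimal sketch, using illustrative current values of the kind a six-pulse rectifier load might draw:

```python
import math

def thd(fundamental, harmonics):
    """Total harmonic distortion: RMS of the harmonic amplitudes divided
    by the fundamental amplitude. Inputs are RMS values in amps."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Illustrative six-pulse drive currents: strong 5th and 7th harmonics,
# smaller 11th and 13th. These figures are assumptions for the example.
i_fundamental = 100.0                 # 50 Hz component, A
i_harmonics = [30.0, 14.0, 8.0, 5.0]  # 5th, 7th, 11th, 13th, A

distortion = thd(i_fundamental, i_harmonics)  # roughly 0.34, i.e. ~34% THD
```

It is current distortion of this magnitude, drawn through the supply impedance, that produces the voltage distortion ER G5/4-1 seeks to limit.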
Engineering Recommendation G5/4-1, published by the Energy Networks Association (ENA), is the UK’s instrument to control this distortion and to assist business compliance with harmonised network standards such as the European EN 50160 (it is important to note, however, that the UK measures are more severe than those in the rest of Europe).
ER G5/4-1, first published in 2001 and subsequently updated in November 2005, is the UK’s mechanism for controlling the harmonic distortion fed back onto the supply network, and is the successor to the earlier ER G5/3, originally published in 1976. Ironically, many of the businesses affected by power quality issues remain unaware of the original recommendation, let alone the updated version, which is far more stringent.
The updated recommendation is far more onerous than its predecessors and specifies voltage and current limits with which all industrial sites in the UK must comply, using a three-stage approach that takes into account different sizes of installation. Stage 1 applies mainly to small commercial installations supplied from the public low voltage network. Most industrial sites are typically assessed under Stage 2. Large industrial users may fall under Stage 3, which applies to incoming supplies taken at 33kV and above.
Excess harmonic distortion on a site can lead to two types of problem. Firstly, ER G5/4-1 compliance issues, which can ultimately result in disconnection if remedial measures are not taken. More often than not, electricity users are not familiar with compliance issues, and a company’s attempts to achieve compliance with ER G5/4-1 can consume huge amounts of time and money. I have personal experience of a number of installations where compliance issues have been tackled badly and the remedial measures have been more costly than early preventative action. In one example, a small food and beverage company in the North of England declined a £50k investment in preventative measures that could have saved it around £1m – a figure it later spent on mandatory remedial work to correct the problem.
Secondly, power frequency harmonic distortion can cause a number of practical, often hidden, problems for end users, which can include:

The need for businesses and consumers to become more energy efficient and cut energy bills is making smart energy meters an increasingly attractive solution for managing energy use. Alan Roadway from ABB explains what’s possible with smart meters and outlines some of the benefits they can bring for domestic, commercial and industrial users.

Spiralling energy prices and government-imposed initiatives and targets for improving energy efficiency are making both consumers and businesses ever more aware of the amount of energy they are using. Part L of the UK Building Regulations encourages accurate measurement of energy consumption. For businesses and industrial end users in particular, the challenge to date has been to identify the most appropriate way to fulfil this obligation. The consensus is that smart meters are increasingly providing the answer. These meters give users the technology to gain an immediate and accurate picture of their energy use, which can be usefully employed to encourage a change in energy consumption behaviour.
The underlying rationale behind Part L is that if users are made responsible for monitoring their energy consumption, they will take more action to reduce their usage and employ more energy efficient practices. To date, however, it has not been easy for either consumers or businesses to do this. If the home or building owner wants to get a better idea of their next bill, some complicated maths and knowledge of multiple tariff rates is required.
Energy bills for most buildings are either the result of a meter reading by the supplier or, more commonly, are based on an estimated reading made when the meter reader is unable to gain access to the meter. According to Energywatch, at least 7 million domestic customers receive estimated bills, which can result in inaccurate charges to the customer and affect the ability of energy suppliers to maximise their revenue collection.
Smart metering technology offers a solution, with meters able to show the kWh consumption figure on an LCD screen.
For users, the benefits include being able to monitor consumption levels at different times on a regular basis, helping to identify trends. This particularly benefits SMEs, as they are able to get a more accurate idea of their energy consumption before their bills arrive, and be able to take steps to try to minimise future energy usage.
Another key benefit of smart meters is that there is no need for them to be physically visited by a meter reader. Depending on the meter’s capabilities, data can be collected remotely using Bluetooth, a pulsed output or Wi-Fi connectivity. This would be particularly beneficial when collecting domestic meter data, as homeowners are often not at home to provide access.
More sophisticated smart meters can also be connected to the internet, making it possible to include tariff control functionality. This could potentially enable consumers to switch between tariff rates according to normal or peak periods or to switch between tariffs offered by different suppliers. For businesses in particular, this provides the ability to better manage energy costs by being able to monitor the effect of existing practices at different times of the day.
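The mechanics of tariff-aware billing are straightforward: with interval data and the applicable rates, the bill is a weighted sum. The rates, peak window and readings below are hypothetical figures for illustration:

```python
# Price metered interval readings against peak and off-peak tariff rates.
# All rates, peak hours and readings here are hypothetical assumptions.
def estimate_bill(readings, peak_rate, offpeak_rate, peak_hours=range(7, 23)):
    """readings: iterable of (hour_of_day, kwh) pairs; rates in pence/kWh.
    Returns the estimated cost in pounds."""
    pence = 0.0
    for hour, kwh in readings:
        rate = peak_rate if hour in peak_hours else offpeak_rate
        pence += kwh * rate
    return pence / 100

cost = estimate_bill(
    [(2, 1.0), (12, 2.0), (20, 1.5)],  # overnight, midday, evening usage
    peak_rate=25.0, offpeak_rate=12.0,
)
```

Run against the same interval data with a competitor's rates, the same calculation supports the tariff-comparison use the article describes.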
Furthermore, by connecting smart meters to an Ethernet network, business users are able to monitor spending at different office locations, enabling them to identify areas of excess consumption and encourage best practice schemes across different sites.
According to Energywatch, the estimated cost of a wholesale installation of smart energy meters into domestic properties alone in the UK is approximately £86m. This is on top of the £800 million a year already spent on replacing, installing, maintaining and reading existing meters. Energywatch is encouraging the installation of smart meters as part of suppliers’ existing replacement programmes, growing the base of installed meters gradually. However, debate continues as to who should bear the cost of installing smart meters into homes and other buildings and facilities around the UK.
Utilities argue it would be difficult for them to recoup the costs of installing the meters due to the de-regulated nature of the UK utilities industry, where consumers can move easily between suppliers. This makes it harder for suppliers to pass on the cost of installation to homeowners who can switch to a competitor after having a smart meter installed.
In Europe, the adoption of smart meters is greater as competition is less intense and government measures and intervention have helped encourage installations.
However, it can be argued having smart meters installed into consumers’ homes may actually increase trust between consumers and their energy supplier, as they would be able to better understand the information displayed by their meter and thus have a more accurate picture of their costs.
For utilities, there is also the prospect of maximising their revenue collection through being able to more accurately bill customers based on actual, rather than estimated, energy consumption. For businesses, the case for installing smart meters is based around cost versus benefits. The benefits, in terms of reduced energy costs, appear to outweigh any short-term costs involved in installation. Smart meters will enable businesses to invest in technology that will help reduce consumption in the long-term, by providing accurate measurements of energy use, enabling businesses to act upon the instant information supplied. Commercial building owners can also benefit, as they will be able to get separate readings for different occupants, and monitor their consumption levels accordingly.
ABB offers a range of energy metering equipment and can provide advice on which meter will be the most suitable for a specific facility. The company can also provide support to ensure businesses get the most out of the meter’s capabilities to help with cutting their energy consumption levels.

The Government projects the UK will go significantly beyond its commitment under the Kyoto Protocol and reduce its greenhouse gas emissions by almost 20 per cent below 1990 levels by 2010. With such emphasis being placed on renewable energy sources the energy industry needs to look ahead in order to identify what the effects of the system changes will be, as a consequence of the changing power sources. Masoud Bazargan at Areva T&D explains the importance of suppliers within the industry working with their clients’ systems departments.

In order to work more effectively, suppliers and users need to make sure transmission equipment is designed to incorporate current and future system requirements taking into account the changes in generating methodology needed to support the rigorous climate change programme the UK is undertaking.
Transmission and distribution plant has always been subject to continuous research and development in order to maximise reliability and safety and, wherever possible, reduce capital and operating costs for the network operator. There are a large number of areas where development has provided benefits. Examples for transmission switchgear include the move from hydraulic to spring-operated mechanisms; reduced-energy mechanisms; more sophisticated interruption technology; improvements in design, modelling technology and arc physics; compact solutions; hybrid GIS solutions; end-of-life disposal; environmental considerations; improved power-to-weight ratio; shorter site assembly times; and smaller volumes (i.e. space saving).
However, one key area of development effort is being driven by more pressing external forces. Concerns about climate change caused by carbon emissions have accelerated the decline in the popularity of traditional fossil fuels and in turn driven the ascent of renewable energy sources. This shift has generally been welcomed, despite some opposition over local issues, and is set to have a wide-ranging impact on transmission and distribution networks. Fundamental differences associated with renewable energy include fluctuating outputs and the often remote geographical location of suitable power sources. Existing networks were designed for ‘traditional’ fossil fuel based generation in centralised locations, and the change to embedded generation from renewable sources in remote locations will inevitably impact on the specification for switchgear and other equipment.
In order to provide products that will fulfil the needs of network operators today and in the future, it is important to ensure that equipment is designed in line with the DNO’s development strategy. To achieve this, Areva T&D has developed close working links with system designers, using partnerships wherever possible to ensure that product design and development is aligned to the needs of the market, especially as what was an extremely stable market is about to go through one of the biggest changes since its inception. For these partnerships to succeed, they need to be a two-way process, with information of value transferred to and from both parties. The fact that continuity of specification means substation components from different suppliers are in some cases very similar makes this type of added value extremely important.
One of the most effective and favoured forms of renewable energy is wind-power. However due to public objections and land availability constraints, we are looking more and more to utilise the renewable energy available in the marine environment through construction of offshore wind farms. By its very nature, the offshore wind farm creates huge challenges for the transmission and distribution sector. The challenge facing the industry is not only to capture and convert the natural energies of the wind and the ocean but also how to transmit this power to the shore considering the difficult environment. These considerations can range from ecological impacts on the environment, e.g. effects on marine life and by the environment, such as marine growth on the installation, through to shipping lanes, oil and gas pipelines, geological and seabed consideration, access, weather condition, foundation, corrosion and cost.
One proven way of addressing the transmission issue is to connect the wind farm turbines via an inter-array network of cables which link at offshore transformer substations located within the wind farm. Electricity from all the individual wind turbines is collected and the voltage stepped up to 132kV, so that it can be transported to shore via high voltage cables with reduced power losses.
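Why stepping up the voltage cuts losses follows directly from the I²R loss formula: for the same transmitted power, quadrupling the voltage cuts the current to a quarter and the resistive loss to a sixteenth. A simplified single-conductor sketch; the power level and cable resistance are illustrative assumptions, not figures from a real wind farm:

```python
# Resistive loss in a transmission link, simplified to a single conductor.
# Power, voltage and resistance figures below are illustrative assumptions.
def line_loss_kw(power_mw, voltage_kv, resistance_ohm):
    current_a = power_mw * 1e6 / (voltage_kv * 1e3)  # I = P / V
    return current_a ** 2 * resistance_ohm / 1e3     # I^2 * R, in kW

loss_at_33kv = line_loss_kw(90, 33, 0.5)    # exporting directly at 33 kV
loss_at_132kv = line_loss_kw(90, 132, 0.5)  # same power and cable at 132 kV
# The 4x voltage step-up reduces the resistive loss by a factor of 16.
```

A real export cable is a three-phase AC (or sometimes DC) system with reactive effects this sketch ignores, but the quadratic relationship between current and loss is the reason for the offshore transformer substation.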
The concept of an offshore substation, close to the turbines, does help resolve transmission issues; however, it also introduces problems of a different kind, as the module will need to resist highly aggressive marine conditions alien to its land-based cousins. Seawater-induced corrosion can be minimised by locating the module out of the splash zone, around 10m above the high water mark, but a specialised external paint finish should be utilised to protect the structure and reduce the need for maintenance. The location of equipment types within the substation and its surface profile can also influence corrosion rates and must be carefully planned. Mechanical elements such as diesel generators must also be protected from water and salt ingress, and suitable arrangements made for the substation to be self-sustaining for, say, seven days in case of power loss. The transformer itself should also be reviewed in terms of layout, corrosion resistance and long-term maintenance, with particular reference to radiators, tank, fans and pumps. Ventilation also needs to be considered, employing special filters to prevent salt ingress that could cause contamination and corrosion. In addition, handles and all other external fittings need to be re-specified for a marine environment to prevent corrosion.
Another consideration for offshore substations is the mounting arrangements and weight distribution. While a traditional land-based substation will usually be mounted on a reinforced concrete plinth, an offshore module may be mounted on a large-diameter steel monopile. The very nature of the support structure dictates that loadings must be minimised by managing the centre of gravity of the module and ensuring even weight distribution. Consideration must also be given to both dynamic and static loading, in temporary as well as service conditions. Finally, but no less importantly, the installation, commissioning and any subsequent maintenance will have to be carried out in an alien, hostile environment far from overland access.
These changes are fundamental enough to require a paradigm shift in mind-set and an evolution of the current skill set. But we believe the whole industry will rise to the challenge and ensure that energy can be generated from renewable sources and distributed to where it is needed, consistently, cleanly, efficiently and safely ensuring that we all play our part in providing the electricity we need while combating global warming.

High availability is one of the most important issues in computing today. Understanding how to achieve the highest possible availability of systems has been a critical issue in mainframe computing for many years, and now it is just as important for IT and networking managers of distributed processing.

A certain amount of mystery surrounds the topic of power availability, but consideration of just a few important points leads to a metric which IT managers can use to increase their systems and applications availability and make a rational price/performance purchase decision.
The importance of high systems availability
Availability is a measure of how much time per year a system is up and available. Usually, companies measure application availability because this is a direct measure of their employees' productivity. With critical applications, or parts of critical applications, physically distributed throughout the enterprise, and even to customer and supplier locations, IT managers need to take the necessary steps to achieve high applications availability throughout the enterprise.
Power availability is the largest single component of systems availability and is a measure of how much time per year a computer system has acceptable power. Without power, the system, and most likely the application, will not work. Since power problems are the largest single cause of computer downtime, increasing power availability is the most effective way for IT managers to increase their overall systems availability. Power availability, like both systems and applications availability, has two components: mean time between failures (MTBF) and mean time to repair (MTTR). The two most important issues in increasing power availability are therefore increasing the MTBF and decreasing the MTTR of the power protection system.
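The standard relationship between these quantities can be sketched directly: steady-state availability is MTBF divided by (MTBF + MTTR), so availability improves either by raising MTBF or by cutting MTTR. The figures below are illustrative, not from any particular product:

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability: the fraction of time the system is up.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def downtime_hours_per_year(avail):
    # Expected unavailable hours in a year (8760 hours).
    return (1 - avail) * 8760

# Illustrative figures: 100,000 h MTBF with a 24 h repair time gives
# roughly 2 hours of expected power downtime per year.
a = availability(100_000, 24)
annual_downtime = downtime_hours_per_year(a)
```

The two levers the following sections discuss map straight onto the two arguments: redundancy raises the effective `mtbf_hours`, hot-swappability drives `mttr_hours` towards zero.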
Increasing MTBF
MTBF is the average number of hours the power protection system operates between failures. The MTBF of the system can be increased in two ways: by increasing the reliability of every component in the system, or by ensuring that the system remains available even during the failure of an individual component. There is a finite limit to how reliable individual components can become, even at increased cost. Today, typical power protection systems that rely only on high component reliability achieve an MTBF of between 50,000 hours and 200,000 hours.
By adding a level of redundancy to the system it is possible to achieve a three- to six-fold improvement in MTBF for power protection devices. Redundancy means a single component of a power protection system can fail and the overall system will remain available and protect the critical load.
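The effect of redundancy can be sketched probabilistically: a 1+1 redundant system is down only when both modules are down at the same time. The module availability figure below is illustrative, and the model assumes independent failures, which is an idealisation:

```python
def redundant_availability(module_avail, n=2):
    # The system fails only if all n redundant modules fail simultaneously
    # (assumes independent failures -- an idealisation in practice).
    return 1 - (1 - module_avail) ** n

single = 0.9995                           # illustrative module availability
dual = redundant_availability(single)     # 1+1 redundancy
# Unavailability drops from 5e-4 to 2.5e-7: the second module turns a
# module failure into a maintenance event rather than a load drop.
```

In reality common-mode failures and the repair window limit the gain, which is consistent with the three- to six-fold MTBF improvement quoted above rather than the thousand-fold figure the idealised model suggests.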
Of course, component reliability is a requirement of any system. However, Fig. 1 shows the diminishing returns of increasing component reliability. Line 1 shows the plateau that occurs when MTBF is increased by using more reliable (and therefore more costly) components. Line 2 shows how redundancy, in addition to component reliability, can raise MTBF to the next plateau.
Decreasing MTTR
One way that systems downtime can occur is when both the power protection system and the utility power fail. A shorter MTTR decreases the risk that both of these events will occur at the same time. By driving the MTTR towards zero, it is possible to essentially eliminate this failure mode.
Adding hot-swappability to a power protection system is the most effective way of decreasing MTTR. Hot-swappability means that if a single component fails, it can be removed and replaced by the user while the system is up and running. When hot-swappability is used in conjunction with a redundant system, MTTR is driven close to zero, since the device is repaired when there is a component failure but before there is a systems failure.
The Power Availability (PA) Chart
The relationship between power availability, redundancy, and hot-swappability is easily explained by using the PA Chart, which categorises power protection systems in quadrants according to how well they meet the requirements of high power availability – redundancy and hot-swappability. As more components in a system become hot-swappable, the system moves from the bottom to the top of the graph (Fig. 2), and as more components become redundant, it moves from the left to the right of the graph. IT managers can choose the solution that is right for them, depending on the need for high availability and the amount of money they want to spend.
The PA Chart corresponds to the types of power protection systems available today as shown in Fig. 3. The standalone UPS is neither hot swappable nor redundant. As shown in the table, a standalone UPS provides normal power availability because uptime is dependent on the reliability of the UPS itself.
The fault-tolerant UPS is sometimes described as providing affordable redundancy. Systems of this type have redundant components, but not all of the major components are hot-swappable. This type of system offers high power availability because the power protection system will continue to protect the load when a component fails. But because a failed component often means the entire UPS needs replacement, this type of system can have serious drawbacks, including expensive and time-consuming repairs, systems downtime and major inconvenience for IT managers. Fault-tolerant UPS systems may have some hot-swappable components, such as batteries and a subset of the power electronics, but in most cases a high number of critical components, such as the processor electronics, will not be hot-swappable. The more components that are not hot-swappable, the lower the power availability.
Like fault-tolerant UPS, modular UPS offer high power availability. Modular UPS have multiple hot-swappable components and are typically used for multiple servers and critical applications equipment. Many modular UPS also have redundant batteries. Their main advantage over fault-tolerant UPS is that all of the main components which can potentially fail can be hot-swapped, eliminating planned downtime due to a service call.
Highest levels available
The PowerWAVE range of modular UPS offers the highest level of power protection currently available in the UPS market. In a PowerWAVE modular UPS the power electronics, batteries, and processor electronics are both redundant and hot-swappable. This system provides very high power availability and the highest level of protection for IT managers’ critical loads. A PowerWAVE modular UPS may cost a little more than a similarly-rated standalone UPS, but the increased system reliability and availability are invaluable to the IT manager.
The different types of power protection systems in the PA Chart can be measured linearly with the PA Index, according to how much power availability they provide. The PA Index serves as a tool to explain the difference between power protection systems. Fig. 4 shows each of the quadrants from the PA Chart mapped into a level of the PA Index. Fig. 5 shows the relative power availability provided by each type of system. The PA Index maps directly into the PA Chart and makes the different characteristics of high availability power protection systems clear.
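The quadrant logic of the PA Chart can be sketched in a few lines of code. This is purely an illustrative model – the function name, the quadrant labels and the mapping below are assumptions based on the description above, not taken from the published PA Chart itself:

```python
# Illustrative sketch of the PA Chart quadrants described above.
# The labels and mapping are assumptions for illustration only.

def pa_quadrant(redundant: bool, hot_swappable: bool) -> str:
    """Place a power protection system in a PA Chart quadrant."""
    if redundant and hot_swappable:
        return "modular UPS (high availability, fast repair)"
    if redundant:
        return "fault-tolerant UPS (high availability, slow repair)"
    if hot_swappable:
        return "standalone UPS with swappable parts (reduced repair time)"
    return "standalone UPS (normal availability)"

for r in (False, True):
    for h in (False, True):
        print(f"redundant={r!s:5} hot-swappable={h!s:5} -> {pa_quadrant(r, h)}")
```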
In conclusion, IT managers can use the PA Chart and the PA Index to help them choose the right power protection system for their high availability applications. The standalone UPS, the modular UPS, and the PowerWAVE 9000 Series modular UPS all offer real benefits in terms of power availability versus cost. Although fault-tolerant UPS offer high power availability – and are marketed as such – they introduce serious drawbacks including a high MTTR and potentially significant inconveniences for IT managers.

Ensuring worker safety in and around industrial processes is a vital consideration for manufacturers and OEMs. Balancing the needs of safety with commercial considerations becomes ever more complex as safety standards evolve and new technologies become available. But, as Paul Davies of Rockwell Automation explains, by understanding the principles underpinning an effective safety strategy, designers can ensure the needs of both are satisfied.

Any safety programme should start with a thorough risk assessment that will help identify the areas of risk within a facility or machine, and point to the right technology to reduce that risk. Rather than aiming to remove risk altogether, a risk assessment aims to establish acceptable levels of risk. This analysis proves invaluable in helping to identify the kind of safety products that might be required in any particular application to achieve the most effective – and practical – solution. In a manufacturing environment, the assessment process can help to chart a course for an effective machine-guarding strategy, itself forming part of an overall safety strategy designed to protect the company’s investment in both machinery and personnel.
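The scoring stage of such a risk assessment can be illustrated with a minimal sketch. The 1–5 scales, the multiplicative score and the acceptability threshold below are illustrative assumptions only, not a recognised methodology; a real assessment would follow an accepted framework such as EN ISO 12100 and the machine builder's own criteria:

```python
# Minimal risk-scoring sketch. Scales and threshold are illustrative
# assumptions, not a recognised risk assessment methodology.

def risk_score(severity: int, frequency: int, probability: int) -> int:
    """Combine 1-5 ratings into a single risk number."""
    return severity * frequency * probability

ACCEPTABLE = 20  # assumed threshold for this sketch

hazards = {
    "pinch point at conveyor infeed": (4, 5, 3),
    "sharp edge on guard frame": (2, 2, 2),
}
for name, (s, f, p) in hazards.items():
    score = risk_score(s, f, p)
    verdict = "acceptable" if score <= ACCEPTABLE else "needs reduction"
    print(f"{name}: score {score} -> {verdict}")
```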
Design-out potential hazards.
The best way to reduce a potential hazard is to remove it at the design stage. A careful review of the risk assessment and risk reduction at the earliest stages of design can highlight potential trouble spots, such as pinch points or sharp edges, helping companies take the necessary steps to design out these features long before they require guarding. Removing risk areas in this way can also result in more efficient machines: with fewer potentially hazardous areas, there is less risk of unplanned stoppages.
Consider the options for machine guarding.
Where a hazard cannot be removed entirely through design, reducing risk by physically guarding the hazardous area is the next best option. There is a huge range of machine guarding systems and components available, including safety mats and safety interlock switches, that can be used to protect workers around specific areas of a machine or industrial process. Devices such as light curtains can be used to guard areas, enabling exclusion zones to be created for maximum worker protection. Systems frequently combine elements of both to achieve the most effective solution.
As part of the analysis of the most effective strategy to adopt, careful consideration must be given to how frequently a machine or process will need to be accessed. This analysis will help refine the list of possible machine guarding solutions, allowing designers to arrive at a strategy that balances the commercial needs of the operation with the need to ensure risk levels are reduced to an acceptable level. Naturally, it’s also important to ensure that the solution chosen doesn’t itself cause another hazard!
Add advanced controls.
As well as applying the appropriate machine guarding devices, engineered solutions can be implemented to further reduce potential risks. Electromechanical safety relays have formed the backbone of safety control design for many years. Today’s devices offer a wealth of advanced features that allow sophisticated safety schemas to be implemented without adding unnecessary expense or complexity. Even more advanced protection can be provided by safety-rated controllers. Using these dedicated safety control architectures, extremely sophisticated solutions can be developed employing the full range of inputs, such as light curtains, E-stop buttons and safety mats and outputs such as guard locking solenoids and alarms. Clever design, such as the manual release function found on high-end safety interlocks, can enhance safety functionality still further at very little extra cost.
Promote awareness
Encouraging safety awareness helps reduce levels of potential risk in any workplace, but particularly so in industry. Effective signage and the use of visual and audible warnings can all help reduce the risk of accidents. Careful consideration should be given to positioning, to ensure that signage and warning devices are placed where they will best serve their intended purpose. Consideration must also be given to which products would be most appropriate in each circumstance. For example, an audible alarm would need to be clearly distinguishable above the normal operational noise of a machine or process. Once again, a comprehensive range of warning beacons and audible alarms is available on the market, enabling the designer to choose the most appropriate device for each application.
Providing effective training that allows workers to understand the hazards likely to be encountered in the workplace and how to reduce the potential risk is the cornerstone of any safety strategy. The majority of workplace accidents are caused through ignorance and/or failure to follow correct safety procedures. While it is the company’s responsibility to provide such training and equipment as is necessary to reduce risk, it is the employee’s responsibility to ensure that this equipment is used and these procedures are applied in the workplace. While an important element in any safety training programme is to ensure that all employees understand that safety is everybody’s responsibility, choosing safety products which incorporate tamper-resistant features also helps to ensure the overall integrity of the safety strategy.
Follow-up assessments
After installing physical safeguards and establishing safety procedures, it is vital that follow-up assessments are made to ensure that risk has been reduced to an acceptable level. It is also vital that periodic assessments are made to ensure that these measures remain effective. But as well as confirming the effectiveness of the safety measures adopted, such assessments should be made on a commercial basis too: The twin aims of any safety strategy should be safety and productivity, and these two aims are not mutually exclusive. A careful follow-up assessment might reveal ways in which a process could be made more efficient without compromising safety levels. New products and technologies, such as the Safe-Off facility in Allen-Bradley PowerFlex drives, for example, could offer just such an opportunity by delivering both enhanced safety and improved efficiency.
Rely on experience
When embarking on any safety programme, the single most valuable asset a company can have is an experienced partner, well versed in both current legislation and the latest safety techniques and technologies. When choosing their safety partner, designers should consider carefully not just the ability to supply products, but also the expertise available to be able to understand the issues and to make the right recommendations to balance safety effectiveness and cost effectiveness.

The visible damage caused by lightning can be spectacular but the damage caused to sensitive electronic systems can have a far more profound effect on operations and profits says Andy Malinski of Omega Red Group

Most modern companies place heavy reliance on the uninterrupted functioning of electrical systems used to power everything from sophisticated IT networks and telecoms to lighting and heating systems. Yet many buildings, including those only a couple of decades old, were simply not designed with surge protection in mind even though a strong surge can completely disable the electrical system in place. Protecting electronic equipment from the consequences of surge or lightning activity is essential and not just because of the immediate disruption it can cause.
Big increases in insurance claims for surge-related damage have led some insurance companies to increase premiums for companies heavily reliant on electronic technology, and to impose exclusions of cover until the problem is addressed. Correctly following BS6651:1999 Annex C, the general advice on protection against lightning of electronic equipment within or on structures, should ensure that any site has adequate surge protection and will help to secure the relevant insurance for the site.

The problem
During lightning activity, and switching operations, transient surge voltages are generated. Surges are short duration voltage spikes appearing on a mains power or low voltage signal line such as a computer or telephone line.
The amount of energy contained within the surge is dependent on the magnitude and duration of the event, but values of up to several thousand volts, lasting for microseconds, can be generated. The two main causes of transient over-voltages are:
• Switching. This results from an electrical load being switched on or off, with typical loads including motors, transformers, welders and photocopiers. These surges happen many times every day, are of short duration and low magnitude, and do not usually cause major problems.
• Lightning and atmospheric disturbances: Surges generated by lightning activity tend to be of a much higher level and are consequently much more dangerous. They can be generated either by direct or nearby (up to 1.5km away) lightning strikes though most damage tends to be from nearby strikes as the magnitude of discharge current is more concentrated.

How does it get in?
Copper conductors used for electrical mains, data, computer and telephone wiring are prime routes for transient surges to enter buildings, and there are three main ways in which these transient voltages affect wiring systems.
• Resistive coupling – a ‘cloud to ground’ lightning strike injects a massive current into the ground. This raises the ground potential in the area of impact to a high level and for the current to dissipate it will seek the path of least resistance to earth. Cables running between buildings are usually connected to different earthing systems at each end and a cable connected to an earth of a lower value forms an ideal route for the induced current to follow.
• Inductive coupling – a lightning discharge causes a huge current to flow. This in turn sets up a massive magnetic field. Any conductor passing through this magnetic field will have a surge voltage induced on the cable, this is the same principle as used by a transformer, and it can happen either above or below ground.
• Capacitive coupling – atmospheric disturbance causes high voltages to be generated. A low voltage conductor in the area of influence of these voltages can be charged to that same voltage, this is the same effect as charging a capacitor.
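As a rough order-of-magnitude illustration of the inductive coupling mechanism, the induced voltage follows the same transformer relationship, V = M × dI/dt. Both figures below are illustrative assumptions; real values depend on strike distance, cable length and routing:

```python
# Order-of-magnitude sketch of inductive coupling: the surge voltage
# induced on a cable by the magnetic field of a nearby lightning
# discharge, V = M * dI/dt. Both figures are illustrative assumptions.

M = 1e-6       # assumed mutual inductance between strike path and cable, henries
di_dt = 10e9   # assumed rate of current change, A/s (10 kA per microsecond)

v_induced = M * di_dt
print(f"Induced surge voltage: {v_induced:.0f} V")  # 10000 V for these figures
```

Even with these modest assumptions the induced surge is far above the working voltage of any data or signal line, which is why cables either side of an SPD must be routed apart, as the article notes later.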
Dealing with a surge
The British Standard addresses lightning protection for both external structural strikes and internal surges.
Annex C of the standard looks at internal protection of electrical and electronic systems. More recently, the new European standard BS EN 62305-4, covering surge protection for electrical and electronic systems within structures, has been released to work in tandem with the British standard, but within a few years the national standard will be withdrawn in favour of the new European standard.
For both external and internal protection the first step is to undertake a risk assessment. This is a comprehensive, complex assessment as many factors need to be considered. Internal surge protection takes into account all factors affecting the electronic equipment and systems within the building.
A risk assessment looks at many factors including the number, length and types of cables entering and leaving the building, equipment types, exposure and risk levels, recommended levels of protection, and cable routing, amongst others.
Results taken from the assessment will determine if protection is required and the correct protection methods to use. This can take the form of surge voltage protection devices, the repositioning of cabling, a more effective earthing system or by other means.
Zones of protection are defined, with the corresponding levels of protection required, and co-ordinated protection devices are fitted at zone interfaces. By using a co-ordinated system the high surge current present at the outer zone will be dissipated and the attenuated surge then handled by devices fitted at subsequent zone interfaces – limiting possible damage.

Surge protection in practice
All electronic equipment has a transient safety level, a maximum surge voltage value that can be applied to the equipment without causing damage.
The protection device must reduce the surge voltage to below this value with any excess voltage shunted to earth in the quickest time possible. The let-through voltage of the protection device needs to be as near as possible to the nominal voltage of the line being protected.
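The selection rule just described can be expressed as a simple check: the SPD's let-through voltage must sit below the equipment's transient safety level, and as close as practical to the line's nominal voltage. The voltage figures below are illustrative assumptions:

```python
# Sketch of the SPD selection rule described above.
# All voltage figures are illustrative assumptions.

def spd_suitable(let_through_v: float, safety_level_v: float) -> bool:
    """An SPD is suitable if its let-through voltage is below the
    equipment's maximum tolerable surge voltage (transient safety level)."""
    return let_through_v < safety_level_v

nominal_v = 230.0        # nominal line voltage
safety_level_v = 1500.0  # assumed transient safety level of the equipment
for let_through in (600.0, 2000.0):
    ok = spd_suitable(let_through, safety_level_v)
    print(f"let-through {let_through:.0f} V: {'suitable' if ok else 'unsuitable'}")
```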

Mains power protection
When applying surge protection to a site the first system to protect is the mains power as the large diameter of the cable will allow surges to pass into the system with minimum attenuation. Of greatest concern, the mains is common to all other systems and a surge entering via this route will quickly spread into all other associated systems.
Protection at the main Low Voltage (LV) incoming supply is necessary to control large transients before they enter the distribution system.
Additionally, protection should also be installed locally to important equipment or sub-distribution boards feeding outside equipment.
This is to guard against both internally and externally generated transients, which may be injected back into the distribution system.
Low voltage telephone signal protection devices fit in series with the cables being protected. On older termination systems the Surge Protection Device (SPD) is wired directly into the circuit.
Plug-in protection devices are available for LSA-Plus (a registered trade mark of Krone) termination strips; these come in either single or ten-way configurations.

Building Management System
Typically, Building Management Systems (BMS) consist of a network of separate stand-alone slave controller panels.
Across the site, an RS485 backbone cable interconnects all the panels, with a drop cable making the network connection to each panel.
On most systems the drop cable is a single twin twisted pair utilising RS485 protocol. It is recommended to fit a surge protection device in series with each drop cable, as near to the interface card as possible.
The input/output (I/O) from each slave controller is usually internal to the building being monitored and controlled. Any I/O external to the building requires an SPD to be fitted.

Fire and Intruder Alarm Systems
These systems tend to be wired using the same type of backbone network as the BMS, the main difference being that they have more I/O cabling connected to outside sensors and alarms. SPDs must be fitted to any cable connected to an outside device.
The positioning and routing of the cables either side of the SPD is of the utmost importance. Incoming and outgoing cables either touching, or closely running in parallel with one another, can cause surge voltages to be induced ‘across’ cables.

Closed Circuit Television (CCTV)
The two most popular types of camera are:
• Pan, Tilt and Zoom (PTZ) cameras
• Fixed cameras
Pan, Tilt and Zoom (PTZ) cameras are controlled from a central location, with the vertical and horizontal positioning of the camera set via signals sent over the RS485 data loop. As the central processor consists of expensive monitoring and control equipment, SPDs would usually be fitted at this end of the system. It is also possible to protect the camera itself by fitting surge protection devices within the camera.
The level of protection required depends on the vulnerability of the camera. Is it located within the zone of protection of the building? Does the building have a lightning protection system fitted? Is the mounting pole correctly earthed?
Fixed cameras would usually be monitored from the same central point, the main difference being that motion control of the camera is not possible. Protection requirements, and the methods of achieving them, are the same as for PTZ cameras.
The initial surge assessment will identify which ends of each system require protection.
As a surge can travel both ways in a cable it is important to protect both ends of the system if necessary.

Professional Contractors
Of course, none of the above holds true unless a competent contractor is used to survey, design, specify, install and maintain the surge protection system.
If a surge protection system is to work effectively to prevent equipment failures then a number of other factors need to be taken into account in addition to the technical expertise that is required.
Look for evidence of a proven track record, particularly for major or technically challenging projects.
There are a lot of suppliers in the market but the best will be able to provide customer references attesting to the professionalism of their work and thoroughness of approach.
Make sure the organisation has the resources needed to do your job – can they handle large scale, multi-site operations across the country using their own staff, or will the job be largely subcontracted to organisations that may not have the same quality standards or professionally trained staff?
By combining technical and service factors, the maximum level of protection from induced surges and over-voltages is delivered to the electronic systems within the building, protecting productivity and, ultimately, profits.

While attention has been paid to the Climate Change Levy and the tax allowances for fitting energy saving equipment such as variable speed drives, Richard Walley of Schneider Electric Building Systems and Solutions, argues a strong case for looking at power factor correction first when looking to cut electricity consumption

Energy consumption was brought into focus for industry when the Climate Change Levy (CCL) was introduced. Essentially, the CCL is an additional tax on energy usage by industry and commerce, but the UK Government tried to soften the blow by introducing Enhanced Capital Allowances for capital investment in certain energy saving technologies. What this means is that the full value of the installation can be offset against income tax in the first year. Although this does provide a small amount of relief for such investments, the allowable equipment only extends to the likes of efficient boiler systems and to variable speed drives used to control electric motors. In practicality, the CCL has not impacted as greatly as first imagined and neither has the ECA had anything like the take up expected.
Why power factor correction equipment was not considered in the tax allowances introduced along with the Climate Change Levy is a mystery, since its primary function is to reduce energy losses. By adopting power factor correction measures it is possible to substantially reduce the current taken from the electricity supply. There are major kW losses on the network, governed by the I²R law, whereby the power lost is the square of the current multiplied by the resistance of the cables, transformers and overhead lines that form the national electricity distribution system. These distribution losses vary between Network Distribution Operators (NDOs) and are typically quoted as high as 11%, and up to 19% in one instance. These figures do not include the additional losses occurring on the National Grid system.
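The I²R effect can be made concrete with a short calculation. On this idealised model, drawing the same useful power at a higher power factor reduces the current in proportion, and the losses with the square of the current; a real site's saving will differ, because cable resistances and load profiles vary:

```python
# The I^2*R effect: for the same useful kW, current scales as 1/pf,
# so resistive losses scale as (1/pf)^2. Idealised model only.

def relative_loss_reduction(pf_before: float, pf_after: float) -> float:
    """Fractional reduction in I^2*R losses when power factor improves."""
    return 1 - (pf_before / pf_after) ** 2

reduction = relative_loss_reduction(0.68, 0.95)
print(f"Loss reduction on this simple model: {reduction:.0%}")
```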
The national system electrical energy losses represent an enormous CO2 component emitted from the generation of the wasted power. Although these losses cannot be completely eliminated, there is a very strong argument that their reduction should be encouraged.
The effect of power factor correction on system losses can be dramatic. In one case, a 6MW supply with a power factor of 0.68 improved to 0.95 once correction equipment was installed. This improvement of power factor reduced the system losses, for which this user is responsible, by 46%.
Apart from the CCL, most of the NDOs apply a penalty for poor power factor – either in the form of a reactive energy charge and/or a supply capacity charge based upon kVA. These charges are part of the ‘Use of System’ charges and therefore depend not on the energy supplier but on the host network operator.
There might be no special tax relief provided for installing power factor correction equipment, but it remains one of the best ways to reduce both electricity costs and the resultant pollution caused by the generation of subsequently wasted power. To compensate for the increases in energy charges both from the CCL and from general utility price increases, as well as the reactive energy and supply capacity penalties already imposed, users should first examine their power factor. This is the area where real energy cost savings can be made without switching anything off or disturbing production. It is also one of the measures that will benefit the environment.
The energy supplied to industrial consumers is divided into two components: kilowatts (kW), the energy used to perform work; and kVAr, the reactive energy used to energise magnetic fields. A combination of both types of energy is taken from the supply network and both contribute to system losses. This combined power taken from the system is referred to as kVA (kilovolt-amperes). Power factor is defined as the ratio of useful power (in kW) to the total power taken from the system (in kVA). Inductive-reactive energy is negated by the installation of appropriately sized power factor correction equipment.
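The kW/kVAr/kVA relationship, and the standard capacitor-sizing formula Qc = P(tan φ₁ − tan φ₂) that follows from it, can be shown with a short worked example. The load figures below are illustrative assumptions:

```python
# Worked example of the kW/kVAr/kVA relationship and of sizing a
# capacitor bank for power factor correction. Load figures are
# illustrative assumptions.

import math

p_kw = 500.0      # useful power drawn by the load
pf_before = 0.70  # measured power factor
pf_target = 0.95  # target power factor after correction

kva_before = p_kw / pf_before  # total power drawn before correction

# Required capacitor rating: Qc = P * (tan(phi1) - tan(phi2))
phi1 = math.acos(pf_before)
phi2 = math.acos(pf_target)
qc_kvar = p_kw * (math.tan(phi1) - math.tan(phi2))

print(f"Before correction: {kva_before:.0f} kVA drawn for {p_kw:.0f} kW of work")
print(f"Capacitor bank needed: {qc_kvar:.0f} kVAr")
```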

Power factor is one of the most misunderstood areas of electrical engineering, yet it is really very simple. Plant and equipment most likely to contribute to poor power factor are those requiring the creation of a magnetic field to operate, such as electric motors, induction heaters and fluorescent lighting. All these types of devices draw current that is said to lag behind the voltage, thus producing a lagging power factor.
Capacitors, used in most power factor correction equipment, draw current that is said to lead the voltage – hence producing a leading power factor. If capacitors are connected to a circuit that operates at a nominally lagging power factor, the extent that the circuit lags is reduced proportionately. Circuits having no resultant leading or lagging component are described as operating at unity power factor and therefore the total energy used is equal to the useful energy.
So, let us consider the effect of reactive energy on the system. Reactive energy substantially increases the energy losses on the local and national supply networks, including the users’ own installation. This increased loss also applies to the users’ own transformers if they are high voltage consumers. Reactive energy also has the undesirable effect of reducing the capacity of the network and transformers.
From the environmental point of view – and remember, that is what has driven the Climate Change Levy – the additional losses and the provision of the reactive energy itself, require an unnecessary increase in output from the power stations. This results in higher carbon dioxide (CO2) emissions.

Increased costs
The inefficient use of energy ultimately means increased costs for everyone. Many consumers already have power factor correction equipment installed, but some of this inevitably does not function correctly. Now that reactive charges apply, it is worthwhile getting existing equipment checked, maintained and tested to ensure it is adequately sized to meet the penalty levels now being imposed.
The benefits of installing power factor correction equipment, irrespective of the lack of Enhanced Capital Allowances (ECAs), are very clear. Electricity costs are reduced, sometimes by thousands of pounds each year. Reduced power system losses means a reduction in the emission of greenhouse gases and also the depletion of fossil fuels in the case of coal-fired stations. The reduced electrical burden on cables and electrical components leads to increased service life. Finally, by using power factor correction equipment, additional capacity is created in the users’ systems for other loads to be connected.
In short, despite the seeming short sightedness of the Climate Change Levy and the limitations of the provision of ECAs, the installation of power factor correction equipment can bring users bigger cash savings in the short, medium and long term. The environment will benefit too.

Think of a primary substation and you’ll probably envisage a vast outdoor compound with massive transformers connected to overhead powerlines and bank after bank of circuit breakers and disconnectors. This is not always the case says Stephen Trotter, ABB’s director of power systems projects for the UK

Traditional substations have provided excellent service over many years, and many are still being constructed today. However, when it comes to planning substations in urban areas there is an ever increasing demand from utility customers to minimise the space required, not just because of the cost and availability of land, but also to reduce their visual impact on the local environment. New high voltage technology offers the ideal solution in the form of gas insulated switchgear (GIS) which enables substations to be ‘shrunk’ into about 20 per cent of the space required by a traditional design, and housed indoors or even buried underground.

GIS advantages
Until the 1970s, air insulated switchgear (AIS) was the type most commonly in use for substations. AIS requires large distances between earth and phase conductors and therefore a good deal of space. This means that for higher voltages – typically above 36kV - this type of installation is only feasible outdoors.
The situation changed when SF6 (sulphur hexafluoride) became available as an insulating medium in switchgear enclosures in order to reduce phase to earth distances. The advantages of GIS compared to AIS are as follows:
• Lower space requirements, especially in congested city areas, saving on land costs and civil works
• Low visibility buildings can be designed to blend in with local surroundings
• Less sensitivity to pollution, as well as salt, sand or even large amounts of snow
• Increased availability and reduced maintenance costs
• Higher personnel safety due to enclosed high voltage equipment and insignificant electromagnetic (EM) fields.
A direct comparison of the component investment for identical switchgear configurations will suggest that the GIS variant is more costly than the AIS solution. However, this does not necessarily show the true story. The capability to install a GIS substation within a significantly smaller site – typically up to 80 per cent smaller - enables it to be located close to the load centres, providing a far more efficient network structure at both the HV (high voltage) and MV (medium voltage) levels. As a result, both the investment and operating costs are reduced.
Sites large enough for new AIS substations are seldom available, and when they are their cost is usually extremely high. But it is not just the smaller size of the site that can make GIS the lower-cost option: GIS is also the more economic alternative when expanding or replacing existing substations. An inner city site that has been used previously for an AIS installation could be sold or rented out and the income used to finance the new substation. The compact nature of GIS enables an HV transformer substation to be fully integrated in an existing building, which may only have to be increased in height or have a basement added.

Port Ham shrinks from view
Central Networks’ £12 million replacement Port Ham switching station, which has recently been constructed by an ABB and Balfour Beatty consortium on the banks of the River Severn, near Gloucester, provides an ideal example of the advantages of the GIS approach.
Port Ham is a grid supply point. It takes electricity at 132kV from the National Grid substation, a few miles away at Walham, and feeds it into the Central Networks distribution network. Through a network of primary and secondary substations, this network feeds over 240,000 customers in Gloucestershire, Herefordshire and much of south and east Worcestershire.
The original outdoor station, built in the early 1950s, had experienced above average load growth, to a peak load of 672MVA. The AIS equipment had reached the end of its useful life, so in 2002 Central Networks decided to completely rebuild the facility to ensure continued reliability of supply, as well as providing scope for further load growth.
Initially, the project was tendered in the expectation that the AIS would be replaced on a like-for-like basis. However, in consultation with the ABB and Balfour Beatty consortium, Central Networks decided building a new indoor GIS switching station would offer a number of important advantages, at around the same overall cost. A key benefit was that ABB’s state of the art compact ELK-04 (GIS) switchgear solution could be condensed into just one-fifth of the space used by the existing station. Port Ham is in an important nature conservation area. So the smaller switchgear allowed Central Networks to meet planning concerns by housing the station in a low-profile building designed to blend in with the local environment.
In addition to saving space, GIS also offered two further advantages. Firstly, circuit downtime could be reduced, as the new GIS circuits were constructed with the existing units still in service. Downtime was limited to the rerouting of the network connections. This was a crucial factor, because of the critical position of Port Ham in the supply network. Secondly, the GIS was constructed outside the existing live compound, considerably reducing health and safety risks to personnel working on site.
One of the major project challenges was the soft ground – on the flood plain of the River Severn – which required major foundation work before construction could begin. In just over 10 days some 120 cast concrete piles were driven down 15 metres to the bedrock. The building itself has been raised on stilts to ensure that the switchgear is at least one metre above the predicted once-in-100-years flood level.
The new indoor switching station comprises 20 bays of GIS switchgear: 12 feeder circuits; four National Grid incomers; two bus couplers; and two bus sections. The size of the investment and the strategic importance of Port Ham made it a flagship project for Central Networks.

NEDL’s Norton substation
A similar approach was adopted when NEDL needed to replace its 132kV substation at Norton, near Stockton on Tees, that interconnects the National Grid and NEDL’s distribution network.
The new indoor GIS substation, completed in 2005, occupies just one sixth of the space of the old AIS substation. It is rated at 540MVA, and features 20 bays of switchgear (four of which have been transferred to National Grid) with four incoming circuits fed by Supergrid transformers and 14 outgoing circuits, two of which feed local grid transformers.

Going underground
The GIS switchgear concept has been taken to its logical conclusion in ABB’s Barbana 132kV/20kV transformer substation in the centre of Orense, Spain. The 132kV switchyard, comprising two cable feeder bays and one transformer bay, has been constructed entirely underground and concealed beneath a park. This design requires forced cooling, which inevitably entails unwanted fan noise. But damping features or low-noise fans can be expensive. Instead a waterfall has been created. This acts as a heat exchanger to dissipate the heat created by the transformer while the sound of the falling water also drowns out the noise from the fans.

Google may have made headlines when it stated that energy costs outweigh server costs in its data centres but, according to Rob Potts at APC, the sobering thought is that only a third of datacentre energy is actually used for computing – up to 70% may be taken up by power, cooling and inefficiency losses

It is estimated that worldwide, datacentres consume some 40 billion kWh of electricity annually•. Because of the need to provide high levels of redundancy in order to maximise uptime – the goal of most facility operators – a degree of electrical inefficiency in the sector seems to be an acceptable fact of life. However, by increasing electrical efficiency there is also an opportunity to reduce energy use and therefore operating expenses.

How efficient is your physical layer?
For any device or system, efficiency is simply defined as the fraction of its input (ie. the fuel that makes it ‘go’) converted into the desired useful result, in this case computing. If all datacentres were 100% efficient, then all power supplied to the data centre would be utilised by IT equipment. However, energy is consumed by devices other than the IT load because of the practical requirements of keeping it properly housed, powered, cooled and protected. The devices that comprise network-critical physical infrastructure (NCPI) include those in series with the IT load (such as UPS and transformers) and those in parallel with the load (such as lighting and fans).
In simple terms, the more energy expended on computing and the less on non-IT devices, the more efficient the facility.
Can data centre efficiency be improved?
Virtually all of the electricity feeding a datacentre will end up as heat emission. From a facilities point of view, efficiency can be improved in a number of ways including:
• Improve the design of NCPI devices so that they consume less power
• Rightsize NCPI components to the IT load
• Develop new technologies which reduce the power consumed by non-IT devices
On the face of it, option two provides the most immediate solution to meet current data centre challenges. At the same time, better power efficiency of servers is being achieved through the introduction of multi-core processor architectures, and improved utilisation of the IT layer is being brought about through virtualisation.

Real world options
Before setting out to realise available power savings, some common misconceptions need to be corrected:
• Firstly, the efficiency of a facility is not a constant, for example air conditioning units and UPS are far less efficient at low loads (and, conversely, far more efficient at higher loads)
• Secondly, the typical IT load tends to be significantly less than the design capacity of the NCPI components (due, in part, to conservative ‘nameplate’ rating by IT manufacturers)
• Thirdly, the heat output of the power and cooling NCPI components themselves creates a significant energy burden for the whole system and should be included when analysing overall facility efficiency.
An additional factor affecting the efficiency of facilities is that the IT load itself is not constant but dynamic, both operationally and through inventory changes. For instance, as computing throughput increases, electrical consumption is also increased. Also, over the lifetime of facilities, IT inventory is in a constant state of flux as new generations of equipment replace old. Until recently, every increase in server performance has come complete with an increase in electrical demand.

Efficiency is dynamic
Finding an improved model for data centre efficiency depends on how accurately individual components are modelled. A single efficiency value is inadequate for real data centres, because the efficiency of components such as the UPS is a function of the IT load: when the UPS operates with a light load, efficiency drops off substantially. The losses that occur along this efficiency curve fall into three categories: no-load loss, proportional loss, and square-law loss.
No-load losses can represent more than 40% of all losses in a UPS and are by far the largest opportunity for improving UPS efficiency. These losses are independent of load and result from the need to power components like transformers, capacitors, and communication cards.
Proportional losses increase as the load increases, because a larger amount of power must be ‘processed’ by the various components in the power path. Square-law losses arise because, as the load on the UPS increases, so does the electrical current running through its components; the resulting losses grow with the square of the current and are sometimes referred to as ‘I-squared-R’ losses. Square-law losses become significant (1% to 4%) at higher UPS loads.
The efficiency of a device can be effectively modelled using these three parameters, and a graphical output of efficiency can then be created for any component, as a function of load – understanding that typical datacentres operate well below their design capacity.
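The three-parameter loss model described above can be sketched in code. The loss coefficients below are illustrative assumptions, not figures for any particular UPS:

```python
def ups_efficiency(load_fraction,
                   no_load_loss=0.04,       # fixed loss, fraction of rated power
                   proportional_loss=0.05,  # loss proportional to load
                   square_law_loss=0.02):   # I-squared-R loss at full load
    """Efficiency of a UPS as a function of load (0..1 of design capacity).

    Total loss = no-load + proportional * load + square-law * load^2,
    all expressed as fractions of the UPS's rated power.
    """
    if load_fraction <= 0:
        return 0.0
    losses = (no_load_loss
              + proportional_loss * load_fraction
              + square_law_loss * load_fraction ** 2)
    return load_fraction / (load_fraction + losses)

# Efficiency falls off sharply at light load:
for load in (0.1, 0.3, 0.5, 1.0):
    print(f"{load:.0%} load -> {ups_efficiency(load):.1%} efficient")
```

Plotting this function against load fraction reproduces the characteristic efficiency curve: with these assumed coefficients, efficiency is around 90% at full load but falls below 70% at 10% load, with the fixed no-load loss dominating at the light-load end.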

Effects of under-loading
If the efficiency of NCPI components such as UPS and cooling equipment decreases significantly at low loads, any analysis of data centre efficiency must properly represent load as a fraction of design capacity. It is a fact that in the average data centre, power and cooling equipment is routinely operated below rated capacity. There are four reasons for this:
• The data centre load is simply less than the system design capacity – in fact, research shows the average facility operates at 65% below its design value.
• Components have been purposely oversized to provide a safety margin – in order to provide high availability, ‘derating’ components by 10%–20% is common design practice.
• Components operate with other similar components in an N+1 or 2N configuration to improve reliability or facilitate concurrent maintenance of hot components. However, such configurations have an impact on physical layer components, for example in a 2N system the loading on any single component is at best half of its design capacity.
• Components are oversized to handle load diversity, for example PDUs are routinely oversized between 30% and 100% in order to utilise capacity and overcome issues caused by imbalance between PDU loads.
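These factors compound. As a rough sketch of the arithmetic (the redundancy and derating figures below are illustrative assumptions, not from the article):

```python
def per_component_load(facility_load_fraction,
                       redundant_paths=2,  # 2N configuration: two full paths
                       derating=0.85):     # 15% design safety margin assumed
    """Best-case fraction of nameplate rating seen by each power-path
    component, given redundancy and derating (illustrative assumptions).

    Each redundant path carries an equal share of the load, and derating
    means the nameplate rating exceeds the design load, so the fraction
    of rating actually used shrinks on both counts.
    """
    return facility_load_fraction * derating / redundant_paths

# A facility running at 35% of design capacity, in a 2N configuration
# with 15% derating: each component carries under 15% of its rating.
load = per_component_load(0.35)
```

With each UPS sitting this far down its efficiency curve, the no-load losses described earlier dominate, which is why under-loading matters so much in practice.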
Effects of power and cooling equipment
Heat generated by power and cooling equipment in the data centre is no different to heat generated by the IT load, and must also be removed by the cooling system. This creates additional work for the cooling system, causing it to be over sized, which in turn creates additional efficiency losses.

An improved model for datacentre efficiency
Armed with this knowledge it is possible to create an improved model and therefore make improved estimates of data centre efficiency. Using typical values for equipment losses, derating, load diversity, oversizing and redundancy, an efficiency curve can be developed.
Efficiency is dramatically decreased at the lower loads where many data centres operate. For example, if a facility runs at only 10% of its design capacity, only 10% of the power delivered to the data centre reaches the IT load; a staggering 90% is lost through inefficiencies in the NCPI layer.
Another way to look at this analysis is to consider its financial implications: at 30% capacity utilisation, over 70% of the total electricity cost is caused by NCPI inefficiencies in power and cooling equipment. The primary contributors to data centre electrical costs are the no-load losses of infrastructure components, which typically exceed IT load power consumption. Many of these losses are avoidable, and analysis using the model can help identify and prioritise opportunities for increasing efficiency. Based on this, and the need to gain a quick return, the best solution is to right-size facilities using an adaptable and modular architecture.
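The financial point can be illustrated with a simple calculation. The load figure and electricity tariff below are assumptions for illustration only:

```python
def annual_loss_cost(it_load_kw, facility_efficiency,
                     hours_per_year=8760, price_per_kwh=0.10):
    """Annual cost of the energy lost to NCPI inefficiency.

    facility_efficiency is the fraction of input power that reaches
    the IT load; price_per_kwh is an assumed electricity tariff.
    """
    input_kw = it_load_kw / facility_efficiency  # total power drawn
    loss_kw = input_kw - it_load_kw              # lost in power and cooling
    return loss_kw * hours_per_year * price_per_kwh

# A 100kW IT load in a facility that is only 30% efficient:
cost = annual_loss_cost(100, 0.30)  # over £200,000 a year at an assumed 10p/kWh
```

Even modest efficiency gains scale directly into this figure, which is the argument for right-sizing: a modular architecture keeps the installed capacity close to the actual IT load, so the equipment operates higher up its efficiency curve.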

• Figures quoted from White Paper #113 “Electrical Efficiency Modelling for Data Centres”.

The electrical industry was out in force to celebrate the Electrical Industry Awards and the NICEIC’s 50th Anniversary Gala Dinner at London’s Grosvenor House Hotel on 20 September. The evening brought together over 800 electrical engineers, manufacturers, wholesalers, contractors and industry association members, to celebrate the very best of the industry.
Radio 2 and BBC Newsnight presenter, Jeremy Vine, hosted a memorable evening, providing lively commentary as the winners stepped up to claim their well-deserved awards.
Feedback from those who attended was very positive. Jack McDavid from the HVCA wrote: “I enjoyed the experience very much indeed, and agreed wholeheartedly with the many fellow guests who pronounced the evening a success.”
The standard of entries was extremely high, but after much deliberation, the judges announced the following winners:

Commercial Electrical Contractor of the Year (sponsored by Schneider Electric)
Winner: Unidata

Best Electrical Health & Safety Initiative (sponsored by NICEIC Insurance Service)
Winner: The Dodd Group
Best Wholesaling Initiative (sponsored by Basec)
Winner: BDC

Customer Service in Wholesaling (sponsored
by ABB)
Winner: WF Electrical

Test & Measurement Product of the Year (sponsored by Electrical Times)
Winner: Kew Technik

Best Lighting Initiative (sponsored by WF Electrical)
Winner: Thorn

Best Product Innovation (sponsored by Electrium)
Winner: Thorn

Wholesaler of the Year (sponsored by Unitrunk)
Winner: Edmundson Electrical

Energy Efficiency Product of the Year (sponsored by ABB)
Winner: Kirklees Metropolitan Council

Outstanding Communications in the Electrical Industry (sponsored by Yell.com)
Winner: Voltimum UK and Ireland

Best Registered Training Provider (sponsored by EDF Energy)
Winner: Electrical Test Services

Automation Project of the Year (sponsored by Electrical Review)
Winner: Schneider Electric

Power Product of the Year (sponsored by Amps)
Winner: Terasaki

Best Electrical Product of the Last 50 Years (sponsored by Professional Electrician Magazine)
This category asked voters to choose from one of five products that attracted the most requests for information from Professional Electrician Magazine. The winner was Super Rod for its cable rods.

Best Environmental Initiative of the Year (sponsored by Edmundson Electrical)
Winner: ABB

Best Practice in Energy Efficiency (sponsored by Kew Technik)
Winner: The Lowe Group

Domestic Electrical Contractor of the Year (sponsored by Rexel Senate)
Winner: Owen Bowness & Son

Electrical Skills for the Future (sponsored by Mr Electric)
Winner: Clarkson Evans

Best Customer Service Provider for Domestic Installations (sponsored by Domestic & General)
Winner: TBS Adaptations

Outstanding Contribution to Electrical Excellence (sponsored by Megger)
The winner of this special award was Peter Lawson-Smith for his dedication to promoting electrical safety.

Spotlight on Automation Project of the Year

Schneider Electric provided the winning entry in this category, with a £3m product packaging line for a leading pharmaceuticals manufacturer, relying on the expertise of the system designer PES Technology and Schneider Electric products.
The packaging line in question handles diagnostic treatments for individual patients, treatments with a useful life measured in hours. The flasks containing the treatments are despatched all over the world. Product must be at a hospital within 36 hours and used within a further 72 hours. If a production delay occurs, the customer must scrap each order in-process – a truly mission-critical system. Drives, servos, motion controls, sensors, automation controllers, HMIs and software from Telemecanique, a brand of Schneider Electric, were all integrated within the system.
A full review of the Electrical Industry Awards can be found in the Book of the Night Souvenir Issue accompanying this magazine.