Features

Electrical contractors cannot and must not take the recycling of fluorescent bulbs lightly, says Terry Adby

How many electrical contractors does it take to change a light bulb? It doesn't really matter, because, with the double focus nowadays on health and safety and sustainability, the real question should be: "Do they know what's being done with the old one?"

As efficient electrical waste disposal gets both more complex and more necessary - for financial, operational and legal reasons - those involved in electrical engineering and building services would be unwise not to pay heed to the answer, a fact that one recent prosecution has shed some revealing light on.

The sustainability lobby's continued success in promoting the balancing of successful business with effective environmental protection (not to mention the wellbeing of the immediate workforce) has ever greater ramifications for the industries responsible for creating and managing the built environment.

Sustainability, above all, is an area where electrical contracting, now worth some £8bn per year, has a key role to play, with the opportunity to propose ‘low or no CO2' options. But to play its role successfully the industry must also pay close attention to the waste the ‘alternative' option creates, and to how it is disposed of. Developments such as the WEEE (Waste Electrical and Electronic Equipment) regulations impose legal obligations on contractors over the management of ‘waste streams' on site and their subsequent disposal. The directive aims to "improve the environmental performance of businesses that manufacture, supply, use, recycle and recover electrical and electronic equipment" and has put the practical management of sustainability centre stage. For the electrical contractor its implications are unavoidable.

Energy efficient light bulbs (‘end-of-life gas discharge lamps') are covered by the WEEE regulations and present a particular challenge, because they contain mercury and are classified as ‘hazardous waste'. When these lamps are recycled, the potential release of mercury into the air at the lamp crushing stage is a threat to both the wider environment and those in the vicinity if the right protective equipment is not in place. Each time a fluorescent bulb is crushed or broken, mercury vapour is released. If that vapour is not effectively captured, it will find its way into the atmosphere, endangering staff and others in the area.

The challenges of lamp recycling made headlines earlier this year when a Glasgow-registered company, Electrical Waste Recycling Group, and one of its directors, were fined a total of £145,000 plus costs after recycling processes being used for gas discharge lamps exposed workers to toxic fumes for a period of up to ten months.

If an electrical contractor is going to propose the likes of optimal lighting configurations or energy efficient lighting units, and if they are tempted to employ energy efficiency as a sales tool, they should be confident that the principles and practice that underpin sustainability and safety are being applied all the way through the supply chain, including what happens to the waste.

Bulb crushing on an industrial scale is a serious undertaking that comes with huge levels of environmental responsibility. Nevertheless, electrical contractors may face the prospect, perhaps even at the tendering stage, of client pressure to commit to deliver such a service. Contractors need to be completely confident of the ability of the suppliers they choose to meet their commitments. They also need to know what is being done in their name further down the supply chain.

In the case of Electrical Waste Recycling Group, it was the failure to ensure the safety of the lamp crushing phase of the recycling process at its Huddersfield plant that let down the company, its workforce and the local environment. EWRG, which runs easyWEEE, WERCS (Waste Electrical Recycling Compliance Scheme) and other recycling schemes, was contracted to handle commercial waste, including light bulbs, for several local authorities. While none of these clients was in any way implicated along with their supplier, the judgment in the case suggests others in the chain - such as electrical contractors - could be more vulnerable. It has already been indicated in court that putting a service out to a third party does not absolve an organisation of key responsibilities and, in respect of health and safety, the HSE - which brought the successful prosecution in the EWRG case - has said that "The client must ensure whoever carries out the work is able to do so in a way that controls risks." As this case suggests, sustainability and health and safety responsibilities often go hand in hand.

Some of the details of the EWRG judgment highlight the type of issues any business, including electrical contracting businesses, should take into account to ensure they and their suppliers comply with statutory requirements when dealing with waste. The promises of suppliers, the judge made clear, are no defence in the eyes of the law. They must be effectively monitored.

One of the judge's major criticisms was the lack of an effective risk assessment process at the EWRG recycling centre, not least because the issues highlighted - such as excessively high mercury levels for no apparent reason - could have been rectified much earlier had risk assessment been in place. Risk assessment is, in any case, a legal requirement for employers in discharging their obligation to keep workers and the public safe so far as is "reasonably practicable".

The HSE recommends five steps for effective risk assessment: identification of hazards; establishing who might be harmed and how; evaluation of risks and deciding on precautions; recording and implementing findings; and regular review. Suppliers in a business as hazardous and regulated as lamp recycling should certainly be implementing all five. Those employing them to do the work should be equally concerned that they are.

The judge in the EWRG case also stressed the need for competent staff to be involved in process monitoring: people who understand the regulations and have the knowledge and experience to spot a breach or an issue. Most successful organisations, he said, have employees who understand why risk assessment and vigilance are important for the company, staff and other groups with an interest, such as the local community.

However, not all responsibility can be delegated to one individual or team, he added. Senior managers need to put themselves in a position to interpret and understand the implications of the results of any monitoring that is undertaken. And if they do not understand the implications of those results, they cannot simply ignore them. In the case of a prosecution it will be the senior managers and directors who are held responsible. Above all, it is clear that when things go awry, buck-passing between organisations or individuals is not an option.

EWRG paid a heavy price because it neither read nor heeded the warning signs. Those looking for lessons from its prosecution certainly should, however. The safe recycling of energy efficient lamps may represent a beacon for a better future but, viewed from either an environmental or a health and safety perspective, the message for electrical contractors is clear: the responsibility for a safe and sustainable approach to lighting may not end with the life of the low-energy bulb.

When a power service engineer is called out to deal with a loss of supply on a customer’s HV distribution network, the chances are it will be traced to a faulty underground cable that has caused a device – such as a circuit breaker – to operate and cut-off the power. Danny O’Toole, ABB Power Service, explains

A cable in good condition and installed correctly can last a lifetime - well over 30 years. However, cables can easily be damaged by incorrect installation or poorly executed jointing, while subsequent third-party damage from civil works such as trenching or kerb edging is another main cause of failure.

Service engineers are usually equipped with a suite of test equipment that enables them to perform an immediate on site check on the key network elements of switchgear, transformers and cables. If the fault is identified in a cable, as it often is, and the network is interconnected, they are then able to sectionalise the problem circuit to restore power to as much of the network as possible, bringing in additional generation if necessary. The next task is to locate the position of the underground cable fault as accurately as possible, since this makes it easier to find and repair so that the full network can be restored quickly.

ABB has developed a fault location regime that has proved very accurate in locating underground cable faults in both modern XLPE type cables and older PILCSWA (paper insulated lead covered steel wire armoured) designs. Fault location is usually carried out on cable networks up to 11 kV; however, the techniques can be applied to cables up to 33 kV.
The main technique employed is the SIM (secondary impulse method) that combines the use of classic high voltage surge generator thumping with low voltage TDR (time domain reflectometry). To see how this works, it is useful to consider the merits of the individual techniques.
 
Cable thumping
The high voltage surge generator, or thumper, is a portable device that is used to inject a high voltage DC pulse (typically up to 30 kV) at the surface termination of the cable to be tested. If the voltage is high enough to cause the underground fault to break down it creates an arc, resulting in a characteristic thumping sound at the exact location of the fault.

Historically, fault location was carried out by various measuring techniques and by setting the surge generator to thump repeatedly, then walking the cable route until the thump could be heard - at which point ‘x' would mark the spot to start digging. Naturally, the higher the DC voltage applied, the louder the resulting thump and the easier it becomes to find the fault. But if the cable is long it could take days to locate a fault by this method, during which time the cable is exposed to potentially damaging high voltage thumping. So while the existing fault might be located, other areas of the cable could be weakened in the process. Statistically, cables that have been thumped tend to fail sooner than would otherwise be expected.
 
TDR
TDR (time domain reflectometry) uses a pulse echo range finding technique, similar to that used by sonar systems, to measure the distance to changes in the cable structure. It works by transmitting short duration low voltage (up to 50 V) pulses at a high repetition rate into the cable and measuring the time taken for them to reflect back from areas where the cable has low impedance, such as at a fault. The reflections are traced on a graphical display with amplitude on the y-axis and elapsed time, which can be related to the distance to the position of the fault, on the x-axis.
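The distance arithmetic behind TDR is straightforward: the time axis of the trace converts to distance using the pulse's velocity of propagation in the cable, halved to account for the round trip. The sketch below is purely illustrative - the velocity factor is an assumed typical value, and real instruments use the figure for the specific cable type:

```python
# Illustrative TDR distance estimate (a hedged sketch, not ABB's algorithm).
# distance = (velocity of propagation * travel time) / 2
# The factor of 2 accounts for the pulse travelling out and back.

C = 299_792_458  # speed of light in a vacuum, m/s

def tdr_distance_m(travel_time_s: float, velocity_factor: float = 0.55) -> float:
    """Estimate distance to a reflection from the round-trip travel time.

    velocity_factor is the cable's velocity of propagation as a fraction
    of c; 0.5-0.6 is a typical range for polymer-insulated power cables
    (an assumed figure here - real work uses the cable's datasheet value).
    """
    return velocity_factor * C * travel_time_s / 2

# A reflection arriving 12 microseconds after the pulse was launched:
print(round(tdr_distance_m(12e-6)))  # -> 989 (metres)
```

On these assumed figures, a reflection 12 µs after launch places the impedance change roughly a kilometre down the cable, which is why accurate velocity data for the installed cable matters so much in practice.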

A cable in perfect condition will not cause any reflections until the very end, when the pulse encounters an open circuit (high impedance) that results in a high amplitude upward deflection on the trace. If the cable end is grounded, i.e. a short circuit, the trace will show a high amplitude negative deflection.

Low voltage TDR works very well for the location of open circuit faults and conductor-to-conductor shorts. However, for shielded power cables, it becomes very difficult to distinguish faults with a resistance higher than 20 ohms. Unfortunately, the majority of faults in underground distribution cables are high resistance faults in the region of thousands of ohms or even megohms.
 
SIM
The SIM (secondary impulse method) technique combines low voltage TDR and a thumper in an integrated system that makes the trace easier to interpret, with a clear indication of the fault location on a handheld display.

The process starts by running a TDR test on a healthy core; the resulting trace is then stored in the SIM system memory. The thumper is then triggered to send a single HV pulse, and while the arc is forming at the fault the TDR sends a further low voltage pulse. The arc acts as a very low impedance point that causes the pulse to reflect in exactly the same way that it would from a short circuit. The handheld display combines the two traces and the fault location is shown as a large negative dip, with its distance easily read off on the x-axis.
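The way the two traces are combined can be illustrated with a simplified sketch (not ABB's actual signal processing): subtracting the stored healthy-core trace from the trace captured while the arc is burning cancels every feature common to both, leaving the fault as a single large negative dip:

```python
# Illustrative SIM trace comparison (a simplified sketch, not ABB's processing).
# The healthy-core TDR trace is stored first; a second trace is captured
# while the thumper's arc is burning at the fault. Subtracting the two
# cancels every common feature (joints, the cable end), leaving the arc's
# short-circuit reflection as a large negative dip.

def fault_sample_index(reference: list[float], during_arc: list[float]) -> int:
    """Return the sample index of the largest negative difference."""
    diff = [a - r for a, r in zip(during_arc, reference)]
    return diff.index(min(diff))

# Toy 10-sample traces: identical except for the arc's dip at sample 6.
healthy = [0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
arcing  = [0.0, 0.0, 0.1, 0.0, 0.0, 0.0, -0.9, 0.0, 0.0, 0.2]
print(fault_sample_index(healthy, arcing))  # -> 6
```

Multiplying that sample index by the metres-per-sample figure derived from the pulse velocity gives the distance to the fault, which is what the handheld display reads off the x-axis.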

SIM enables a fault to be located to within a few metres, even over very long cable runs of several kilometres. Of course, underground cables do not always take the shortest or most direct route between two points, so it is important to have access to the site cable records. In cases where a map of the cable route is not available a radio-detection system can be used to find the cable, but this could add a considerable amount of time to the exercise. ABB would always advise customers to make a detailed record of their underground cable circuits a priority in their maintenance planning.

Once the target area above ground has been identified, the surge generator is turned on to start thumping the cable. The operator then listens for the thump to home in on the precise location of the fault - this approach minimises the amount of time that the cable is thumped, eliminating the risk of further damage. The next step is to bring in the repair team to dig up the cable, make a visual confirmation of the problem and then effect a repair.

The time taken to locate a fault by SIM varies according to each case, but will typically take around half a day.
 
Fast track fault location for Silverstone Circuit
Silverstone Circuit, located on the border between Northamptonshire and Buckinghamshire, has its own high voltage power network comprising seventeen 11 kV/433 V substations that provide local power supplies at key points around the three-mile track. ABB has a long-standing service contract for the network to provide ongoing maintenance and repair services including a fast call-out response in the event of a fault.

At 6am on 1 July 2008 the ABB duty stand-by engineer fielded an emergency call reporting a major outage, with a total loss of power to half the site. In normal circumstances this alone would be a cause for concern; with the British Grand Prix taking place on the Sunday and hospitality organisers and traders already setting up on site, the loss of power threatened to cause significant disruption.

Within an hour, an engineer was on site. After establishing that the fault was on Silverstone's own network, they opened discussions with Central Networks, the local DNO (distribution network operator), to organise reinstatement of supply. A thorough test and inspection showed the problem was not due to faulty switchgear, but was cable related. So ABB's specialised cable fault location vehicle was called to the site together with spare cable and joints.

While waiting for the fault location vehicle, the fault was successfully sectionalised so that it was isolated from the rest of the network, ensuring it couldn't cause any further loss of power. This step enabled Central Networks to restore full power to the rest of the site at around 9.00am.

The fault location vehicle arrived at 10am, and in less than two hours the cable fault was located to an area beneath the tarmac base under a hospitality marquee erected for the F1 Paddock Club.

The next stage was to expose the identified section of cable for a visual verification of the damage. A further 10m of trench was then exposed to enable a new section of cable to be jointed into place. By 10pm the jointing operation was finished, the repair pressure tested, the cable energised and its phasing proved, so that power could be restored to this local section of the network. All that remained was for the trench to be backfilled and re-covered with tarmac. So what might have caused very severe disruption in Silverstone's busiest week of the year effectively became a minor incident.

By Arnaud Piechaczyk, R&D Group Leader, Nexans International Research Center

Fire-resistant cables play a crucial role in applications where it is essential to ensure the integrity and continuity of vital safety circuits during the critical building evacuation and fire fighting periods required by stringent national and international standards. In many cases, the need for cables that can deliver the required levels of performance, reliability and safety has forced designers and installers of electrical systems to compromise on other desirable properties regarding ease of handling and installation.

Now though, a major advance in the materials science related to the properties of the cable insulation – called Infit – is making it possible to produce new families of electrical power and data cables with outstanding fire resistance using classical extrusion methods. The result is a user-friendly cable that offers the ‘best of both worlds’.

Eliminating the need to choose between insulation technologies
Until now, the cable industry has mainly relied on two major technologies to ensure the integrity of flexible cable insulation during a fire: XLPE/mica taping and ceramic-forming silicone rubbers.

Each of these technologies presents a number of advantages. The classical insulation taping based on mica, widely used since the 1980s, can easily be implemented on an industrial scale to provide a tough, effective electrical insulation when overlaid with a cross-linked polyethylene (XLPE) coating. It is strong but stiff to handle, so can present some installation difficulties.

On the other hand, silicone rubber insulation can be extruded directly on to the conductors, and offers a good compromise between fire performance and ease of installation thanks to its flexibility. It is, however, vulnerable to cuts and tears.

Increasing customer demands for improved fire performance, together with strippability, ease of installation and connection prompted Nexans to search for a new insulation technology that could offer all these benefits.

Infit transforms into a tough insulating ceramic layer
Infit is a unique, proprietary technology that combines, in a polymeric material, the advantages offered by both the tough mica tape layer and the extruded silicone insulation. This enables the manufacture of fire-performance cables that are both tough and easy to handle, as well as being easy to strip and install.

Infit technology offers enhanced fire-performance because, when the insulation is exposed to fire, it transforms from a flexible plastic covering into a tough insulating ceramic layer, hardening like clay in a potter’s oven to form a protective shield. The key to the success of the new insulation has been in using advanced materials and polymer science to optimize the nanostructure of the primary insulation materials. A combination has been found that reduces the occurrence of cracks or breaks in the insulation to preserve the operational integrity of the circuit, i.e. preventing short circuits.

Extensive laboratory tests have shown that cables with Infit insulation will continue to deliver power in the event of a fire, long after the plastic sheath and insulation have burnt away. This means, for example, that exit lighting, smoke and heat exhaust ventilators, fans and pumps will still function reliably, even in areas directly affected by fire, ensuring safe evacuation.

The science part
The new technology is the result of some ten years of development by the Nexans International Research Centre based in Lyon, France, working in close partnership with the Australian Nexans R&D Centre. The successful outcome of the project has been based on fundamental studies that especially highlight the synergy between ceramic science and the latest polymer science.

In classic ceramic science, a well defined curing process is followed to form a high performance ceramic. Yet, in the case of accidental fire, the temperature increase is sudden and unmanaged. So the first challenge was to develop a ceramic forming system able to react and form an electrical insulating shield in a very short time across a wide range of temperature increases, while also exhibiting a high level of electrical insulation.

The second parallel challenge was to achieve this performance using an extrudable formulated polymeric material, rather than a powder, that was also suitable for the very demanding standards of the cable industry. As a result, Infit technology is principally based on filled copolymers of polyolefins (like polyethylene). This kind of polymeric matrix is well adapted to the extrusion process, and well known in the cable industry - but it is also intrinsically highly combustible. However, Infit uses the synergy between this combustible matrix and a mixture of inorganic fillers to create a new insulating material that offers superior fire-performance.

Infit applications
Depending on the specific cable application, the new insulation material can be offered in either a cross-linked or thermoplastic version. This will enable cables to offer the ideal combination of fire, mechanical, electrical or thermal properties optimized for each application.

Infit is a proprietary Nexans technology, and can be implemented in compliance with many worldwide cable standards, and according to the most rigorous product quality and safety criteria. Nexans cables insulated with Infit can, for example, resist fires reaching temperatures of around 1,000°C, at voltages up to 1kV, exhibiting a high level of char cohesion and electrical insulation.

Infit is gradually being deployed across the Nexans fire resistant product ranges. This will include power, communications, control and LAN cables for use in public buildings and industrial applications, and it is expected to be of particular interest to the marine sector.
 

 

Who would have thought that something as simple-looking as a cable cleat could cause so much debate? Yet in recent years it has been one of the hottest topics in the electrical industry - not least because the recently introduced international standard has elevated its prominence to a whole new level. Unfortunately, despite this prominence, the importance of a cleat's role in any electrical installation is still not fully appreciated. We've therefore decided to try to rectify things by talking cleats with Richard Shaw, managing director of leading cleat manufacturer Ellis Patents.

ER: First things first, why do we need cleats?

RS: For an electrical installation to be deemed safe, cables need to be restrained in a manner that can withstand the forces they generate, including those generated during a short circuit, and this is the job that cable cleats are specifically designed to do.

Take them away and the dangers posed by a short circuit are obvious - costly damage to cables and cable management systems, plus the risk to life posed by incorrectly or poorly restrained live cables.

And it's important to bear in mind that it's not just the use of a cleat that is vital, but the use of a correctly specified cleat. Because all an underspecified product would do in a short circuit situation is add to the shrapnel.
 
ER: Well that seems fairly straightforward, where's the problem?

RS: The key issue surrounding cable cleats is that their importance has been, and still is, severely underestimated. Therefore, instead of being treated as a vital element of any cable management installation they are simply lumped in with the electrical sundries. 

What this means in practice is that even if suitable products are specified, they are still seen as fair game for cost-cutting when it comes to companies seeking to keep within tight budgets. This is a potentially dangerous practice that, if allowed to continue, could lead to the wholly unnecessary loss of a life.

ER: Have the International (IEC61914 - 2009) and European (EN50368) standards not helped deliver this level of awareness and education?

RS: Yes, the introduction of the two standards was a huge boost for everyone associated with cable cleats. And yes, they have helped to provide global recognition of the need for secure cleating in electrical installations, which, when you consider that as recently as 2003 there wasn't even a European standard for cleats, demonstrates just how far we've come in the journey towards the widespread adoption of safe cleating practice.

But they still fall some way short of ensuring the cleat is universally understood and used correctly. The main reason is that the standards are advisory rather than regulatory, meaning the onus is on manufacturers to self-certify their products - a situation that has left the market awash with a mish-mash of products of differing quality, which in turn means further confusion for specifiers and installers.

ER: What needs to be done then?

RS: Compulsory third party certification really should clear up this confusion, but the problem is that the quoted short circuit withstand, which is seen as the indicator of a cleat's suitability for a project, is only valid for a cable diameter equal to or greater than the diameter of the cable used in the test.

So if the project in question is using smaller cables than those referred to in the test (and the fault level and spacing is the same) then the force between the cables is proportionally greater, meaning the certificate is inappropriate and the cleats will not provide the protection they are installed to give.
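To see why the certificate logic works this way, consider the textbook expression for the peak force per metre between two parallel conductors, F = μ0·i²/(2π·s): the fault current enters squared and the spacing enters inversely, so smaller cables laid closer together experience proportionally greater forces at the same fault level. The sketch below illustrates the arithmetic only - it is not a cleat manufacturer's specification method, which uses formation-specific factors from the standard and the asymmetric peak current:

```python
# Illustrative force per metre between two parallel conductors carrying
# equal peak current (textbook formula only; real cleat specification
# uses the standard's formation-specific factors).
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def force_per_metre(peak_current_a: float, spacing_m: float) -> float:
    """F = mu0 * i^2 / (2 * pi * s), in newtons per metre."""
    return MU_0 * peak_current_a**2 / (2 * math.pi * spacing_m)

# A 25 kA peak fault with cables at 50 mm centres (assumed example figures):
f1 = force_per_metre(25_000, 0.05)
# Halve the spacing and the force doubles, while current enters squared:
f2 = force_per_metre(25_000, 0.025)
print(round(f1), round(f2))  # -> 2500 5000 (N/m)
```

Even on these toy figures the force runs to tonnes per metre over a full span, which is why a withstand certificate obtained at one cable diameter cannot simply be read across to a smaller one.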

What all of this means is that at present the only tried and tested way to ensure correct cleating is through project specific testing - a process that we currently offer customers that means they can install our cleats with complete confidence.

ER: You've talked a lot about short circuits, can you explain what happens to cables during a short circuit situation?

RS: We do a lot of short circuit tests and a good way of explaining what happens to the cables is to look at the difference between those that are correctly restrained and those that aren't. 

In recent tests we did with our American distributor, kVA Strategies, we performed three short circuit tests on 3 x 1/C-777kcmil, 2kV marine cables at 59 kA RMS in trefoil formation. One test was conducted on cables tied with 1/2" wide stainless steel cable ties, while the others were conducted on cables restrained by our Emperor trefoil cable cleats. During the short circuit the mechanical forces between the cables exceeded 4,500 lbs/ft.

After one short circuit, the cables restrained with the metal cable ties were damaged beyond repair - suffering multiple tears in the cable jackets and insulation, as well as evidence of electrical arcing. In fact, the metal cable ties catastrophically failed before the first quarter cycle current waveform peak, ejecting the ball bearings from the cable tie buckles with sufficient velocity to lodge deeply into the plywood test bay walls. The subsequent cable thrashing also severely damaged the cable tray.

In contrast, the correctly restrained cables were subjected to not one, but two successive short circuits and after careful inspection no damage was found. In fact, the testing lab team stated that the cables still passed the required IEC voltage withstand test and so could continue to be used at full-load.

ER: Aren't electrical cables meant to be fully protected by circuit breakers?

RS: That's a common misconception, but in the event of a fault the forces between cables reach their peak in the first quarter cycle, while circuit breakers typically interrupt the fault after three or even five cycles. And by this stage, if the cleats are underspecified, the cables will be long gone.

ER: What's the best cleat to use?

RS: How long is a piece of string? There are a large variety of cleats available and all of them are designed for different installations. For example, our Emperor cleats are recommended for the highest short circuit fault duty applications. Meanwhile, our Centaur cleats are designed specifically to restrain high voltage cables up to 400kV with a diameter range of 100 to 160mm.

ER: So, is there a rule of thumb for picking the appropriate cleat for an application?

RS: To ensure the correct cleat is chosen, the best approach is to go to a manufacturer with information about the installation environment, mounting structure, cable configuration, peak short circuit fault level and cable diameter; they should then be able to advise on the most suitable cleat and the spacing at which it should be installed.

ER: And what about a recommended spacing between cable cleats?

RS: Again, there's no hard and fast rule that suits all installations. The optimum spacing needs to be determined by engineering calculation to ensure the cable cleats are suitable for the electromechanical forces encountered during the maximum available fault duty of the system.

ER: Finally, the use of multi-core cables, which we are told don't need to be restrained, is growing enormously - what's your view on this?

RS: This is a question we are being asked with increasing regularity, so in order to provide meaningful advice we have carried out some preliminary research and a series of short circuit tests.

At present we aren't aware of any published data indicating a preferred fixing method, but custom and practice suggest that most users seem to be working under the assumption that any forces on the conductors arising in the event of a short circuit will be restrained within the cable jacket, meaning cable cleats aren't required.

The tests we carried out were on armoured and unarmoured three-core, copper conductor, multi-core cables from various cable manufacturers. These cables were tested across a variety of conductor sizes but, because of the number of manufacturers, the variety of cable types and the different methods of construction available, it wasn't feasible to carry out exhaustive tests.

That said, the results of the tests, although varied, were certainly interesting. They showed that it is unsafe to presume that the forces between the conductors will always be restrained within the jacket of the cable, whether or not the cable is armoured or tightly helically wound.

Therefore, our conclusion is that unless the relevant cable manufacturer can give assurances regarding the performance of their specific multi-core cable at the anticipated fault level, then fault rated cable cleats provide the safest option for securing multi-core cables.

For further information about Ellis Patents visit www.ellispatents.co.uk  or call 01944 758589.

The increasing sophistication of emergency lighting systems means specifiers need to take account of many different factors to ensure the end user gets the best value. Stewart Langdown of Tridonic highlights some key issues

While there are many areas where building operators can cut back to save money, emergency lighting is not one of them. Not only do they have to install emergency lighting, they also need to ensure it is regularly tested. Clearly the latter is something that can prove to be a time-consuming and expensive business when carried out manually.

To that end, there are now many more systems available that will automatically test emergency lighting, and it's important to ensure such systems address all of the relevant criteria. These include the compatibility of the control gear with modern light sources such as LEDs, the level of overall controllability, and whether the system is stand-alone or integrated with other lighting controls. There are also various sustainability considerations, as well as the choice of light source and batteries, which can have an impact on the design of the emergency luminaire. And the fact of the matter is that not all emergency lighting controls are equal; some offer considerably greater functionality and ease of use than others.

Clearly, the fundamental requirement for an emergency lighting control system is to ensure the emergency lighting works when it's needed. Above and beyond this, the majority of end users will now expect a system that incorporates self-commissioning and self-testing features for continuous monitoring, weekly function tests and annual duration testing. Five pole technology to ensure total isolation and compatibility between the ballast, inverter and supply system is another critical factor.

Such self-testing usually represents a worthwhile investment as it reduces the requirement for maintenance staff to walk around the building and carry out a visual inspection - freeing them for other duties. However, different systems offer different levels of functionality so it's useful to be aware of some key points.

For example, the self-testing function needs to be easy for maintenance staff to use, perhaps with a simple combination of different coloured LEDs to indicate correct functioning or the nature of any fault. The important thing here is that the way status is indicated is completely clear, with no room for misinterpretation.

Another factor is convenience. One of the required tests is a weekly 30 second test to establish and confirm the functionality of the unit, battery and lamp. However, this can be inconvenient for the occupier so it's useful to be able to pre-programme each unit to run the test at a different time, to avoid all units testing at once.

Ideally, the unit will delay the test until the normal lighting supply has been switched off for longer than two minutes - minimising the risk of the test being carried out while the occupier is present. In the event that the supply is permanently connected or the lights are left on permanently the unit should ‘force' a function test after a further 21 hours.
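The scheduling behaviour described above can be sketched in a few lines. This is a minimal illustration only: the two-minute quiet period and the 21-hour forcing window are taken from the text, while the function and parameter names are hypothetical rather than any product's actual firmware interface.

```python
# Minimal sketch of the weekly function-test scheduling described above.
# The 2-minute quiet period and 21-hour forcing window come from the text;
# the names below are illustrative, not a real product API.

QUIET_PERIOD_S = 2 * 60          # wait until normal lighting has been off this long
FORCE_AFTER_S = 21 * 60 * 60     # force the test after a further 21 hours

def should_run_function_test(seconds_since_test_due, supply_off_for_s):
    """Decide whether a unit should run its weekly 30-second function test.

    seconds_since_test_due -- time elapsed since the scheduled test slot
    supply_off_for_s       -- how long the normal lighting supply has been off
                              (0 if the supply is currently on)
    """
    if supply_off_for_s > QUIET_PERIOD_S:
        return True          # occupier likely absent: safe to test now
    if seconds_since_test_due > FORCE_AFTER_S:
        return True          # supply never switched off: force the test
    return False             # defer to avoid disturbing the occupier
```

In practice each unit would also be given a staggered due time, as noted above, so that not all units test at once.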

Cut out the middle man
As noted above, self-testing takes some of the pressure off the maintenance team, but there is still a requirement for a visual check to determine whether the emergency lighting unit has indicated a fault. So it makes sense to take advantage of recent advances in controls networking by integrating the emergency lighting testing with the lighting management system. This is a very straightforward process using the popular DALI (Digital Addressable Lighting Interface) protocol.

The DALI system allows luminaires to be addressed individually, so that detailed information can be monitored for each fitting. In addition to standard information such as indicating faults on the lamp, control gear or battery, the system can provide information on, for example, the device status, type of lamp and type of emergency unit and battery.

As a result, with the emergency lighting linked to a DALI lighting management system, information on the operating status can be displayed centrally together with the precise address. Any faults can then be corrected efficiently with no need for maintenance staff to patrol the building, resulting in even greater savings in terms of time and maintenance costs. Crucially, the system should also maintain a complete log of all such events as proof of compliance with emergency lighting regulations.

Furthermore, use of DALI for both emergency and general lighting reduces installation requirements as the overall amount of cabling is reduced, thus saving on site time and raw materials. The result is a more sustainable project, with less embodied carbon, as well as the financial savings on materials.

In addition, use of appropriate control components within a DALI system can facilitate commissioning and increase the likelihood that the system will perform as it was designed to. For instance, the EZ easy addressing feature of the Tridonic EM PRO DALI inverter uses the indicator LED to display the DALI address during commissioning.

Where required, the DALI system can also be linked to the building's IT network using an interface between DALI and the TCP/IP protocol used by local area networks and the internet. This makes it easier to access the functions and can be achieved via the organisation's intranet, or across the internet from any location. For organisations with an extensive estate of many buildings across a wide geographical distribution this is a very useful feature, particularly if the facilities management or maintenance management function is based at a single site. This scenario has become increasingly common as organisations seek to rationalise their resources by making better use of technology.

Light sources
Just as importantly, the system needs to be compatible with the latest light sources. For example, many fluorescent lamps now use a mercury amalgam rather than liquid mercury as this is safer, so the system needs to be compatible with amalgam lamps (some aren't). It also needs to work with both nickel cadmium and nickel metal hydride batteries.

Similarly, where linear fluorescent lamps are used, T5 is increasingly the first choice, generally in a low profile fitting that takes advantage of the compact nature of the lamp. Here, it's important that the emergency lighting control/self-test module has a low profile so it can fit in the luminaire.

One of the characteristics of T5 lamps is that they burn at a higher temperature than other linear fluorescent light sources, so for the test to be meaningful the testing module should operate the lamp at twice the normal emergency power level for 55 seconds. This ensures the lamp is correctly heated for maximum lumen output during the most critical switch-over phase, giving greater visibility of potential dangers.

Increasingly though, the light source of choice for emergency lighting is the LED. LEDs offer lower energy consumption, which is important for emergency light fittings such as exit signs that are on most of the time, as well as much longer life, again reducing maintenance requirements. In addition, the use of compact LED light engines facilitates the use of smaller and more discreet luminaires to meet statutory lighting requirements, which can often help with the aesthetic side of the design.

This is further facilitated by the choice of battery, as newer battery designs enable fewer, smaller batteries to be used, with the added benefit of reducing environmental impact. Of course, integral power control technology should ensure maximum emergency light output for a given duration with a minimum battery cell count, while allowing for LED tolerances.

The choice of control gear is also important for use with any light source and can assist in standardising the type of module across different emergency light fittings. For instance, it is possible to use the same module for one, two and three hour test durations, operating single or multiple LEDs wired in parallel. Similarly, a 2W module may be used to power a single LED at 600mA or two LEDs at 350mA in series. This level of flexibility helps to minimise the number of different components that need to be specified for a project, while retaining maximum flexibility in the choice of emergency lighting fittings.
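As a quick sanity check on those figures: a fixed-power module divides its available output voltage across however many LEDs it drives in series. The sketch below simply works through the 2W numbers quoted in the text; the derived voltages are arithmetic illustrations, not manufacturer data.

```python
# Illustrative arithmetic for the constant-power module figures quoted above:
# a fixed 2 W output budget shared across one or more series-connected LEDs.

def forward_voltage_budget(module_watts, drive_current_a, n_leds_in_series):
    """Forward voltage available per LED from a fixed module power budget."""
    total_voltage = module_watts / drive_current_a
    return total_voltage / n_leds_in_series

v_single = forward_voltage_budget(2.0, 0.600, 1)  # one LED at 600 mA
v_series = forward_voltage_budget(2.0, 0.350, 2)  # two LEDs at 350 mA in series
```

Both operating points fit within the same 2W budget, which is what allows one module type to cover several different LED wiring arrangements.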

With fluorescent lighting that's used for both mains and emergency lighting, the choice of control gear can make a significant difference to the life of the lamps. Ballasts that deliver a warm start to the lamp will maximise lamp life and enable high switching frequency applications with very low power losses and enhanced thermal management. Ballasts should also incorporate voltage protection to prevent damage in the event of a mains voltage rise above a pre-defined threshold. In the case of compact fluorescent lighting, ballasts with insulation displacement connection can enable automatic wiring, thus saving time.

In fact, these are just some of the many examples of how the choice of system components can make a significant difference to the performance of the system. The important thing is to be aware of these details and keep abreast of the latest developments.

Uninterruptible Power Supplies Ltd's UK sales manager, Mike Elms, explains how modular UPS systems can help cut your energy bills

The need for qualifying organisations to reduce their energy usage is highlighted by the Government's Carbon Reduction Commitment Energy Efficiency Scheme, or ‘CRC', which came into effect on 1 April this year. With the scheme rewarding qualifying participants who perform well, while penalising those who do badly, in both financial and publicity terms, it's clear that simply finding ways of reducing energy use is not enough; it's essential that these improvements have long term sustainability. Developments in uninterruptible power supply (UPS) technology offer one way of achieving sustainable energy savings.

Maintaining continuous supply power from uninterruptible power supplies (UPS systems) is now considered essential by organisations running financial, healthcare or industrial processes that depend on vulnerable ICT equipment. As UPS units are installed in the critical supply path, any improvement to their efficiency will make an appreciable contribution to their operators' energy management strategies.

Such efficiency improvements are possible, through selection of suitable UPS topology and by carefully sizing the UPS system to match its critical load. One increasingly popular approach is to use systems based on advanced modular topology, which allows UPS capacity to be closely matched, or ‘right sized', to the critical load size. Modular UPS capacity can easily be incremented or decremented to efficiently match changing load requirements throughout the life of the installation - a sustainable efficiency solution.

As well as saving energy and helping to meet CRC targets, modular technology allows significantly smaller, lighter UPS installations with increased power availability. By looking at what modular technology is, we can better understand its benefits and their practical application.

On-line, static double conversion UPS systems first appeared in the seventies and are still in use today. Their principle of operation is to rectify incoming AC mains into DC, which charges a battery before being inverted back to AC to drive the UPS critical load. In the event of AC mains failure, the battery takes over the role of supplying DC to feed the inverter until the incoming AC mains is restored. In early designs the inverter was followed by an output transformer, necessary to restore the output AC voltage to the same level as the mains input. However, advances in power semiconductor technology and the introduction of the Insulated Gate Bipolar Transistor (IGBT) have allowed changes to the UPS design which permit elimination of the output transformer. This yields a number of advantages, the most important of which are improved efficiency and reduced size and weight.

Energy efficiency is improved for a number of reasons. With no transformer core to heat there are no iron losses; with no windings there are no copper losses. Both factors contribute to energy savings. Transformerless designs also exhibit lower input current harmonic distortion (THDi) and an improved input power factor, both of which reduce wasted energy. Eliminating wasted energy also reduces heating effects, and therefore cooling costs. Further energy savings arise from modular technology which, as we shall see, is made possible by transformerless design.

Eliminating the transformer reduces the UPS's size and weight by something like 66%. This is a large reduction which has had a profound effect on the way UPSs are seen and used. Uninterruptible Power Supplies Ltd (UPSL) realised that a 3-phase UPS rated up to 50 kVA could be implemented as a rackmounting module rather than a large standalone unit. And implementing a UPS as a set of modules in a rack rather than a single standalone unit gives great flexibility as well as space savings. This flexibility allows right sizing, with a UPS solution that's closely matched to its load. The result is less capital and space wasted on unnecessary capacity together with maximised operating efficiency. An example shows the efficiency savings possible:

Let's imagine a site with a load of 96 kW and a power factor of 0.8, which demands a 120 kVA supply. We'll also assume that, for security, N+1 redundancy is required. That is, N UPS units have sufficient capacity to completely support the load, so in an N+1 configuration one unit's failure would still leave sufficient UPS capacity to support the load. This would typically be implemented in a standalone system using two 120 kVA units, each of which would only be 50% loaded during normal operation. Efficiency with a legacy transformer-based design would be 90%. By contrast, a modular system could be implemented using four 40 kVA modules, where each module is now 75% loaded. As well as being smaller, lighter and more easily expandable, its efficiency would be 96%, which more than halves the annual cost of losses. The annual cooling costs are also more than halved. At 7.84p/kWh, total annual savings would amount to over £5,000.
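The loss arithmetic in this example can be checked with a short calculation. This is a sketch assuming continuous operation (8,760 hours a year) at the quoted electricity price, and it excludes the cooling savings mentioned above.

```python
# Rough check of the standalone-vs-modular UPS loss calculation above.
# Assumes continuous operation (8,760 h/year); cooling savings excluded.

LOAD_KW = 96.0
PRICE_PER_KWH = 0.0784   # 7.84 p/kWh, as quoted in the text
HOURS_PER_YEAR = 8760

def annual_loss_cost(load_kw, efficiency):
    """Annual cost of UPS losses for a given load and overall efficiency."""
    losses_kw = load_kw / efficiency - load_kw
    return losses_kw * HOURS_PER_YEAR * PRICE_PER_KWH

legacy_cost = annual_loss_cost(LOAD_KW, 0.90)    # two 120 kVA units at 50% load
modular_cost = annual_loss_cost(LOAD_KW, 0.96)   # four 40 kVA modules at 75% load
saving = legacy_cost - modular_cost
```

On these assumptions the annual cost of losses falls from roughly £7,300 to £2,700, i.e. more than halved, and adding the corresponding reduction in cooling load takes the combined saving past the £5,000 figure quoted.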

If our site load remains at 96 kW throughout its operational life, the annual £5,000 savings will continue with no further action needed. In real life, however, the load is not only likely to change, but the extent of its change can defy prediction. In a typical scenario a data centre may be expected to be initially loaded to 35% of its capacity, with this load growing steadily to 90% of capacity over a period of 10 years. With a standalone UPS, the response is typically to install a system sized for 90% data centre capacity from the outset, to avoid the difficulties of upgrading or replacing it later. These include finding more floorspace in a crowded data centre, disrupting business operation with building work and installation, and laying or repositioning cabling. However, such an oversized system would spend its operational life greatly underloaded, adding reduced efficiency to unnecessary capital costs and space requirements. This would be exacerbated if the load does not grow to the expected 90%. While the UPS's conservative rating should ensure the load is always supported, it's not unknown for the actual load to exceed projections, so that new UPS capacity must be supplied after all.

These difficulties can be avoided by using a modular system. Its flexibility means it can easily be expanded or reduced after being initially rightsized to its load. There is no need to oversize it initially because modules can be added without disruption as and when they are needed. This flexible property of modular UPS topology is known as its scalability, and it's a scalability that has two dimensions - vertical scalability and horizontal scalability.

The example above has four 40 kVA modules totalling 160 kVA capacity, or 120 kVA with N+1 redundancy. These modules could populate four out of five slots in a single server-style floorstanding rack. Vertical scalability is a reference to the fifth slot, which can be populated to increment capacity at any time. Additionally, a second rack could be provided for an incremental increase in floorspace and cost. The ability to add further racks in parallel is known as horizontal scalability. This adds up to enormous flexibility, with UPS configurations over 1 MVA being possible.

The task of efficiently maintaining right sizing to the critical load, however unpredictably the load grows, becomes simple. The modular approach allows the maximum possible energy efficiency as well as minimising capital and space costs throughout the life of the installation.

Steve Gallon, managing director of Finnish enclosure manufacturer, Fibox, confirms its aim is to offer a range of enclosure products that meet both the exacting standards of an ever changing industry and the specific needs of its diverse customer base

When a designer is faced with specifying an enclosure to house and protect a specific device or control system, there is an art to finding the product that fulfils the requirement exactly.

To some specifiers a box is a box is a box. But nothing could be further from the mark. Invariably the enclosure is the ‘nutshell', housing expensive and often critical components which must perform in extremely harsh and demanding conditions. Therefore a number of very important questions must be asked, and answered, before the purchase order can be signed. That is why the relationship between specifier and manufacturer must be such that all the following points are covered at the design stage. It's a bit like the signs those of us who enjoy a tipple see above the bar stating that change should be checked because alterations can't be made later.

The conundrum facing a lot of specifiers is that they have a series of enclosures from various manufacturers to choose from, and on the face of it they all look the same. They are grey, plastic and have hinged or removable covers. What they don't see is the suitability of the different ranges for a particular application, especially given the increasing number of direct copies, made from cheap blends of plastic, now being introduced into the market from all areas of the globe.

The secret to getting to the nub of what enclosure is fit for purpose is knowing which questions to ask a manufacturer. Then it is equally important for the manufacturer's representative to have the knowledge to be able to offer constructive advice and guidance to ensure the exact enclosure solution is identified quickly and efficiently.

At this crucial stage in a product's introduction cycle it is very important to take into account that there are many individuals in the decision chain. Each link in the chain has a different specification priority - it could be the marketer's requirement for aesthetics, or the R&D/design engineer's need to download CAD drawings from a manufacturer's website. When pulled together, all these differing facets will make the final decision to purchase a particular enclosure failsafe.

So let's look at the key criteria which when addressed will very quickly narrow the choice down to just a few options:

Where is the enclosure likely to be installed, inside or outside?

This determines the material choice, the IP rating and any specific corrosion risks which need to be addressed at the design stage.


What is being installed in the enclosure?

This solves the size issue and also answers the question of what type of internal fixing points are required. Some manufacturers can offer extra fixing pods to accommodate cover mounted components etc.

How often is access to the interior required?

Should the design include hinged opening doors for regular access? Alternatively should the design include security such as locks or tamper proof cover screws to guarantee total integrity of the enclosure?

Does the enclosure require visual access to the internal components?

By answering this question the option of transparent or opaque covers is addressed. This could include the provision of viewing windows in an opaque cover for example.

Does the enclosure need to be delivered already customised?

This is a very important question often asked by specifiers who do not have in-house machining facilities. Customising includes machining of holes and apertures, special fixing points, graphics and corporate colours, and a multitude of other bespoke services. Some manufacturers offer a comprehensive customising service and this should always be discussed with them at the design stage.


What is the budget price?

This question has to be addressed at some point, so why not cover it at an early stage of the project? Manufacturers can then offer alternatives and compromises to assist the specifier in achieving their cost criteria.

Forming a partnership with a manufacturer who can provide a ‘one-stop shop' for all the specifier's requirements not only keeps the project in as few hands as possible from design to delivery, it also ensures that cost issues are transparent at every stage. The fewer links in the supply chain, the better the overall control.

 

The election outcome has done nothing to appease our grumpy old man's disdain for politicians when it comes to our energy supplies

Well, we have a new prime minister. Well, sort of at any rate. My fears of a hung parliament were unfortunately fulfilled and when I signed off my last column stating "the jury's still out", little did I know that would remain the case this month!

There is a remarkable dichotomy presented by the ConLibDem coalition. On the one hand, Cameron's pledge to reduce energy usage by 10% within Government buildings will satisfy Clegg's desire for greenery. On the other, the new secretary of state for energy and climate change, the LibDems' Chris Huhne, is going to have to compromise on one of his fundamental political beliefs in accepting nuclear power, since Cameron intends to pursue the building of the 10 new nuclear stations agreed last November.

Resisting the temptation to suggest a very popular way of reducing harmful hot air emissions from government, there is a very serious point to my apparent satire.

Chris Huhne has already outlined his priorities: "Climate change is the greatest threat to our common future. We have a very short period of time to tackle the problem before it becomes irreversible and out of control.

"A lot of progress has been made, but we must now go further, faster and turn targets into real change.

"This is a coalition to provide strong and stable government for this country. The benefits of the low carbon economy are agreed between both parties, this is a priority agenda common to both manifestos.

"Together we have the opportunity to make this the greenest government in our history. And to put energy security, for too long a second order issue, at the heart of the UK's national security strategy.

"I intend to make decisions put off for too long to fundamentally change how we supply and use energy in Britain.

"To make it far easier for people to make their homes more energy efficient to reduce wasted energy and cut their bills.

"To give the power industry the confidence it needs to invest in low carbon energy projects.

"To create jobs and growth right across the low carbon economy.

"And to use every influence we have internationally to get a global deal to tackle climate change."

All are laudable points, all important issues and all are excellent statements. But, population growth, coupled with continually growing demand for power from modern equipment, quickly negates the most ambitious energy savings targets. So how do we cope with that?

The LibDem party will undoubtedly maintain its opposition to nuclear power while permitting the government to pass laws that make new nuclear construction possible. LibDem members have said they will abstain in parliamentary votes.

Despite a brave face from the utility companies, analysts remained unconvinced that the agreement will provide stability for investors.

Oliver Dancel, senior energy and utilities analyst at Datamonitor, told the Daily Telegraph: "The issue is how utilities such as EdF Energy and E.ON will react. Such political uncertainties will inevitably delay the process of delivering the new investment in power generation capacity the UK so badly needs. Investors will be watching the coalition closely for signs of stability."

Far from welcoming a refreshing new political regime and a new breed of politicians, I for one, will be keeping a very close eye on the decision making process and, more importantly, its delivery.

A new harmonised European standard, EN 60034-30:2009, is to replace the old voluntary Eff classes. The first phase is only a year away, so machine designers need to be conversant with the regulations now. The good news is the changes need not cost much more, and for the end user and the environment the results are entirely positive

The new regulations apply to 3-phase asynchronous motors in the power range 0.75 to 375kW in 2, 4 and 6 pole designs - basically the vast majority of motors used in the construction of machinery. There are certain exceptions, for example 8 pole motors, motors that are an inseparable part of a machine and those with a supply voltage over 1000V.

However, the scope is predicted by the UK Government Department of the Environment to be sufficient to arrest the current increase in the energy used by electric motors. This is no mean feat bearing in mind the massive number of motors in service and the fact they consume nearly 40% of the nation's energy.

Three new energy efficiency bands are defined:
IE1    for motors of Standard Efficiency, equivalent to Eff2
IE2    for motors of High Efficiency, equivalent to Eff1
IE3    for motors of Premium Efficiency, no previous equivalent

Looking further ahead, it is anticipated that an even higher efficiency level, IE4, will be introduced. The actual limits of the three efficiency bands vary according to motor power. As an example, in round figures, the minimum efficiency for a 7.5kW motor is 85% at IE1, 88% at IE2 and 92% at IE3.

There is a phased introduction of the new regulations beginning in 2011:

16 June 2011 -  motors must meet the IE2 efficiency level as a minimum

1 January 2015 -  motors from 7.5 to 375 kW must meet the higher IE3  efficiency level, or must be ‘equipped' with an inverter variable speed drive

1 January 2017 -  the 2015 regulations are extended down to motors of 0.75kW

The regulations are based around the concept of motors that are ‘placed on the market'. This means motors delivered from motor manufacturers and their subsidiaries, including replacements for existing motors. Old stock at independent distributors or at machine manufacturers can still be sold. Repairing and rewinding old motors is permissible.

Thus any new machine, or old machine requiring a replacement electric motor, will require compliance with the new regulations. For the end user this is almost invariably a benefit. Over the lifetime of an electric motor, energy costs amount to about 97% of the total cost of ownership, so a 2-3% gain in efficiency can achieve big savings in the long term. Based on 8000 hours per year, stepping up an efficiency level can give payback times on the extra investment of about two years. As a simple guide, if a motor is used for 2000 hours a year or more, the advice is to buy premium efficiency now, or high efficiency with an inverter drive.
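The payback arithmetic can be illustrated using the 7.5kW example figures given earlier. The efficiencies (88% at IE2, 92% at IE3) and the 8000 running hours come from the text; the electricity price and the motor price premium below are purely illustrative assumptions.

```python
# Illustrative payback calculation for stepping a 7.5 kW motor from IE2 to IE3.
# Efficiencies (88% vs 92%) and 8,000 h/year are from the text; the energy
# price and price premium below are assumed figures for illustration only.

SHAFT_KW = 7.5
HOURS_PER_YEAR = 8000
PRICE_PER_KWH = 0.08     # assumed 8 p/kWh

def annual_energy_cost(shaft_kw, efficiency):
    """Annual electricity cost for a motor delivering shaft_kw at full load."""
    return shaft_kw / efficiency * HOURS_PER_YEAR * PRICE_PER_KWH

ie2_cost = annual_energy_cost(SHAFT_KW, 0.88)
ie3_cost = annual_energy_cost(SHAFT_KW, 0.92)
annual_saving = ie2_cost - ie3_cost

PRICE_PREMIUM = 450.0    # assumed extra purchase cost of the IE3 motor
payback_years = PRICE_PREMIUM / annual_saving
```

On these assumptions the saving is a little over £200 a year, giving a payback of roughly two years, consistent with the figure quoted above.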

There are strict requirements for labelling of the motor rating plate. From June 2011 the following information must be shown on the rating plate and the motor documentation: lowest efficiency at 100%, 75% and 50% rated load, the efficiency level (IE2 or IE3) and the year of manufacture.

As stated above, from 2015 IE2 motors equipped with a frequency inverter can be used instead of IE3 premium efficiency motors. This is an attractive alternative and the IE2 + inverter combination will generally yield greater savings compared to IE3 if variable speed is required. There is no expectation the inverter will be integrated into the motor, although that is possible, and it is expected many customers will purchase motors and inverters from different sources. Documentation requirements are not yet defined, but it would seem likely a degree of self-certification will apply. 

As the efficiency levels of motors increase, so does the cost as a result of increased material and manufacturing costs. The increase in costs does depend on frame size. Changing from IE1 to IE2 currently brings in a price premium of 20-30%, less on larger frame sizes, but as production volumes increase this is likely to fall to 10 to 20%. The premium to step up to IE3 is likely to be a little less. However, adding more copper to meet higher efficiency levels can also result in changing dimensions. Often the motor length will increase. In a minority of cases the motor frame size may increase, for example from IEC90 to IEC100. In turn this may cause problems on existing machine designs with replacement motors.

Many people would say the new regulations and efficiency bands are long overdue.  We are playing catch-up with countries like the USA and Australia. With the first phase a year away, we have time to take the necessary steps for the changes. Increases in costs are modest compared with the lifetime costs for motors. The big winners are the end users with lower energy costs and the environment as a whole.

Lenze is well positioned to offer high efficiency products and packages. Right now, as well as IE2 motors, there is a range of IE2 geared motors available up to 45kW. As a manufacturer of frequency inverters, Lenze can offer packages of motor/geared motor and inverter that are IE3 equivalent. Other products include the MF motor range, which can deliver better than IE2 efficiency and 30% savings at part loads, and regenerative braking units that can return excess energy to the mains.

In any workspace, be it an office or an assembly/production area, lighting needs to be both flexible and efficient if it is to meet all of the criteria of end users. Rodger Henderson of Waldmann Lighting explains why freestanding direct/indirect lighting can provide the ideal solution in many situations

Flexibility in the workplace has now become integral to the majority of businesses and it's important the building services offer a comparable level of flexibility. In this way, the services are able to adapt to change and continue to deliver the required performance - ideally with a high level of energy efficiency.

This flexibility arises through several drivers. On a production line, for example, there may be a change of workflow to suit a new product line or to introduce greater efficiency. In offices, the increasing popularity of flexible working practices such as hot desking and home working has contributed to a more fluid environment where change is the norm.

With inflexible services, as are all too frequently found in many workplaces, the cost and disruption of reconfiguring services often proves prohibitive. The result is that no changes are made and the comfort and efficiency of the workplace are compromised.

For example, when a workplace is first laid out the lighting is often designed to suit that layout, which makes perfect sense. However, when that lighting is a fixed ceiling installation, any reconfiguration can be very disruptive as luminaires may need to be moved to different positions in the ceiling grid. This also leads to re-wiring work, and very often the spatial relationship between luminaires and sensors changes so these need to be repositioned and/or reprogrammed. Much of this work will involve access to the ceiling void, so the space either has to be vacated or the works carried out in the evening or at weekends. In a production facility, neither option will help to maintain productivity.

Even when the workspace doesn't change, an element of flexibility is required to address the different visual requirements of individuals - either because of natural variation in visual acuity or because they are performing tasks with different visual requirements.

For instance, in a conventional design with ceiling mounted lighting the luminaires are often controlled in groups, so that people are often forced to come to an agreement with their colleagues about light levels, rather than having control of their own lit environment. With the UK's ageing workforce, this is a growing problem as older people will have different lighting requirements from their younger colleagues.

Similarly, even if all of the staff had identical visual characteristics, their visual requirements would vary through the day as they switch between paper and screen based tasks or addressing tasks of varying complexity. For example, assembling enclosures in one section of a production line may be less visually demanding than wiring and soldering components into a printed circuit board.

Of course, some more sophisticated systems do offer a level of personal control within zones but users may be required to master a complex lighting management system, and this is still not as effective as providing control at an individual workstation level.

Failing to address the visual needs of the workforce can also have a negative impact on staff morale and, consequently, productivity. In fact, research carried out at the Rensselaer Polytechnic Institute, a leading establishment in workplace research, has shown that creating a user-friendly lighting configuration can achieve an increase in productivity of as much as 3%.

Addressing this situation requires a combination of flexible lighting at a local level and controls that are easy and intuitive to adjust. In spaces with normal ceiling heights, such as offices and many production/assembly areas, freestanding direct/indirect luminaires can address all of these requirements. And these are particularly effective in ‘cleaner' production/assembly areas with reflective walls and ceiling.

Indeed, in the rest of Europe the benefits of this approach have been recognised for many years but the concept has been slower to take off in the UK. However, thanks to the combination of flexibility and energy efficiency provided by freestanding lighting, interest is growing rapidly.

Such direct/indirect light fittings are positioned at floor level and can be provided as freestanding fittings adjacent to the furniture, or fittings attached to the furniture.

This approach uses the ceiling as an extensive reflector to create a bright and spacious feel in the space, and can therefore be an effective alternative to fixed ceiling lighting. It also corresponds to the greater emphasis placed on uplighting in best practice lighting designs, as determined by the Chartered Institution of Building Services Engineers (CIBSE) in its Lighting Guide 7.

At the same time, the directional component can be controlled to adjust the level and direction of light incident on the work surface. In this way, the users can adjust the lighting to suit their tasks and personal preferences. Because of the location of the fitting, all of the operating controls and power displays are at working height for easy access and visibility.

Alternatively, freestanding lighting can be used in conjunction with separate task lighting, so the benefits of the uplighting are retained while the user has control of their individual lighting from the task lighting. In this case, complementary styles of the different fittings help to retain a consistent ‘family' feel to the lighting throughout the space.

Inevitably, freestanding lighting takes up some floor space and changes the view across an office space so it's also important to choose a compact design with a small footprint. It should also be aesthetically pleasing in its own right and in a style that will blend with other furniture in the space. So freestanding lighting can not only improve the flexibility of the lighting installation, it can also enhance the aesthetics of the space through both its light distribution and its visual appearance.

In terms of the need for a flexible solution, as described above, clearly freestanding lighting is much easier to move around with the furniture - simply by picking it up and moving it. Similarly, furniture mounted lighting can be removed quickly and easily with just an Allen key and a screwdriver.

From the end user's point of view, it also enables the relocation of the lighting to be included as part of the general move of other ‘office furniture'. With fixed ceiling installations this would need to be set up in the facilities management systems as a separate job, so freestanding lighting also helps to reduce the administrative burden of moving people around.

Also, it is very easy to add freestanding lighting to an existing workspace, to enhance or complement the existing lighting. For example, the existing ceiling lighting could be dimmed or switched off when not required, and switched back on at lower ambient light levels for activities such as cleaning.

However, leaving the control of the lighting to individuals has the potential to waste energy, as people will forget to switch the lighting off or down when they no longer need it. Using the latest light source and control technologies, freestanding lighting offers the same levels of control as a fixed lighting installation - such as daylight and occupancy control. These controls can ensure only occupied areas are lit and the lighting levels are automatically adjusted in relation to natural daylight.
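As a rough sketch of how such daylight-linked, occupancy-based control behaves, consider the following. The 500 lux target and the simple linear dim curve are assumptions for the sake of illustration, not figures from any particular product:

```python
# Illustrative sketch of daylight-linked, occupancy-based dimming.
# The 500 lux target and the linear dim curve are assumptions for
# this example, not figures from any particular lighting product.
TARGET_LUX = 500.0  # assumed maintained illuminance on the task

def dim_level(occupied: bool, daylight_lux: float) -> float:
    """Return luminaire output as a fraction between 0.0 and 1.0."""
    if not occupied:
        return 0.0  # unoccupied workstations stay unlit
    shortfall = max(TARGET_LUX - daylight_lux, 0.0)
    # Top up the daylight contribution only as far as needed
    return min(shortfall / TARGET_LUX, 1.0)

print(dim_level(True, 0.0))    # no daylight: full output
print(dim_level(True, 350.0))  # daylight covers most of the target
print(dim_level(False, 0.0))   # unoccupied: off
```

The point of the sketch is simply that the luminaire contributes only the shortfall between available daylight and the target level, and nothing at all when the workstation is empty.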

As noted earlier, the spatial relationship between the lighting and any sensors used for control is important. Freestanding lighting can have sensors incorporated into the fitting, with the added advantage that the sensor and the workstation retain their spatial relationship during any relocation. For example, if used with presence detection, the control can be very localised and set to switch lighting off when a single workstation is unoccupied.

In contrast, most occupancy control in open plan offices operates in zones, so the lighting for a group of workstations remains on when only one desk is in use. In addition, any reconfiguration of these controls in a fixed lighting installation can be costly and disruptive.

Freestanding lighting can also be used in conjunction with networked lighting management systems to provide centralised control and monitoring of luminaire performance.

A further cost-of-ownership benefit is that the light sources are located within easy reach, so replacing them does not require specialist access equipment. It's also worth bearing in mind that fewer uplighters are required to light a space, compared with ceiling mounted lighting, resulting in fewer lamps to change as well as lower power consumption.

When all of these factors are considered, it's clear that freestanding direct/indirect lighting has the potential to deliver the flexibility required for many modern workplaces. Just as importantly, in combining highly efficient light sources and controls with networking capabilities, this innovative lighting also helps to address the sustainability imperatives of most businesses.

For all of these reasons, there are now very strong arguments for considering the use of freestanding lighting from the early design stages of a new build or refurbishment project. It won't be the ideal choice for every situation but it has far greater potential than many people realise.

Most energy efficiency measures have focused on reducing demand at the point of consumption. A wealth of new, more efficient drives and motor controllers, low energy lighting and building management systems means the choice for end users seeking to manage their electricity consumption has never been greater, explains Alex Rathmell, head of analysis at Power Perfector

Implementing the disparate measures mentioned above, and sustaining the energy savings they deliver, can be a challenge for energy managers in a complex workplace. Many control-based energy saving measures rely on goodwill or good behaviour from the workforce to remain effective, while others pose the threat of serious disruption to the working day, or complex technical challenges in implementation.

In this context, the effect of poor power quality is often overlooked both as a problem and an opportunity. Until recently, power quality was not considered to be a major source of inefficiency in electrical equipment, as the UK had historically enjoyed a relatively clean power supply compared to many countries. Now though, several factors have conspired to make tackling power quality an attractive energy saving measure, and one of the quickest and easiest wins in the energy manager's tool kit. By cleaning up the supply, energy savings are immediate, permanent and transparent - that is they are achieved without affecting daily operations and without being dependent on the vagaries of human behaviour.

The breakthrough has come with the recognition that the UK's power supply is no longer optimal. Generally when we plug in to the mains supply we don't give a second thought as to how the voltage level might affect the efficiency of the electrical equipment we connect. We know that in the extreme, when the voltage is far too high, light bulbs glow brightly and blow with alarming regularity; whilst with very low voltages, TVs flicker and motors over-heat. Yet there is a broad band in between these extremes where the effects are more subtle, though very significant in terms of energy efficiency and the longevity of equipment.

Following European harmonisation in 1995, the declared electricity supply in the UK became 230V nominal, +10%/-6%, so supply voltage could be anywhere between 216V and 253V depending on local conditions. The European standard covering the UK (EN 50160:2007) now says the permissible range is 230V ±10%, making 207V the minimum level at which UK equipment must operate. Most electrical equipment, designed to work for the whole European market, actually has an optimum operating level of 220V, as this was the nominal supply level prior to 1995. Yet in practice over 90% of sites in the UK continue to receive voltage at the historic average level of 242V - and will continue to do so because the design of the supply infrastructure cannot easily be changed.
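The arithmetic behind these bands is easy to verify; a few lines of calculation reproduce the figures quoted above:

```python
# Checking the voltage bands quoted in the text.
nominal = 230.0

lo_uk = nominal * (1 - 0.06)   # UK declared minimum under the +10%/-6% band
hi_uk = nominal * (1 + 0.10)   # UK declared maximum
lo_en = nominal * (1 - 0.10)   # EN 50160 minimum under the 230V +/-10% band

historic_average = 242.0       # typical voltage actually delivered in the UK
optimum = 220.0                # design optimum for most European equipment

print(f"UK declared range: {lo_uk:.1f}V to {hi_uk:.1f}V")  # 216.2V to 253.0V
print(f"EN 50160 minimum:  {lo_en:.1f}V")                  # 207.0V
print(f"Typical excess over optimum: {historic_average - optimum:.0f}V")
```

That final figure - a typical site running some 22V above the 220V optimum - is the gap that voltage optimisation sets out to close.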

The grid is designed to accommodate a certain degree of ‘volt drop' across a geographical area, so a small percentage of consumers connected to remote or dense parts of the network are already operating towards the lower end of the permissible range. Unilaterally reducing voltages could therefore take these consumers below the minimum level. The only solution is for the vast majority of users to be supplied at the upper end of the range (an approach that also minimises I²R system losses), so we routinely supply our equipment at over 20V higher than its optimum supply level, wasting huge amounts of energy in operating plant and equipment. In addition to this grid design limitation, the connection of various new renewable resources such as wind to comparatively weak points in the grid is expected to have the effect of increasing voltages locally even further. So at a time when an increasing proportion of our equipment needs a lower voltage, our commitment to renewable power may actually elevate voltage and inadvertently increase energy use.

The only real way to address this issue is to optimise the voltage locally, to ensure site-by-site that energy using equipment is receiving the correct voltage, maximising its efficiency and its lifespan. Of course, customers with their own HV transformer have a certain degree of control - they can typically adjust their voltage by up to ±5% by altering their transformer's tap settings - but this doesn't constitute optimisation in any meaningful sense of the word. Reducing a 242V supply by 5% gives us 230V, still 10V higher than the optimum level so there is a substantial missed opportunity for savings.

Just as average voltage levels remain high or creep upwards, other factors are contributing to a deteriorating power quality picture in the UK to which engineers are having to adjust. As well as potentially increasing average voltage levels, the introduction of diverse but intermittent generating sources into the mix leads to more switching on the grid, increasing incidents of ‘transient' voltages and other distortions that can damage sensitive electronic equipment and cause nuisance tripping. Levels of harmonic distortion are also at historically high levels, not least due to the ongoing replacement of older linear equipment with more efficient non-linear loads such as high frequency electronic ballast lighting, and huge numbers of inverter drives. These technologies should be applauded, but rising levels of harmonic distortion dramatically increase losses in a power system.

However these new realities do not mean tackling the UK's most prevalent power quality issues need be a formidable technical challenge. While serious or unusual problems should always be dealt with using bespoke solutions, a ‘common sense' approach to power quality can be applied to any site where energy savings are needed. This approach is to optimise the voltage at the source of the building's power supply, to ensure it is well matched to the electrical equipment, while taking the opportunity to protect the site from grid-borne transient distortions and reduce losses by filtering harmonic distortion and improving power factor.

Fortunately, these twin pressures of deteriorating power quality and the need to improve energy efficiency have been experienced before - in Japan, nearly twenty years ago.  At the time, Japan had the same pressures on fuel and energy prices that we have today, as they have to import all of their fuel requirements and rely on expensive nuclear generation. As a result attention was focused on the incoming supply to a building, on its voltage level and quality, as a way of achieving further incremental energy savings in a country where the need for efficiency is deeply embedded in working culture. Technology known as ‘Voltage Power Optimisation' (or ‘VPO') was developed specifically to save energy by optimising supply quality, dealing with the most prevalent power quality problems in a device that can be fitted ubiquitously, as a permanent part of every site's electrical supply infrastructure. The technology must add neither risk nor additional energy use to the site, so total reliability and extremely high efficiency are key elements of the design of VPO equipment. These requirements, as well as the need for additional power quality improvements, mean that conventional automatic voltage regulator technology with its reliance on moving parts and electronic control systems is not appropriate for at-source applications - the development of VPO represents a new approach.

There is an even greater opportunity for VPO technology in the UK than in its home country of Japan, where the supply voltage parameters are 90-110V and VPO adjusts the voltage by -2%, -4% or -6%. This approach - essentially correcting for local problems on the grid and targeting the voltage at the optimum level for equipment operation - has saved millions over a period of many years. In the UK, meanwhile, our institutional problem with over-voltage means that not only can a higher proportion of sites benefit from optimisation, but the savings delivered in each case are substantially higher than those seen in Japan. Even on an efficient site with modern lighting and inverters fitted to its motors, savings of 4-8% are typical, while a site with a high proportion of older equipment is likely to experience savings of 10-15% in many cases. This means the technology will pay for itself in 2-4 years, making it a popular choice for energy managers confronted with CRC Energy Efficiency Scheme obligations and the looming threat of widely-predicted energy price increases.
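A simple-payback calculation shows how the quoted savings bands translate into payback periods. The 4%, 8% and 15% figures come from the text; the consumption, price and installed cost below are assumptions for the sake of example, and a real site's payback will depend on its own figures:

```python
# Illustrative simple-payback calculation for voltage optimisation.
# The savings percentages come from the text; consumption, price and
# installed cost are assumed figures for illustration only.
annual_kwh = 1_000_000   # assumed annual site consumption
price_gbp = 0.12         # assumed electricity price per kWh
install_cost = 20_000    # assumed installed cost of VPO equipment

for saving in (0.04, 0.08, 0.15):
    annual_saving = annual_kwh * price_gbp * saving
    payback = install_cost / annual_saving
    print(f"{saving:.0%} saving -> GBP {annual_saving:,.0f}/yr, "
          f"payback {payback:.1f} years")
```

Under these assumed figures, the typical 4-8% band brackets the 2-4 year payback the text describes, while an older site at 15% pays back in little over a year.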

Optimising power supply at source is a way of harvesting a number of different opportunities for savings across a diverse range of plant and equipment. The savings seen on the main meter are an aggregate of a range of different effects - some dramatic, some subtle - across the whole site. Technically, most savings are realised in induction motors and lighting equipment. Optimising the voltage to a motor essentially means giving it a supply that is appropriate for its loading point. An average motor is both over-specified (and therefore lightly loaded) and, in the UK at least, supplied at an over-voltage. So losses can be reduced by bringing the site voltage to a lower level without affecting the motor's operation. Lighting benefits by being returned to its 'design' voltage and brightness, so both current and power are reduced and lamp life is increased substantially. In fact, almost all equipment will benefit from extended lifespan if provided with an optimised power supply, adding a tangible maintenance cost reduction benefit to the available savings.

Increasingly, energy managers are finding that addressing power quality issues at source is one of the highest performing and lowest impact tools at their disposal, and the take up by major retail, governmental and public sector organisations confirm that VPO is among the most attractive energy saving measures on the market. This reflects the growing realisation that poor power quality is wasteful, and its correction presents an invaluable opportunity for energy savings.

Specifiers must assess enclosure construction before specifying products, as the incorrect choice can lead to significant and costly consequences. They should also make sure they fully understand the system of IP ratings, to avoid incorrect choices and spending more money than necessary. Here, Darren Hodson from Schneider Electric explains the system of enclosure ratings, discusses the differing materials available and highlights one of today's most common misconceptions surrounding the ratings standards - IP69K

The various enclosure materials available have their strengths and weaknesses, and in order to specify the most appropriate material these must be fully understood. In addition, the quality of the enclosure is critical. The role of an enclosure is to protect valuable electrical components and personnel, and it just doesn't make sense to save a few pounds by purchasing an inferior product to protect high value systems. A substandard enclosure could result in leaks, damage to equipment, and possibly even become a hazard to the public. If this happens, not only is the user faced with the cost of replacing the enclosure, there is also the cost of changing any damaged components, downtime and possible litigation.

It is critical the same level of time and investment goes into choosing the right quality enclosure, in order to reflect the time and money spent in developing the system it contains and the system(s) it is connected to. Choosing the right material for the job is also an important consideration.  Buying a high quality enclosure, but in the wrong material, can be a costly mistake. 

Depending upon the application and the preference of the customer, there are three common materials from which enclosures are manufactured: mild steel, stainless steel and GRP. But regardless of the material used, each enclosure should be chosen to suit the specific application it is intended for, and this includes having the appropriate IP rating. IP ratings are defined in the IEC 60529 standard for degrees of protection provided by enclosures, published in the UK as BS EN 60529.

The degrees of protection are specified by the letters IP, followed by two or more digits. The first digit (0 to 6) indicates the protection given by the enclosure to equipment within it against the ingress of objects, and also the protection of persons against contact with live parts of equipment within the enclosure. The second digit (0 to 8) relates to the protection of equipment against the harmful ingress of water. Either digit can be replaced by ‘X' for an unspecified condition.
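The structure of an IP code can be captured in a short sketch. The pattern below follows the IEC 60529 structure just described; the function name and the decision to leave supplementary letters uninterpreted are choices made for this example:

```python
import re

# Minimal sketch of an IEC 60529 IP code parser: two characteristic
# numerals (or 'X'), optionally followed by supplementary letters,
# which are captured here but not interpreted.
IP_PATTERN = re.compile(r"^IP([0-6X])([0-8X])([A-Z]*)$")

def parse_ip(code: str):
    m = IP_PATTERN.match(code.upper())
    if not m:
        raise ValueError(f"Not a valid IEC 60529 IP code: {code!r}")
    solids, water, letters = m.groups()
    return {
        "solids": None if solids == "X" else int(solids),  # objects/contact
        "water": None if water == "X" else int(water),     # water ingress
        "letters": letters or None,                        # supplementary
    }

print(parse_ip("IP65"))   # {'solids': 6, 'water': 5, 'letters': None}
print(parse_ip("IPX7"))   # solids protection unspecified
```

Note that under this strict reading a string such as "IP69K" fails to parse, which chimes with the point made later in this article that IP69K originates in a separate German standard rather than IEC 60529.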

Optional supplementary letters can be used to specify only the protection of persons against access to hazardous parts, and to stipulate special conditions, such as use for high-voltage apparatus or under specified weather conditions.

In general, a higher number represents better protection, although specifiers should be aware this isn't always a guarantee: an enclosure might, for example, pass the tests for IP67 but not those for a lower rating such as IP65.

It is important specifiers fully understand the conditions of use for an enclosure, as simply specifying a high IP rating does not necessarily mean it is right for the job. The designations refer to the ability of the enclosure to pass the tests under controlled conditions, not to its ability to withstand influences such as weather, sunlight, corrosion, or extremes of temperature. A product can meet the highest level for protection against ingress of water, yet be subject to rusting, so customers must make clear what they are actually expecting from an enclosure rather than relying solely on an IP rating.

In addition to IEC (BS EN) 60529 there are two other standards widely used for enclosures; IEC (BS EN) 62262 ‘Degrees of protection provided by enclosures for electrical equipment against external mechanical impacts (IK code)' and IEC (BS EN) 62208 ‘Empty enclosures for low-voltage switchgear and control gear assemblies - general requirements.' BS EN 62262 uses the letters IK followed by the numerals 00 to 10 to specify the enclosure's ability to withstand mechanical shock including direct impact.

These ratings are used across all materials, including mild steel - the UK's most popular choice. This type of enclosure is suitable for most indoor applications. With IP ratings up to IP66 and a high IK rating, it is robust and strong in many environments. The fact that it is easily modified is another reason why it has remained a popular choice for so long. However, specifiers are gradually realising its weaknesses. Mild steel has poor anticorrosion properties if the material is not treated, and this treatment is usually expensive. In addition, cut-outs made after painting must also be protected, adding a further cost.

As an enclosure material mild steel still has its place. For general purpose enclosures, either indoors or in industrial and commercial premises, it is a cost effective solution but the fact it corrodes so quickly makes it an unsuitable choice for any external applications.

Stainless steel has been a popular material choice for decades, typically used within the food manufacturing, food processing and pharmaceutical industries as well as for most external applications. It provides the same benefits as a mild steel enclosure but with greater longevity in aggressive environments.  It is also rust resistant, however depending on the grade and the environmental conditions, tarnishing and corrosion can occur. Stainless steel also has its own natural finish and so requires no further treatment.

GRP is best suited to outdoor applications as it does not corrode in damp/wet conditions, even when exposed to sea salt. It also offers excellent protection against UV rays and therefore won't discolour. Being an insulator, it offers extra peace of mind on public access sites, and so GRP is fast becoming a major competitor to steel with its insulation, strength and corrosion resisting properties over a temperature range from -50°C to +150°C.

GRP enclosures are designed for the wide variety of aggressive applications in which they are used. In addition to the material, which is double insulated, self-extinguishing and halogen free, there are a number of anti-vandal features which make unauthorised access difficult. The list of industries that now accept GRP enclosures is growing and includes security, airports, highways, rail, utilities, telecoms and agriculture.

It is also important to remember, especially when considering harsh environments, high IP levels are not necessarily an indication of a product being weatherproof. Other design features such as canopies also contribute to the enclosure providing the correct level of protection.

IP ratings are invaluable in ensuring enclosures meet the correct standard, but applying them is not always straightforward, as highlighted by one of today's most common misconceptions - requests for enclosures rated IP69K. At first sight, when you consider the rules for IP codes, there is no such thing, since this rating is not mentioned in any of the standards above. In fact it stems from a German national standard developed for use specifically in the automotive industry.

DIN 40050-9 adds to the IEC 60529 rating system with an IP69K rating for high-pressure and high-temperature wash-down applications. The IP69K test specification was initially developed for electronic equipment on road vehicles, but has also been used in other areas such as the food industry, where the use of pressure washers is common.

This standard is purely a German national one and currently has no real meaning in the UK or other countries, as it doesn't feature as part of a British or International standard. A project is now underway to incorporate its requirements into IEC 60529 but initial attempts by various test houses found the test equipment and procedures were not precisely defined by the DIN standard. This means they do not give the same result when performed by different test houses, and so cannot be compared. Some research has resulted in a proposal to modify IEC 60529 to include the designation IPX9, but this is still at an early stage, and needs more work before it can be published as an amendment to the standard.

In the meantime buyers of enclosures should be aware that ‘IP69K' products from different manufacturers may differ, and might not even pass the tests for IPX5. They should also remember that even the IEC 60529 tests are fairly short, up to 30 minutes for IPX7, although longer immersion can be agreed as part of IPX8. As a result they do not define the enclosure's ability to withstand long-term influences such as weather conditions. It is also often forgotten that the ‘water ingress' tests do not specify that no water must enter; they allow water to enter but not in quantities that are considered to be ‘hazardous', which of course cannot be determined without knowing what apparatus will be within the enclosure.

Today's enclosures offer a wide choice of materials and the breadth of products available is always expanding, but specifiers and designers should remember that correct material specification is vital in achieving product longevity. It is not possible simply to choose the enclosure with the highest IP rating and expect it to do any job, in any environment. Specifiers need to carefully assess the conditions of use and prescribe the IP rating that is most appropriate - and, importantly, one that is recognised by IEC or British Standards - as well as choosing the appropriate material for their enclosures.