The 2012 Olympics is set to improve the economy of the UK, create hundreds of jobs in the construction industry and improve the transport infrastructure of the South East. However, it is a very ambitious project and one of the biggest challenges the UK electrical industry has ever faced. Mark Gledhill from the Electrical Markets Division of 3M explains.

The Olympics is one of the biggest and most exciting construction projects the UK has seen for many decades. It is set to be good news for the economy too: according to figures from the Centre for Economics and Business Research (CEBR), the construction industry will have contributed £14bn to GDP by 2012.
The Olympics project also creates massive new job opportunities: forecasts by ConstructionSkills estimate that the 2012 Olympics will create 33,500 additional jobs in the construction sector alone, peaking in 2010. Last but by no means least, let us not forget the likely boost the Games will provide to the tourist industry. Yet long before the 2012 Games were awarded to London, there were concerns about the scale of the construction task. David Higgins, head of the Olympic Delivery Authority (ODA), has himself recently been quoted as saying: “The timetable is extremely tight.”
While achievable, and currently several months ahead of schedule, this is a truly ambitious construction project. Moreover, we are not just talking bricks and mortar: the Olympics are going to require many miles of new power cables to be laid, both permanently and temporarily, as well as the installation of hundreds of power sub-stations. For the electrical power industry, the Olympics are both an opportunity and a challenge.
Overview of the project
On 25 July 2006, the ODA outlined the key timelines for the project. The organisation has allocated the first two years – 2006 and 2007 – to planning, design and procurement, four years to building the venues and infrastructure, then one year for test events. That means the bulk of the construction work is scheduled to take place between 2008 and 2010.
While that may feel a long way off, and the main construction work cannot start until the ODA has acquired all the necessary land, the reality is that contracts and detailed planning for electrical power projects will start from autumn 2006 onwards. In fact, some of the preparatory work has already begun: 50 electricity pylons have been removed from the main site.
Power will be fed to the Olympic site at 132kV, with direct 11kV connections within the village. This involves taking the existing overhead lines and re-installing them underground. The power requirements created by the Olympics extend beyond the main site in the Lea Valley: virtually everything to do with the Games will involve a light being switched on, a computer being powered or a phone call being made. One of the biggest focus areas of the project has been the need to enhance the transport infrastructure in the South East.
Transport infrastructure
The Channel Tunnel high-speed rail link is perhaps the most publicised, but there are also extensions to several underground lines, including the Jubilee, Northern and Central. The Docklands Light Railway sees what is perhaps the biggest change: its capacity will increase by 50%. New roads, including a new road bridge over the Thames, are proposed. In addition, designated existing roads may need upgrading in terms of lighting and illuminated signage. There is also going to be a raft of new buildings that will need power, including new hotels, train stations and construction sites.
Beyond the main Olympic village, there are other places in Greater London and the South East that will be involved, such as Eton Dorney in Berkshire for rowing and Waltham Forest in Essex for hockey, cycling and tennis. While some of these facilities exist, others will need to be built from scratch.
Clearly, the pressure on the UK’s already over-stretched power supplies will reach new levels. So, given the scale of the task in hand, where are the challenges and how can they be overcome?
In greenfield environments, installation of new power lines should be relatively straightforward. However, wherever construction work is taking place in areas being regenerated, the situation is quite different. Let us remind ourselves of exactly what is beneath our feet – a mix of different kinds of cable, some polymeric, some paper, other long-forgotten types and, in some cases, cable installed over a century ago. Record keeping in years gone by was not as thorough as it is today, meaning the companies tasked with extending existing cable links or making repairs may not know exactly what they are dealing with until the excavations have been made.
Many of these power projects are going to take place beneath roads and other busy areas. Legislation such as the Traffic Management Act means utilities and contractors have precious little time in which to carry out construction work in urban areas. Add to the mix the fact that these power cables often share trenches with many other utilities and the picture becomes even more complicated, because the risk of inadvertent third-party damage rises. So how can the electrical power industry address these issues?
Addressing the challenges
Clearly, planning and co-operation are going to be key. Apart from the winning construction consortium, a whole variety of utility companies – not just power, but water and other services – are going to need to collaborate to ensure work is carried out quickly and efficiently. For example, it would make a lot of sense if, once a trench is opened, all the installation work by the various utilities were carried out before the trench is closed, rather than it being re-opened, perhaps several times. And by working in tandem, the risk of utilities and their contractors inadvertently damaging each other’s cables or pipes would be reduced.
For many years, there have been calls for utilities to work together in this way. In fact, the power utilities tend to communicate closely with one another already, albeit largely on an informal basis. Perhaps there is a need for a more structured approach.
Using the right equipment can make all the difference too. The choice of product can affect how quickly installation takes place, as well as future reliability. One example is jointing and terminating. Considering the many miles of cables that will need to be installed throughout the South East to support various aspects of the Olympics, directly or indirectly, electrical installers are going to be looking at creating thousands of cable joints and terminations between them.
It is widely appreciated that jointing and terminating can have a huge impact on the smooth running of any cable installation project. Areas to watch include the need to connect different kinds of cable, such as new polymeric cable to ageing paper-insulated (PILC) cable, or to connect different cross-sectional areas and voltage classes. Creating joints like these can be tricky and, depending on the techniques used, can take anywhere from a few hours to a day to complete. The problem is that the installer may well not know what he is dealing with until the joint bay has been excavated.
Cold-applied technique
The cold-applied jointing and terminating technique – pioneered by 3M several decades ago and increasingly the preferred method of UK utilities – helps here. Ranges such as 3M Cold Shrink enable virtually any two cables to be connected using standard products, with bespoke products available for the rare eventualities not covered in the main range. The cold-applied approach has other benefits, such as speed of installation and very small joint bays.
To give an example, creating a medium-voltage cold shrink joint connecting paper and polymeric cabling can take just a couple of hours. As a result, the installer can move on to the next project more quickly, shortening the overall completion time of the Olympic project. Furthermore, the design of the joint and termination kits ensures a consistent level of installation quality, drastically reducing the room for human error and lowering the chances of future faults. Cold-applied cable jointing and terminating also has health and safety benefits: as no heat is required, gas bottles do not need to be used on site.
The importance of training
While using products such as cold-applied jointing and terminating helps to eliminate mistakes, there is no substitute for proper training. Collectively, the electrical power industry – utilities, manufacturers, installers – needs to focus more on ensuring everyone is trained to a high standard, and refresher training is carried out. This isn’t just for the Olympics, but for the future of the industry as a whole.
The 2012 Olympic Games are a massive opportunity for the UK, including the electrical and construction industries.
As Tessa Jowell was recently quoted as saying, “This is about much more than 29 days of sport in the summer of 2012. This huge and impressive power lines project shows our determination to leave a lasting legacy for generations to come, improving lives and changing the face of London forever. The games are a chance to showcase what we can all do and to forge a new standard in UK power installation.”

The benefits offered by using high-voltage insulation testing as a diagnostic tool are all too often neglected, says Mark Palmer of Megger. He explains why it is needed, what it has to offer and how best to carry it out.
Effective and reliable insulation is essential for the correct and safe operation of virtually every item of electrical equipment. Even in low-voltage systems, regular insulation checks are highly desirable and, in some cases, such as portable appliance testing (PAT), they are a legal requirement.
In medium-voltage systems, insulation testing is even more important, because insulation is often under greater electrical stress and failures are likely to be more costly and, potentially, more dangerous.
Equipment manufacturers, of course, routinely perform insulation tests on their products before supplying them to the end user. Why, then, are further tests needed? The first part of the answer is that it is by no means unknown for insulation damage to occur while equipment is being installed or serviced. More important, however, is that even the best insulation degrades over time.
While this is unavoidable – all insulation starts to deteriorate from the moment it’s put into service – well designed equipment, operated within its ratings, should give many years of reliable service. Nevertheless, the ability to predict accurately when that period of reliable service is coming to an end is an invaluable aid to avoiding costly downtime and unplanned maintenance. Testing is the key but it can only be a reliable indicator if the tests are properly performed, using appropriate equipment.
Dependable results
The first factor to be considered is the test voltage. For dependable results, this needs to be high enough to effectively measure the insulation resistance, but not so high as to overstress the insulation during the test. Some standards relating to specific equipment insulation testing, such as IEEE 43-2000, recommend the use of voltages greater than 5kV for an insulation resistance test.
For medium-voltage installations, this means that the usual choice is an insulation tester with a nominal operating voltage of 5kV or 10kV. There is a little more to this choice, however, than simply reading the headline voltage of the tester data sheet, as we shall see.
First, we need to examine briefly what happens during an insulation test and how the test equipment reacts.
When the test voltage is applied to a piece of insulation, a current flows. This current is made up of four major components. The first is the capacitive charging current, which is initially large but in very short time decays exponentially to a value close to zero, provided that the instrument has a sufficient current capability to fully charge the capacitance of the item under test.
The second component is the absorption or polarisation current, which is the result of three effects – a general drift of free electrons through the insulator as a response to the applied electric field, molecular distortion caused by the field and alignment of polarised molecules. This current also decays toward zero but over a very much longer timescale than the capacitive current.
Surface leakage
The third current component is surface leakage, which is present because the surface of the insulation is invariably contaminated to a greater or lesser extent. This current is constant with time but is highly dependent on temperature.
The final current component is the conduction current, which is the current that would flow through the insulation once it was fully charged and full absorption had taken place. This current also remains constant with time. Accurately measuring the conduction current, which is often measured together with the surface leakage current, is the prime objective of insulation testing.
The surface leakage current may, however, be excluded from the measurement if the instrument features a guard connection.
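The four current components described above can be put together in a small numerical sketch. The Python model below uses purely illustrative magnitudes and time constants (not data for any real insulation system) to show how the total current evolves as the two decaying terms die away, leaving only the constant leakage and conduction currents:

```python
import math

def insulation_test_current(t,
                            i_cap0=50e-6, tau_cap=0.5,    # capacitive charging term
                            i_abs0=5e-6, tau_abs=120.0,   # absorption/polarisation term
                            i_surface=0.5e-6,             # surface leakage (constant)
                            i_conduction=0.1e-6):         # conduction (constant)
    """Total current (A) drawn by the insulation at time t seconds into the test.

    All magnitudes and time constants are illustrative placeholders,
    not data for any real insulation system.
    """
    i_cap = i_cap0 * math.exp(-t / tau_cap)   # decays within seconds
    i_abs = i_abs0 * math.exp(-t / tau_abs)   # decays over many minutes
    return i_cap + i_abs + i_surface + i_conduction

# Shortly after switch-on the capacitive term dominates; a minute in,
# only the slowly decaying polarisation current and the two constant
# components remain.
print(insulation_test_current(0.1))
print(insulation_test_current(60.0))
```

This is why a reading taken too early in a test mostly reflects charging currents rather than the insulation resistance itself.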
Now let’s examine how an insulation tester behaves when a test is initiated. At first, it must supply the relatively large capacitive charging and polarisation currents.
Portable testers
Portable insulation testers, however, often use high-impedance voltage sources, both for safety and to make the size and weight of instruments manageable. This means that their output voltage falls to a fraction of its nominal value while these currents are flowing.
As we have seen, the charging and polarisation currents reduce with time and, as they do so, the output voltage of the tester will rise. Unfortunately, not all testers are created equal. For those with a good load curve, the output voltage rises quickly as the current decreases and soon reaches a plateau close to the nominal voltage rating of the instrument.
For instruments with a poor load curve, the rate at which the voltage rises as the current decreases is much slower, and the start of the plateau region is poorly defined. In many cases, this means that the desired test voltage is never achieved.
As a result, users of such instruments may be led to believe that the test has reached a steady-state condition long before the voltage has actually stabilised. The test may, therefore, indicate acceptable insulation performance when, in fact, that performance has only been checked at a voltage much lower than intended.
Maximum value
When selecting a high-voltage insulation tester, it is, therefore, beneficial to examine the instrument’s load curve, but this is by no means the only consideration. Another important factor, particularly if diagnostic testing is intended, is the maximum value of insulation resistance that can be measured.
This is often a source of confusion, as many insulation testers will give a “greater than” indication when their measurement range is exceeded. Bear in mind, however, that “greater than” is not a measurement; it’s merely an indication that the result is beyond the measuring range of the instrument. In go/no-go testing, it may be sufficient to know that the insulation resistance is greater than 1 gigohm (1,000 megohm), but the situation is very different with diagnostic testing.
Consider, for example, an item of equipment where the insulation resistance has been recorded over a period of years as relatively steady at around 100 gigohm. A new measurement, however, indicates that this has fallen to 40 gigohm and a further measurement taken a period of time later shows a fall to 10 gigohm.
Potential problems
Clearly, these changes show that there is a potential problem that needs investigation before it can develop into a serious fault. If the same tests had been carried out with an instrument that simply indicated infinity for all resistance values above 1 gigohm, no change at all would have been detected and the insulation would have been given a clean bill of health.
This example illustrates not only the importance of choosing instruments with extended measuring ranges but also of performing insulation tests on a regular basis and recording the results. The database so produced is a powerful tool for initiating preventative maintenance and eliminating the cost and inconvenience of breakdowns.
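As a simple illustration of how such a database of results might be used, the Python sketch below flags any test where the recorded resistance has fallen sharply relative to the previous reading. The factor-of-two threshold is an arbitrary choice for illustration, not a figure from any standard:

```python
def flag_insulation_decline(readings, drop_factor=2.0):
    """Return the readings where insulation resistance fell by more than
    drop_factor relative to the previous test.

    readings: list of (label, resistance_gigohm) pairs, oldest first.
    The factor-of-two threshold is illustrative, not from any standard.
    """
    alerts = []
    for (_, prev_r), (label, r) in zip(readings, readings[1:]):
        if prev_r / r >= drop_factor:
            alerts.append((label, prev_r, r))
    return alerts

# The example history from the text: steady for years, then falling fast
history = [("year 1", 100), ("year 2", 98), ("year 3", 40), ("year 4", 10)]
for label, before, after in flag_insulation_decline(history):
    print(f"{label}: fell from {before} to {after} gigohm")
```

An instrument that simply reported "greater than 1 gigohm" for every one of these readings would, of course, give this trend analysis nothing to work with.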
In this respect, it is worth noting that many of the latest insulation testers have internal facilities for storing test results. These can then be downloaded to a PC for analysis and archiving, a procedure that not only saves time but also eliminates the risk of transcription errors.
One final factor that should be considered when choosing a high-voltage insulation tester is whether a selectable breakdown detector is needed.
With such a detector enabled, the test can be terminated immediately, before a breakdown of insulation and possible damage. If, however, it is an advantage to allow a breakdown to occur – for example, to assist in the location of the weakness in the insulation – the ability to disable the detector is an advantage, as is an instrument with a good short-circuit current capacity.
High-voltage insulation testing, particularly when it is carried out on a regular basis, is an invaluable diagnostic tool and an important aid to predicting and preventing equipment failures.
Maximum benefits can only be achieved, however, if the insulation tester is well chosen. Essentials are a good load curve, an extended measuring range and, for true diagnostic capability, a full battery of pre-programmed tests, including polarisation index, dielectric absorption ratio, step voltage and dielectric discharge. Instruments that meet all of these requirements, like those in the Megger range, will quickly repay their comparatively modest purchase price.
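Two of the diagnostic tests just mentioned – polarisation index and dielectric absorption ratio – are simply ratios of resistance readings taken at fixed times into the test, and can be sketched as follows. The pass thresholds in the comments are common rules of thumb rather than universal limits:

```python
def polarisation_index(r_1min, r_10min):
    """PI = R(10 min) / R(1 min); a value above about 2 is often taken as healthy."""
    return r_10min / r_1min

def dielectric_absorption_ratio(r_30s, r_60s):
    """DAR = R(60 s) / R(30 s); a value above about 1.25 is often taken as acceptable."""
    return r_60s / r_30s

# Illustrative readings in gigohm
print(polarisation_index(2.0, 5.0))           # 2.5
print(dielectric_absorption_ratio(1.6, 2.2))  # 1.375
```

Because both figures are ratios, they are largely self-compensating for temperature, which is one reason they are favoured for trend analysis.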

Mike Thornton of ABB explains how active filtering technology can provide an effective solution to the growing problem of harmonic pollution.
AC electricity supplies with a pure sine wave are now almost a thing of the past. Of course, harmonic pollution is nothing new – the concept has been with us for many years in large industrial complexes, such as the chemical and steel industries, which employed large mercury-arc rectifiers. In recent years, however, the problem has grown considerably due to the proliferation of non-linear devices such as high-power inverter drives and static UPS equipment, and has crept down to the building level thanks to the ubiquitous PC, printer and fluorescent light.
An immediate reaction might be to think there is no problem as long as your equipment continues to function correctly. But even when all appears well, un-filtered harmonics can be the ‘invisible killer’ in your system, causing nuisance tripping, mysterious fuse blowing and overheating of cables and transformers, considerably shortening their service life.
Distorted waveforms
A harmonic frequency is simply a frequency that is a multiple of the fundamental frequency. For example, a 250 Hz waveform superimposed on a 50 Hz network is the 5th harmonic, 350 Hz is the 7th and so on. The first effect of harmonic pollution is to increase the RMS and peak value of the distorted waveform. It is possible for a distorted waveform containing harmonics up to the 25th harmonic to have a peak value of more than twice the pure waveform, and an RMS value that is 10 per cent higher.
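The effect on the RMS value can be verified numerically. The Python sketch below superimposes an illustrative 20% fifth harmonic on a unit-amplitude fundamental and compares the RMS and peak values; the amplitudes are arbitrary, chosen only to show the mechanism:

```python
import math

def rms(samples):
    """Root-mean-square value of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# One period of a unit-amplitude fundamental, sampled at N points
N = 10_000
clean = [math.sin(2 * math.pi * i / N) for i in range(N)]

# Same waveform with an illustrative 20% fifth harmonic superimposed
distorted = [math.sin(2 * math.pi * i / N) + 0.2 * math.sin(2 * math.pi * 5 * i / N)
             for i in range(N)]

print(f"RMS clean:      {rms(clean):.4f}")      # 1/sqrt(2) = 0.7071
print(f"RMS distorted:  {rms(distorted):.4f}")  # sqrt(0.52) = 0.7211
print(f"peak distorted: {max(distorted):.4f}")  # 1.2000
```

Because the harmonics are orthogonal to the fundamental, the RMS values add in quadrature: even a modest 20% fifth harmonic lifts the RMS by about 2% while raising the peak by a full 20%.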
This increase in RMS value is what leads to the increased heating of electrical equipment. Furthermore, circuit breakers may trip due to higher thermal or instantaneous levels, fuses may blow and power factor correction capacitors may be damaged. The winding and iron losses of motors increase, and they may experience perturbing shaft currents. Sensitive electronic equipment may be damaged, and equipment using the supply voltage as a reference may be unable to synchronise, and either apply the wrong firing pulses to switching elements or switch off. Interference with electronic communications equipment may also occur.
In installations with a neutral, zero-phase-sequence harmonics may give rise to excessive neutral currents. This is because they are in phase in all three phases of the power system and summate in the neutral. Excessive neutral currents are often found at locations where many single-phase loads (PCs, faxes, dimmers etc) are in service.
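The summation of zero-sequence (triplen) harmonics in the neutral can be demonstrated numerically. In the sketch below, three phase currents each carry an illustrative 30% third-harmonic component; the fundamentals, displaced by 120°, cancel in the neutral, while the third harmonics align and add:

```python
import math

# Three-phase currents: each phase carries a unit fundamental (displaced 120°)
# plus an illustrative 30% third-harmonic component.
def phase_current(t, shift, h3=0.3):
    angle = 2 * math.pi * 50 * t - shift
    return math.sin(angle) + h3 * math.sin(3 * angle)

N = 1000
period = 1 / 50
shifts = [0, 2 * math.pi / 3, 4 * math.pi / 3]

# Neutral current is the sum of the three phase currents at each instant
neutral_peak = max(
    abs(sum(phase_current(i * period / N, s) for s in shifts))
    for i in range(N)
)

# The fundamentals cancel; the third harmonics are in phase and add,
# so the neutral carries roughly 3 x 0.3 = 0.9 of the phase amplitude
print(f"peak neutral current: {neutral_peak:.2f}")
```

This is why a neutral conductor sized on the assumption of balanced linear loads can overheat badly when it feeds banks of PCs or dimmers.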
A further motivation for taking action against harmonics is that as well as affecting local systems, they may also disturb equipment in other plants. In order to limit this disturbance, maximum permissible distortion levels have been defined in standards and recommendations such as the recently revised ENA Engineering Recommendation G5/4-1 and BS EN 61000.
All UK consumers have a connection agreement with their DNO (distribution network operator), and part of any new agreement includes a requirement to meet the conditions of G5/4-1. Failure to take action to reduce the levels of harmonic distortion generated could ultimately lead to disconnection from the supply.
Tackling the problem
Methods employed to limit distortions in the supply network can take a variety of forms. One simple approach has been to move the point of common coupling (PCC), since a load causing a problem at 415V may be trouble-free at 11kV. Phase shifting transformers have also been employed, especially in drive systems where they can eliminate specific harmonic frequencies.
Another common solution is offered by the use of detuned passive filters consisting of a series circuit of reactors and capacitors connected in parallel on the system, with the cut-off frequency fixed at a value just below the lowest order harmonic of concern - typically 210 Hz.
Historically, the primary function of detuned passive filters was to protect the capacitors in PFC (power factor correction) equipment. Whenever standard PFC equipment is applied to an inductive network, there will always be a frequency at which the capacitors are in parallel resonance with the supply. Where harmonic currents are present, this parallel resonant circuit will amplify those currents on the system, in turn contributing to the premature dielectric failure of the PFC capacitors. The passive filter is a simple, cost-effective solution that prevents the magnification of harmonic currents at frequencies above the tuned frequency, while contributing to the reduction of the harmonic current generated and providing the reactive power compensation required in the pursuit of reduced electricity costs.
However, the passive filter does have limitations: its effectiveness in harmonic attenuation relies, in general, on the availability of lagging vars, the PFC applied and the transformer rating. Where a number of harmonic frequencies need to be reduced to specific levels to meet G5/4-1 requirements, the effectiveness of the passive filter is limited by its being tuned to a single frequency.
Active filters
In order to overcome the problems associated with traditional passive filters, ABB has, over the past seven years, developed and launched the PQF (Power Quality Filter) range of active filters for low-voltage applications – see Figure 1. The basic concept of the active filter is very simple: if you add two currents, identical in magnitude and frequency but exactly 180° out of phase, so that the peak in one coincides with a trough in the other, they cancel each other out. The PQF, which utilises advances in DSP (digital signal processing) and power electronic switches, i.e. the insulated gate bipolar transistor (IGBT), does this by continuously monitoring the line current in real time (at 40ms intervals) to determine which harmonics are present, and then actively generating a harmonic current spectrum with exactly the opposite phase to the components selected for filtering. The two out-of-phase harmonic signals effectively cancel each other out, so that the supply transformer sees a clean sine wave – see Figure 2.
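The cancellation principle itself is easy to demonstrate numerically. The sketch below uses arbitrary harmonic amplitudes and makes no attempt to model the PQF's actual measurement or switching scheme; it simply adds a 180°-shifted copy of the 5th and 7th harmonic content to a distorted load current and confirms that what remains is the pure fundamental:

```python
import math

W = 2 * math.pi * 50  # 50 Hz fundamental

def load_current(t):
    # Distorted load current: fundamental plus 5th and 7th harmonics
    # (amplitudes are illustrative, not measured data)
    return math.sin(W * t) + 0.20 * math.sin(5 * W * t) + 0.14 * math.sin(7 * W * t)

def filter_injection(t):
    # Active filter injects the same harmonic content shifted by 180 degrees
    return -(0.20 * math.sin(5 * W * t) + 0.14 * math.sin(7 * W * t))

# Supply-side current = load current + injected current; any residual
# deviation from a pure fundamental is numerical noise only
N = 1000
max_residual = max(
    abs(load_current(i / (50 * N)) + filter_injection(i / (50 * N))
        - math.sin(W * i / (50 * N)))
    for i in range(N)
)
print(f"max deviation from pure fundamental: {max_residual:.1e}")
```

In practice, of course, the filter must measure the harmonic content on the fly and regenerate it through power electronics, which is where the DSP and IGBT technology comes in.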
The PQF family covers a wide range of RMS current ratings from 30A up to 3600A with direct connection at voltages up to 690V. Higher voltages are possible, up to approximately 30kV, using special couplings. A key advantage of the PQF design is that it provides a compact solution with a small installation footprint that can be positioned for global or local compensation.
A PQF utilises closed-loop measurement for greater accuracy and can be programmed to filter up to 20 individual harmonics in three-phase systems (15 harmonics in four-wire systems) from the 2nd to the 50th harmonic. It can filter the selected harmonics either until their magnitudes are close to zero (maximum filtering) or until they reach a pre-set level (filtering to curve).
All PQF filters have a selectable facility for load current balancing across the phases, and between phases and neutral (PQFS), as well as reactive power compensation. However, power factor correction is not one of the filter’s main priorities and, since reactive power consumes the filter’s current capability, we recommend that, where possible, a de-tuned capacitor bank is used to provide the power factor correction as well as some harmonic cancellation. This then allows the use of a smaller active filter, resulting in lower overall costs.
PQF applications
ABB PQF active filters have been installed in a wide range of industrial and non-industrial applications world-wide including: electrolysis equipment; induction heating; printing press operations; variable speed drives; commercial buildings; banks; dealer floors; call centres.
A typical installation of an active filter was carried out recently at South West Water’s foreshore pumping station in the traditional Cornish fishing village of Padstow. Western Power Distribution, the local DNO, was concerned that the pumping station’s drives could give rise to unacceptable harmonic distortion on the local network, so ABB was called in to investigate and implement a solution.
The traces from Padstow in Figure 3 provide a dramatic illustration of active filtering at work. In trace 1 we see the unfiltered 400V AC three-phase supply with serious distortion of the current waveform – instead of two crossover points per cycle there are six. Trace 2 shows what happened when the 70A ABB active filter type PQFL was energised – there is a considerable improvement to the waveform and a 60 per cent reduction in current distortion.
Designed-in solutions
As well as carrying out on-site test analysis to solve existing harmonic problems, incorporating active filtering at the project design stage can prevent them from occurring in the first place.
There is, in fact, a considerable portfolio of harmonic information already in existence regarding the type and magnitude of the harmonic spectrum that can arise in a wide range of load situations. ABB has developed a specific harmonic system design program and, by utilising this portfolio of harmonic data and the experience gained over many years, cost-effective solutions can be incorporated at the design stage.
Harmonics, and how to deal with the effects of harmonic distortion, have often been seen as a ‘black art’. But thanks to the advent of active filtering technology, that reputation is rapidly disappearing as active filtering emerges as a scientific, cost-effective approach to improving network reliability, preventing unexplained outages and prolonging the service life of electrical installations.

When considering which enclosure is suitable for an application, the basic enclosure is only the starting point. Manufacturers in industry are now choosing their suppliers on the basis of quality, operational life and added features such as service packages, says Sando Selchow of Rittal.
Suppliers today need to offer quality enclosures that are ‘fit for purpose’ for a wide and varied array of applications, from banking, food processing, shipbuilding and rail and air travel through to the information kiosk, the car assembly line and the workstation in the R&D department. Within these applications, enclosures surround us – in many instances we are not even aware of their existence – but enclosures today need to be more than just an empty box.
Hostile Environments
Electrical enclosures used in hostile environments, such as manufacturing processes or refuse collection vehicles, are usually hosed down at the end of the day. Often high-pressure cleaning equipment is used, and the mechanical equipment or controls are included in this washdown. The enclosure housing these controls will need to withstand the effects of the water used in the cleaning process. Stainless steel enclosures are the ideal solution as they are corrosion-proof, but is enough consideration given to the protection rating of the enclosure?
Food manufacturing
More stringent cleaning methods are required within food processing plants, where strong high-velocity hoses are directed at machinery and their controls to prevent harmful bacteria from harbouring. The ambient temperature in a process plant within a poultry factory could be around 10ºC when in use, but when the cleaning process takes place, refrigeration is turned off, allowing the temperature to rise to 20ºC. The water used for cleaning is often hotter than 50ºC and can be delivered at a pressure of 100 bar. If the enclosure housing the controls were rated at only IP56 or IP66, it could fill with water, as the water temperature is far higher than the enclosure is designed for and the water pressure exceeds the design standard of the gaskets.
In these types of environments, where enclosures need to be protected against any form of water ingress, it is recommended that a sealed enclosure be used which meets the demanding IP69K standard.
The German DIN 40050 standard originated from requirements in the automotive industry, where resistance to spray water and high-pressure cleaning was required. The IP69K protection category certifies the enclosure to be proof against water ingress when tested on a turntable.
IP69K to DIN 40050 Part 9 lists the IP (international protection) ratings for road vehicles. The rating is described by a combination of two characteristic numerals plus, in some cases, an additional letter after the last numeral – in this case creating IP69K, where the letter “K” provides further information. The letter “K” denotes a special case for road vehicles, describing the protection of electrical devices with regard to foreign bodies, dust and, in particular, the penetration of water. The additional letter “K” is, however, no longer used exclusively in vehicular applications, but also in the food and beverage processing industries. As this test procedure differs considerably from the other IP tests, enclosures with IP69K test certificates currently represent the highest protection standard available against water ingress.
IP69K to DIN 40050-9 specifies a test with the following parameters: water pressure of up to 100 bar; a flow rate of 14-16 litres per minute; a water temperature of up to 80°C; and a spray distance of 100-150mm. The enclosure is sprayed from four directions – at angles of 0°, 30°, 60° and 90° – for 30 seconds each; to achieve this four-directional test, the test object is placed on a rotating turntable.
Protection class IP69K is therefore an important standard for enclosure systems used in the food industries. In addition to the water ingress properties, the food industry also requires hygienic standards for surface properties. Standard 304-grade stainless steel achieves the required levels, but what about the gasket material? Special properties are required, such as a surface that does not harbour bacteria and a smooth transition between the stainless steel parts and the gasket materials. The German Fraunhofer Institute IPA has certified the IP69K enclosure range to the highest standards in the food and hygiene sector.
Corrosion resistance
Standard sheet steel enclosures are easy to work with but will not always suit the environment in which they are situated. Typical finishes are an aluminium-zinc primer coat and a powder-painted topcoat. Other material choices are GRP, aluminium or stainless steel. Stainless steel 1.4301 (grade 304), for example, would suit a rural site such as a food and dairy processing plant, or an industrial site such as a chemical or pharmaceutical plant. For more aggressive environments near the coast or at marine sites, 1.4401 (grade 316) should be considered for better corrosion protection.
Outdoor enclosures should also have a high level of corrosion protection, as well as a system in place to prevent condensation. A typical example would have a roof plate screw-fastened at the correct distance to allow permanent air circulation.
Hazardous Areas
The new Atex directives address the use of equipment in potentially explosive atmospheres to ensure the equipment supplied is appropriate for the zones in which it is to be used. Atex 100a is the product directive, which facilitates the movement of goods across the EU by harmonising this product standard.
Enclosures that meet the new Atex 100a certification are imperative where explosions are possible due to gases that can be ignited at low temperatures; such enclosures also carry the EN50014 labelling of explosion-protected electrical equipment for use in Zones 1 and 2.
Seismic protection
Enclosure systems are also required in parts of the world where seismic events occur naturally and equipment must be housed in enclosures that protect against their effects. The Zone 4 (Bellcore) Telcordia GR-63-CORE specification details the forces of shock and vibration, and their duration, that equipment must withstand.
Electromagnetic Compatibility (EMC)
EMC means that equipment inside an enclosure is protected from electromagnetic waves entering the enclosure and damaging it. It also works in reverse: electromagnetic waves produced by equipment inside the enclosure are kept inside. An enclosure with doors or panels fitted to all openings already provides a good start for EMC, as it forms a basic Faraday cage. Effectiveness can be improved by using an EMC-shielding enclosure range with special seal arrangements and contact clamps that guarantee continuity between the enclosure and its mating parts.
When products are used in factories and applications around the world, common safety standards are set to make sure all health and safety criteria are met. Enclosures are no exception, meeting the requirements of EN60529 (ingress protection) and EN60439 (low-voltage switchgear and controlgear assemblies).
In eight out of 10 applications, standard cooling systems (mild or stainless steel) that meet a protection rating of IP54 and provide closed circulation will suffice. In all other instances, the situation calls for more robust equipment incorporating features such as NEMA 4X protection or even an anti-corrosion internal finish.
In hostile environments such as hose-down areas and coastal or offshore installations, protection against water in excess of IP56 is a usual prerequisite. NEMA 4X offers users of enclosures in such environments the ideal solution, as it provides high levels of protection against heavy water jets. UL-approved stainless steel NEMA 4X cooling systems provide this rugged, long-term solution thanks to a unique design that prevents heavy water jets (from any angle) from entering the condenser housing. The units also utilise refrigerant R134a, allowing operation all over the world in temperatures as high as 55°C.
NANO & Corrosion Coating
In dusty environments the NANO coating protects the condenser fins of a cooling system from unwanted airborne deposits. The coating is offered by the leading suppliers and has been tried and tested in the automotive industry with excellent results. In environments where contaminants in the air can, over time, destroy the cooling system’s internal parts, a treated cooling unit is the recommended solution. This treatment simply coats all ambient air-side components with an anti-corrosion coating, making these units ideal for cooling enclosures in industries such as petrochemicals, pharmaceuticals, oil & gas, food and marine.

Amber Villegas-Williamson from MGE UPS Systems examines the pros and cons of scalable, modular UPS systems and their fully-rated counterparts and how IT Managers can choose the best option for their specific requirements.
Technological developments within the UPS industry have been instigated in response to changing consumer environments. This has led to increasingly scalable UPS solutions emerging onto the market, which is having an impact on the traditional, fully rated UPS offering. The physical, electrical and financial characteristics of both types of UPS are detailed below, along with a hypothetical case study to illustrate the comparison.
The first step in deciding which type of UPS system to choose is to look closely at your company and ask questions such as

An integrated approach to process control can have immediate tangible benefits with the potential to enhance efficiency and the bottom line, says Mark Daniels of Rockwell Automation.
As well as facing up to more demanding market forces, many modern manufacturing operations also have the additional, but necessary, burdens of enhanced and more stringent security, safety and legislation with which to contend.
With these extra commitments in mind, the management of processes and procedures has leapt to the top of many priority lists as companies seek to address the myriad external factors pushing and pulling their internal operations in different directions. By taking a step back and looking at plant operations from a machine or process point of view, it becomes obvious where enhancements can be made to the overall enterprise that will not only result in significant savings and efficiency improvements, but also help address more stringent legislation.
Business guru Charles Handy is quoted as saying: “The market is a mechanism for sorting the efficient from the inefficient.” Only by adopting a completely integrated approach to process control – from shop floor to top floor – can companies hope to remain competitive and, as a result, also be agile enough to cater for the fluctuating needs of these increasingly demanding markets and the possibilities of even tougher and more demanding legislation in the future.
Rockwell Automation’s Integrated Architecture philosophy provides the foundations for this type of approach to strategic operational and process management. It is an industrial automation infrastructure that provides scalable solutions for the full range of automation disciplines – in addition to process control, these include sequential control, motion, drive control, safety and information.
By utilising a single control infrastructure for the entire range of factory and process automation applications, and by supplying all of the tools that address the internal factors, Integrated Architecture provides a modular and flexible remedy to the challenges created by external market forces and legislative issues. In short, this approach gives companies the potential to enhance process control by reducing lead times, improving productivity and optimising plant assets and availability, while also reducing the total cost of ownership.
As well as sharing, controlling and auditing information from the top of the factory to the bottom and vice versa, this integrated approach to process control also helps to manage the enterprise from side to side, helping to maintain optimum operational conditions across the enterprise – from goods-in to final despatch.
The integrated approach to process control can result in substantial benefits, many of which have a direct impact on the bottom line. With a single integrated architecture, the sharing of actionable information will help improve response time and decision-making. Time and money can also be saved thanks to improved efficiencies such as reductions in engineering and implementation time, operator response times, maintenance costs and downtime.
Increasing productivity is also a realistic expectation, resulting from improved asset management. Increased speed and cost savings can also be realised when implementing or making batch changes using standards-based batch production management embedded in the system.
With Integrated Architecture, data can also be accessed throughout the system – regardless of its source. But Integrated Architecture does more than simply provide access to data; it collects and stores it for presentation as secure, actionable information. System-wide access to both historical and real-time data provides critical information to those responsible for making decisions and taking action. Freed of the delays and confusion associated with accessing isolated pieces of data through multiple databases or gateways, responses can be planned and implemented quickly.
Using the Integrated Architecture, process operators, engineers and maintenance personnel can view and analyse information in whatever format they prefer. Standard templates and customised analysis tools make data mining simple. The system can also interact with external production planning and scheduling functions to reduce errors and time delays, share equipment, personnel or materials availability; update expected batch or production run completion times; and share projected resource availability. These functions can be local to the site, or at a business level, as with Oracle, Microsoft, or SAP ERP functions.
To help maximise uptime, the system enables maintenance personnel to plan repair activities and reduce potential process downtime by monitoring critical production equipment and performing predictive diagnostics that automatically alert them to issues. It also provides them with the information and tools they need to identify and locate faults easily while reducing the risk of damage to equipment, and it assists them to perform those repairs more quickly. Redundant controller and power supply options further improve availability and reduce risk.
To maintain batch-to-batch quality within process environments including food, pharmaceutical and chemical processing, automation and production management systems must follow defined procedures and provide early detection of off-quality production, before downstream equipment is committed. By combining automated process control with easily managed procedures, you gain production reproducibility.
One example of a unified approach, being adopted across a variety of process-based industries, is the adoption of common batch standards.
Rockwell Automation’s batch solutions are based on standard ANSI/ISA-S88.01-1995, commonly known as S88, and its IEC equivalent IEC 61512-01. These have become two of the most widely adapted standards for batch processing in Europe and the United States. They define a common set of models and terminology to be used with batch processing systems. The models emphasise good practices in the design and operations of these systems and a methodology called Modular Batch Automation has been developed to codify these practices. Rockwell Automation’s batch solutions provide repeatability by automating procedural settings and integrating manual activities with in-line verification of operator actions.
Many companies realise there are significant benefits to be gained from an integrated approach to process automation and factory control; it is the reticence to take the first step that prevents many of them becoming adopters. Rockwell Automation is well placed to help companies take that tentative first step and then support them through the adoption and installation process. With worldwide service operations, continued support through the entire process lifecycle is a further option, helping to maintain maximum uptime and optimum operating conditions.

Since hundreds of thousands of skilled and unskilled workers were lost from the construction industry in the recession of the early to mid-nineties, only the most progressive companies have continued to make a significant investment in their workforce. Mike Henshaw of Omega Red Group investigates.
Unfortunately, too many organisations are content to poach well-trained staff from better-run organisations. Whilst this may be of short-term benefit to the organisation in question it simultaneously undermines attempts to boost skills, expertise and levels of professionalism across the industry as a whole.
The earthing and lightning protection sector, largely as a result of the actions of a few leading organisations, has however grasped the need to promote best practice – the vehicle for this is the Lightning Conductor Engineering Apprenticeship scheme run by the Construction Industry Training Board (CITB). The apprenticeship takes a minimum of two years to reach NVQ level 2 and has been running since the early 1990s. Each year the scheme brings a greater number of properly qualified lightning protection and earthing engineers into the industry.
Omega has been one of the advocates of the apprenticeship course, as we believe it is essential to ensure well-trained and skilled people are available to design, build and maintain high-quality earthing and lightning protection systems across the country. The apprenticeship scheme ensures the knowledge vital to the industry in the 21st century is being properly taught and understood.
Of course, it is important to spend time recruiting the right type of person to fit the role. We generate interest in our apprenticeship scheme by advertising in local newspapers, posting information on our website, using the CITB network and through contacts from our current workforce.
Potentially suitable applicants are interviewed for their ability to meet our training requirements and the needs of the job, and also to assess whether they will fit within our organisation. The latter point is particularly important for long-term retention but is one that is often overlooked by employers. Candidates who successfully negotiate the initial stage go on to take the CITB assessments at the National Construction College (NCC) at Bircham Newton near King’s Lynn. Those who pass the rigorous interviews and assessments are offered places on the apprenticeship programme based on the needs of the business.
A structured training plan to meet the needs of the business is essential
Generally, apprentices are recruited in early autumn so they can gain some experience of life in the industry before they start their training at the NCC.
The apprenticeship’s academic and off-the-job practical training usually starts in January and comprises a total of 24 weeks residential training at the NCC over a 2-year period. Off-the-job training covers all technical and practical aspects of the trade.
The off-the-job training is augmented by on-the-job training, during which apprentices develop skills in a wide range of disciplines such as health & safety, safe accessing, and earthing and lightning protection design. The apprentice must successfully complete each module needed for their individual NVQ portfolio and the eventual achievement of their apprenticeship.
The approximate direct cost to the company of employing an apprentice is in the order of £20,000. Some of this is offset by the contribution the apprentice makes to various projects whilst gaining on-the-job experience. Considering the costs of recruiting and training over the two-year period, it is vital to ensure appropriate retention policies are in place. That means regular meetings with apprentices, visiting them at their college and working closely with training board staff, designed to provide individuals with support, guidance and encouragement.
Having made the investment in time and money, it is important to continue to offer the kind of positive career development and appropriate rewards that will ensure the long-term commitment of the individual. Most people are not motivated purely by salary but recognise and value the investment an organisation makes in them. Our approach has led to a 90% retention rate through to the end of the apprenticeship and, 12 years after the completion of the first Lightning Conductor Engineering apprenticeship course, we still retain around 60% of all apprentices ever employed.
As with any business asset, success can be measured based upon the return on investment. This is a vital concept. Companies should not recruit apprentices simply because it’s ‘the current thing to do’ but in order to resource the business’s current and future growth.
The real success of an apprentice scheme, for my organisation, is in ensuring it fulfils the needs of our customers in a safe, professional, efficient and cost effective manner. It follows that if the company is able to achieve this aim then it benefits from a more loyal customer base, reduced long-term recruitment costs, lower churn and the development of a more effective workforce.
The construction industry, sadly, has a reputation for harbouring many unprofessional operators. Properly recruited and trained apprentices developing into competent tradespeople will go some way to achieving a reputation for trust and professionalism.
Without investment in apprentices, customers and the industry will suffer from a lack of competent, safe tradespeople, and ultimately that will mean lower long-term turnover and profitability.
In answer to the question ‘are apprentices a short-term cost or a long-term benefit?’: whilst apprentices involve a short-term cost, without them the industry will never achieve the respect, and the profitability, it needs.

Unless organisations instigate a proper testing and maintenance programme, they will only know if their lightning protection system is working properly when they suffer a strike. By that time they could have suffered catastrophic damage to their buildings and business, says Mike Henshaw, managing director at Omega Red Group.

Most building services would simply not function correctly if faults or defects were present but the correct operation of a lightning protection system only becomes obvious when it is called upon to protect a structure. For this reason it is even more vital to ensure that fully trained and accredited engineers undertake regular testing and maintenance works on vulnerable structures and sites. The current in a lightning strike is likely to be in the range of 2,000 - 200,000A and so an effective operational system is vital to ensure the protection of assets.
The vast majority of structures in the UK use BS6651 to inform their design, testing and maintenance works in relation to lightning protection. This standard states a “competent person” should carry out inspections, so a good rule of thumb is to look for contractors with third-party accreditation of their ability to design and report on lightning protection systems, such as that provided by Atlas (Association of Technical Lightning and Access Specialists).
BS6651 covers all aspects of Lightning Protection but sections 31-34 are of particular relevance for testing and maintenance.
As large parts of the lightning protection system may be hidden or inaccessible after completion, it is particularly important, and indeed a requirement of the code, that each component of a lightning protection system should be inspected during the construction stages of an installation. Special attention must be given to any part of the system that will be concealed upon completion. These components may be hidden for aesthetic reasons or the component may be an integral part of the structure.
Inspections should be carried out not only during the installation process but also upon completion and at regular intervals thereafter. Figure one shows damage that has been identified through regular inspections. The first picture shows the conductor has been bent into an ‘s’ shape next to the clamp. This ‘s’ would create inductance during any further lightning current flow and may result in a flashover from the conductor to adjacent conductive parts, which could cause fire or other undesirable mechanical effects.
The second picture shows loose tapes, probably caused by the mechanical effects of a lightning strike dislodging poorly fitted fixings. Further strikes would cause a whiplash effect on the tape and may damage further fixings or rip the conductor away from the system completely, thus leaving it incomplete.
Visual inspection of an installation should take into account the following key points and observations recorded in the detailed inspection report:
• inspections should be repeated at fixed intervals, preferably not exceeding 12 months. If the intervals are fixed at 11 months, the system will have been inspected throughout every season of the year over a period of 11 years
• the mechanical condition of all conductors, bonds, joints and earth electrodes should be checked and any observations noted
• if a part is unable to be inspected, this should be noted
• the bonding of any recently installed/added services should be checked.
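The 11-month rule in the checklist above is easy to verify: because each inspection falls one calendar month earlier than the last, twelve successive inspections land in every month of the year. A quick sketch (the editor's own, purely illustrative):

```python
# With inspections every 11 months, each visit drifts back one calendar
# month, so 12 successive inspections cover all 12 months of the year.

months_visited = set()
month = 0  # start in January (0 = Jan, 11 = Dec)
for _ in range(12):
    months_visited.add(month)
    month = (month + 11) % 12  # next inspection, 11 months later

print(len(months_visited))  # 12
```

Every season and weather condition is therefore seen by the inspector over the life of the cycle.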
The testing requirements of the standard deal with the earth electrodes on the system, although reference is made to a visual or measured test of any joints or bonds. In practice, it is usual for inspections of components to be undertaken rather than for testing to be carried out.
Electrode testing requires experience and expertise to ensure that any test carried out is meaningful and reflects the resistance of the electrode under test. Too frequently, Omega is handed client information presenting resistance readings that are obviously continuity tests and not true earth-resistance tests.
There are two appropriate methods of testing lightning protection earths: ‘Fall of Potential/the 61.8% method’ and ‘Dead Earth’.
‘Fall of Potential’ is the recommended method and involves the electrode under test, two reference electrodes, a set of leads and a four-pole test meter. The electrode under test is isolated and connected to the meter as shown in figure two for the ‘Fall of Potential’ or figure three for the ‘61.8%’ method. In turn, the test meter is connected to the two reference electrodes, which are driven approximately 300mm into the ground and located typically 25 and 50 metres away from the electrode under test.
A test is made and the direct resistance of the electrode under test is recorded on the meter. This method, however, is only practical if the electrode to be tested is located adjacent to virgin ground where test electrodes can be driven. In reality, in town and city centres for example, this is very often not the case. The presence of buried services and pipes may also have an influence on the test current and the final test value may be corrupted as a result of these external influences. Reference electrodes should therefore be sited away from such potential disturbances.
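In the ‘61.8%’ variant named above, the inner (potential) reference electrode is placed at 61.8% of the distance to the outer (current) electrode, which for homogeneous soil yields the true electrode resistance. A small sketch of that geometry (the helper name is the editor's own):

```python
# The '61.8%' rule: place the inner (potential) probe at 61.8% of the
# distance from the electrode under test to the outer (current) probe.

def potential_probe_distance(current_probe_distance_m):
    """Inner reference electrode position, in metres, under the 61.8% rule."""
    return 0.618 * current_probe_distance_m

# With the outer probe 50m out, the inner probe sits at about 30.9m.
print(round(potential_probe_distance(50.0), 1))
```

The exact spacing matters less than keeping both reference electrodes well clear of buried services and the resistance area of the electrode under test.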
Where practical conditions dictate that the ‘Fall of Potential’ method cannot be used, the ‘Dead/Known Earth’ method is really the only practical alternative. However, it is important to be aware this method is open to error and misrepresentation if the test engineer is not competent to determine an appropriate dead earth or interpret the readings, which is why it is essential to use an Atlas accredited engineer to undertake tests of this nature.
The ‘dead earth’ can be any low-resistance earth not directly or fortuitously connected to the earth under test. A connection is made from a suitable earth to the test meter, which is in turn connected to the electrode under test, as in figure four, which shows the lightning protection system acting as the ‘dead/known’ earth. A reading is then taken and the ohmic value achieved is effectively the series resistance of the electrode under test and the dead earth.
The ‘Dead Earth’ method has some advantages when using the lightning protection system as the low-resistance ‘dead/known’ earth, as, due to the equipotential bonding required to other incoming services, it should provide a low-resistance earth path. Test clamps, or the clamp to the rod in the inspection pit, should be opened and the meter connected to the rod/rod side of the test clamp and the other side of the test meter connected to the system side of the test clamp.
A reading can then be taken, which will show the series resistance of the electrode under test and the rest of the system together with other connected parallel electrical and other earth paths.
As these other parallel paths usually have a relatively low combined resistance, the meter reading is effectively the resistance of the electrode under test as, if correctly selected, the ‘dead’ earth that is used is normally of such low value that it has little impact on the final result.
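The arithmetic behind that statement is simple series resistance. A minimal illustration with invented values (no real measurements are implied):

```python
# 'Dead Earth' reading: the meter sees the series resistance of the
# electrode under test plus the dead earth. When the dead earth is very
# low, the reading is effectively the electrode's own resistance.
# Values below are invented for illustration.

electrode_under_test = 9.5   # ohms - the value the engineer wants
dead_earth = 0.2             # ohms - low, thanks to bonded parallel paths

meter_reading = electrode_under_test + dead_earth
print(meter_reading)  # 9.7 - close to the true 9.5 ohms
```

A poorly chosen dead earth with, say, tens of ohms of its own resistance would swamp the reading entirely, which is why selecting it needs a competent engineer.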
In addition to providing an ohmic value for the electrode under test, this method also verifies the circuit to the dead earth source and by virtue of this, the electrical condition of the joints in the system. If the connections from the top of the test clamp to the air termination through to the other earths on the system and other parallel paths were loose or damaged, they would provide a high resistance, which the meter reading would reflect. This situation should then be investigated so that any high-resistance joints can be addressed.
Where no access to an electrode is possible and, for example, the pile foundations have been utilised as the earth termination, it is recommended that individual reference rods are installed around the structure and tested upon completion. These do not necessarily form a part of the installation but may be used as comparisons against the original pile foundation test results. In short, if the reference rod values have not increased year on year then it can be assumed neither has the resistance of the pile foundations.
The ‘Dead/known earth’ test method also applies to clamp-on CT type testers where disconnection is not required, although this type of testing is not always practical.
At least two types of test are recommended: one for each individual electrode in isolation and a second for a combined value. BS6651 requires an overall system resistance (excluding bonding to any services) of no more than 10Ω, with each individual electrode not exceeding a resistance, in ohms, of 10 times the number of earth electrodes on the system.
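Those acceptance criteria can be sketched as a simple check. This is the editor's illustration only – it assumes the combined system value can be approximated as the individual electrodes in parallel, and the function name is invented:

```python
# Sketch of the BS6651 acceptance criteria: overall resistance <= 10 ohms,
# and each individual electrode <= (10 x number of electrodes) ohms.
# Assumes the overall value is approximated by the electrodes in parallel.

def bs6651_check(electrode_ohms):
    """Return (overall_resistance, compliant) for a list of electrode readings."""
    n = len(electrode_ohms)
    overall = 1.0 / sum(1.0 / r for r in electrode_ohms)  # parallel combination
    per_electrode_limit = 10.0 * n
    compliant = overall <= 10.0 and all(
        r <= per_electrode_limit for r in electrode_ohms
    )
    return overall, compliant

# Four electrodes, each under the 40-ohm individual limit, combining
# to well under 10 ohms overall.
overall, compliant = bs6651_check([25.0, 30.0, 28.0, 22.0])
print(round(overall, 2), compliant)
```

Note how individually "poor" electrodes can still yield a compliant system once combined, which is exactly why both tests are recommended.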
Any disconnection of the system should be preceded with a test to ensure that it is not ‘live’ and no testing should be carried out during storm conditions.
Failure to keep up to date, accurate records can result in hidden parts of a system not being adequately attended to and potentially unnecessary remedial works being proposed and executed, as a full assessment of the installation has not been made. At the time of the annual test and inspection, the following records are needed either on site or in an accessible place.
BS6651 states that the following records should be kept:
• drawings of the lightning protection system
• details of the geology (nature of the soil and details of any special earthing arrangements)
• type and position of the earth electrodes
• test conditions and results obtained
• details of any alterations to the system, including additions and repairs
• the name of the person responsible for the system.
In order to comply with the Construction Design and Management Regulations, these records should be provided at the completion of the original installation for inclusion in the project Health and Safety file. The person responsible for the upkeep of the building should recover the lightning protection system records from this file and present them to the engineer undertaking the first post-installation inspection and test. Details of the inspections should be recorded so that the required information can be updated and maintained. The programme of tests and inspections will identify what, if any, maintenance is needed. BS6651 states that attention should be given to the following:
• earthing
• evidence of corrosion or conditions likely to lead to corrosion
• alterations and additions to the structure that may affect the lightning protection system (e.g. changes in the use of the building, the installation of crane tracks, erection of radio and television aerials).
Statistics show the UK alone is subjected to around two million strikes per year and, in order to ensure your lightning protection system is operational when called upon – bearing in mind you have no way of determining when that may be – any maintenance work should be carried out with appropriate urgency.
In the hands of experienced engineers, proper testing and maintenance of lightning protection systems can become a routine, but very necessary, part of a comprehensive safety programme. At the very least the consequences of not taking a thorough approach could incur unnecessary costs but, given the destructive potential of a lightning strike, those consequences could be much worse.

Andy Nesling, electronic systems engineer at Lucy Switchgear, examines how it is possible for local automation schemes to perform essential switching functions without the need for human intervention and with little or no communications bandwidth.

In the last five years, automation of secondary distribution networks has grown rapidly in the UK. With the benefits of rapid network fault diagnosis and reconfiguration, DNOs have invested heavily in SCADA control systems, Remote Terminal Units (RTUs) and reliable communications infrastructure to facilitate network automation. As the number of automated points on distribution networks grows, communications bandwidth and control engineers’ resources are becoming more of an issue. Combined with the continued requirement to reduce customer minutes lost (CML), it becomes evident that remote control of medium voltage (MV) switchgear in isolation cannot provide the whole solution for optimum network automation.
A local automation scheme comprises the following fundamental building blocks:
• MV switchgear rated for intended mode of operation
• actuation for MV switchgear
• intelligent actuation control electronics (RTU / IED / PLC)
• a communications module (optional).
The function of a local automation scheme is to perform a MV switching operation, or series of operations, based on a set of criteria being met and without the need for human intervention. There are effectively two main types of local automated switching:
• protection switching – performed by intelligent relays and circuit breakers
• network configuration switching – performed by ring main units (RMU) and pole-mounted switchgear.
For the purpose of this discussion we will primarily cover the advantages and issues associated with ‘network configuration’ switching as the issues associated with ‘protection switching’ are well known and documented. It is worth noting that there are many parallels between these two modes of switching, which are generally considered to be quite different.
Figure one shows a typical switching point on a MV network that can benefit from the implementation of a local automation scheme.
The figure shows an RMU at a system open point, running with the LHS normally closed and the RHS open. The RTU controlling the actuators on the RMU would be programmed with the following simplified scheme.
• On permanent loss of LV – check that the fault passage indicator (FPI) is not set
• Check that the RHS has full voltage present (MV)
• Open the LH switch and close the RH switch
• Check that the LV has been restored
The above scheme can restore the customer’s LV supply within 30s, giving the control engineer valuable time to analyse the network and isolate the fault. In an actual scheme there are many more validations – including checks on actuator status, low gas pressure and user intervention – all of which ensure safety and reliability of operation.
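The simplified scheme above can be sketched as straightforward decision logic. Everything here is the editor's illustration: the switchgear interface (`FakeRmu`), the function and all names are invented, and a real scheme carries out many more safety validations.

```python
# Sketch of the simplified open-point restoration scheme described in the
# text. FakeRmu stands in for the real actuation interface; all names are
# invented for illustration only.

class FakeRmu:
    """Stand-in for the RMU actuation/monitoring interface."""
    def __init__(self, lv_lost=True, fpi_set=False, rhs_mv_present=True):
        self._lv_lost = lv_lost
        self._fpi_set = fpi_set
        self._rhs_mv = rhs_mv_present
        self.lhs_closed = True    # normal running: LHS closed, RHS open
        self.rhs_closed = False

    def lv_lost(self): return self._lv_lost
    def fpi_set(self): return self._fpi_set
    def rhs_mv_present(self): return self._rhs_mv
    def open_lhs(self): self.lhs_closed = False
    def close_rhs(self): self.rhs_closed = True
    def lv_restored(self): return self.rhs_closed  # supply follows the RHS

def restore_supply(rmu):
    """Attempt automatic restoration after a permanent loss of LV."""
    if not rmu.lv_lost():
        return "healthy"
    if rmu.fpi_set():
        return "blocked: fault passage indicated"       # fault on this leg
    if not rmu.rhs_mv_present():
        return "blocked: no MV on alternative supply"
    rmu.open_lhs()     # isolate the faulted feed
    rmu.close_rhs()    # pick up from the healthy side
    return "restored" if rmu.lv_restored() else "failed: escalate"

print(restore_supply(FakeRmu()))  # restored
```

The value of the pattern is that every guard condition is checked locally, in milliseconds, before any switch is moved – no round trip to the central SCADA system is needed.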
There are a number of advantages to de-centralising the monitoring and control of MV switching:
• Reducing customer minutes lost: a local automation scheme is able to carry out all the pre-switching checks and control of the switches in a fraction of the time it takes a control engineer to perform the same operations via a central SCADA system. This is especially true where communications to outstations are via a GSM, GPRS or PSTN (dial-up) connection. The result is that customers are without power for seconds rather than minutes.
• Communications: a local automation scheme can be set up to report only essential MV network data and automation scheme status, greatly reducing the amount of data that must be reported back to the central SCADA system. This is of particular benefit to systems using scanning radio modems. Alternatively, local control schemes can be set up with no communications at all (or with SMS message reporting only). This can benefit DNOs with limited current SCADA capabilities that need all the advantages of network automation but with a clear upgrade path to future remote monitoring and control.
• Availability: as local automation schemes can be configured for operation without any remote dependence, they can continue to function even during a communications failure, greatly increasing the availability.
• Control engineer involvement: adverse weather conditions can result in large numbers of fault incidents on the MV network. A number of carefully planned local automation schemes can free up control engineers to locate and isolate faults quickly, while the schemes themselves keep customer minutes lost to a minimum.
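The communications advantage above rests on the idea of reporting by exception: only points whose value has moved beyond a deadband are sent back to the SCADA master. A minimal sketch, with hypothetical point names and a 2% deadband chosen purely for illustration:

```python
# Hedged sketch of report-by-exception filtering; names and the
# deadband value are illustrative, not from any real RTU.
def report_by_exception(readings: dict, last_sent: dict,
                        deadband: float = 0.02) -> dict:
    """Return only the points that changed by more than the deadband."""
    changed = {}
    for point, value in readings.items():
        previous = last_sent.get(point)
        if previous is None or abs(value - previous) > abs(previous) * deadband:
            changed[point] = value
    return changed

last = {"busbar_kV": 11.0, "load_A": 120.0}
now  = {"busbar_kV": 11.05, "load_A": 145.0}
print(report_by_exception(now, last))  # only load_A exceeds the 2% deadband
```

The small busbar fluctuation is suppressed while the significant load change is reported, which is how a scheme can keep radio or dial-up traffic to a minimum.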
Although it is clear that there are many advantages of local automation schemes, there are a number of factors that must be given careful consideration when planning and implementing such a scheme:
• MV network complexity: when planning the location for a local automation scheme, careful thought must be given to all the possible modes of operation around that part of the network. On many projects, provision for ‘remote disabling’ of an automation scheme is essential. This facility allows the control engineer full control over part or all of the network during maintenance or extreme circumstances.
• Commissioning time: with all of the extra functionality that a local automation scheme can offer, there is the additional time required to install and test it on-site. Switchgear manufacturers are increasingly working towards minimising this cost to the end customer by providing a full turnkey solution from the MV switchgear through to the RTU and communications device. The complete system can be delivered to site fully-tested, greatly reducing the time required to install and commission the system.
• Modes of failure: as any system becomes more complex, the number of ways in which it can fail inevitably increases. When considering local automation of an MV switch, the addition of a rapid mechanical disconnect mechanism is one way to mitigate this risk, allowing the user to continue operating the MV switchgear manually in the event of an automation failure.
In October 2004, Lucy Switchgear, in partnership with a UK DNO, successfully commissioned a local auto-changeover scheme for the MV supply to a sewerage pumping station. The scheme consisted of two Sabre RMUs with circuit breaker relays, motorised actuators and a Gemini-RTU. A key requirement of the client, a privatised water company, was the ability to restore supplies within 60s, as the overflow tanks could reach maximum capacity beyond this time (figures two, three and four illustrate the commissioned system).
The scheme programmed into the RTU was developed in close consultation with the DNO to cater for all the different operating scenarios and maintenance procedures. Initially, a communications link between the main SCADA system and the pumping station did not exist, so a local control panel was provided, allowing an engineer to enable and disable the scheme locally. Following the successful commissioning of the system, a high-powered radio link was added to the site, allowing the DNO to monitor the RTU via the DNP3 communications protocol.

Successful implementation of local automation schemes
Over the past few years we have found that the key points when planning and implementing a local automation scheme include gaining a clear understanding of all the different operating scenarios and ensuring that the MV switchgear and actuation are designed to operate under all the expected modes of operation.
In addition, selecting a robust RTU / PLC that has the ability for future communications expansion using standard industry protocols is important, as is working closely with the system integrator to develop a scheme with high reliability and availability.
Finally, thorough testing prior to on-site commissioning is advisable. Systems can go weeks or months without operating, so all scenarios need to be fully tested prior to commissioning. This ensures that issues with schemes do not arise years later or, worse still, after many more identical schemes have been installed.
In summary, local automation schemes can be an excellent first step into secondary network automation, with a clear upgrade path to future SCADA control and monitoring. They can be used as a method of reducing the communications load on an existing automated network and in critical applications to return supply with minimum downtime.

The difference between the price of something and what it costs can be enormous. This is equally true with building control, where even the simplest premise can be difficult to enact, as Richard Hipkiss of Schneider Electric’s Transparent Building Integrated Systems argues.

Competitive tendering and price pressures in new building continue to focus attention on capital costs. This is in spite of evidence that operating costs typically amount to almost three times the capital cost of acquiring the building. Moreover, these figures do not reflect maintenance costs, which can also be twice the capital build cost. The time to invest in equipment and systems that reduce operational costs is clearly when the building is commissioned. This can be achieved without increasing the capital expenditure only if a convergent approach is adopted from the beginning.
A further factor, never included in any value chain, is that for most commercial buildings the staff who occupy the building account for about 80% of the operational costs. Intelligent building schemes can significantly improve the performance of this massive operational investment through an improved environment and greater efficiency within the workplace infrastructure.
Taking into consideration the primary systems within all buildings, there are numerous opportunities to reduce running costs by judicious specification and installation of equipment in the mechanical plant – for example, the selection of energy-efficient boilers, chillers, heat pumps or combined heat and power plant is a primary consideration. Wherever electric motors are encountered, variable speed control brings huge energy savings: some 65% of all electrical energy consumed goes into powering motors.
Electrical plant, while fundamental to any building, is often ignored. The electrical network can cloak hidden costs. By managing the utility bill, savings of up to 5% can be realised – but this is just the beginning. An additional 5% can be saved through increased equipment utilisation, avoiding unnecessary capital purchases, while improving system reliability can save a further 10%. Other considerations include avoiding under- and over-voltages; supply losses; harmonic issues (for which there is now the need to comply with G5/4, which forbids harmonic pollution); maintenance savings; maximum demand avoidance; and load shedding. All of these factors require controls.
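Taking the percentages quoted above at face value, the cumulative effect is easy to illustrate. The baseline annual spend here is a made-up figure, and the savings are treated as simple additive fractions of it:

```python
# Illustrative arithmetic only; the annual spend is a hypothetical figure.
baseline_spend = 100_000.0   # hypothetical annual electrical spend (GBP)

savings = {
    "utility bill management": 0.05,
    "increased equipment utilisation": 0.05,
    "improved system reliability": 0.10,
}

total = baseline_spend * sum(savings.values())
print(f"Potential saving: GBP {total:,.0f} ({sum(savings.values()):.0%})")
# → Potential saving: GBP 20,000 (20%)
```

Even on this crude model, a fifth of the electrical spend is recoverable through controls rather than new plant.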
So what prevents better planning at the first and second fix stages? One clear impediment – especially through its impact on price – is the fact that most building systems are discrete. In other words, there are separate security and access controls, lighting and heating controls and electrical infrastructure. Even in systems running under some form of building management system (BMS), there is a distinct lack of joined-up thinking. What we describe does not require integration, for true integration of all these disparate systems is still mythical, but rather convergence. By enabling common cabling, protocols and data sharing, costs can be slashed at the same time as controls are put in place.
Only by including installers and manufacturers on the top table alongside the design team can convergence be achieved. The focus can then be energy efficiency; future maintainability; ease of installation; and the de-skilling of the installation. In short, all those areas with the greatest impact on operating expenditure can be addressed.
Considering the typical building, there are distinct areas of control. These include: electrical distribution; centralised heating and cooling plant; distributed heating and cooling equipment; access control; CCTV; security alarms; and data infrastructure. The aim is to achieve an energy-efficient building that is cost effective in operation; a common information portal that is accessible from multiple locations by multiple disciplines and presents all information in a similar format; and shared functions wherever possible, resulting in a lower installed cost.
A fundamental aspect of a convergent and intelligent system is its transparency. In other words, if the convergent, rather than integrated, system is to have the necessary coherence to maintain control and reduce costs, it must have a common platform. While this does not preclude sophisticated BMS, a simple, readily understood interface, such as Explorer, can deliver a user-friendly platform from which to realise a powerful system. Moreover, adopting such an approach allows the fundamental structured cabling infrastructure to be fully utilised in a broad variety of ways – removing the need for many proprietary cabling or datacomms installations.
Schneider Electric’s Transparent Building Integrated Systems (T-BIS) show that, even by substantially increasing control technology, significant cost reductions can be made. In a recent model where the company’s engineering team was invited to participate from the design stage, some 40% of the capital costs were removed, while achieving an intelligent and energy-efficient building.
The bulk of the saving was made by using integrated communications cabling. In this scheme, segregation of services was carried out at several levels: individual patch panels for each service were installed, patched into a discrete switch for each service; common patch panels for all horizontal and backbone cables were installed where the segregation was made in the patching process; and common patch panels for all horizontal and riser cables were used with no segregation. The corporate data LAN was used for all the services.
The building in question has an overall scheme, plus zones under local control and with local buses. In the zoned areas, an energy controller was specified for controlling heating and cooling valves, lighting, alarms and access control, with a link to the main network via a simple network interface to the main building controller in the incident room.
As a result of working as part of the design team and taking ownership of the control at installation stage, the reduction in overall installation costs amounted to 40%. There was also an energy-efficient platform achieved, controlling the building in line with ambient and outside conditions. Combined infrastructure, sensors and information systems were attained. Importantly, a framework for ongoing planned maintenance, resulting in minimum unforeseen costs, was achieved.
There are a number of factors that influence the approach that is taken when considering the operational expenditure at the design, build and commissioning stage of a project. These include government initiatives and regulations, end-user specification, consultant specification and manufacturer co-operation. Of these, the one that most imagine would drive a more considered approach to building controls is the imposition of legislation. Paradoxically, this has so far been shown to have little effect.
The Climate Change Levy has had little impact and the Inland Revenue’s associated Enhanced Capital Allowance scheme has had nothing like the uptake expected. The Building Regulations Part L were amended as long ago as October 2001, when the new L1 and L2 covering conservation of fuel and power were published. But, while these are excellent documents, there is no power that exercises actionable authority.
As recently as July 2004, Lord Rooker, the minister of state in the Office of the Deputy Prime Minister, said in a written Ministerial Statement: “Energy used in buildings is responsible for roughly half the UK’s carbon dioxide emissions.” He then added that “driving up the energy efficiency of our buildings is critical to our success in achieving the carbon emission reduction targets”. He concluded later that performance standards must be raised and that requirements for efficiency must be introduced. However laudable the words and the will of the ODPM, the real drive towards better building control will come elsewhere.
It is the designers and the end users that will ultimately make a difference. At present, about 80% of building specification is brokered but the end users of a building are becoming increasingly aware of the need for efficient and intelligent buildings. This is one area where government intervention is beginning to have an effect. Since the introduction of CIBSE’s TM31 logbooks, facilities managers have been more focused on energy and building performance. The knock-on effect is that eventually greater demands will be placed on the designers to create means by which the TM31 logs add value to the property.
Naturally, the end users will resist additional costs and designers will be faced with the dilemma of how to deliver better building performance without increasing the capital expenditure. This is where the manufacturer’s new role will exert itself, for only by adopting early consultation will the designers achieve their objectives at no additional cost or, as our earlier example showed, at a reduced cost.

The networking of drives and controllers is an accepted practice these days. Numerous network protocols are routinely employed in industrial environments across the world. Among these, Ethernet TCP/IP, perhaps by virtue of its being more synonymous with office applications, is less familiar on the factory floor than might be expected, explains Mark Daniels of Rockwell Automation.

Unlike decidedly blue-collar protocols like DeviceNet, ControlNet and ProfiBus, Ethernet TCP/IP was not conceived primarily as an industrial protocol. But while its apparent lack of industrial street-credibility might make the ubiquitous Ethernet seem a little too much of a ‘one-size-fits-all’ solution for industrial control applications, EtherNet/IP – the industrial ‘flavour’ of Ethernet – offers enough advantages over the native industrial protocols to make it a serious candidate worthy of consideration in industrial applications.
Ethernet as a network has been around for a long time and over that time has evolved to become a very efficient way of making information widely and easily accessible across a variety of platforms. It is, after all, the technology that forms the bedrock of the internet. Its long history and broad usage has made it a cost-effective and reliable networking solution that is readily understood by non-specialist engineering support personnel. Ethernet’s ability to move information around networks and across network boundaries also makes it a powerful tool in the industrial environment.
It’s 2am: are your drives still running? If they’re networked using EtherNet/IP, you could know the answer in seconds, no matter where you are. Production line problems invariably happen when it’s most inconvenient. The strength of EtherNet/IP networking is that it can effectively deliver drive information and control when and where it is required. What it lacks in robustness compared to ControlNet, or scalability compared to DeviceNet, EtherNet/IP more than makes up for in connectivity. But connecting drives is just the start: because of Ethernet’s plug-and-play connectivity with the wider network infrastructure, drives connected over an EtherNet/IP network are able to deliver diagnostic information via a standard web browser, just like any website. If a drive triggers an alarm or develops a fault, text messages to pagers or mobile phones can be delivered via the same medium.
The ability for information – be it diagnostic data or commands – to flow effortlessly internally and externally across network boundaries is a great strength of Ethernet connectivity. But that’s not all. In purely practical terms, there is a significant labour and material cost saving from not having racks of I/O modules and wires running everywhere just to communicate with drives.
Drives wired using discrete or point-to-point wiring require multiple discrete and analogue I/O modules, along with the associated wires snaking through panels and panel interconnects. Users also have to take the time to buzz out wiring during installation, or again if there are problems with I/O operations later on. All of this results in increased labour time that can be avoided with EtherNet/IP networking.
With Ethernet, a single cable removes the need to run a multitude of wires through conduit and trunking. Dozens of drives can be networked over just one cable and, once the Ethernet connection is made, functionality testing is simply a matter of ‘pinging’ the required device from any Windows PC.
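That ‘pinging’ check can just as easily be scripted as typed by hand. The sketch below is a hedged illustration: the drive addresses are hypothetical, and the single-echo count flag differs between Windows (`-n`) and Unix-like systems (`-c`).

```python
# Sketch only: verify Ethernet connectivity to networked drives by
# pinging them. The IP addresses used here are hypothetical examples.
import platform
import subprocess

def ping_command(host: str) -> list:
    """Build a single-echo ping command for the current operating system."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    return ["ping", count_flag, "1", host]

def drive_responds(host: str) -> bool:
    """True if the drive answers one ICMP echo request."""
    result = subprocess.run(ping_command(host), capture_output=True)
    return result.returncode == 0

# Example: drive_responds("192.168.1.10") returns True when the drive
# is connected and powered, False otherwise.
```

Looping such a check over every drive address gives a one-command functionality test for the whole network.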
Using readily available software tools, EtherNet/IP allows facility managers to have a virtual ‘window’ into the drive, providing complete access to metering data, diagnostic values, configuration parameters and fault information. Viewing this information gives staff time to correct production issues before they can impact the rest of the process. In addition, if a drive faults, the maintenance staff will have a significant amount of information available to enable them to troubleshoot the problem.
In a situation where a drive or motor is beginning to operate outside of established parameters, users will know because the drive or controller software can be set to inform them directly. Drive configuration options are available that can e-mail or page someone with an alarm before a problem occurs, after a problem or faults occur and after an alarm or fault has been cleared. A hyperlink included in the e-mail message can be used to launch a web browser connected directly with the drive that sent out the message.
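One way such a notification might be wired up is sketched below with Python’s standard library. The tolerance band, addresses, mail server and drive URL are all hypothetical; in practice the drive’s own configuration tools provide this functionality.

```python
# Hedged sketch of an out-of-tolerance alarm e-mail; every name here
# (addresses, server, URL, the 5% band) is a hypothetical example.
import smtplib
from email.message import EmailMessage

def out_of_band(value: float, nominal: float, tolerance: float = 0.05) -> bool:
    """True if a reading deviates from nominal by more than the tolerance."""
    return abs(value - nominal) > nominal * tolerance

def send_alarm(drive_id: str, reading: float) -> None:
    """E-mail an alarm carrying a hyperlink back to the drive's web page."""
    msg = EmailMessage()
    msg["Subject"] = f"Drive {drive_id} out of tolerance: {reading:.1f} A"
    msg["From"] = "plant@example.com"                 # hypothetical addresses
    msg["To"] = "maintenance@example.com"
    msg.set_content(f"See http://drives.example.com/{drive_id}")
    with smtplib.SMTP("mail.example.com") as server:  # hypothetical SMTP host
        server.send_message(msg)

# A motor current of 52.9 A against a 50 A nominal is a 5.8% deviation:
print(out_of_band(52.9, nominal=50.0))  # → True
```

The hyperlink in the message body plays the role described above: the recipient clicks through to a browser session connected directly to the drive that raised the alarm.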
Additional software tools can be launched from the browser to allow complete access to the drive’s information. Not only is this a very useful facility for front-line support, but EtherNet/IP also delivers a very cost-effective way for OEMs to troubleshoot when offering support contracts. Cost savings from being able to troubleshoot online are significant, because often an engineer does not need to be dispatched over long distances to repair or adjust a system.
Readily available wireless Ethernet technology makes life even easier by allowing maintenance personnel to roam freely to wherever a problem occurs. A wireless Ethernet card in a laptop or hand-held device and wireless access points on the factory floor are all that’s required to keep ‘connected’ to the network. The laptop becomes both the means of alerting that a problem has occurred and the tool to fix it. That makes it a very powerful tool indeed.

Kevin Beavan of PowerVar wonders why, despite the fact that computers and other critical electronic systems can easily be damaged or destroyed by power fluctuations, UK businesses often choose ineffective solutions for limited cost savings.

Many UK businesses select surge protectors as their sole defence against power fluctuations without appreciating that these can only provide rudimentary protection; without adequate protection there is a high likelihood of costly damage to computers and other sensitive electronic equipment. It is not just the damage to equipment that costs money – the more serious threat is the financial loss that can result from not being able to function normally.
There is one good reason surge protectors remain the prime choice: they are markedly cheaper than an effective solution. However, the savings from opting for surge protectors alone are easily outweighed by the potentially massive costs of replacing or restoring any damaged components.
Part of the problem is a failure to understand the inherent limitations of the surge protector that make it so poor for protecting sensitive electronic equipment.
Technically, even the term ‘surge protector’ is misleading, since it implies the device will protect against all surges. A more accurate term for these devices is ‘surge diverter’, since they divert high-voltage transients or impulses away from the electronic system they are supposed to protect.
Surge diverters
Surge diverters work by utilising a threshold voltage, often called the clamping voltage, and may also have a clamping response time, or delay. When the clamping voltage is exceeded, transient voltages are shunted to an alternative path, away from the protected equipment. Once the surge voltage drops below the threshold, the diverter stops operating and the normal electrical conducting path is restored, allowing current through to the equipment. The clamping time simply acts as a delay so that relatively short surges are not diverted. Every surge diverter is designed to handle a maximum amount of energy without being destroyed.
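The clamping behaviour just described can be modelled in a few lines. This is a toy model with an illustrative 400 V threshold, not a characterisation of any real device:

```python
# Toy model of surge diverter clamping; the 400 V threshold is an
# illustrative value, not a real component specification.
def divert(voltage: float, clamp_v: float = 400.0) -> float:
    """Voltage seen by the protected equipment after the diverter acts."""
    if abs(voltage) <= clamp_v:
        return voltage              # below threshold: passes unimpeded
    sign = 1.0 if voltage > 0 else -1.0
    return sign * clamp_v           # excess shunted to the alternative path

print(divert(230.0))   # normal mains passes through untouched: 230.0
print(divert(2500.0))  # a 2.5 kV transient is clamped to 400.0
```

Note what the model makes obvious: everything below the threshold, and the clamping voltage itself, still reaches the equipment, which is why diverters alone offer only rudimentary protection.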

Different technologies
To achieve their basic function, surge diverter products are manufactured with a variety of different electronic components including metal oxide varistors (MOVs), silicon avalanche diodes (SADs), and gas tubes. Each of these components functions differently and can affect the surge diverter’s performance. Yet it is rare for suppliers to actually state which technology is used in their surge diverter.
Characteristically, MOVs have a high clamping voltage of between 300 and 500 volts and a slow response time. This means that voltage impulses of less than 500 volts will not be diverted and can enter the computer system unimpeded. The slow response time means that very fast, high-voltage impulses can pass through the MOV before it responds, leading to additional damage. MOVs do have the advantage of being able to handle significant amounts of energy, yet they physically degrade each time they clamp, which adversely affects future performance and ultimately leads to their failure.
To overcome these disadvantages, manufacturers turned to silicon avalanche diodes (SADs) that have a faster response time and improved long term performance compared with MOVs. However, SADs have a much lower energy handling capability and impulses that merely degrade an MOV may destroy a SAD. Surge diverter manufacturers will either use multiple SADs in parallel or use the components in conjunction with MOVs to improve the diverter’s energy handling capability.
The final type of component that can be found in surge diverters is the gas tube. These act like bouncers at a nightclub: they can handle almost unlimited amounts of energy, although they are comparatively slow and have a high clamping voltage. This capability means they can protect the other surge diverter components during a catastrophic power disturbance.

Combining components
The limitations of the different component types mean that some surge diverter manufacturers include all three technologies in an effort to improve performance by combining their relative strengths. Yet since MOVs and SADs are both electronic components, both are subject to failure from a high-energy impulse, whether used singly or in combination. The high probability of eventual failure is the reason why many surge diverters include an indicator light to show when the protective elements are no longer functioning. In most cases, surge diverter components operate ‘naked’ on the power line and it is almost certain that they will eventually fail.

The modern power supply
Another factor that limits the effectiveness of surge diverters as the sole line of defence is the nature of the modern power supply. Older-generation computers used linear power supplies that required voltage regulation; modern systems are powered by switch-mode power supplies (SMPS). While SMPS are immune to voltage regulation problems, they do require protection from impulses, power line noise and, most importantly, common mode (neutral-to-ground) voltage.
A computer’s microprocessor makes logic decisions with reference to a clean, quiet ground, and that reference is destroyed by common mode voltage, resulting in lockups, lost data and unexplained system failures. Common mode voltage is therefore very disruptive to a computer’s operation. Because surge diverters divert disturbing energy to ground, they actually create common mode voltage: effectively, they convert a destructive disturbance into a disruptive one. And since surge diverters still allow substantial energy to pass through to the computer, damage may still occur.
To avoid these problems businesses need to move beyond simple reliance on power diverters and enhance their systems by fitting a power conditioner. A power conditioner is any device that provides all the power protection elements needed by the technology it’s protecting. Although the surge diverter remains a key element in the circuit’s protection for sensitive equipment its performance can be enhanced by the addition of an isolation transformer and a power line noise filter to provide a power conditioner.

Elegant power tools
Transformers are elegant power quality tools with unchanging secondary impedance. The benefit of using an isolation transformer is that bonding of neutral to ground on the transformer secondary is permitted, which eliminates common mode voltages.
This enables the surge diverter to divert surge energy to ground without creating any common mode disturbances. Noise filters function in a similar manner by diverting EMI and RFI to ground and, when combined with an isolation transformer, their performance is enhanced.
This is of particular importance at the end of the long branch circuits where most computer equipment is installed. Building wiring contains significant inductive and capacitive reactance, giving every location in the building a unique frequency at which the system oscillates. This makes power line transients unpredictable.
Although the IEEE has done a great deal of research to characterise the typical branch circuit impulse, the reality varies greatly from one location to another throughout a building. When a surge diverter is installed on a branch circuit it becomes part of the wiring system, and the circuit impedance resulting from the wiring reactance affects the surge diverter’s performance.
This means the performance of surge diverters is often unpredictable, since the varying electrical characteristics affect the frequency, waveform and rise time of the impulse at different places throughout the building. It also explains why a surge diverter, when used as the sole form of protection, may reduce the frequency of hardware failures while the system still behaves unreliably at times.
However, by fitting an isolation transformer and a noise filter, the surge diverter’s performance becomes predictable, controllable and repeatable.
On their own, surge diverters will limit transient impulses to hundreds of volts; with an integrated power conditioner, the same transient impulses are typically limited to ten volts or less. By fitting a transformer-based power conditioner, far less of any disturbance is allowed to reach the critical electronic equipment, ensuring improved functionality, greater reliability and enhanced longevity.