Recently the trade and industry secretary, Alistair Darling, released the energy white paper, which sets out the details of the government’s energy strategy for the years - and decades - to come. Among the issues discussed are security of energy supply and the environmental impact of energy policy, but not apparently whether future government ministers will be given a fleet of Jags to take their wives 100 yards down the road. Open Circuit looks at some of the less well-known elements of the strategy…

Badly lit tunnels and incomplete diagrams were among the obstacles encountered by Geoffrey Lilleker, an engineer working on a hydropower plant in the foothills of the Himalayas, as he told Electrical Review

Located on the river Satluj in north-east India, this is a run-of-the-river scheme providing up to 1500MW of power from six machines at the village of Jhakri. Water is directed from the dam site at Nathpa through a 26km tunnel, including three silt-flushing chambers, all dug into the mountains of the Himalayan foothills in that part of Himachal Pradesh.

His initial brief was to provide the software package for a number of Allen-Bradley PLC systems that were to control the silt-flushing gates and the guard valves in the 26km tunnel leading from the Nathpa dam to Jhakri. His brief also comprised the electrical commissioning of the gates at the dam, including the radial gates, as well as the various gate hoists that cut off individual areas of the plant for maintenance work and in case of an emergency.

“Unfortunately the drawings I was provided with locally were causing me many problems! They were out of date, which meant there were many instances where they did not match the equipment supplied. Because of a shortage of relay contacts, extra relays had been added which were not even shown on the drawings. Many device terminal numbers were missing, making the drawings difficult to follow. Perhaps worst of all, they had been produced on a package that allowed more and more devices to be added to the same sheet by scaling the existing drawing down, which meant that when they were printed out they could only be read with a magnifying glass. With much of the job involving work in poorly lit underground tunnels, this was obviously an impossible situation. By the very nature of commissioning, many circuit modifications had to be made, which also needed incorporating accurately into the drawings for final issue to the client and including in the maintenance manuals,” Lilleker said.

“The nights were long in that remote part of the world,” he continued. “Using Elcad I could create all the electrical drawings from scratch in a short time. Thus, to the immense relief of the customer, the huge project was completed on time, with new, complete and safe documentation.”

Thermal images of electrical systems can rapidly indicate the operating condition of electrical equipment. In fact, since the beginning of thermography more than four decades ago, the principal commercial application for thermal imaging has been electrical system inspection, says Ken West of Fluke (UK)

Thermal imaging has typically been the exclusive domain of specialists in the field, and has required the use of costly and difficult-to-operate equipment. With recent advances in sensor technology, this non-contact, versatile measurement technique has become available to a wider audience through dramatic reductions in price and improvements in user-friendliness.
Essentially, thermal imagers locate potential problems by detecting temperature differentials between one location and another. This can be achieved with instant displays of data using today’s handheld imagers which employ ‘point and shoot’ technology from an ‘electrically safe’ distance.
By presenting a rich visual image, using colours to represent temperatures, thermal imagers facilitate a quick visual check of surface temperature and easy identification of hot spots, which are often an early indication of impending failure. Maintenance programmes can be developed and re-traced periodically using the in-built routing instructions on the best products now on the market.

Warning signs
New electrical components begin to deteriorate as soon as they are installed. Whatever the loading on a circuit, vibration, fatigue and age cause the loosening of electrical connections, while environmental conditions can hasten their corrosion. All electrical connections will, over time, follow a path toward failure. If not found and repaired, these failing connections lead to circuit faults.
The reason thermography is so applicable to the monitoring of electrical systems is that a loose, over-tight or corroded connection increases resistance at the connection. Since increased electrical resistance produces more heat, a thermal image will detect the developing fault before the connection fails. An unbalanced or overloaded phase will also show up as hotter than the other two phases.

Preventative maintenance
Detecting and correcting failing connections by comparing the temperatures of connections within panels before a fault occurs averts impending failures. The best solution is to create a regular inspection route that includes all key electrical panels and any other high-load connections. The latest thermal imagers enable capture of data which can be uploaded and stored on a computer at the end of an inspection and then compared with other measurements over time. The ideal imager will allow the images from the previous inspection to be downloaded, so that previous and current images can be compared side-by-side at the point of capture. This will help to determine whether a hot spot is unusual or not, and also help to verify that repairs have been successful.
Such predictive actions are important because when a critical system does fail, it inevitably increases costs, threatens a client’s profitability and may impact on safety.

What to look for?
In general, look for connections that are hotter than comparable ones. This signals high resistance, possibly due to looseness, over-tightness or corrosion. Connection-related hot spots usually (but not always) appear warmest at the point of high resistance, cooling with distance from that point. Overheating connections can, with additional loosening or corrosion, lead to a failure and should be corrected. Equipment conditions that pose a safety risk should obviously take the highest repair priority.
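The comparative logic described above can be sketched in a few lines of Python. This is purely illustrative: the 10°C threshold and the readings are hypothetical examples, not figures from Fluke guidance or any standard.

```python
# Illustrative sketch: flag connections running hotter than their peers.
# The 10 degC threshold is a hypothetical example, not a published limit.
from statistics import median

def flag_hot_spots(temps_c, delta_c=10.0):
    """Return indices of connections hotter than the group median by delta_c."""
    baseline = median(temps_c)
    return [i for i, t in enumerate(temps_c) if t - baseline > delta_c]

readings = [41.2, 40.8, 58.5, 42.0]  # degC, one reading per connection
print(flag_hot_spots(readings))      # connection 2 stands out
```

Comparing against the group median, rather than an absolute limit, mirrors the article's point that it is the temperature differential between similar connections that matters.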

3-phase problems
Thermal images are also an easy way to identify apparent temperature differences in three-phase electrical circuits, compared to their normal operating conditions. By inspecting the thermal gradients of all three phases side-by-side, engineers can quickly spot performance anomalies on individual legs due to unbalance or overloading. Even a small voltage unbalance can cause connections to deteriorate and reduce the voltage supplied to the load. A severe unbalance can blow a fuse, leaving equipment running on a single phase. Any suspected overloading or unbalance should then be investigated using other types of measuring instrument, such as a clamp meter or power quality analyser.
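Once the phase voltages have been measured with a clamp meter or analyser, the degree of unbalance can be quantified. A common definition (the NEMA-style figure: maximum deviation from the average, divided by the average) can be sketched as follows; the 400/408/392V readings are invented for illustration.

```python
def voltage_unbalance_pct(v_ab, v_bc, v_ca):
    """Percentage unbalance per the common NEMA-style definition:
    100 * (maximum deviation from the average) / average."""
    avg = (v_ab + v_bc + v_ca) / 3
    max_dev = max(abs(v - avg) for v in (v_ab, v_bc, v_ca))
    return 100 * max_dev / avg

# Hypothetical line-to-line readings in volts
print(round(voltage_unbalance_pct(400, 408, 392), 2))  # 2.0 (% unbalance)
```

Even an unbalance of a couple of per cent is worth investigating, since motor heating rises disproportionately with unbalance.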

Reporting procedures
Analysis and reporting software is provided with the best thermal imagers. Whenever a problem is discovered using a thermal imager, this software should be used to document the findings in a report, accompanied by each thermal image and a digital image of the equipment. This is the best way to communicate the problems to the client along with the suggested repairs.

Keeping ahead of the game
Thermal imagers at affordable prices offer one more tool in the arsenal of the maintenance engineer who wants to keep one step ahead of the game by offering his clients even better preventative maintenance.

Rising costs and a very real danger of future energy rationing are pushing the need for UPS systems to be correctly sized further up the agenda says Robin Koffler of Riello UPS (right)

UPS customers are being urged to be energy efficient and play their part in tackling climate change. They cannot afford to have equipment, including UPS systems, running inefficiently nor can they risk an overload situation from ‘undersizing’ that would render equipment unprotected.
This is particularly true in power-hungry data centres, where energy consumption is at an all-time high. Not only do new, high-end servers require more power to operate, they also demand greater cooling resources: every megawatt required to power hardware takes another 1.5MW to cool it, according to some reports. The need to prepare for the future is vital, as analysts say electricity costs are set to double over the next five years. In a recent report, consultancy BroadGroup found that the average energy bill to run a corporate data centre in the UK is around £5.3m/year. The company predicted that this would double to £11m over five years.
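Taking the article's cited figure of 1.5MW of cooling per megawatt of IT load, the total demand scales very simply; the 2MW example load is hypothetical.

```python
def total_power_mw(it_load_mw, cooling_ratio=1.5):
    """Total site demand given an IT load and a cooling overhead ratio.
    cooling_ratio is MW of cooling per MW of IT load (figure cited in the text)."""
    return it_load_mw * (1 + cooling_ratio)

print(total_power_mw(2.0))  # a 2MW server load implies 5MW of total demand
```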
But is sizing a UPS really down to simple maths? Historically, there has been a tendency to oversize UPS to ensure that, when everything is working at full capacity, the system itself is not overloaded. Some UPS respond to an overload by shutting down after a specified period of time (if it is a double-conversion design) or, at best, by switching into bypass mode until someone notices, leaving critical computer systems vulnerable to cuts in the power supply and to the problems associated with raw, unfiltered mains.
Oversizing a UPS leads to higher initial installation and ongoing maintenance costs. But undersizing is worse: in a busy data centre, where new equipment is continuously added and usage fluctuates up and down the scale like fingers on a cello, a customer who attempts to save costs this way will very soon be in trouble.
Whilst an on-line UPS has a built-in automatic bypass, running close to its design limits with regular overloads is never considered good practice. The bypass is for emergency situations only; if overloads are a frequent and regular occurrence, it is always best to oversize the system slightly.
According to business continuity supplier Sungard, the proportion of business continuity plans invoked by companies experiencing power failures jumped from 7% in 2005 to 26% in 2006. The reasons for this may be three-fold: energy supply is becoming less reliable, existing UPS installations are unsatisfactory, or - worse still - non-existent!
So, what needs to be done to ‘rightsize’ UPS? Firstly, the equipment that is being protected (defined as the ‘load’) has to be categorised into critical, essential and non-essential loads.

Critical Loads
Are defined as all the IT and electrical components and equipment that make up the business architecture and without which business continuity would be lost. Servers, routers, computers, storage devices, telecommunications equipment, security and building management systems usually fall into this category.
In this instance, UPS protection will probably require some form of extended runtime to keep equipment running continuously. More often than not it will also require redundancy, so that if a power failure occurs and one UPS goes out of action, another takes over powering the load.

Essential Loads
These are essential to the business but in their absence some semblance of functionality can exist. Essential loads are things like lighting (other than emergency lighting), air conditioning and heating. Some essential loads may need some form of redundancy built into the UPS system but in many instances back-up does not need to be as robust as with critical loads.

Non-essential loads
Are those that the business can survive without for the time it takes to reinstate power, such as printers and canteen facilities for example.
There can be significant differences between the power ratings recorded on rear panel labels and in operating manuals, and the true values drawn by electrical equipment. This is because hardware manufacturers use power supplies rated for maximum, worst-case conditions which are often far in excess of the actual power drawn. Loads can typically be seen running at only 50-60% of this total capacity. In addition, any ratings given may be in amps or watts to further complicate matters and there can be quite a difference between actual in-rush (start-up) and running power requirements.
Factors that must be considered when sizing UPS loads include:
• Apparent power (VA)
• Active power (W)
• Power Factor (pf)
Other factors that need to be considered if the right UPS is to be installed include:
• Expected response to overloads (which should only be intermittent)
• Battery runtime required
• Fault tolerance (resilience) levels to be reached
• The type of electrical installation, in terms of supply and load voltage and frequency requirements
• The potential for future system expansion
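The categorisation and summing described above can be sketched as a simple sizing calculation. The load figures and the 1.25 headroom factor are purely illustrative; the actual margin for growth and overloads should be agreed with the UPS vendor.

```python
def required_ups_kva(load_va_list, headroom=1.25):
    """Sum the apparent power of all protected loads and apply growth headroom.
    headroom=1.25 is purely illustrative, not a recommended figure."""
    return sum(load_va_list) * headroom / 1000

# Hypothetical critical loads in VA: servers, comms rack, storage
loads_va = [2300, 1150, 4600]
print(required_ups_kva(loads_va))  # kVA rating to look for
```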

Apparent Power
Volt-ampere (VA) is a unit of measure for the apparent power drawn by an electrical device. Once known, this figure can be matched to an appropriately sized UPS. VA is calculated by multiplying the RMS source voltage (V) by the current drawn in amps (A):
apparent power (VA) = volts (V) x amps (A)
For example, if an electrical device is connected to a 230Vac single-phase supply and the current drawn by this device is 10 Amps, the resulting VA value would be: 10 x 230 = 2300VA or 2.3kVA.
For a three-phase load, the calculation is slightly different. A 15kVA three-phase UPS will supply a maximum of 5kVA per phase (15kVA divided across three phases).
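Both calculations above can be expressed directly in code, reproducing the article's worked examples:

```python
def apparent_power_va(volts, amps):
    """Single-phase apparent power: VA = V x A."""
    return volts * amps

def per_phase_capacity_kva(total_kva, phases=3):
    """Maximum apparent power available on each phase of a three-phase UPS."""
    return total_kva / phases

print(apparent_power_va(230, 10))   # 2300 VA, i.e. 2.3kVA, as in the text
print(per_phase_capacity_kva(15))   # 5.0 kVA per phase from a 15kVA unit
```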

A sensible leave of absence

My congratulations to Dr Timothy Stone. Who is Dr Stone, you ask? He has been the head of global infrastructure at consultancy firm KPMG, and is the man chosen by (shortly to be former) DTI secretary Alistair Darling to look after the entire nuclear clean-up.
Or to give the task its more formal title, to oversee “arrangements for the costs of new build, decommissioning and waste management.” And with clean-up costs alone now estimated to be well over £70,000m, he will have quite a task before him.
This month’s much delayed Energy White Paper repeats the government’s volte face on nuclear. Having spent the previous nine years bad-mouthing anything to do with the Great God Atom, last summer New Labour suddenly got religion, and declared itself in favour of lots of new nukes.
But with one crucial condition. Any new power stations must not only be built and run by the private sector, these private operators must be prepared to pick up the tab for handling all the consequent costs.
That is the brief Dr Stone has. Were he to fulfil it to the letter, it would undoubtedly infuriate his former colleagues at KPMG. At present, the consultancy makes millions from advising the different parts of the nuclear industry.
For instance, last year the firm won an award for its work on the sale by British Nuclear Fuels of its construction subsidiary, Westinghouse, to Toshiba. Working, it has to be said, for the purchasers. In the judges’ words, “KPMG Corporate Finance used its contacts with the UK Government and BNFL to market (sic) the Japanese player.”
Wisely Dr Stone has not become a full-time civil servant to carry out his new duties. Instead, he has simply taken leave of absence from his former employers. Really, the last thing a consultancy like KPMG wants is somebody intervening on behalf of the UK government, trying to ensure all the relevant nuclear costs are carried by the private sector. I am sure that Dr Stone will bear that particular concern in mind in his new, temporary, role.

Gordon keeps it in the family

In his current seven week round tour of the UK, prior to becoming our prime minister, Gordon Brown has endlessly stressed how ‘family friendly’ his new administration would be. That will come as excellent news for the nuclear industry.
Take for instance Andrew Brown, younger brother of the aforesaid Gordon, who is chief spin doctor (whoops, press officer) for EDF Energy. Which has long been the only electricity company prepared to openly champion new nuclear stations. Possibly to do with being French-owned, and therefore heavily subsidised by the French state?
Or take Tony Cooper, the father-in-law of Brown’s right-hand man Ed Balls, effectively our new deputy prime minister. A former general secretary of Prospect, the union covering employees in nuclear power stations, Cooper is still very active within the TUC, plugging away on the pro-nuke cause. And who should the present construction minister be? Step forward Yvette Cooper, Tony’s daughter.
All that is needed now is for Sir Bernard Ingham, the one-man membership of SONE, Supporters of Nuclear Energy, to declare himself as Brown’s long-lost uncle, and the Happy Family pack will be complete.

Planning for the future

When a minister creates a public body to award lucrative contracts to the private sector, he or she will of course never be giving any thought as to whether they might be able to benefit from such contracts when they leave office.
Take Brian Wilson, for instance. When, as energy minister in 2000, he set up the Nuclear Decommissioning Authority, it will never have crossed his mind that it might be advantageous to any private sector company bidding for such contracts to have an ex- energy minister on its board.
So it is of course as complete a surprise to him, as to everybody else, to learn the construction company Amec is bidding for clean-up contracts from the Nuclear Decommissioning Authority. Well, perhaps not quite as much of a surprise to Wilson as everybody else. Just a few months before the bid was made, Wilson joined the Amec board of directors.

Gas suppliers ignored by Ofgem compensation scheme

Any electricity consumer who spots a billing mistake is entitled to £20 compensation from their supplier if the matter is not dealt with speedily. The compensation kicks in if the supplier has not made a “substantive response”, known as an ESG10, to a query on charges and payments within five working days.
At least, that is true if you have remained with your initial electricity supplier. Quaintly, Ofgem only insists upon such compensation being available to those who reside within the suppliers’ home market. At a time when Ofgem seems to believe the main value of its drive to deliver ‘competitive markets’ is the number of households who switch suppliers, this restriction seems utterly bizarre.
But not as bizarre as the compensation levels required for those who find they have been incorrectly billed for their gas consumption. There is absolutely no requirement for Centrica, or indeed anybody else, to provide any such recompense at all.
I do not understand why this occurs. I do think Ofgem should explain.

Mike Henshaw of Omega Red Group, a company specialising in earthing and lightning protection, investigates

During the recession in the early 1990s, hundreds of thousands of skilled and unskilled workers were lost from the construction industry. Since then only the most progressive companies have continued to make a significant training investment in their workforce and ensure that they have a balanced, skilled team.

The earthing and lightning protection sector has identified the need for professional training to ensure companies can properly resource the industry’s growth. The vehicle for this training is the Lightning Conductor Engineering Apprenticeship run by the Construction Industry Training Board (CITB). Running since the early 1990s, it continues to deliver quality, professionally trained engineers into the industry.
Omega has grown steadily over recent years and, in order to properly resource growth, has been a strong advocate of industry apprenticeships. We believe this method is the most effective way to guarantee properly trained and skilled workers are available to meet the growth plans of our business and the industry sector overall. With skilled trades people being effectively ‘home grown’ through the apprenticeship scheme, companies are able to ensure a good knowledge of health & safety together with best practice job related skills are developed to the highest standard.
The apprentice route does not come without a need for some crucial added input from the recruiting company. The recruitment and retention process used by Omega follows a professional model:
Good recruitment practices are the key
Our involvement is vital so we spend time ensuring we recruit the right type of person to fit the role. We have built a good reputation for training and so soliciting interest in our apprenticeship scheme is not difficult.
Selected applicants are interviewed to assess suitability and determine whether they would fit in to the organisation - this is a particularly important factor in ensuring long-term retention. Candidates who successfully negotiate the first-round interviews attend the CITB National Construction College (NCC) at Bircham Newton near King’s Lynn for further suitability assessments. Those passing the rigorous rounds of interviews and assessments are then offered places on the apprenticeship programme and are based at one of five regional offices.

Structured training is essential
Apprentices are recruited in early autumn to provide an introduction to working life, the construction industry and the company before they start their training at NCC.
The academic and off-the-job practical training usually starts in January and comprises 24 weeks residential training at NCC over a two-year period. Off-the-job training covers all technical and practical aspects of the job. The College also provides welfare, sport and other activities which help our apprentices successfully complete their academic training.
Off-the-job training is augmented by on-the-job training. Apprentices are taught a wide range of disciplines, including health and safety, safe access, and earthing and lightning protection design, in order to satisfy the evidencing criteria needed for successful completion of the course.
It is important to celebrate the success of the trainees and this is done internally by updating company notice boards, several other communication methods and by senior managers and directors of the company attending the annual prize giving at NCC.
Ensuring retention
The direct cost of employing an apprentice is roughly £20,000 over the two year period, after the CITB grant. Considering the costs of recruiting and training over the two-year period, our key driver is to ensure appropriate retention policies are in place. These include regular meetings with apprentices, visiting them whilst at the college and working closely with CITB staff. It is important to monitor the progress of apprentices and provide them with support, guidance and encouragement.
Omega’s recruitment plan together with ongoing mentoring means it retains in excess of 90% of all apprentices through to the end of their apprenticeship. More than 13 years after the first lightning conductor engineering apprenticeship course was run, we still have an overall retention of 65% of all the apprentices recruited.
Success of Apprenticeship Training Scheme
As with any business asset, the measure of success is based upon the return on investment. We believe recruiting apprentices is vital to the current and future growth plans of our business. The real success is in ensuring the company has the right number of competent people to fulfil the needs of our customers in a safe, professional and efficient manner. It follows that if the company is able to achieve this aim then it benefits from a more loyal customer base, reduced long-term recruitment costs and a more effective workforce. In the end this whole process aims for and achieves enhanced profitability.
Benefits to Industry
The construction industry, sadly, has a reputation for harbouring many unprofessional operators. Properly recruited and trained apprentices developing into competent trades people will go some way to achieving a reputation for trust and professionalism.
Omega Red Group has heavily supported industry apprenticeship training since the course started. As a result of this investment (almost 100 apprentices over the last 13 years), the company has doubled turnover with a much lower proportional increase in its labour force.
Without investment in apprentices, customers and the industry will suffer from a lack of competent, safe tradespeople. Whilst apprentices involve a short term cost, without them the industry will never achieve the respect, and profitability, it needs.


PAT scheme and training from NICEIC

To support its national registration scheme for enterprises conducting portable appliance testing (PAT), NICEIC has developed a PAT training course which will provide enterprises with the key competence qualifications required. An NICEIC Guide to Electrical Equipment Maintenance is also available as a reference.
Many deaths and injuries result from poorly maintained electrical equipment and fires started by faulty electrical appliances. Around 1,000 electrical accidents at work are reported to the Health and Safety Executive each year; of these, 30 people die of their injuries.
All electrical equipment should be maintained and checked regularly to ensure it is safe and in good repair. Managers responsible for electrical equipment maintenance should ensure equipment is maintained in a safe condition, information is available to equipment users to ensure safety, safe procedures for inspection and testing are used and records of inspection and testing are maintained.
The NICEIC scheme and register provides companies with a straightforward route to registration, formal recognition of competence to undertake PAT testing services and a reference for purchasers of PAT testing services.
Safety First, a specialist PAT testing company operating in Northern Ireland, assisted NICEIC in piloting the scheme. Mervyn Portis, director, explained: “We are delighted to be recognised by the NICEIC solely for our PAT testing services. We have seen a growing trend in our clients realising the benefits of only using NICEIC contractors to undertake PAT work. Our clients have the added assurance that our standard of work will be continually monitored.”
Existing NICEIC registrants need do nothing further as they will be contacted separately with details of the scheme. NICEIC approved contractors with full approval will automatically meet the scheme requirements for inclusion in the PAT register.

Variable speed drives bring great advantages in controlling motors but care needs to be taken to match the characteristics of the drive to the motor to ensure the combination is a winning one. Geoff Brown, drive applications consultant for ABB investigates

Of the approximately 10m motors installed in UK industry, only some 3% are controlled by variable speed drives. Despite the huge energy savings to be gained, often in excess of 50%, many companies are still not making use of variable speed drives to run their motors.
Yet, process operators cannot simply connect a drive to any old motor and expect huge energy savings overnight or even a successful motor and drive match.
To minimise the risk of the selected motor failing, users need to understand the required operating and environmental characteristics of the application. Motors have to cope with all sorts of environments, from high ambient temperatures, to being immersed in sewage, to operating in dust or gas hazards.
Special designs exist for all of these cases and the user must ensure he follows the motor manufacturer’s instructions. Getting all the help you can from motor and drive manufacturers is also a good idea in general; their experience with motors and drives will help find the most compatible motor and drive combination. Many will have local service representatives who can assist with setting up the drive. Users installing their own drives need to read up about the issues that exist when connecting AC motors and drives.
Drives and their effects on motors
Variable speed drives come in standard voltage ratings, which must be chosen to match your line voltage. In general, the lower the voltage, the easier it is on the motor.
The high switching rates of inverter power devices subject the motor terminals to a rain of high-voltage switching pulses, placing electrical stress on the windings that depends partly on the length of the cable connecting the inverter to the motor. The drive manufacturer will usually advise on maximum practical cable lengths, between 15m and 300m depending on the power rating. In some cases long cable runs may also require additional drive components, such as du/dt filters, and can lead to EMC issues.
A useful feature of a drive is an adjustable carrier frequency, since a higher carrier frequency means more frequent pulses. Lower carrier frequencies place less stress on the motor insulation system and reduce the incidence of damage due to bearing currents; higher carrier frequencies, on the other hand, reduce motor noise levels. Some switching strategies, such as direct torque control, have no fixed carrier frequency, which can also help while ensuring a low noise spectrum.
Frequency converters with non-sinusoidal output current also cause additional losses in the motor. An increase in motor losses of up to 15% was not uncommon with early PWM inverter designs, which translates into an overall reduction in motor efficiency of up to 1%.
Modern inverter designs still increase motor losses, beyond those of a true sinusoidal supply, but in practice the effect is less than that caused by connecting to the supply network.
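The relationship between extra losses and overall efficiency can be checked with a quick worked example. Assuming a hypothetical motor that is 95% efficient on a sinusoidal supply, a 15% increase in losses works out as follows:

```python
def efficiency_with_extra_losses(base_efficiency, loss_increase=0.15):
    """New efficiency when motor losses rise by a fractional amount.
    E.g. the text's 'up to 15%' extra losses from early PWM inverters."""
    losses = 1 - base_efficiency
    return 1 - losses * (1 + loss_increase)

# A hypothetical 95%-efficient motor: 5% losses become 5.75%
print(round(efficiency_with_extra_losses(0.95), 4))  # 0.9425
```

The result, a drop from 95% to about 94.25%, is consistent with the article's figure of an overall efficiency reduction of up to one percentage point.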
A major factor causing an apparent reduction in output with modern drives is that the output voltage is lower than the input voltage, due mainly to the presence of chokes and other components used to limit harmonics, and to the improved switching patterns in the inverter. The reduction in voltage can often be compensated by using a low-harmonic “active rectifier” drive solution.
Choosing a motor for drive operation
Given these points, how do you go about choosing a suitable motor for drive operation? Firstly, always choose a good quality motor. High quality materials will extend the life of a motor, as well as improve efficiency. Look for thinner core plates, giving lower iron losses; good slot fill, giving improved stator performance; and good bearings, reducing rolling resistance. Reduced losses make for smaller fans, cutting noise and windage losses.
Another important quality factor is the level of insulation of the windings. Voltage stress acting on microscopic air bubbles in the winding varnish can cause ionisation flash-over, known as a partial discharge, breaking down the insulation. Different insulation materials can withstand different levels, known as the partial discharge inception voltage (PDIV), so you need to make sure the insulation level is adequate. Standard motors commonly have a PDIV in the region of 1350 to 1600V; a higher withstand voltage is better in variable speed drive applications. Unfortunately, as yet there is no common visible classification on a motor nameplate: the use of Class B, F or H materials does not in itself confer a specific PDIV withstand level.
Inverters also produce common-mode voltages at their outputs, which can induce voltages in the rotor and, if the path is not blocked, give rise to circulating currents that can destroy bearings. This problem is solved by breaking the circuit with insulated bearings.
Choose the right combination for the environment
A particular concern is the use of variable speed drives to power motors in hazardous areas. The main sources of risk are high surface temperature and sparking in either the winding or the bearings. Inverter supply can result in increased temperature rises and higher voltage stresses on the motor insulation, and these increase further when self-cooled motors are used, as the speed of the cooling fan falls along with the motor speed.
These factors can combine to create a source powerful enough to ignite an explosion. The best way to reduce this risk is to choose a combined Atex package, which gives end users the assurance that the motor and drive combination is optimised for their application.
Note that the application of a drive to an existing, pre-Atex motor is at the owner’s risk, and possible only in a Zone 2 area. In any case, product certification is the responsibility of the motor manufacturer.
This practice of supplying matched drive and motor pairs is a growing trend and one that progressive vendors have adopted to help cut users’ workload to a minimum.
Choose high efficiency
The efficiency of the motor is always a major factor in the choice. Although a VSD will bring system efficiency gains, it will not compensate for a poor or inefficient motor. Always use the highest efficiency motor possible. Ideally, the motor should have a good efficiency across the load range.
Motor power plays a major part because AC motors work at their peak efficiency over a limited range of their power output. Modern EFF1 electric motors usually produce peak efficiency at around 75 per cent of rated load. By contrast, older designs often have peak efficiency in a very narrow band around full load.
This is important in energy saving installations because the object of a drive is to vary the speed of the load, especially with centrifugal fans or pumps. The time spent running at full load will therefore normally be limited to emergency situations, such as extracting smoke in the event of a fire.
A new high efficiency EFF1 motor rated at 90kW with 95.2% efficiency, will cost around £5,900 and will use electricity costing around £37,250 per year, but will save nearly £9,000 compared to a standard efficiency EFF3 motor with 93% efficiency, over a 10-year service life. For companies operating large industrial complexes with many motor driven machines, such savings can mean tens of thousands of pounds, and tonnes of CO2 emissions annually.
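As a rough check, the arithmetic behind these figures can be sketched in a few lines. The continuous-duty assumption (8,760 hours a year) and the tariff, back-calculated from the quoted £37,250 annual cost, are assumptions rather than figures given in the article:

```python
# Rough check of the quoted EFF1 vs EFF3 savings.
# Assumptions (not from the article): continuous running, and a
# tariff inferred from the quoted £37,250 annual electricity cost.
shaft_kw = 90.0
eff1, eff3 = 0.952, 0.93

input_eff1 = shaft_kw / eff1            # ~94.5 kW drawn from the supply
input_eff3 = shaft_kw / eff3            # ~96.8 kW

hours = 8760                            # assumed running hours per year
tariff = 37250 / (input_eff1 * hours)   # ~£0.045 per kWh, back-calculated

annual_saving = (input_eff3 - input_eff1) * hours * tariff
print(f"annual saving: £{annual_saving:,.0f}")
print(f"10-year saving: £{annual_saving * 10:,.0f}")
```

On these assumptions the ten-year saving comes out at roughly £8,800, in line with the “nearly £9,000” quoted.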
Although an existing motor already in place can usually be used with a drive, it may not be known how well the motor has been treated, and higher efficiency may be gained by fitting a new motor.
Choose the right speed profile
It is important when designing a system to consider the motor as a source of torque. Torque relates power and speed (power is the product of torque and rotational speed), and with variable speed it is the torque profile that matters.
The two most common profiles are variable torque and constant torque. The first is used for centrifugal fans and pumps while the second is used for conveyors, extruders, positive displacement pumps, and similar loads.
Variable torque loads are the easiest applications for motors and drives because, for centrifugal loads working against little static head, load power is governed by the cube of the shaft speed.
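The cube law can be made concrete with a short sketch; the speed fractions chosen are purely illustrative.

```python
# Affinity-law sketch for a centrifugal fan or pump with negligible
# static head: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
def relative_power(speed_fraction: float) -> float:
    """Shaft power as a fraction of full-speed power."""
    return speed_fraction ** 3

for pct in (100, 80, 50):
    print(f"{pct}% speed -> {relative_power(pct / 100) * 100:.1f}% power")
```

Running at 80% speed needs only about half the power, and halving the speed cuts the power to an eighth, which is why drives on centrifugal loads save so much energy.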
It is also worth considering that most load machinery is designed for sale in both 50 Hz locations such as Europe and 60 Hz locations such as the US. As a result, best efficiency is often found between the 50 Hz and 60 Hz nominal speeds, i.e. between 1500 and 1800 r/min, and a variable speed drive allows this to be exploited. The freedom to select the output shaft speed can also be used to eliminate inefficiencies in belt drives.
Constant torque can pose problems because, in order to maintain a constant torque at low speeds, the motor needs to be supplied with a relatively constant current throughout its speed range. This mode of operation continually produces heat, which is harder to dissipate at low speeds when a shaft-mounted cooling fan turns slowly.
The current ratings of the inverter must also match the motor’s current requirements both at full load and during acceleration. The drive’s current rating and its suitability for the motor needs to be checked with the motor manufacturer, especially on motors operating below 30Hz and whenever acceleration torque is critical.

By Nick Guite, director, Utilities, Construction and Professional Services at BT

For companies with a large number of mobile field workers, to say it is a challenge to keep in touch with their workforce whilst keeping them employed is probably an understatement. Yet, remarkably, many industries – not least those for whom the challenge is most acute, such as utilities – remain unaware of the rapid pace of technological change in this area, and the benefits it can bring.
Traditionally, or at least since the early ’90s, the communication and work scheduling challenge has been tackled by deploying a standard mobile phone coupled with a manual job allocation process back at headquarters. Perhaps not surprisingly, the results over the years have been mixed. Marked variations in customer service experiences and costly internal resource overheads make the traditional solution an inefficient and unsatisfactory one – and therefore no ‘solution’ at all.
Meanwhile, the evolution of technology designed specifically for mobile working – GPS (Global Positioning System) locators, mobile communications, automated work scheduling – has continued apace. Real solutions are out there. The problem has been that, even where they have deployed the technology, the pace of change may have outstripped businesses’ capacity to exploit it and reap the full benefits. Automating a force of field workers, essentially, is about improving service. That’s it. Or is it?
Certainly, getting the right engineer with the right skills to the right job at the right time is about making sure your army of people-on-the-ground are giving the best possible service to customers. As is improving productivity and responsiveness of service contact centres or keeping promises with customers by adhering to well-defined appointment slots.
But is it all about customer service? Is it just the customer that you should be thinking of? Or is the field force automation (FFA) strategy that companies like Northumbrian Water are embarking upon slightly more complex? The water company is planning to roll out FFA across its fleet of approximately 900 vehicles and 1,100 field operatives over the next 12 months.
As you’d expect in the 21st century, technology’s role in that strategy is increasingly critical, but is not in itself where the complexity lies. Behind the jargon and the esoteric acronyms, the technological process is actually rather straightforward. No matter the type of device being used, put plainly, it’s about connecting all the dots to reveal the – until now – hidden picture. The dots, of course, are a business’s field engineers, service representatives, or indeed any workforce that spends a large part of the working day out of the office and physically isolated from colleagues. And today, in 2007, that last aspect of the job is where the complexity of the issue lies. Giving HQ a full picture of where vehicles are, whether their engines are running, where workers are and what they are working on is vital not only for efficiency but also for duty-of-care to staff. The dangers of being ‘physically isolated’ whilst at work represent one of the most compelling drivers for adopting the type of field force technology which BT has developed for a number of utilities companies and fleet operators.
Working in remote or isolated locations or, particularly, working alone, carries inherent risks. Frequently, the areas in which service engineers, carrying money and valuable equipment, have to go to conduct critical repair or maintenance work are secluded and potentially threatening. Working in the dark, in bad weather, or in any unwelcoming environment can and does make field workers feel vulnerable. BT itself allocates 18 million jobs a year to 24,000 engineers and the reality is that as their employer it has a responsibility to ensure their safety.
The technology utilised to monitor field worker whereabouts and improve customer response times can equally be used to increase worker safety. One new solution is a round-the-neck ID card-sized device with an inbuilt GPRS SIM card – exactly like that which makes our mobile phones work. If entering a vulnerable area or situation, lone workers can put themselves on amber alert by sending an instant message that will flash up on HQ or customer screen – “I’m arriving at the back door” – or they can put themselves on red alert, whilst remaining discreet, and open a one-way voice channel that enables HQ to monitor them, and initiate support if necessary.
The value of such a straightforward piece of technology to a worker operating in a threatening environment is incalculable. The technology itself is, as it always should be, relatively simple – but sometimes the simple solutions are the best. And it is the responsibility of the companies developing the technology to explain it in simple terms.
Organisations are recognising the benefits of technology for their field workers. It’s worth remembering that those benefits do not only help customers, but extend to your own workers too.

Is the worldwide emphasis on the conservation of energy and raw materials actually leading to a better-lit environment?

According to Lou Bedocs three ‘drivers’ will heavily impact on future lighting practice: energy efficiency and the impending European Energy Performance of Buildings Directive (EPBD); environmental directives over and above those we have today (RoHS and WEEE), and eco-design in its many parts in the Ecodesign requirements for Energy-Using Products Directive (EuP). “The biggest impacts will be from environmental and sustainable developments,” he says. “This is all about efficient design, raw materials, recycling, waste management and take back. The WEEE directive will be implemented on 1st July, and the EuP directive later this year will lead to new materials, practices and tasks.”

Bedocs is concerned about energy efficiency. Not its necessity, but the repercussions it could have on loss of lighting comfort and quality, because people will be tempted to meet efficiency criteria at the expense of reduced comfort. A classic example is the local authority that is planning to achieve its energy efficiency targets by simply switching off the streetlights. “Our biggest task in future years will be defending our hard-fought lighting needs in this environmental/energy-constraining world. And it’s going to be important, because if the rate of increase in energy costs continues at 10 per cent a year then obviously we need more and more efficient and reliable electric lighting solutions, or we will have to change our way of working and way of life, because daylight is only available at certain times.”

According to Bedocs, the answer is not to shut everything down, but to campaign for efficient use of energy rather than reduction, and second, rather than being frivolous about the criteria, to focus on appropriateness. “This will change the outlook in lighting scheme design.” Task-orientated lighting requires different optics, different light-package sizes, distributions and, above all, a different philosophy in application design. “Interestingly, we have moved from local to generalised lighting over the past 100 years and now we’re moving back.”

Bedocs argues lighting is the single most important aspect of the environment because of its immediate impact on people. Some 80% of the signals our brain processes are received through our eyes. It is now known there is an extra detector in our head that receives signals via the eye and controls our body clock. “Clearly people, wherever they are, need good lighting for visibility and the right light for wellness. Furthermore, the responsiveness to the space and people’s reactions must not be overlooked, which is why I believe in Thorn’s PEC – Performance, Efficiency and Comfort programme so strongly,” he says.

“We have achieved the task of visibility through adequate illumination and we have the knowledge to balance the brightness in the space. Our next step needs to be to provide a sustainable stimulating visual environment with effective controls that will satisfy our physiological and psychological needs at work, at play or even when driving home.”

Lou Bedocs is lighting applications director at Thorn Lighting, based in Spennymoor, County Durham.

The transmission and distribution business has seen a significant shift in technology over the last quarter century with the decline in oil and air insulated switchgear in favour of the newer vacuum and sulphur hexafluoride (SF6) technologies says Philip Dingle, utility segment manager at Eaton.

SF6 is unchallenged for transmission voltages, but for distribution systems from 3kV to 38kV vacuum circuit-breakers have become the dominant technology (see Fig. 1).
However, vacuum interrupters are frequently used in gas insulated switchgear (GIS), which uses the greenhouse gas SF6 as an insulant. Despite worldwide concern over the environmental effects of SF6, manufacturers continue to promote the GIS concept for ring main units and packaged substations in the face of technically sound solid-insulated alternatives.
In terms of size and cost there is little to choose between the two circuit interruption technologies at distribution voltages. At one time solid insulation tended to be more bulky than gas but advances in technology have overcome this objection. Both vacuum and SF6 offer good load switching and short-circuit protection capabilities but vacuum interruption excels under the more onerous short-circuit duties and offers long life under frequent switching duty.
Vacuum interruption
The first vacuum interrupters were introduced 40 years ago and, since then, have proved remarkably reliable. Modern units retain their vacuum for at least 20 years, thereby exceeding the mechanical life of the circuit-breakers of which they form a part. Operation is maintenance-free, eliminating the need for regular inspection and costly leak monitoring equipment.
Performance is excellent over a wide range of applications including transformer secondary protection, short-line fault switching, and capacitor and motor switching. The rated a.c. power frequency withstand voltage is typically two to four times normal operating voltage, and the lightning impulse withstand voltage is four to 12 times operating voltage.
Vacuum interrupters are environmentally benign. They do not contain greenhouse gases, nor present a health risk due to decomposition products caused by arcing. No special measures are needed to protect the environment from the results of leakage or at the end of life. The constituent materials can be recovered safely and recycled.
Solid insulation
Historically, one of the reasons for using SF6 gas insulation with vacuum interrupters was size – solid insulation resulted in much larger units. This is no longer the case. The use of modern potting compounds such as polyurethane and epoxy to encase the vacuum interrupter, together with a contoured profile similar to the sheds used on overhead line insulators, has made it possible to increase the basic insulation level (BIL) of the vacuum interrupter to the same order as GIS.
Solid insulation means there are no greenhouse gases involved and there is no need for special gas monitoring systems and other precautions to protect personnel from the risk of leakage. The switchgear can be installed inside buildings with confidence there is no danger of a build-up of heavier-than-air gas.
Anybody who has been involved with the disposal problems created by the past use of asbestos, polychlorinated biphenyls (PCBs) or chlorofluorocarbons (CFCs) should take warning. The current use of SF6 gas in switchgear could be creating a similar legacy for industry and utilities in twenty or thirty years’ time. The very fact that literature on SF6 technology devotes so much space to defending its environmental reputation should be enough to sound warning bells.
SF6 and global warming
Sulphur hexafluoride does not occur in nature. At normal temperatures it is a stable, inert gas – harmless to people and animals. However, it is heavier than air so precautions are necessary to avoid the possibility of high concentrations in confined spaces.
The principal concern is that SF6 is a potent greenhouse gas (See Table 1). The United Nations Framework Convention on Climate Change in Kyoto in December 1997 identified a basket of six major greenhouse gases:
• Carbon dioxide (CO2)
• Nitrous oxide (N2O)
• Methane (CH4)
• Perfluorocarbons (PFCs)
• Hydrofluorocarbons (HFCs)
• Sulphur Hexafluoride (SF6)
The signatories agreed to restrict emissions of these gases to specified amounts and, furthermore, to reduce overall emissions by at least 5.2% below 1990 levels in the commitment period 2008 to 2012.
The European Climate Change Programme has set out proposals to enable the European Community to meet its Kyoto Protocol targets for fluorinated greenhouse gases, including SF6. The quantities of these gases are measured in equivalent tonnes of carbon dioxide. For 1995 it estimated the total emissions of SF6 gas at 65.2 tonnes, of which electrical switchgear contributed five tonnes.
While the concentration of SF6 may be low compared with some other greenhouse gases, SF6 has a global warming potential (GWP) 23,000 times that of CO2 and an atmospheric lifetime estimated at up to 3200 years compared with 50-200 years for CO2. The continuous build-up of SF6 in the environment therefore represents a serious long-term threat.
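The GWP figure can be put into CO2-equivalent terms with a one-line conversion; the 1kg leakage example below is illustrative and not taken from the article.

```python
# Convert a mass of SF6 released to tonnes of CO2 equivalent,
# using the article's GWP figure of 23,000 (IPCC estimates vary
# between assessment reports).
GWP_SF6 = 23_000

def co2_equivalent_tonnes(tonnes_sf6: float) -> float:
    """Tonnes of CO2 equivalent for a given mass of SF6 in tonnes."""
    return tonnes_sf6 * GWP_SF6

# Losing just 1 kg of SF6 (0.001 t) from a switchgear seal is
# equivalent to emitting 23 tonnes of CO2.
print(co2_equivalent_tonnes(0.001))  # 23.0
```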
Furthermore, recent research has revealed a new, highly active greenhouse gas, SF5CF3 that is thought to be a product of the breakdown of SF6. Although it occurs in relatively small concentrations, its contribution per molecule to the greenhouse effect is much greater than any previously known greenhouse gas.
A report by the Department for Environment, Food and Rural Affairs (Defra) gives 1995 emissions of greenhouse gases in the UK, expressed in equivalent tonnes of CO2, as shown in Table 2. It estimated that total use of SF6 over the previous decade had remained roughly constant at 160 tonnes, equivalent to 1,200,000 tonnes of CO2 per year. Four main uses for SF6 were identified:
• Electrical installations
• Electronics
• Magnesium smelting
• Training shoes
In switchgear, leakage may occur at the mechanical and electrical seals and even within the pressure monitoring equipment itself. Consequently, procedures should be in place to ensure not only that monitoring takes place regularly but also that appropriate steps are taken if it reveals evidence of leakage.
Serious concerns also centre on the disposal of the SF6 at the end of its useful life. SF6 is manufactured in industrialised countries under carefully-monitored conditions. It is used in enclosed conditions with every effort taken to minimise the risk of leakage. But SF6 switchgear is being sold and installed worldwide. What guarantees are there for responsible disposal in 20-30 years’ time?
The United Kingdom is committed to reducing emissions by 12.5% from 1990 levels by 2008-2012. Other countries have already taken steps to deal with problems created by the use of SF6 in switchgear including Denmark where its use as an insulating medium in new circuit-breakers was prohibited from January 1 2002.
In Germany the following steps have been taken:
• Manufacturers guarantee a maximum leakage rate of approximately 0.5% a year
• All gas-filled enclosures are continuously monitored to detect leaks
• Used SF6 is either purified and reused in a closed system or reused directly
• SF6 manufacturers guarantee to take back used gas for re-use or disposal by environmentally compatible means.
• All personnel handling SF6 receive regular information and training
• Only properly qualified staff carry out maintenance work
• SF6 producers keep records of quantities produced and equipment manufacturers and users keep records of gas consumption and inventories.
SF6 under operating and fault conditions
While it is stable at room temperature, SF6 breaks down into toxic substances on combustion, at high temperature, or when subjected to arcing. In the event of a major short-circuit that the system cannot handle, SF6 gas and these toxic products of combustion will be released into the atmosphere. Even under normal operating conditions, whenever an arc is suppressed, there will be toxic residues within the enclosure. This calls for special precautions when dismantling at the end of life.
At temperatures above 300°C SF6 starts to decompose, forming free sulphur and fluorine ions which combine with hydrogen and oxygen ions in the air to form a number of dangerous products, including hydrogen fluoride (HF), an extremely corrosive fuming liquid; thionyl fluoride (SOF2), a very stable and poisonous gas; and sulphur tetrafluoride (SF4), a poisonous gas that combines with water to form HF and SOF2. Of these, SF4 reacts with moisture in the eye to form hydrogen fluoride, which has a strong etching effect on the cornea. HF also impairs the lungs. A number of other toxic substances are also produced.
During arc interruption, these same decomposition products are produced and, in addition, metal fluorides, mostly in the form of dust. Special measures are necessary when handling this dust. Only skilled and well-trained personnel should carry out maintenance and other work. Protective clothing should be worn, including tight-fitting gloves, goggles and masks to prevent skin contact. Special measures are also necessary to ensure that dust does not come into contact with the surrounding environment. The problems of decommissioning at the end of life are comparable with those associated with PCBs in transformers.
If an internal fault should occur in gas insulated switchgear, the enclosure may burn through or the arcing energy may cause a rise in the temperature and pressure in the enclosure leading to bursting of the enclosure or opening of a pressure relief valve. As a result of any of these events, the surrounding environment will be filled rapidly with the toxic and aggressive products of decomposition. This could present a major risk with GIS substations or ring main units situated on street corners or in commercial or industrial buildings.
At high voltage there is little choice today but to use SF6 switchgear for circuit interruption and steps can be taken to minimise the risk of decomposition products presenting a danger to the public. Furthermore the number of installations is relatively low so utilities can support the small number of trained personnel needed to handle high voltage SF6 products.
At medium voltage, the large number of products in service make it impractical to maintain the staffing levels needed to look after equipment. The availability of compact vacuum interrupters which generally offer superior electrical characteristics, together with compact solid insulation techniques, make it practically, economically and environmentally preferable to use solid-insulated vacuum switchgear.

For further information on the consequences of using SF6 gas see www.greenswitching.com

How will the new British Standard, BS EN 62305 Protection Against Lightning, affect you? Mike Henshaw, managing director of Omega Red, summarises the key requirements of the new standard.

The new Standard will run in parallel with the existing Standard (BS 6651:1999) for a transitional period and will eventually replace it at the end of August 2008.
There are four parts to the new Standard covering General Principles, Risk Management, Physical Damage to Structures and Life Hazard and finally Electrical and Electronic Systems Within Structures.
There are four separate risks that can be addressed depending upon the client’s requirements:

• Risk of loss of human life
• Risk of loss of service to the public
• Risk of loss of cultural heritage
• Risk of loss of economic value

The starting point is to establish which risk the client wishes to protect against. Risk 1 is addressed under the existing Standard; Risk 2 is only partially addressed, and then only in an informative appendix rather than a formal part of the standard. Risk 3 and Risk 4 are considered for the first time within the new Standard.
A risk assessment must be undertaken for each of the risks the client wishes to address in order to determine what level of protection is required, if any, and what protection measures need to be applied in order to reduce the risk to a tolerable level. This is a new practice.
The risk assessments require certain information in addition to that already required under the current standard:

Power quality measurement is still a relatively new and quickly evolving field. Whereas basic electrical measurements like RMS voltage and current were defined long ago, many power quality parameters have not been previously defined, forcing manufacturers to develop their own algorithms.

There are now hundreds of manufacturers around the world with unique measurement methodologies. With so much variability between instruments, technicians must often spend time trying to understand the instrument’s capabilities and measurement algorithms instead of concentrating on the quality of the power itself.
The IEC 61000-4-30 CLASS A standard defines the measurement methods for each power quality parameter to obtain reliable, repeatable and comparable results. It also defines the accuracy, bandwidth, and minimum set of parameters. Going forward, manufacturers can begin designing to Class A standards, giving technicians a level playing field to choose from and increasing their measurement accuracy, reliability, and efficiency on the job.
IEC 61000-4-30 Class A standardises measurements of:
• Power frequency
• Supply voltage magnitude
• Flicker, harmonics, and inter-harmonics (by reference)
• Dips/sags and swells
• Interruptions
• Supply voltage unbalance
• Mains signalling
• Rapid voltage changes.
Examples of Class A requirements:
• Measurement uncertainty is set at 0.1% of declared input voltage. Low cost power quality measurement systems with uncertainties greater than 1% can erroneously detect dips at -9% when the threshold is set at -10%. With a Class A certified instrument, a technician can confidently classify events with internationally accepted uncertainty. This is important when verifying compliance to regulations or comparing results between instruments or parties.
• Dips, swells and interruptions must be measured on a full cycle and updated every half cycle, enabling the instrument to combine the high resolution of half-cycle sampled data points with the accuracy of full-cycle RMS calculations.
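A minimal sketch of that measurement – a full-cycle RMS refreshed every half cycle – is shown below. The sample rate, test waveform and 90% dip threshold are illustrative assumptions, not values taken from the standard.

```python
import math

SAMPLES_PER_CYCLE = 256  # assumed sampling rate: 256 samples per cycle

def urms_half(samples: list[float]) -> list[float]:
    """Full-cycle RMS values, refreshed every half cycle."""
    half = SAMPLES_PER_CYCLE // 2
    out = []
    for start in range(0, len(samples) - SAMPLES_PER_CYCLE + 1, half):
        window = samples[start:start + SAMPLES_PER_CYCLE]
        out.append(math.sqrt(sum(s * s for s in window) / len(window)))
    return out

# Test signal: one cycle at full amplitude, then one cycle dipped to 70%.
wave = [math.sin(2 * math.pi * n / SAMPLES_PER_CYCLE)
        for n in range(2 * SAMPLES_PER_CYCLE)]
wave[SAMPLES_PER_CYCLE:] = [0.7 * s for s in wave[SAMPLES_PER_CYCLE:]]

nominal_rms = 1 / math.sqrt(2)
for i, rms in enumerate(urms_half(wave)):
    dip = rms < 0.9 * nominal_rms  # illustrative 90% dip threshold
    print(f"half-cycle step {i}: rms={rms:.3f} dip={dip}")
```

The half-cycle refresh means the dip is flagged within half a cycle of its onset, while each reported value retains full-cycle RMS accuracy.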
• Aggregation windows – A power quality instrument compresses acquired data at specified periods which are called aggregation windows. A Class A instrument must provide data in the following aggregation windows:
- 10/12 cycle (200ms) at 50/60Hz, the interval time varies with actual frequency
- 150/180 cycles (3s) at 50/60Hz, the interval time varies with actual frequency
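Aggregation itself is defined as the square root of the arithmetic mean of the squared input values, which can be sketched briefly; the fifteen 200ms voltage readings below are invented.

```python
import math

# Combine fifteen 10-cycle (200 ms at 50 Hz) RMS voltage values into
# one 150-cycle (3 s) value, per the Class A aggregation rule:
# square root of the arithmetic mean of the squares.
def aggregate(values_200ms: list[float]) -> float:
    return math.sqrt(sum(v * v for v in values_200ms) / len(values_200ms))

readings = [230.1, 229.8, 230.4] * 5  # invented 200 ms values (volts)
print(f"3 s aggregate: {aggregate(readings):.1f} V")  # 3 s aggregate: 230.1 V
```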
Harmonics must be measured with 200ms intervals according to the new standard, IEC 61000-4-7 / 2002. The old standard allowed 320ms intervals which cannot be synchronised with the 200ms aggregation windows of other Class A measurements.
Using 200ms intervals allows harmonic calculations to be synchronous to all the other values like RMS, THD, and unbalance.
The harmonics FFT algorithm is specified exactly, so that all Class A instruments will arrive at the same harmonic magnitudes. An unconstrained FFT methodology allows for countless algorithm variations that can produce vastly different harmonic magnitudes. By standardising on 5Hz bins and summing the harmonics and inter-harmonics according to specific rules, Class A instruments will be consistent and comparable.
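As an illustration of the 5Hz bins, here is a sketch of harmonic subgrouping in the style of IEC 61000-4-7, in which the subgroup for order h combines the bin at h × 50Hz with its two immediate neighbours; the spectrum values are invented.

```python
import math

# Harmonic subgrouping sketch: a 200 ms window at 50 Hz gives 5 Hz
# spectral bins; the subgroup for harmonic order h is the RMS sum of
# the bin at h*50 Hz and the bins 5 Hz either side.
def harmonic_subgroup(spectrum: dict[int, float], order: int,
                      fundamental_hz: int = 50) -> float:
    centre = order * fundamental_hz
    bins = (centre - 5, centre, centre + 5)
    return math.sqrt(sum(spectrum.get(f, 0.0) ** 2 for f in bins))

# Invented spectrum: energy spread around the 5th harmonic (250 Hz).
spectrum = {245: 0.3, 250: 4.0, 255: 0.3}
print(f"5th harmonic subgroup: {harmonic_subgroup(spectrum, 5):.2f}")
```

Because every Class A instrument sums the same bins by the same rule, two instruments measuring the same supply report the same harmonic magnitudes.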
• External time synchronisation is required to achieve accurate timestamps, enabling accurate correlation of data between different instruments. Accuracy is specified as ±20ms for 50Hz and ±16.7ms for 60Hz instruments.
• 10 min interval sync to clock
• 2 h interval sync to clock.
Latest product developments
There have been a number of significant introductions to the market in the past 12 months of power quality analysers offering compliance with IEC 61000-4-30 Class A. These new products include both handheld devices and those designed to be left in a fixed location for a period set by the user; they will log a large number of parameters at user-chosen time intervals for later analysis on a PC. There is thus a choice of products, offering different capabilities, from which a technician can choose the most appropriate tool for the job.
These new tools are designed for ease-of-use to uncover intermittent and hard-to-find power quality issues. Suitable handheld analysers will provide on-screen display of trends and captured events even while background recording continues. Some can be used to analyse disturbances, to validate incoming power compliance, for capacity verification before adding loads, and for energy and power quality assessment before and after improvements. The best tools provide powerful reporting software to enable rapid assessment of the quality of power at the service entrance, a substation or at the load according to EN50160 standards. The software can quickly analyse trends, create statistical summaries and generate detailed graphs and tables.