Features

With the growth of standby, prime and peaking power installations in highly populated areas, design engineers have focused their attention on understanding how generator set noise is propagated and controlled. The high cost of retrofitting a site for noise reduction makes it imperative to assess noise performance requirements early in the on-site power system design stage. By applying the principles outlined in this paper, power system designers and end users alike will be able to more easily control unwanted noise from their on-site power system.
*Part two of this feature will appear in Electrical Review May 09 and can be found at
www.electricalreview.co.uk

Like many types of rotating machinery, reciprocating engine-powered generator sets produce noise and vibration. Whether these generator sets run continuously in prime power applications or only occasionally in standby applications, their operating sound levels often must be reduced to comply with local, state or federal ordinances. In North America, maximum permitted overall noise levels range from 45 dB(A) to 72 dB(A), depending on location and zoning. In fact, recently, some states and communities have begun to specify property line noise restrictions using octave band frequencies to reduce the amount of low-frequency noise that reaches community neighborhoods. Since untreated generator set noise levels can approach 100 dB(A) or more, both the location of the generator set and noise mitigation take on great importance.

In general, two forms of regulations affect the volume of noise to which individuals or the public may be exposed: state or municipal noise ordinances and Occupational Safety and Health Administration (OSHA) federal safety regulations. The former address noise that may migrate beyond property lines and disturb the public but that is seldom sufficiently loud to constitute a safety hazard. The latter address standards for noise exposure in the workplace to protect the health of workers. OSHA regulations normally apply only to workers who may be exposed to generator set noise above 80 dB(A) for any appreciable time. Workers can limit exposure by wearing proper hearing protection when working around operating generator sets. Europe and Japan, as well as numerous other countries, have also set standards to control noise in the workplace and in the environment at large.

Defining noise

Sound is what the human ear hears; noise is simply unwanted sound. Sound is produced by vibrating objects and reaches the listener's ear as pressure waves in the air or other media. Sound is technically a variation in pressure in the region adjacent to the ear. When the amount of sound becomes uncomfortable or annoying, it means that the variations in air pressure near the ear have reached too high an amplitude. The human ear has such a wide dynamic range that the decibel (dB) scale was devised to express sound levels. The dB scale is logarithmic because the ratio between the softest sound the ear can detect and the loudest sound it can experience without damage is roughly a million to one (1:10^6). By using a base-10 logarithmic scale, the whole range of human hearing can be described by a more convenient number that ranges from 0 dB (the threshold of normal hearing) to 140 dB (the threshold of pain). There are two dB scales: A and L.

- The dB(L) unit is a linear scale that treats all audible frequencies as having equal value. However, the human ear does not experience all sound frequencies as equally loud. The ear is particularly sensitive to frequencies in the range of 1,000 to 4,000 Hertz (cycles per second), and not as sensitive to sounds in the lower or higher frequencies.
- Therefore, the "A-weighting filter," which is an approximation of loudness, is used to correct the sound pressure levels to more accurately reflect what the human ear perceives. This frequency-weighting results in the dB(A) scale, which was adopted by OSHA in 1972 as the official regulated sound level descriptor. Figure 1 shows typical noise levels associated with various surroundings and noise sources.
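The logarithmic scale described above can be illustrated with a short calculation. As a hedged sketch (the 20 micropascal reference pressure and the 20 log10 formula are standard acoustics conventions, not taken from this article):

```python
import math

# Reference pressure: 20 micropascals, the conventional threshold of human hearing.
P_REF = 20e-6  # pascals

def sound_pressure_level(pressure_pa: float) -> float:
    """Return the sound pressure level in dB for a given RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

# A million-to-one pressure ratio spans 120 dB, consistent with the
# wide dynamic range of human hearing described above.
print(sound_pressure_level(20e-6))  # threshold of hearing: 0 dB
print(sound_pressure_level(20.0))   # a million times the reference: 120 dB
```

This is why a logarithmic unit is convenient: each factor of ten in pressure adds a fixed 20 dB rather than multiplying the number.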

Sources of generator set noise
Generator set noise is produced by six major sources (see Figure 2):
- Engine noise - This is mainly caused by mechanical and combustion forces and typically ranges from 100 dB(A) to 121 dB(A), measured at one meter, depending on the size of the engine.
- Cooling fan noise - This results from the sound of air being moved at high speed across the engine and through the radiator. Its level ranges from 100 dB(A) to 105 dB(A) at one meter.
- Alternator noise - This is caused by cooling air and brush friction and ranges from approximately 80 dB(A) to 90 dB(A) at one meter.
- Induction noise - This is caused by fluctuations in current in the alternator windings that give rise to mechanical noise that ranges from 80 dB(A) to 90 dB(A) at one meter.
- Engine exhaust - Without an exhaust silencer, this ranges from 120 dB(A) to 130 dB(A) or more and is usually reduced by a minimum of 15 dB(A) with a standard silencer.
- Structural/mechanical noise - This is caused by mechanical vibration of various structural parts and components that is radiated as sound.

Measuring noise
Before you can begin to determine what mitigation might be required, you have to collect accurate sound measurements of both the existing ambient noise and the noise contributed by the generator set. Accurate and meaningful generator set sound-level data should be measured in a "free field environment." A free field, as distinguished from a "reverberant field," is a sound field in which there are negligible effects from sound being reflected from obstacles or boundaries. Noise measurements should be made using a sound level meter and, at a minimum, an octave band filter set to allow for more detailed analysis by acoustical consultants.

When measuring sound levels from a distance of 7 meters, microphones are placed in a circular array with measurement locations at 45-degree increments around the generator set. The measurement array is 7 meters from an imaginary parallelepiped that just encloses the generator set, which is usually defined by the footprint dimensions of the skid base or chassis.

When measuring sound power levels for European applications, a parallelepiped microphone array is typically used, as defined in International Standards Organization standard ISO 3744.

Sound performance data for generator sets from Cummins Power Generation Inc. are available on the company's design software CD (called "Power Suite").
Sound performance data is also available in the Power Suite Library on the company's Web site: www.cumminspower.com.

Initial noise measurements are usually made in eight octave bands from 63 Hertz to 8,000 Hertz, although the highest sound power generated is typically in the range of 1,000 Hertz to 4,000 Hertz - the range of sound to which the human ear is most sensitive.
While measurements are taken across a spectrum of frequencies, the logarithmic sum of all the frequencies is the most important reading. However, when the overall sound level exceeds the allowable level for a project, frequency band data is used to determine what design changes are necessary to lower the overall sound level to comply with requirements.

Totaling all the sources of noise
The total noise level from a generator set is the sum of all the individual sources, regardless of frequency. However, because the dB(A) scale is logarithmic, the individual dB(A) readings cannot be added or subtracted in the usual arithmetic way. For example, if one noise source produces 90 dB(A) and a second noise source also produces 90 dB(A), the total amount of noise produced is 93 dB(A) - not 180 dB(A). An increase of 3 dB(A) represents a doubling of the sound power; yet, this increase is barely perceptible to the human ear.

Figure 3 illustrates how to add decibels based on the numerical difference between two noise levels. As in the example above, if there is no difference between noise source 1 and noise source 2, their combined dB(A) measurement would only increase by 3 dB(A) - from 90 dB(A) to 93 dB(A). If source 1 was 100 dB(A) and source 2 was 95 dB(A), their combined dB(A) measurement would be 101 dB(A).
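The decibel addition described above can be sketched in a few lines. This is a minimal illustration of the standard log-sum method (the function name and the example levels beyond those quoted in the text are my own):

```python
import math

def combine_db(*levels: float) -> float:
    """Combine individual dB(A) levels by summing their underlying sound powers,
    then converting the total back to decibels."""
    total_power = sum(10 ** (level / 10) for level in levels)
    return 10 * math.log10(total_power)

# Two 90 dB(A) sources combine to roughly 93 dB(A), not 180 dB(A).
print(round(combine_db(90, 90)))    # 93
# A 100 dB(A) source plus a 95 dB(A) source gives roughly 101 dB(A).
print(round(combine_db(100, 95)))   # 101
```

Note how a source 5 dB(A) quieter than another adds only about 1 dB(A) to the total, which is why the loudest source usually dominates mitigation work.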

Noise laws and regulations
In North America, state and local codes establish maximum noise levels that are permitted at the property line. Figure 4 shows some representative outdoor noise level regulations. Compliance with these noise regulations requires an understanding of the existing ambient noise level at the property line without the generator set running and what the resultant noise level will ultimately be with the generator set running at full load.

In Europe, regulation of generator noise is governed by Directive 2000/14/EC (Stage II) legislation that has been in place since 2006. For generators with a prime power rating of less than 400 kW, the maximum permitted sound power level is calculated using the formula:
LWA = 95 + log10(Pel) dB(A)
(where Pel is the generator's prime power rating in kW)
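As a hedged sketch of the formula above (assuming, as the 400 kW threshold suggests, that the prime rating Pel is expressed in kW and that the logarithm is base 10; the 100 kW example is my own):

```python
import math

def max_sound_power_level(prime_rating_kw: float) -> float:
    """Maximum permitted sound power level in dB(A) under the formula quoted
    above, applicable to generators rated below 400 kW prime."""
    return 95 + math.log10(prime_rating_kw)

# A hypothetical 100 kW generator: 95 + log10(100) = 97.0 dB(A)
print(max_sound_power_level(100))  # 97.0
```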

Generators of 400 kW prime rating and above are only required to be labeled with the LWA figure (European measurement of "acoustic power level") calculated from the manufacturer's developmental test results. For the European market, most generators from 11 kVA to 550 kVA are packaged in standard enclosures that make the units compliant with most legislation. Standard enclosures typically reduce radiated noise by a minimum of 10 dB(A).

Each month, Electrical Review's resident grumpy old man, writer and industry commentator John Houston, explores a hot topic of the day and lets us know his views in no uncertain terms

Renewable energy is a perennial topic, but since opinion on green energy issues changes, both figuratively and literally, with the wind, it is always worthy of comment.

Output from wind farms is notoriously inefficient, many technical impediments remain to be properly resolved, and while onshore farms are relatively quick to build, the more effective offshore plant is expensive and much more time-consuming to install. These are a few reasons why certain projects are currently on hold - notably the £1.5bn London Array wind farm (originally proposed to have 270 turbines spread over 245 sq km).

However, near my own home the biggest wind farm in the South East is close to completion, so I thought I would focus on that project as a microcosm for UK wind power issues.

At a price of just over £60m, the 26 turbines at npower's Little Cheyne Court wind farm in Kent's Romney Marsh could be operational by October this year, providing a claimed 59.8MW capacity. This output is stated by npower on its website to be the equivalent of providing power to three quarters of Shepway District Council's households.

The prima facie evidence for the efficacy of the Little Cheyne Court wind farm is strong. At an investment of just £800 per capita, green power can provide for an albeit sparsely populated area of about 30,000 households. Extrapolate that for the rest of Britain and it appears that domestic green power could be created for the entire country at a cost of about £48 billion.

However, if it takes 26 large modern turbines to cater for 75,000 people, that also means the country as a whole would require some 21,000 turbines - a figure that leads us to the most emotive of objections to wind farm technology, for while most agree green energy is a good idea, it is equally true that most don't want it in their backyards.
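The back-of-envelope extrapolation above can be checked in a few lines. A hedged sketch: the UK population figure of roughly 60 million is my own assumption, not stated in the column.

```python
# Little Cheyne Court figures quoted in the column.
turbines = 26
people_served = 75_000
cost_per_capita_gbp = 800

# Assumed UK population of roughly 60 million (my assumption, not from the column).
uk_population = 60_000_000

turbines_needed = uk_population / people_served * turbines
total_cost_gbp = uk_population * cost_per_capita_gbp

print(round(turbines_needed, -3))  # roughly 21,000 turbines
print(total_cost_gbp / 1e9)        # roughly £48bn
```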

During the consultation process on Little Cheyne Court, a small but vociferous lobby opposed its construction. The usual ecology luminaries lent credence to the negative arguments, including, predictably, Professor David Bellamy together with a variety of other naturalists and guardians of the countryside. Outraged ornithologist and BBC TV wildlife broadcaster Chris Packham went as far as to state: "I cannot see the justification for the probable execution of large numbers of Bewick Swans."

I too am a countryside lover, or I wouldn't suffer the lack of a decent Thai restaurant where I live. But, one of the attractions for me in moving from London's cosmopolitan climes to become a Romney Marshian (sic) was that two thirds of every species of animal, fish, bird, insect, fauna and flora could be found on the Marsh. But guess what? Five minutes drive south of Little Cheyne Court in the very heartland of South Kent's prime conservation area benignly sits Dungeness B nuclear power station!

Now, were the objectors to be objective, we might be better convinced. Far from orchestrating genocidal mass executions of hapless Bewick Swans, npower consumed a mere 1.5 square miles of land. While for many the 100m-high turbines are an eyesore, much local opinion views them as a new landmark - the aesthetic is, as always, highly subjective.
However, had the arguments been about the concrete foundations for the turbines, at about 30m deep, requiring over 46,000 lorry loads of spoil to be removed, there would have been a further environmental point to consider. That over 15,000m3 of concrete was used in construction, and that the concrete manufacturing industry is one of the greatest emitters of industrial CO2 on the planet, raises questions. The 6.5 miles of totally new roads built on virgin soil needed over 50,000 tonnes of roadstone to be transported by lorry to the site, and whether the excavation of this material caused environmental damage elsewhere is also an argument in which facts can be discussed.

Fascinatingly, where in the recent past planning consent for wind farms was scarce and slow to obtain, there are now hundreds of wind farms being planned, adding to the 198 onshore and offshore farms - a total of 2,389 turbines - in operation by the end of 2008. Apart from Little Cheyne Court, there are another 40 farms currently under construction.

The point is that while emotive points raise hackles among the public, technical and quantifiable facts carry the weight of any argument. That's the only way the wind will be taken from the sails of green energy progress.

Lightning can cause significant damage to sensitive, mission-critical systems within a building if lightning protection measures are not adequate. Paul Considine of Wieland Electric explains how the risk can be aligned to the cost of protection

With the increasing use of, and dependence on, technology in just about every business, protecting sensitive equipment is becoming ever more important. In a manufacturing or logistic operation, for example, disruption to processing or handling systems can have a catastrophic effect on productivity. Similarly, in the financial sector, server rooms are mission-critical and any failure can lead to losses of millions of pounds every hour.

Interference-free operation, therefore, is crucial and the integrity of these systems has to be maintained even under relatively extreme circumstances such as thunderstorms. And while safety devices such as fuses will protect against excess current, they are ineffective against the high voltage transients and short-duration spikes that lightning can generate on power supply lines.

This, of course, is all fairly obvious and forms the basis of the lightning protection systems that are incorporated into many buildings. However, despite the clear dangers and the critical nature of many systems, the high volume of insurance claims for lightning damage indicates that many building operators are failing to ensure appropriate lightning measures are taken.

This situation is becoming of increasing concern because climatologists are predicting that climate change will bring about more extreme weather conditions, with an anticipated increase in the frequency and intensity of thunder storms in the UK and other areas of northern Europe.

It's fairly reasonable to assume that the key reason for the lack of protection on many buildings is that businesses often seek to achieve a balance between the cost of installing lightning protection and the risk of suffering lightning damage. There may also be a feeling that any damage will be covered by insurance. However, while insurance may cover the tangible damage it can't restore the reputation of a company that has let its customers down by not safeguarding its systems adequately.

A sensible compromise is to adopt a zone concept for lightning protection - as described in IEC 62305-4 (DIN EN 62305-4, DIN 0185-305-4). This enables planners, builders and owners to align the protective measures they adopt with the risk levels to the business of damage occurring. In this way, all relevant devices, plants and systems are afforded a level of protection commensurate with their importance to the business.

For all of these reasons electrical engineers need to be aware of the options open to them and the requirements of the relevant standards. To that end, lightning strikes can be divided into two key types - direct strikes and remote strikes.

Direct or close-up lightning strikes are strikes into the lightning protection system of a building, in close proximity to it, or into the electrically conductive systems implemented in the building (e.g. low-voltage supply, telecommunications, control lines).

Remote lightning strikes are strikes that occur far away from the object to be protected, strikes into the medium-voltage overhead system or in close proximity to it, or lightning discharges from cloud to cloud.

In addition to a lightning protection system in the building, additional measures for overvoltage protection of electrical and electronic systems are required in order to safeguard the continuous availability of complex power engineering and IT systems even in the case of a direct lightning strike. It is important to consider all the causes of overvoltages.

In terms of lightning protection, BS EN 62305:2006 (Protection against lightning) advises provision of a conventional or Faraday cage lightning protection system, and these systems can be divided into external and internal types.

An external lightning protection system will typically comprise an air termination system, down conductors and an earth termination system. Clearly, all of these elements need to be connected effectively so that if lightning strikes the building the current discharge is conveyed safely away and damage to the building is minimised. This is achieved by ensuring that connection components comply with BS EN 50164.

An internal lightning protection system is designed to eliminate the risk of dangerous sparks inside the building or structure following a lightning strike. Such sparking could be caused by current flowing in the external lightning protection scheme and flashing over to metallic elements inside the building, or by current flowing through any conductive elements on the outside of the building.

The danger of sparking, therefore, is minimised by creating a sufficient distance between metallic parts or by carrying out appropriate equipotential bonding measures. Equipotential bonding will ensure that no metallic parts are at different voltage potentials, so there is no risk of sparking between them. This can be realised either through bonding between conductive elements or the use of surge protection devices. The latter is particularly appropriate where direct connection would not be appropriate, such as between power and communication lines.

As noted above, the commercial reality is that these measures need to be introduced in relation to the level of risk and the criticality of the processes or systems to the business. This is the basis of the zone concept for lightning protection, as it effectively divides a building into different risk zones. The zones for lightning protection are defined in Table 1.

In our experience, this zoned approach strikes the right balance between capital outlay and operational risk, and proves of great benefit to building operators. It is also a very effective way for electrical designers to add value for their customers.

With just three core product groups you could be forgiven for believing innovation and cable management don't go hand-in-hand but, according to Nigel Leaver of Legrand, you'd be wrong. He talks to Electrical Review about why cable management is one of the most dynamic and innovative elements of the electrical product market

I suppose it's fair to say that someone looking at the cable management market from the outside could view it as a relatively stagnant arena, on the basis that its core product groups - overhead, perimeter and floor systems - were developed years ago and have pretty much dealt with everything thrown at them ever since.

And, to an extent, they are right - but only on one point. The cable management market does have three core product groups that have dealt with everything ever thrown at them. But, stagnant it most certainly is not.

In fact, the cable management market is one of, if not the, most competitive in the electrical industry and as such has witnessed a truly impressive array of innovative developments over the years.

The reason for this is easily explained. In order to remain competitive the various manufacturers have to remain on a continuous path of product development and reinvention. The aim being to produce a product or system with benefits that outweigh anything the competition can offer. And although this outcome is likely to remain forever unachievable, it does mean that manufacturers are continually delivering enhanced solutions that incorporate the most sought after features and benefits.

As a result, the vast majority of innovation has been based around the goal of delivering cable management products that are quicker, easier and, subsequently, cheaper to install and maintain. This, though, is where the similarities end. These innovations have covered virtually every aspect of the market, from brand new products through to the fixtures and fittings used to secure them.

Fantastic Plastic
When it was first introduced, perimeter trunking was made exclusively from metal, but as time moved on many manufacturers began to realise that plastic (PVC-U) products could deliver a workable, and potentially better, alternative.

After an extensive and exhaustive process, it was proven to be easier, quicker and cheaper to install, while also having the additional benefit of allowing for a flush finish on both flat and uneven walls.

What followed was a sea change from steel and aluminium to plastic trunking, but, as so often is the case, this shift wasn't accepted by everyone. In fact, the level of debate surrounding this development has been such that arguments still rage as to which provides the better solution.

For example, some will argue that because plastic trunking is quicker and easier to install it will always be the best option, but others will point to the fact that aluminium trunking is longer lasting, stronger and provides good EMC protection, so therefore has to be the number one choice. Add to that the fact that steel is tamper and vandal resistant and that both metal options can be powder coated in a wide range of colours to complement the design aspects of any environment, and it's easy to see why the argument continues unabated.

In this situation, as with all the best long-running arguments, there really is no right or wrong. The only clear-cut answer is that the right solution can only really be chosen once the location and conditions of a specific installation are known.

Tray time
Another great example of innovation in action was the development of steel wire cable tray as an alternative to traditional perforated tray. In fact, such has been its success many now suggest it can be used instead of perforated tray in all scenarios.

Again, this is very much open to debate, because it is very much the installation that dictates the required solution. Take for example an installation in a confined space with numerous twists, turns and obstacles to cope with. In this situation wire mesh tray is the best option due to the fact it is manufactured with the aim of being easily configured on site. Of course, installations do vary and if faced with one that requires numerous straight lengths then perforated tray is still the best option. The reason being that it's generally stronger and so only needs supports every 2 to 2.5 metres, rather than the 1 to 1.5 metre intervals that wire mesh requires. Therefore, with fewer cantilever arms or trapeze hangers to fit, the installation time, and subsequently cost, is significantly reduced.
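The support-spacing trade-off above is simple arithmetic. As an illustrative sketch (the run length, exact spacings and helper function are my own examples, chosen within the ranges quoted in the text):

```python
import math

def supports_needed(run_length_m: float, max_spacing_m: float) -> int:
    """Number of supports for a straight tray run, assuming one at each end
    and intermediate supports no further apart than max_spacing_m."""
    return math.ceil(run_length_m / max_spacing_m) + 1

run = 50.0  # a hypothetical 50 m straight run

# Perforated tray supported every 2.5 m needs roughly half the fixings
# of wire mesh tray supported every 1.25 m.
print(supports_needed(run, 2.5))   # 21
print(supports_needed(run, 1.25))  # 41
```

With fewer cantilever arms or trapeze hangers to fit, the installation time, and therefore cost, falls accordingly.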

Adding to the tray mix has been the recent development and introduction of PVC-U cable tray - a product that could well prove to be a viable alternative to metal because of its excellent corrosion resistance in aggressive environments and the fact that the cost of raw material for metal products continues to rise.

A quicker fit
Not all the innovations that have made a major impact on the market have involved entire product systems. In fact, some of the most notable have involved the development of new fixtures and fittings that have allowed installation to become quicker and easier without wholesale changes taking place.

Developments such as the quick bolt coupler, which enabled installers to secure perforated tray using just one hand, and click-fit systems on busbar ranges, which simply required connectors to be inserted and a cover slid over the joint to create a secure coupling that could only be released by inserting a screwdriver into the locking point, have helped installers shave valuable time off installations.

Something new
A final element to be taken into account when considering innovation is that inspired by changing market demands. This has recently been seen in the commercial market, where the demand has grown for cost effective, environmentally friendly solutions that provide maximum design freedom.

The reason for this new demand is that simply offering a set floorplan, which very much dictated the use of the space, went out of fashion. Businesses today want space they can then use in their own way, and because so many now operate on a short-term lease basis, a growing number of property owners are coming to the conclusion that it's imperative they can return their property to its original state as easily as possible in order to let to new clients as quickly as possible.

The cable management industry's response to this has been swift and impressive, with raised floor cable management systems being developed with the aim of delivering ultimate flexibility in terms of installation and reconfiguration.

In a nutshell, these systems house cables under a raised floor that is as easy to lay as it is to dismantle, meaning access to cabling is easier than ever before. As a result, systems can be laid, altered and even removed with minimum fuss.

Demonstrating the full scope of this innovation is the fact that the industry has also tackled another burning issue during its development - sustainability. Certain systems have been manufactured using fully recyclable steel and recycled polypropylene in the construction of the floor tiles and supports respectively, while others require neither glue nor screws during installation. All of which combines to make such systems the greenest yet.

While this level of product development and innovation ensures the cable management market is amongst the most buoyant in the electrical industry, it does produce one issue that could well be construed as a negative, and that is the potential for confusion. The reason being that with so many different solutions available, the choice of which is the best can sometimes become blurred. And with some companies intent on pushing their solution, and their solution alone, it becomes even harder for specifiers and contractors to be sure who to trust when it comes to determining the benefits of the various different solutions. In such situations, my advice would be to contact a company that manufactures a whole spectrum of cable management products so that you know the solution you're getting is the best available.

The project to construct the new Queen Elizabeth Hospital, Birmingham - one of the new super hospitals - is the largest of its kind in the UK outside of the Olympics development. Here, the fire detection and alarm systems are state-of-the-art intelligent networks which pave the way for the next generation of designs.

The site of the £580m Queen Elizabeth Hospital for University Hospitals Birmingham NHS Foundation Trust has become something of a builders' village. The Private Finance Initiative for Consort Healthcare's state-of-the-art teaching hospital, by Birmingham New Hospital Joint Venture North, is a testament to new ways of working. Balfour Beatty is working with Haden Young to construct the first new hospital for the city in 70 years.

The cabling for all the fire detection, voice evacuation and fire alarm systems has been supplied by AEI Cables, and installation is being managed by Protec Fire Detection Plc, which specialises in the design and installation of these systems.

The Firetec Standard and Firetec Enhanced cables have been adapted to configure a network for the Protec Algo-Tec System which controls smoke damper systems and works as part of the sophisticated overall fire detection system.

The integration of the smoke damper system into the fire alarm system means that, via the graphics on a PC, operators can see all areas and the dampers can be closed to prevent any smoke, and subsequent toxicity, from spreading from any fire, enabling people in other areas to leave safely.

A voice evacuation public address system has also been linked to the fire detection system to make this a truly integrated network.

Robert Cash, Protec Fire Detection plc project manager, said: "We have focused on build quality, the products and the systems being installed. As a new build, we have been able to design and install the ideal network systems to make this a future-proof landmark development.

"The Joint Venture concept really is just that and each team is focused within the wider overall build to achieve their own targets as a team with no compromise on quality at any stage. We are all proud to be working on the project and look forward to the day when we can say we have successfully delivered what we said we would deliver on time and within budget."

The cables provided by AEI Cables for this prestige project are Firetec Standard and Firetec Enhanced cabling specified to meet all requirements of British Standards.

They provide continuity of power when systems need to operate to ensure alarm systems function properly and can be used by fire safety officers managing a safe evacuation of people.

Jim Duffy, chief executive of AEI Cables, said: "This is a specialist installation and we have drawn on our wide experience in this field from similar partnership arrangements with international clients to ensure we are making informed decisions throughout the process.

"We have been working closely with Protec about the most appropriate cables for this particular system and the environments in which they are being applied.

"This is a hospital and there will be large numbers of people moving around in unfamiliar environments so there can be no compromise on the quality and safety of the products installed."

Logistically, the West Bromwich branch of Edmundson's Electrical wholesalers provided the day-to-day supply of cabling to ensure a seamless provision to meet all requirements.

Edmundson's manager Eddie Featherstone said: "In this instance we were able to combine our presence on the ground locally with our specialist cable management experience in this field."

Len Dickins, a project manager for the BNHJV, pointed out the merits of the modular building design, with pipework, cable trays, and mechanical and electrical containment being constructed off site as modules, meaning no welding is necessary on site and a workshop quality is maintained throughout.

The toilet pods are constructed off site at the HY workshop and dropped into position for final connection. All of the ward walls are pre-fabricated off site with all the necessary components including gas, electrical outlets and bedhead lighting.

In all, there will be 3,800 car parking spaces, up to 25% of heat energy will be recycled, there will be 5,000 rooms, 10,000 doors and 182,407 units of equipment ranging from soap dispensers to high-tech imaging equipment.

Len Dickins added:  "This is partnership working at its best. The development to this stage proves that and we are well on schedule to meet our targets but nobody will take anything for granted until we are complete and we see this hospital doing what it has been built for which is looking after patients and delivering the highest quality care possible."

There are a number of drivers affecting energy performance in modern buildings, some of which impact upon the work of the electrical design engineer, challenging him to produce cost-effective solutions but also presenting new business opportunities. Mike Lawrence, product line team leader - commercial assemblies at Eaton, explains

Principal among the drivers affecting energy performance is the Building Regulations Part L2: Conservation of fuel and power in buildings other than dwellings. This calls for sub-metering so that at least 90% of the estimated annual consumption can be attributed to specific end-use categories.

Some energy metering systems offered by manufacturers are complex and costly. However, the solution does not necessarily have to be so complex. Sometimes it is possible to install a relatively simple, cost-effective system that is future-proofed to allow more advanced automatic metering and trending (AM&T) systems to be introduced later.

Energy Performance Certificates
Since October 2008 an Energy Performance Certificate (EPC) has been required by law for any new building or any building sold or rented. First introduced for domestic premises, the requirement was extended in April 2008 to cover large commercial properties. Then in October it became applicable to all buildings, or parts of buildings, when they are "built, sold or rented". In addition, since October a Display Energy Certificate (DEC) has been required for prominent display in larger public buildings.

The EPC and DEC are among a number of interrelated requirements of the European Energy Performance of Buildings Directive (EPBD).

Energy Performance Certificates must be issued by accredited energy assessors. They will give the property an energy efficiency rating on a scale of A to G, similar to the ratings used for domestic appliances. The assessors will also give recommendations for improvement.
While the requirements for Energy Performance Certificates do not impose any direct requirements for metering, a carefully-planned sub-metering strategy will enable building owners or occupiers to monitor energy usage, identify significant trends and assess the effectiveness of measures taken to implement the energy assessors' recommendations.

Building Regulations L2
The UK Building Regulations Part L2 was also driven by the Energy Performance of Buildings Directive. It is published as two documents, L2A covering new buildings and L2B covering existing buildings.
The key requirements affecting sub-metering are:-
- Energy meters should be installed so that at least 90% of the estimated annual energy consumption of each fuel (electricity, gas, LPG, oil etc.) can be assigned to various end-use categories such as lighting, heating, ventilation, pumps and fans.
- Reasonable provision of energy meters in existing buildings can be achieved by following the recommendations of Cibse Technical Memorandum TM39: Building Energy Metering (a guide to energy sub-metering in non-domestic buildings).
- Reasonable provision of energy meters would be to install sub-meters in any building greater than 500m2.   
- In buildings with a total useful floor area greater than 1000m2, facilities should be provided for automatic meter reading and data collection.
The objective is to develop a sub-metering strategy so that users can identify areas where improvements can be introduced to achieve energy savings of 5-10% or better.
TM39 is an updated version of Cibse General Information Leaflet 65 (GIL65): Metering energy use in new non-domestic buildings, which can be downloaded free of charge from www.cibse.org/pdfs/GIL065.pdf
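The 90% rule can be sanity-checked with simple arithmetic. A minimal sketch in Python, using illustrative consumption figures (not taken from any real building):

```python
def coverage(total_kwh, assigned_kwh_by_category):
    """Fraction of total annual consumption assigned to end-use categories."""
    return sum(assigned_kwh_by_category.values()) / total_kwh

# Hypothetical building: estimated annual consumption and the portion
# assigned to metered end-use categories (all figures illustrative).
annual_total = 250_000  # kWh
assigned = {
    "lighting": 90_000,
    "heating": 70_000,
    "ventilation": 40_000,
    "pumps_and_fans": 30_000,
}

frac = coverage(annual_total, assigned)
print(f"{frac:.0%} of consumption assigned")  # 92%
print("Part L2 90% target met" if frac >= 0.90 else "More sub-metering needed")
```

If the assigned categories fall short of 90%, further sub-meters (or grouped metering at the distribution board) are needed to close the gap.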

The L2 requirements apply to premises with a floor area greater than 500m2 and existing buildings where "consequential improvements", normally involving Building Regulations approval, are being made. This includes separate buildings on multi-building sites.
Specific recommendations are made for plant and equipment for which separate metering should be provided as follows:-
- Motor control centres feeding pumps and fan loads greater than 10kW
- Boiler installations greater than 50kW
- Chiller installations greater than 20kW
- Electric humidifiers greater than 10kW
- Final electricity distribution boards greater than 50kW
This last recommendation is especially pertinent because the majority of distribution boards are rated higher than 50kW.

Metering solutions
There are various approaches to sub-metering. In some cases all metering is provided at the main switchboard. This has the advantage that meters are all in the same location so manual collection of data is easy. However, on some sites MCCB panelboards provide sub-distribution to final distribution boards and to loads such as lifts, ventilation or air-conditioning plant. These will require sub-metering at the panelboard.

Final distribution boards frequently supply more than one type of load (typically lighting and small power). If these loads are metered separately back at the main switchboard or panelboard, it will require separate feeders and probably two distribution boards instead of one. If, however, metering of the grouped loads can be carried out at the distribution board it is possible to use a single feeder.
Different solutions are available at the final distribution board:-
- Custom-built boards incorporating metering. This is generally an expensive solution.
- Separate meter packs installed below, or alongside, standard distribution boards offer a more cost-effective solution.
- Distribution boards with integral metering are now available as standard products.

In each case there are options for a single meter to monitor the entire board, or for split metering to provide separate measurement of grouped lighting and small power loads. These options are available with both Type A (single-phase) and Type B (three-phase) boards. However, it should be noted that in some split metering applications one meter monitors the entire board, so the consumption of one group of MCB-ways must be calculated externally (by subtracting the separately metered group from the board total).
It is recommended that meters should always include remote reading capabilities. As a minimum this should be a pulsed output offering remote measurement of kWh. A better solution is a Modbus design that provides information via an RS485 connection. With Modbus RS485 communication, information is read directly from the meter and some data registers, such as peak demand, can be re-set remotely. If the meter is connected to an effective energy management system (EMS/BMS), it can deliver a more informative energy monitoring capability.

Specifiers and installers do not need to go to the expense of custom-built distribution boards and panelboards to ensure compliance with Building Regulations Part L2. A range of metering solutions is now available for Type A and Type B boards, including add-on meter packs for use with standard distribution boards and distribution boards with integral metering capabilities. The design of these units minimises the amount of on-site work for the contractor, and the standardised design allows boards to be sourced through the normal electrical wholesaler network.
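Both remote-reading approaches, pulsed outputs and Modbus registers, reduce to simple arithmetic at the data-collection end. A sketch in Python; the pulse weight, register layout and scaling factor are illustrative assumptions and vary by meter, so always check the manufacturer's data sheet:

```python
# Two common ways a meter exposes kWh remotely, sketched in Python.
# Pulse weight, register order and scaling are meter-specific: the
# values below are illustrative, not from any particular product.

PULSE_WEIGHT_KWH = 0.1  # assumed: one pulse per 0.1 kWh

def kwh_from_pulses(pulse_count: int) -> float:
    """Energy represented by a count of pulses from a pulsed-output meter."""
    return pulse_count * PULSE_WEIGHT_KWH

def kwh_from_registers(hi: int, lo: int, scale: float = 0.01) -> float:
    """Decode a 32-bit energy value sent as two 16-bit Modbus registers
    (high word first), then apply the meter's scaling factor."""
    raw = (hi << 16) | lo
    return raw * scale

print(kwh_from_pulses(1234))               # 123.4 kWh
print(kwh_from_registers(0x0001, 0x86A0))  # 0x186A0 = 100000 -> 1000.0 kWh
```

An energy management system polls these values at intervals and stores them, which is what makes the trending and peak-demand analysis described above possible.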

Where greater sophistication is required, ethernet connectivity can be used to integrate the sub-metering into a comprehensive energy management architecture for effective monitoring, control and management of the complete energy infrastructure in large sites. Eaton's Power Xpert software allows energy use to be monitored and trends identified so that systems can be optimised to reduce energy costs and achieve a more efficient system.
For further information see www.poweringbusinessworldwide.tv

Management of waste on site is becoming increasingly complex. Bryan Neill of Mercury Recycling explains why this is, and explores the special case of discharge light sources

As our understanding of sustainability evolves, much of the thinking now extends beyond simple matters of energy consumption to encompass other environmental issues. One of the most important of these is the management of waste, because all of the materials we use have embedded carbon, so wasting them makes a contribution to climate change.

In addition, we are running out of space to store waste - and a lot of the waste we generate contains environmentally harmful substances that shouldn't be consigned to landfill.

These concerns have not only fostered a greater awareness of waste, they have also spawned a raft of legislative measures to minimise waste. Furthermore, there is a great deal of pressure on companies to demonstrate corporate social responsibility through efficient management of issues such as sustainability. And when it comes to waste, the construction industry is one of the worst offenders.

In fact, according to government statistics, over 70 million tonnes of waste is produced by the construction industry every year. Some of it is necessary waste that can't be avoided but, amazingly, an average of 13% of all the materials delivered to site never get used - they are simply discarded.

All of these factors can impact on the electrical engineer; from the way that systems are designed through to the management of the project on site. For example, on a major project the main contractor will be operating a site waste management plan that involves everyone on site.

Also, there may be smaller projects where the electrical engineer is the principal contractor with responsibility for legislative compliance. So having an understanding of waste management requirements is important in avoiding non-compliance and possible prosecution.

Within the electrical services, lighting can be one of the most challenging areas to manage, because light fittings fall within the remit of the WEEE (Waste Electrical & Electronic Equipment) Directive, while discharge light sources are also classified as hazardous waste.
There are a number of situations where disposal of electrical equipment becomes an integral part of a project, ranging from a fit-out through to refurbishment and demolition projects. In all such cases, it's no longer acceptable to simply throw the old light fittings, control panels and any other electrical items in a skip. Under the WEEE Directive these need to be sent for recycling through an accredited waste stream. And if there are light fittings containing discharge light sources (e.g. fluorescent, metal halide, sodium) the lamps must be removed from the fittings and treated as hazardous waste. This clearly has implications for anyone involved in managing waste on site.

Disposing of WEEE
Many items of WEEE will be covered by manufacturers' take-back schemes so a key element of waste management will be to identify which accredited schemes can be used to dispose of waste. This may lead to separation of different types of waste on site, with provision for appropriate storage until they are collected. Very often, it makes sense to source an accredited waste contractor that can handle any type of WEEE and take these complexities off your hands.

As noted above, though, if the waste includes discharge lamps then it's important the waste contractor is licensed to handle hazardous waste as well. It is also important to separate the lamps from the fittings. Old luminaires are usually sent for shredding and the residual materials re-used in industry, but a single lamp can contaminate the whole batch so that it all has to be treated as hazardous waste. Apart from the additional treatment costs, this can lead to prosecution under environmental legislation.

The special case of lamps
The special status of discharge light sources is the result of their containing small amounts of mercury. Lamp manufacturers have made considerable progress in reducing the total amount of mercury in lamps and in using mercury amalgam rather than liquid mercury - particularly in fluorescent lamps. Nevertheless, a small amount is still required and, when the cumulative effect of the millions of such lamps disposed of each year is considered, it's clear that sloppy waste disposal has the potential to inflict significant damage on the environment.

The first piece of legislation to impact on disposal of discharge lamps came into force in July 2004, and significantly reduced the total number of hazardous landfill sites in the UK. At the same time, the cost of sending such waste to landfill sites tripled, so the pressure was on to find alternatives.

Then came the Hazardous Waste Regulations in England and Wales, introduced in July 2005, which classified discharge lamps and tubes as hazardous waste. As a result, these light sources then attracted a hazardous waste consignment fee when they were transported anywhere.

Up to this point it was still theoretically possible to send waste lamps to special landfill sites, albeit at a high cost. Since the WEEE Directive came into force, though, this has not been an option and all such waste has to be sent for recycling.

Again, the recycling of discharge lamps is considerably more complex than most other forms of WEEE. In fact, discharge lamps are among the most difficult types of waste to deal with because they are made up of many different materials.

For instance, the glass in many lamps is coated with a mixture of phosphors and this has to be stripped off before the glass can be re-used. These phosphors also contain mercury, which is distilled from the phosphor mix at very high temperatures (around 800°C) to reclaim pure liquid mercury. In the case of sodium lamps, the sodium is also reclaimed. Ferrous and non-ferrous metals are also separated from other components and sent for re-use.
Furthermore, the vacuum within the lamps means they implode when crushed, so the crushing procedure has to be contained within specially constructed machines. Highly flammable hydrogen gas is also released when the sodium from sodium lamps is exposed to water, so special precautions have to be taken both when processing and when storing these lamps.

Consequently, it's important to select a waste contractor that has the expertise to deal with such waste, and has invested in the advanced machinery and techniques to ensure that 100% of the lamp components are dealt with in compliance with the requirements of the legislation.

Managing on site
Clearly, then, on-site waste management is becoming more important. It is now necessary to establish disposal points for different types of waste and to ensure that all operatives on site are familiar with how the waste needs to be sorted.

A typical arrangement might include skips for general construction waste, another skip for non-hazardous WEEE items such as luminaires and a separate, secure storage area for hazardous waste such as lamps. The latter is very important because of the potential for contamination of the site if lamps are broken. For instance, if broken lamps were left lying around on site the mercury could be washed away by rain and contaminate the surrounding land. If there are water courses nearby, these could also be affected. And, of course, any associated major clean up will inevitably lead to delays in the construction schedule.
As a result, any waste lamps need to be stored in secure containers that will not allow such leakage. And, because the waste is hazardous, their location will also need to bear all necessary signage. A specialist waste contractor should be able to provide appropriate storage containers and signage - along with all of the necessary documentation to ensure a complete audit trail is in place.

On your own doorstep
While most Electrical Review readers will be most concerned with the issues of waste on site, it's important to bear in mind that the same considerations apply to disposing of waste from your own premises. Most local authorities now offer facilities for smaller volumes of WEEE but in the case of larger offices it will again make sense to employ the services of a specialist waste contractor.

Here it's important to point out another significant element of the Hazardous Waste regulations for building operators. This is that any site producing more than 200kg of any type of hazardous waste per annum has to register with the Environment Agency as a hazardous waste producer. Registered sites receive a site registration code and waste contractors are not allowed to collect waste from any site that does not have this code.
To put 200kg of waste into perspective, this constitutes around 500 fluorescent tubes, or around 15 CRT monitors - therefore many office buildings need to register as hazardous waste producers. And if you have several offices, it may be necessary to register each one separately.
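The arithmetic behind the 200kg figure can be sketched as follows; the per-item weights are illustrative assumptions consistent with the figures quoted above (roughly 0.4kg per fluorescent tube and about 13kg per CRT monitor):

```python
# Rough check against the 200 kg hazardous-waste registration threshold.
# Item weights are illustrative assumptions, not official figures.

THRESHOLD_KG = 200

ITEM_WEIGHT_KG = {"fluorescent_tube": 0.4, "crt_monitor": 13.0}

def must_register(annual_waste):
    """Return (over_threshold, total_kg) for a site's annual hazardous waste."""
    total = sum(ITEM_WEIGHT_KG[item] * qty for item, qty in annual_waste.items())
    return total > THRESHOLD_KG, total

print(must_register({"fluorescent_tube": 500}))                   # right on the limit
print(must_register({"fluorescent_tube": 450, "crt_monitor": 5})) # over the limit
```

Even a modest office relamping programme can cross the threshold, which is why many office buildings need to register.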

Disposing of potentially harmful waste is clearly important if we are to safeguard the environment and the health and safety of people who may come into contact with such waste. However, this can be a complex area where it makes sense to take advantage of specialist advice and expertise.

Following a review of the outdated standard IEC 60439, radical changes have been made and the new IEC 61439, governing the safety and performance of electrical panels, now better meets the low voltage assembly market's needs. Although the changes are fundamental it may take specifiers some time to adjust. Here Mark Waters from Schneider Electric explains the changes, why they have been made and how to meet the requirements of the new standard

Although some specifiers are worried by the changes, IEC 61439 has been introduced to enable panel and system builders in the UK to produce assemblies that meet essential quality standards. It also makes compliance unavoidable, bringing a welcome reassurance of quality within the industry.

Schneider Electric was a significant lobbyist in persuading the IEC to investigate revising the old IEC 60439 standard and was subsequently consulted on the new one, giving the company's experts a unique insight into the background, requirements and implications of the new standard.

Why is it needed?
IEC 61439 has been urgently needed for many years: the previous, 35-year-old IEC 60439 series of standards was lacking in a number of areas. It was a compromise between different national approaches, some of which were strict and others more subjective. Where agreement could not be achieved, the subject was ignored, or some vague clause was added that could be interpreted to suit the reader's point of view.
It has been obvious for some time that the foundations of the old standard were fundamentally flawed when considered in the context of today's industry. Designs and market requirements for assemblies have evolved over the years, such that IEC 60439-1 no longer encompasses many commonly used arrangements. Just one example is modular systems, which the old standard did not effectively cover with respect to the critical matter of temperature-rise performance.

It is well known that it is not practical to fully type test every conceivable configuration of assembly, so where type testing is not feasible there need to be alternative ways of ensuring an assembly meets the minimum required safety and performance criteria.

The old methods for proving the design of a 'partially type tested assembly' in accordance with IEC 60439-1 were weak and relied entirely on the capability and integrity of assembly designers. There was also no standard for assemblies that fitted neither the type tested nor the partially type tested category; the old standard was therefore no longer suitable for today's industry.

These weak foundations have made it difficult to evolve the standard in line with market needs and pressures. Every assembly manufactured should meet minimum performance and safety criteria, in spite of ever increasing demands to optimise manufacture and reduce costs.

The new approach
With the growing pressures for higher network utilisation, assembly design optimisation and more stringent safety regulations, the changes included in the assembly standard IEC 61439-2 are important and long overdue. All assemblies that do not have a specific product standard are covered and there is no opportunity to avoid compliance.

In the new standard, the methods of confirming design performance are practical, reflecting the different market needs and ways in which assemblies are produced. Several alternative means of verifying a particular characteristic of an assembly are also included. These are defined and their use restricted. Overall, the standard is performance based, but in some instances where design rules are used, it has to be prescriptive.

Essential changes
In order to meet its objectives, the review of the IEC 60439 series of standards had to make changes and these have been radical ones. A number of foundations of the old standard have been discarded, in order to have a standard that better meets the low-voltage assembly market's needs and the way it operates.

Under the previous standard, panels could be type tested assemblies (TTA) or partially type tested assemblies (PTTA), but since many panels were too small to be covered by TTA or PTTA certification, they fell outside any standard. The categories of TTA and PTTA have therefore been discarded in favour of a design ‘verified assembly'. This is a classless term, where demonstration of design capability can be achieved by type test and/or by other equivalent means that include appropriate margins.

The IEC 61439 series of standards uses the same structure as other series within IEC. Part 1 is General Rules, detailing requirements that are common to two or more generic types of assembly. Each type then has a product-specific part within the series. This references valid clauses within the General Rules and details any specific requirements belonging to that particular type of assembly. Any clause in the General Rules that is not referenced in the product-specific part does not apply. Part 2 of IEC 61439 is the only part that has a dual role: it covers power switchgear and control gear assemblies as well as any assembly not covered by any other product-specific part.

The structure of IEC 61439 also makes revisions easier, as changes to ‘General Rules' will always tend to lag behind their introduction in product-specific Parts. It also means that assemblies cannot be specified or manufactured to IEC 61439-1, since one of the product-specific Parts must be referenced in any assembly specification. Parts 3, 4, 5 and 6 are currently being prepared by the IEC to cover all product-specific Parts from the old standard, and more could be added at a later date.

As business becomes more global there is the increasing need for portable designs. This is now fully recognised as the new standard confirms that designs and design verifications are portable. For example, subject to a suitable quality assurance regime being in place, a type test certificate obtained in France, for a design carried out in the UK, is valid for an assembly manufactured in Australia.

For the first time the new standard recognises that more than one party may be involved between concept and delivery of an assembly. IEC 61439 identifies the original manufacturer as the one responsible for the basic design and its verification and possibly, the supply of a kit of parts. It then designates the manufacturer who completes the assembly and conducts the routine tests, as the assembly manufacturer.

The original and assembly manufacturer can be the same, or, a transition may take place somewhere between concept and delivery. In any event, all parts of the assemblies must be design and routine verified by a manufacturer.

Responsibilities
The new standard attempts to focus all parties on their respective responsibilities. Purchasers and specifiers are encouraged to view an assembly as a ‘black box'. Their prime task is to specify the inputs and outputs to the assembly and to define the interfaces between the assembly and the outside world.

How the assembly is configured internally and the performance, relative to the external parameters (as defined by the purchaser or specifier) is clearly the responsibility of the manufacturer(s). They are legally responsible for the correct configuration of the individual parts and must ensure the design meets the specification, is fully verified and fit for purpose.
Compliance with the new standard is compulsory. All assemblies must be shown to meet minimum safety and performance standards by design and routine verification. Once the European equivalent standard, EN 61439-2 (BS EN 61439-2) has been listed in the Official Journal of the European Union, full compliance will become the easiest route to ‘presumption of compliance' with the Electromagnetic Compatibility and Low-voltage Directives, both of which are essential before the CE mark can be applied. Partially proven design or only routine testing of some assemblies is forbidden.

The majority of assembly manufacturers and builders are already competent and diligent, and so the new standard will not mean significant changes. IEC 61439 requires a logical approach to the design and verification of an assembly, which is essentially just good practice.

However, where previously partially type tested assemblies or those outside of the scope of IEC 60439-1 have been provided, the panel builder may find it beneficial to purchase a basic design verified assembly in kit form, from a manufacturer such as Schneider Electric. This will enable the panel builder to avoid the time and cost of much of the design verification process.

More than 50 senior figures from the electricity industry gathered at London's Royal Automobile Club on 5 February 2009, for NetWork 2009 - the first ever international DNO strategy conference. Top of their agenda was how the ability to measure the condition of live assets is making the management of network assets more efficient, at lower cost. Neil Davies from EA Technology Instruments investigates

The inaugural NetWork 2009 event in February was an extremely valuable opportunity for UK DNOs to share knowledge on the key strategic management issues facing network operators and learn from the examples  of two of the world's most reliable and efficient networks - SP Powergrid of Singapore and China Light and Power (CLP) of Hong Kong.
The pressures are common to every operator across the world: how can they manage an ageing asset base so that it will deliver greater network reliability, power quality and safety, while reducing costs to consumers? At the same time, how can they make a watertight business case for investment in maintaining, upgrading and replacing assets to stakeholders, including industry regulators?

The answer to these questions is being found in two developments which are inextricably linked: new techniques for accurately measuring the condition of live assets, plus new methodologies for managing assets more effectively, based on their actual condition.
Let's look at what has been achieved in Hong Kong and Singapore, where condition based asset management has become the driver for remarkable improvements in both reliability and cost efficiency:

SP Powergrid, Singapore
SP Powergrid's network includes nearly 10,000 substations, 40,000 switchgear sets, 14,000 transformers and 30,000km of cable. Since incorporating condition monitoring into its systems, it has dramatically improved an already excellent performance. The System Average Interruption Duration Index (SAIDI) has averaged less than 1 minute per year over the last three years.
NB: The blip in 2004/5 was caused by a third party supply issue outside SP Powergrid's control.

SP Powergrid estimates over the last eight financial years, condition monitoring has enabled it to avert 450 network failure incidents, with a net financial saving of US$29m. In addition to improving customer service, it has been able to pass cost savings on to them.


CLP, Hong Kong
The China Light and Power network in Hong Kong includes nearly 13,000 substations and 22,000km of overhead lines and underground cables, serving 2.26 million customers.
As a result of focusing over the last 10 years on condition-based maintenance to predict faults and improve reliability, it has reduced its SAIDI figure from more than 40 minutes lost per year to 2.68.

Demand from customers has continued to grow, but in the last two years greater operating efficiencies have enabled CLP to reduce tariffs.

The UK Business Case
Taken as a whole, the UK electricity network is relatively efficient. But an in-depth analysis by EA Technology Consulting of preventable, condition-related failures shows there is considerable scope for improvement.
Using condition monitoring as a failure prevention tool is a valuable technique, but is only part of a much wider move towards condition based asset management techniques.

Using Condition Data
The ability to collect data on the condition of live assets is transforming the industry's approach to asset management itself: from one based on time-scheduled maintenance and replacement, to one based on a detailed understanding of the condition of the asset base. It also provides accurate intelligence for investment programmes.
Maximising the value of this data is essentially carried out at two levels:

Asset condition registers
Expert analysis and interpretation of partial discharge (PD) activity readings gives a clear indication of the condition of assets, including accurate predictions of when they are likely to fail. In EA Technology's case, this is based on a unique database, built up over more than 30 years, which shows how tens of thousands of asset types have deteriorated over time.
This approach enables operators to develop registers of assets, in which each asset is accorded a ‘health index' showing its present condition, its predicted date of failure and/or its remaining service life.
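Such a register can be modelled very simply. A sketch in Python; the 0-10 health index scale and the asset data are illustrative assumptions, not EA Technology's actual methodology:

```python
# A minimal sketch of an asset condition register with a 'health index'
# per asset. Scale and data are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    health_index: float        # assumed scale: 0 = as new, 10 = end of life
    remaining_life_years: float

register = [
    Asset("Substation 12 switchgear", 8.5, 2.0),
    Asset("Substation 31 transformer", 3.1, 18.0),
    Asset("Feeder 7 cable", 6.0, 7.5),
]

# Worst-condition assets first, for maintenance planning
for asset in sorted(register, key=lambda a: a.health_index, reverse=True):
    print(f"{asset.name}: HI={asset.health_index}, ~{asset.remaining_life_years} yr left")
```

Sorting the register by health index is the simplest possible maintenance-planning view; CBRM, described next, goes further by weighing condition against the cost of failure.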

Condition Based Risk Management (CBRM)
CBRM is a comprehensive new methodology which takes condition based asset management to a higher level, enabling managers to take more intelligent decisions on revenue and capital spending. It also reduces the cost of operating networks while improving their efficiency and reliability.

The effectiveness of CBRM derives from factoring together probability (derived from the asset condition) and consequences of asset failure, to determine risk in terms of financial cost.
In addition to managing the health of assets, CBRM provides the answers to the key questions:

- If an asset costing £XX fails, what will be the consequential loss to the business? 
- If an asset is refurbished or replaced at a cost of £YY, what will be the benefit to the business?
- Therefore, where should we prioritise our spending?
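The factoring together of probability and consequence described above can be sketched in a few lines of Python. All figures are illustrative assumptions; the sketch also assumes, for simplicity, that an intervention reduces the failure probability to near zero:

```python
# Sketch of the CBRM arithmetic: risk is the product of failure
# probability (derived from asset condition) and the financial
# consequence of failure. All figures are illustrative assumptions.

assets = [
    # name, annual failure probability, consequential loss (£), intervention cost (£)
    ("Switchgear A", 0.20, 500_000, 60_000),
    ("Transformer B", 0.05, 2_000_000, 150_000),
    ("Cable C", 0.10, 100_000, 20_000),
]

def annual_risk(prob, consequence):
    """Expected annual cost of failure, in £."""
    return prob * consequence

# Rank interventions by risk removed per pound spent (assuming the
# intervention takes the failure probability to near zero).
ranked = sorted(assets, key=lambda a: annual_risk(a[1], a[2]) / a[3], reverse=True)
for name, prob, cons, cost in ranked:
    risk = annual_risk(prob, cons)
    print(f"{name}: risk £{risk:,.0f}/yr, benefit/cost = {risk / cost:.2f}")
```

Note that the two highest-consequence assets here carry the same annual risk, but the cheaper intervention wins the priority ranking: this is exactly the spending-prioritisation question CBRM is designed to answer.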


EA Technology's experience shows that partial discharge (PD) activity is a factor in around 85% of disruptive substation failures. It has thus become increasingly clear that the ability to detect and measure PD is key to assessing the health of assets. PD activity provides clear evidence that an asset is deteriorating in a way that is likely to lead to failure. The process of deterioration can propagate and develop until the insulation is unable to withstand the electrical stress, leading to flashover.

Partial discharges release energy and produce a range of effects which can be detected, located, measured and monitored:
- Electromagnetic emissions, in the form of radio waves, light and heat.
- Acoustic emissions, in the audible and ultrasonic ranges.
- Ozone and nitrous oxide gases.
The most effective techniques for detecting and measuring PD activity in live assets are based on quantifying:

Transient earth voltages (TEVs)
The importance of TEV effects (discharges of radio energy associated with PD activity) was first identified by EA Technology in the 1970s. Measuring TEV emissions is the most effective way to assess internal PD activity in metalclad MV switchgear.

Ultrasonic emissions
PD activity creates emissions in both the audible and ultrasonic ranges; the latter is by far the more valuable for early detection and measurement. Measuring ultrasonic emissions is the most effective way to assess PD activity wherever there is an air passage, such as a vent or door, in the casing of an asset.

UHF emissions
PD activity can also be measured in the UHF range, which is particularly useful in monitoring EHV assets.

The latest PD instruments typically use a combination of ultrasonic and TEV sensor technologies, characterised by the EA Technology UltraTEV range. These include:

- Handheld dual sensor instruments which provide an instant indication of critical levels of PD activity, ideal for ‘first pass' PD surveys and safety checks. Traffic light warning levels are precisely calibrated using a database of known patterns of asset deterioration.
- More sophisticated handhelds, which provide audible and numerical readings of ultrasonic and TEV activity.
- PD location instruments which pinpoint and quantify the source of PD activity.
- PD monitoring instruments, which measure, record and analyse PD activity over time.
- PD alarm systems, which give immediate warning of critical PD activity in groups of assets or whole networks.
- Specialist PD monitoring systems for strategically important assets, including Gas Insulated Switchgear (GIS).

Other Asset Classes
Condition based management is by no means confined to assets which present faults in the form of PD activity.

The same principle is equally effective, using a range of condition measurement techniques, across all types of electricity network assets, including substations and cables. It can be applied to a complete asset, such as an overhead line, as well as to its component parts, such as the conductors, poles, towers and footings.

Conclusion
The ability to assess the condition of live assets is changing the way assets are managed on many levels: as a technique for preventing faults from developing into failures, as a means of moving from time-based to condition-based maintenance, as a way of quantifying risk and as the basis for justifying and prioritising investment.

But the ultimate rationale for condition measurement is that it pays for itself, many times over.

This article includes material from presentations made at NetWork 2009, the first international distribution network strategy conference, held in London in February 2009. The full presentations are available from www.networkconference.co.uk, where readers can also register their interest in  NetWork 2010.

By Cal Bailey, sustainability director at NG Bailey

It's official: Britain is in a recession. While one might argue the downturn reduces carbon emissions through a decrease in economic activity, on the flip-side it can cause surviving businesses to risk delaying investment in sustainable measures in order to reduce short-term costs, without considering the effect this might have over time.

I'd argue this is not a long-term strategy for success, and urge businesses that may be pushing sustainability measures down the list of priorities because of the downturn to have a serious rethink if they're to stay ahead of the game. The changing property landscape means it makes better business sense to future-proof a building today, so as to maximise its asset value and payback in the longer term.

The overall trend of rising energy prices, concerns over energy security and the strong direction of EU and UK policy all drive the push towards lower energy consumption and, as a result, lower carbon output. Those that choose not to turn this ambition into reality by investing in sustainable measures now will fail to reap the rewards in the long run.

The long-term value of a property will be significantly affected by its energy performance. This will in part be a result of the implementation of policies such as the Carbon Reduction Commitment (CRC) which will be introduced in April 2010 and the recent introduction of EPCs and DECs. These policies are mandatory and as a result will drive the demand for organisations to occupy energy efficient buildings, with weaker performing buildings losing value.

Through Carbon Action Yorkshire, NG Bailey is currently participating in a trial implementation of the CRC. When the policy comes into force, some 5,000 public and private sector organisations will form part of this emissions trading scheme and will be required to report their UK-based CO2 emissions from all their fixed point energy sources. So it has never been more important for companies to get their estate in ‘green' order; those that don't will suffer a penalty.

Companies looking to reduce carbon emissions do not merely require investment in technology - there must be an overall commitment to the strategy. My view is that there are three key priorities; the first is the measurement of energy consumption, which is increasingly required by law, and which is essential to good control.

The second priority is to manage buildings for this lower energy use. Poorly maintained and monitored buildings waste energy and generate unnecessary cost and carbon emissions. At the very least, organisations should implement a regular maintenance schedule for their services. However, an intelligent building management system is the key to controlling as much as 70% of a building's energy use.

The third is engaging stakeholders and the workforce and creating a culture to encourage people to get behind a carbon reduction strategy, so they feel empowered to take necessary action in their own area of work.

The industry must consider not only cost minimisation and emissions reduction, but also the cost to run a building and maintain its value throughout its lifespan - instead of just focusing on the initial capital expenditure. Reducing whole-life costs through innovative specification and ongoing maintenance, rather than simply concentrating on the short-term build cost, means adopting a complete ‘cradle to grave' responsibility, focused towards better environmental efficiency and creating a better life in buildings for occupiers.

Minimum standards will only become more stringent.  A new Energy Performance of Buildings Directive is anticipated in 2010, which will further increase the requirement for low carbon building design and performance once in operation. With environmental policies continuously strengthening in order to reduce the UK's carbon emissions by 80% by 2050, it will prove to be good planning and highly cost effective for companies to adopt sustainable measures, even during the economic downturn. Those that choose not to take this route should heed the implications this might have in the future.

As computer power continues to increase and software algorithms advance, the demand for more complex simulations grows with them. The need to improve product reliability and reduce manufacturing costs is driving the demand for accurate, complex simulations. One of the most notable demands is the requirement to couple physics from different disciplines or, as it has recently been called, multi-physics. Bruce Klimpke, technical director at Integrated Engineering Software, explains

In reality all electromagnetic phenomena will change the thermal distribution and properties within a device. For many applications the effect is negligible, or so small it can be ignored. In other cases, neglecting the coupling will significantly reduce the realism of the solution or render the final product useless. Thus for the simulation of chips, or the design of electrical bushings, the combining of thermal and electrical analysis is essential.

For every electrical system, the flow of electric current through a conductor generates heat. For many applications the heat produced can be a significant factor in the design of the product, and in some cases the dominant one. Solving this problem requires first solving the electrical conduction problem. Once the current paths are known, the volume heat density can be calculated directly from the ohmic losses.
The volume heat density is then the required heat source for a thermal analysis. The thermal analysis performed by the Kelvin (2D) or Celsius (3D) integrated simulation systems requires all the thermal material properties, as well as the appropriate boundary conditions such as the applied temperature, to be known. Celsius and Kelvin serve the same purpose, except Kelvin approximates the full 3D world by reducing the problem to two dimensions. These are general thermal analysis programs and can be used to model everything from chips and motors to high voltage equipment. The programs' key features are their ease of use and unique ability to be combined seamlessly with electrical analysis software. Thus the designer can focus on achieving the best possible design rather than wrestling with the nuances of the software.
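As a minimal illustration of the ohmic-loss step (using hypothetical copper-conductor values; the Kelvin and Celsius programs derive this from the full field solution rather than a lumped calculation):

```python
# Once the conduction solution gives the current density J in a region, the
# volumetric heat density q = J^2 / sigma (W/m^3) becomes the heat source
# for the thermal analysis. Values below are illustrative only.

sigma = 5.8e7      # electrical conductivity of copper, S/m
current = 100.0    # conductor current, A
area = 1.0e-4      # conductor cross-section, m^2 (1 cm^2)

J = current / area     # current density, A/m^2
q = J ** 2 / sigma     # volumetric heat density, W/m^3

print(f"J = {J:.3g} A/m^2, q = {q:.5g} W/m^3")
```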

There are two distinct levels of difficulty involved in solving the combined electrical and thermal problems within a design. The first level is where the temperature distribution within the device has a negligible effect on the electrical solution. For this case, we simply solve the electrical problem once and use the resulting ohmic losses as the input for the thermal analysis. The thermal or temperature distribution within the device is then readily calculated. When designing a chip, for example, the electrical properties may not be significantly changed by the temperature; in this case only one electrical and one thermal analysis is required.

The second level, which refers to more advanced simulations, requires many solutions to be generated from the electrical and thermal programs. For example, if the electrical conductivity of the materials is strongly dependent on the temperature, a further electrical analysis would be required after the thermal analysis. This again would change the heat distribution from the original analysis. Thus, an iterative procedure occurs between the electrical and thermal analysis, until the electrical and thermal parameters converge. This is typically what is required in a high voltage bushing.
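The iteration can be sketched with a deliberately simplified lumped model. A real solver iterates full field solutions; the single conductor, material data and thermal resistance below are hypothetical stand-ins:

```python
# Toy sketch of iterative electrical-thermal coupling: the ohmic loss depends
# on temperature (through resistivity), and the temperature depends on the
# ohmic loss, so the two analyses are repeated until they converge.

rho0, alpha = 1.7e-8, 0.004    # copper resistivity at 20 C (ohm.m), temp. coeff. (/K)
R_th = 2.0                     # lumped thermal resistance to ambient, K/W
T_amb, I = 20.0, 50.0          # ambient temperature (C), current (A)
length, area = 10.0, 1.0e-5    # conductor geometry, m and m^2

T = T_amb
for _ in range(100):
    # "Electrical analysis": resistivity, hence ohmic loss, depends on T
    rho = rho0 * (1 + alpha * (T - 20.0))
    P = I ** 2 * rho * length / area        # ohmic loss, W
    # "Thermal analysis": new temperature from the updated heat source
    T_new = T_amb + R_th * P
    if abs(T_new - T) < 1e-6:               # electrical and thermal parameters agree
        break
    T = T_new

print(f"Converged temperature: {T_new:.2f} C, loss: {P:.2f} W")
```

Because the temperature rise feeds back into the resistivity, this well-posed example settles within a few tens of iterations; a solver would report non-convergence if the coupled parameters diverged instead.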

Solving electrical and thermal problems is mathematically the same for static problems. However, the practical implementations are quite different. For most practical thermal analyses, radiative effects and forced convective heat transfer are folded into an equivalent heat transfer coefficient, which is applied on the surface of the part being analysed. Therefore most heat or temperature modeling is undertaken by analysing the temperature within the part of interest, while the effects on the nearby volumes are simply neglected.
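For example, a linearised radiative contribution is commonly folded into the surface coefficient alongside convection. The emissivity, temperatures and convective coefficient below are hypothetical:

```python
# Folding radiation into an equivalent surface heat transfer coefficient:
# h_eq = h_conv + h_rad, where h_rad is the linearised radiative coefficient.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2K^4

def radiative_coefficient(T_surface: float, T_ambient: float, emissivity: float) -> float:
    """Linearised radiative coefficient h_rad in W/m^2K (temperatures in kelvin)."""
    return emissivity * SIGMA * (T_surface**2 + T_ambient**2) * (T_surface + T_ambient)

h_conv = 10.0                                     # forced convection coefficient, W/m^2K
h_rad = radiative_coefficient(350.0, 300.0, 0.9)  # surface at 350 K, surroundings at 300 K
h_eq = h_conv + h_rad                             # applied as a single boundary condition

print(f"h_rad = {h_rad:.2f} W/m^2K, h_eq = {h_eq:.2f} W/m^2K")
```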
However, when solving electrical problems the situation is quite different. Not only is the designer interested in the electric field within the part, but also the field in the air space that surrounds the part. Even if the electric field calculation is only required within or on the part, the entire volume about the part has to be modeled.

The electric field extends far beyond the bushing boundaries (theoretically to infinity), whereas the thermal field is only modeled within the bushing. Two distinct methods can be used to model both the electric and thermal fields: the finite element method (FEM) and the boundary element method (BEM). The FEM is by far the most commonly used, and is almost exclusively used for thermal analysis, as there is no need to model the thermal field outside the bushing. The same method can be used for the electrical analysis, but this introduces some major drawbacks, the foremost being the need to model the space exterior to the bushing. To calculate the electric field, the BEM is far superior as it can model the exterior volume trivially. Therefore, ideally the FEM is used for the thermal analysis and the BEM for the electrical analysis.
To conclude, more advanced solutions requiring multi-physics to reach an accurate calculation are best obtained by simulation software using both finite and boundary element methods. These require seamless coupling, so the designer can focus on design rather than on long learning curves to get the simulation results. In addition, 64-bit personal computers with 32GB of memory are now common. This radical increase in memory allows the solution of coupled electrical and thermal problems that would have been impractical for full 3D simulation on a 32-bit machine. To enhance this further, multiple quad-core processors are now available. Software advances making use of many processors (parallelisation) will further enhance the level of realism of coupled field analysis through radical speed improvements.

Each month, Electrical Review's resident grumpy old man, writer and industry commentator John Houston, explores a hot topic of the day and lets us know his views in no uncertain terms

The demise of the 100W incandescent lamp may be greeted warmly by anyone with a green streak and I too welcome any moves that reduce energy consumption. But, there is almost always a ‘but' when it comes to energy efficiency, carbon footprints and other conservation issues. Saving the planet is never as simple as it seems and most attempts to make life easy seem to throw up equal and opposite arguments.

Let's consider one basic problem with killing off the 100W and higher rated lamps. Compact fluorescent lamps (CFLs) - the lamp most recognised as an ‘energy saving light bulb' - do not work with electronic dimmer switches. Indeed, using CFLs with dimmer switches can cause overheating in the lamp holder and the switch, thereby also significantly shortening the life of the five-times-more-expensive and supposedly longer lasting CFL!

This means anyone with dimmer switches must fit only 80W incandescent lamps and below in future. Yet, crediting the populace with some intelligence, in practice such lighting is frequently, if not mainly, operated in a dimmed state. Arguably a suitably dimmed 100W lamp potentially saves as much as using a CFL, but without the higher price of the lamp itself.
If one considers safety, the likes of stairwells and corridors that have sensibly and responsibly been equipped with occupancy sensors to switch on lighting only when required are doomed, because such applications cannot afford the warm-up delay associated with the operation of a CFL. Presumably, these applications will be compelled to use 80W lamps rather than, say, the 150W versions once deployed to give adequate lighting for safety purposes.

The issue throws up far greater challenges than whether or not the removal of higher power lamps from western marketplaces is worthy. My issue with the control of the availability of such lamps is that it once again illustrates the over-simplification of legislation.
Using CFL and low energy lighting in most cases merely mitigates the greater waste of energy that would otherwise occur when lights are left switched on unnecessarily. It does nothing to change either the habits or the controls that really dictate our energy wastage.
As an analogy, let us consider applying the same legislative logic to motoring. We all know that the bigger the car, the greater its emissions, don't we? Surely then it would make sense, at least in the short term, to limit all personal vehicles to, say, 1.5-litre diesel engines, wouldn't it? However, emissions are highest in stationary traffic, whatever the engine size. Hence, we need also to control traffic flow, perhaps by levies, traffic bans or restricted usage. But what about people's driving style? Those with a heavy right foot on the accelerator pedal, whatever the engine size, will emit more than others. So, with such logic, should all vehicles be automatics, with governors fitted?

The point at issue, before too many readers lambast me for being of the Jeremy Clarkson school of devil-may-care scepticism and cynicism, is not what we do, but rather how we do it. Let me state right now that I am in favour of reducing carbon emissions, reducing energy waste and generally saving the whale, the world and all that exists within it. What I find frustrating is the naivety of legislators and the essence of the nanny state some moves induce.

Banning 100W and higher rated incandescent lamps at this stage is posturing. I suggest it may be borne out of a lack of imagination as to how better to encourage people to save energy. Lighting and other energy controls cost money, and with a credit crunch, global recession and general lack of confidence, it's a brave person who invests in the ‘unknown' of energy efficiency controls. This I can fully appreciate. But there's never been a better time to invest, for the payback is short on most items installed - certainly shorter than the return on investment of a CFL.

Those like me that are packing far too many additional inches in all the wrong places welcome low fat, low sugar, high fibre foods provided they taste good (however rare that may seem). Unfortunately, the awful truth for most of us is that we have too many inches because we eat too much nice, tasty bad stuff, while never leaving our seats for most of our waking hours!

So too, if we really want to save the planet, we really need to change the habits of a lifetime and do more than fit a few CFL lights.