Network data communications have evolved rapidly from 'high-speed' gigabit (Gb) Ethernet local area networks (LANs) to the integration of cloud-based information technology (IT) and data streams demanding terabytes (TB) of capacity. Moreover, according to the Cisco Visual Networking Index (VNI) forecast, global internet protocol (IP) traffic will triple over the next five years, reaching over 120 exabytes (EB) per month. Erin Byrne and Mike Tryson from TE Connectivity explain.
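As a back-of-envelope check (an illustrative calculation, not a figure from the forecast itself), tripling over five years corresponds to a compound annual growth rate of roughly 25%:

```python
# Illustrative: the compound annual growth rate (CAGR) implied by
# a threefold increase in traffic over five years.
growth_factor = 3.0
years = 5
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 24.6% per year
```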
To meet today’s and future demands, advances in fibre optic technologies and effective integration with copper networks are making it possible to accommodate the ever-increasing demands placed on enterprise voice and data communications systems while enabling reduced energy consumption.
Big Data, a term that seems to encompass many of today's technology buzzwords, including the Internet of Things (IoT), machine-to-machine (M2M) and wireless communication, cloud computing and more, is driving the need for more bandwidth. But the demand for more bandwidth is nothing new. So what could compel a transition to fibre optics that has been predicted for over twenty years?
As LightCounting, a market research firm focusing on high-speed interconnects, reports in "Embedded Optical Modules" (May 2012): "At 10 Gbps signalling, the problems were manageable, but as the industry transitions to 25G signalling, significant signal losses and issues are surfacing. As speeds increase to 25 GHz, the number of signal-compensating electronics needed is skyrocketing along with costs."
More data = more energy consumption
Faster data rates and shrinking signal rise times are part of the technical challenge. Among the problems apparent even at 10 Gbps is the power consumed by copper-connected signals within enclosures, which contributes to the overall energy consumption problem of data centres worldwide.
The energy use of data centres is already staggering, estimated at more than 2% of total U.S. electricity consumption. IT equipment consumes about 50% of a typical data centre's energy; air movement and cooling equipment consume about 37%; transformers and uninterruptible power supplies account for 10%; and lighting and other items take another 3%.
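A quick sanity check on this breakdown (an illustrative calculation; the article does not cite this metric) shows the four categories account for the full load and imply a power usage effectiveness (PUE) of about 2.0, i.e. one watt of overhead for every watt delivered to IT equipment:

```python
# Typical data-centre energy breakdown quoted above (percent of total).
breakdown = {
    "IT equipment": 50,
    "Air movement and cooling": 37,
    "Transformers and UPS": 10,
    "Lighting and other": 3,
}
total = sum(breakdown.values())
pue = total / breakdown["IT equipment"]  # PUE = total facility energy / IT energy
print(f"Total: {total}%  Implied PUE: {pue:.1f}")  # Total: 100%  Implied PUE: 2.0
```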
The U.S. Department of Energy recognised the impact of massive data processing and storage many years ago. In addition to the recently updated Version 2.0 ENERGY STAR specification for Computer Servers that took effect on December 16, 2013, Version 1.0 ENERGY STAR specification for Data Center Storage was also finalised and took effect on December 2, 2013. Purchasing ENERGY STAR rated equipment should significantly lower a data centre’s energy consumption and improve the bottom line by reducing energy costs. Acquiring an ENERGY STAR rating should help server and storage equipment suppliers differentiate and increase the sales of their products. More efficient data transmission can play a role in obtaining the ENERGY STAR rating.
Concerns for higher efficiency in data centres go beyond government organisations. For example, the Open Compute Project initiated by Facebook targets greater efficiency in servers and data centres. This type of industry interest places even further demands on more-efficient data communications that can benefit from the use of fibre optic technology.
An optical data solution: end to end integration
With the inherent performance advantages of fibre optics, the key to optimum systems-level solutions involves the appropriate integration of active and passive technologies.
As part of a systems design approach, TE has developed four new products for intra-chassis connectivity:
1. 100G QSFP28 active optical cable (AOC);
2. 100G QSFP28 transceiver, a 1.5 W transceiver (100 GbE SR4, EDR);
3. 400G CDFP AOC at 6.0 W, which meets the emerging CDFP MSA standard and is well below the 8 W target;
4. 300G mid-board optics (MBO) transceiver at 4.5 W (12-channel module at 25 Gbps per channel).
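A short sketch using the wattage and bandwidth figures quoted in the list above (the grouping into three entries is ours; the QSFP28 transceiver and AOC share the same 1.5 W figure) shows that all three optical products land at the same normalised efficiency of 1.5 W per 100 Gb/s:

```python
# Normalised power efficiency for the TE products listed above,
# expressed as watts per 100 Gb/s of bandwidth.
products = {
    "QSFP28 transceiver/AOC": (1.5, 100),  # (watts, Gb/s)
    "CDFP AOC":               (6.0, 400),
    "MBO transceiver":        (4.5, 300),
}
for name, (watts, gbps) in products.items():
    w_per_100g = watts * 100 / gbps
    print(f"{name}: {w_per_100g} W per 100 Gb/s")  # 1.5 for each product
```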
The QSFP28 transceiver shown in Figure 1 contains TE Connectivity's Coolbit optical engine with a Vertical-Cavity Surface-Emitting Laser (VCSEL) and photodiode array mounted with driver and receiver ICs. This design approach offers high-speed data transmission and low power consumption, and enables the design of faster, denser and more energy-efficient communications systems.
Figure 1. QSFP28 module with 4 parallel bidirectional 25.78 Gbps channels.
Figure 2 shows the CDFP AOC. The plug, or electrical interface, connects to a customer's board or apparatus through a front or back panel. Within the metal housing, the electrical signal is converted to light and transmitted over the passive fibre in the cable to the far end, where it is detected and converted back to an electrical signal.
Figure 2. The CDFP form factor and its design elements.
Figure 3 shows a comparison of faceplate density when optics move off the faceplate and onto the system's printed circuit board. The top image shows a near-optimum design with an electrical connection as the external I/O. In contrast, the bottom shows the benefit when the electrical I/O is converted to optical signals. Essentially, the faceplate for a 1-rack system changes from totally congested and near thermal limits with copper to a streamlined design with fibre optics. With intra-chassis MBO interconnects, designers can position components around cooling locations inside the chassis, manage the heat internally and gain another degree of freedom in the system design. The optical solution yields substantially higher electrical I/O density while eliminating the cooling problem at the faceplate.
Figure 3. Faceplate airflow as well as power and heat requirements provide a driving force for intra-chassis MBO that delivers increased signalling density and front panel capacity.
The top design in Figure 3 has 22 CDFP MSA ports × 400 Gb/s = 8.8 Tb/s* (*bidirectional calculation) of faceplate capacity. This can be extended to 22 CDFP MSA ports × 640 Gb/s (16 lanes at 40 Gb/s) = 14.1 Tb/s*. Changing from an AOC to a VCSEL MBO approach achieves 6 columns × 10 interconnects × 12 fibres × 25 Gb/s = 9 Tb/s*.
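The arithmetic behind these figures can be reproduced as follows. This is a sketch: the MBO line assumes the 12 fibres per module are split evenly between transmit and receive, which is our reading of the asterisked bidirectional note, not something the article states explicitly.

```python
# Faceplate capacity figures from Figure 3 (values in Gb/s unless noted).
cdfp_25g = 22 * 400        # 22 CDFP ports at 400 Gb/s (16 lanes x 25 Gb/s)
cdfp_40g = 22 * 640        # same ports with 16 lanes x 40 Gb/s
# MBO: 6 columns x 10 interconnects x 12 fibres x 25 Gb/s, with fibres
# assumed split half transmit / half receive (our assumption).
mbo = 6 * 10 * 12 * 25 // 2
print(cdfp_25g / 1000, "Tb/s")            # 8.8
print(round(cdfp_40g / 1000, 1), "Tb/s")  # 14.1
print(mbo / 1000, "Tb/s")                 # 9.0
```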
The mid-board optics module shown in Figure 4 provides the key to eliminating the faceplate density problems and the attendant heat management problems. The MBO module is a 12-channel transceiver capable of transmitting and receiving 300 Gb/s. The electrical interface is provided through a land grid array (LGA) socket on the optical module side and a ball grid array (BGA) on the host board, and allows modules to be placed on a 1-inch grid.
Figure 4. Mid-Board Optics modules provide the key to a high-density interface.
The CDFP AOC at 400 Gbps total bandwidth (16 parallel lanes at a bit rate of 25 Gbps each), occupying an area of 2 square inches within the faceplate, has a total power consumption of 6.0 W. This yields a power density of 3.0 W/in², power per 100 Gbps of 1.25 W and a panel density of 444.4 Gbps/in². This provides the highest-density pluggable interconnect per panel area compared with other competing form factor products, and results in the lowest watts per Gb/s per square inch.
Industry-leading power efficiency
Power consumption is critical to high-speed data transmission. For the same data transmission performance, lower power consumption reduces the cooling load that end customers need to provide, which reduces their energy bill and overall expenses. With the lowest-power-consumption products in the industry, TE minimises these energy costs. For example, the target power consumption for the CDFP product is 6 W, for the QSFP28 transceiver and AOC at 100 Gbps it is 1.5 W, and for the mid-board module it is 4.5 W. For comparison, the emerging CDFP MSA standard targets a maximum of 8 W for the CDFP AOC. The reduced energy consumption could help achieve an ENERGY STAR rating for further product differentiation.
References
Marek Chaciński, Nicolae Chitica, Stefan Molin, Nenad Lalic and Olof Sahlén, '25.78 Gbps Data Transmission with 850 nm Multimode VCSEL Packaged in QSFP Form Factor Module', Optical Fiber Communication Conference (OFC), Anaheim, California, USA, March 17-21, 2013.
EPA Creates ENERGY STAR Spec for Data Centre Storage Equipment, October 7, 2013.
Cisco Visual Networking Index: Forecast and Methodology, 2013–2018, June 10, 2014.