For years, data centres have been designed to handle networking speeds of up to 100Gbps, but things are about to get supercharged. ProLabs explains the decisions data centres will need to make to transition to the new 400G technology that is just coming on the market.
The building and design of a new data centre can take years and requires discipline and prioritisation. Flawless project management pulls together the elements of construction, networking, power and cooling to ensure the data centre project is completed on time, under budget and to specification. Designers of new data centres must keep an eye on the impact that emerging 400G networking technologies will have on data centre infrastructure.
After years of false starts, 400G networking is finally a reality. Data centre operators understand that 400G will not be connected to each rack or cabinet. Rather, 400G will start in the core, aggregating 100G connections from each segment of the data centre, and work outwards from there. The challenge for data centre operators is to keep the networking and structured cabling design flexible enough to minimise cabling infrastructure costs when the time to upgrade to 400G arrives.
Each transceiver or network cabling element has a role in the network based upon reach or distribution requirement. For each 100G use case, a corresponding 400G transceiver or network cabling element is available. The basic architecture remains the same, yet without careful consideration costly cabling and network equipment upgrades may still be necessary when migrating to 400G. Overall, the great news for data centre managers is that forecasting these needs is relatively painless.
Key 400G considerations in data centre cabling architectures
One of the first considerations is the fundamental choice between multi-mode fibre (MMF) and single-mode fibre (SMF) cabling. MMF cabling has been a stalwart of data centre connections thanks to the inherently low-cost yet high-quality links it provides. SMF cabling has recently gained acceptance among data centre operators for intermediate-reach scenarios beyond 100m, driven by the growing demand for 40G and 100G transceiver connections. A staple of the service provider domain, SMF offers future-proofing, forward-thinking qualities for 400G and beyond.
Hand-in-hand with the cabling discussion goes the use of multi-fibre MPO connectors. These connectors are commonly associated with short-reach, parallel optics. Because MPO designs require higher fibre counts, they must be fully planned before installation proceeds.
Short Reach Transceivers
Another consideration is the initial short-reach MMF transceiver – the QSFP-DD SR8 – which requires an MPO-16 connection for its eight transmit-and-receive pairs. The emerging QSFP-DD SR4.2 BiDi transceiver will instead use eight fibres in an MPO-12 connector, the same as earlier QSFP+ and QSFP28 SR4 transceivers, transmitting and receiving over two separate wavelengths on each fibre. To maximise the reach, reliability and speed of this transceiver, OM5 cabling is recommended to accommodate the wider bandwidth.
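As a rough illustration, the fibre counts behind these connector choices can be tallied in a few lines. The lane structures restate the transceiver details above; the helper itself is a hypothetical sketch, not a real tool:

```python
# Fibre-count sketch for the short-reach 400G MMF options discussed above.
# A conventional lane needs a Tx/Rx fibre pair; a BiDi lane carries both
# directions (on two separate wavelengths) over a single fibre.

OPTIONS = {
    "QSFP-DD SR8 (MPO-16)":        {"lanes": 8, "fibres_per_lane": 2},
    "QSFP-DD SR4.2 BiDi (MPO-12)": {"lanes": 8, "fibres_per_lane": 1},
}

def fibre_count(option: dict) -> int:
    """Total fibres the connector must present to the transceiver."""
    return option["lanes"] * option["fibres_per_lane"]

for name, option in OPTIONS.items():
    print(f"{name}: {fibre_count(option)} fibres")  # 16 for SR8, 8 for BiDi
```

The same arithmetic explains why the BiDi option fits the familiar MPO-12 footprint while SR8 forces a move to MPO-16.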
For connections beyond 100m, only SMF 400G connections are available. Upgrading from a 40G or 100G ESR4 (300m) transceiver to a 400G transceiver will therefore require a fibre upgrade from multi-mode to single-mode.
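That reach rule can be captured in a trivial helper; the 100m threshold is the article's, while the function name and shape are hypothetical:

```python
def fibre_for_400g(link_length_m: float) -> str:
    """Pick a fibre type for a 400G link using the 100m multi-mode
    limit described above (single-mode is the only option beyond it)."""
    return "MMF" if link_length_m <= 100 else "SMF"

print(fibre_for_400g(70))   # a short within-row run can stay on multi-mode
print(fibre_for_400g(300))  # an ESR4-style 300m run must move to single-mode
```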
Top-of-rack and distributed architectures have deployed Direct Attached Cables (DACs) and Active Optical Cables (AOCs) to expand data centre service levels quickly and cost-effectively. Many top-of-rack, leaf/spine and similar data centre configurations depend upon four-way breakout connections to maximise switch port density. In current 100G or 40G architectures, there are no interoperability concerns between top-of-rack switches and the network elements.
Standard, off-the-shelf DACs and AOCs allow for simple and efficient connections of network elements and both cable types will largely continue to be deployed in 400G data centre architectures. The underlying tenets for DACs and AOCs will remain the same, yet new 400G technologies will require additional considerations when coming from traditional 100G and lower deployments.
PAM4 signal modulation is a once-in-a-generation shift in networking that adds complexity to data centre cabling infrastructures, especially when weighing cost-benefits and planning future upgrades. By encoding two bits per symbol rather than NRZ's one, and by doubling the lane count, PAM4 enables a fourfold increase in data rates, from 100G to 400G.
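The arithmetic behind that fourfold jump can be sketched as follows, using nominal rates and ignoring FEC and line-coding overhead:

```python
import math

def lane_gbps(gbaud: float, levels: int) -> float:
    """Bit rate of one serial lane: symbol rate times bits per symbol."""
    return gbaud * math.log2(levels)

# 100G today: four lanes at 25 Gbaud, NRZ (2 levels = 1 bit/symbol).
nrz_100g = 4 * lane_gbps(25, 2)
# 400G: eight lanes at the same 25 Gbaud, PAM4 (4 levels = 2 bits/symbol).
pam4_400g = 8 * lane_gbps(25, 4)

print(nrz_100g, pam4_400g)  # doubled lanes x doubled bits/symbol = 4x overall
```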
Another consideration is that QSFP-DD network element ports are backwards compatible with pluggable NRZ (non-return-to-zero) QSFP transceivers, DACs and AOCs. Legacy QSFP28 and QSFP+ ports, however, are not forwards compatible with QSFP-DD pluggables. The practical benefit of backwards compatibility appears when connecting a legacy NRZ 100G or 40G QSFP network element to a QSFP-DD switch with a legacy DAC or AOC. That benefit may, unfortunately, be offset by underutilisation of the 400G QSFP-DD port.
It is important to understand that PAM4 and NRZ are mutually exclusive modulation methodologies: PAM4 and NRZ connections are not interoperable. PAM4 signalling changes the dynamic on both the electrical and optical sides of the connection. Legacy 40G and 100G NRZ transceivers, DACs and AOCs communicated over four lanes of 10G or 25G on both the electrical and optical sides of the pluggable. In contrast, PAM4 modulation delivers eight lanes of 50G on the electrical side and either eight lanes of 50G or four lanes of 100G on the optical side of the pluggable.
The practical considerations are revealed when integrating legacy QSFP28 NRZ ports into 400G networks. Standard QSFP-DD to 4x100G breakout cables are designed for two lanes of 50G PAM4 on the electrical interface, which will not interoperate with the four lanes of 25G in QSFP28 ports. A gearbox is required in the QSFP28 pluggable end to perform the conversion to 4x25G NRZ. Transceiver solutions with gearboxes exist today for breakout applications such as DR1, FR1 and LR1. Network operators need to factor in these gearboxes in order to integrate legacy networks into modern ones.
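A sketch of the bookkeeping a gearbox performs on one breakout branch, assuming the lane structures just described (the helper is illustrative, not a real API):

```python
def total_gbps(lanes: int, gbps_per_lane: int) -> int:
    """Aggregate capacity of a lane group (nominal rates)."""
    return lanes * gbps_per_lane

# One QSFP-DD breakout branch: two electrical lanes of 50G PAM4.
pam4_side = total_gbps(2, 50)
# The legacy QSFP28 host it must feed: four lanes of 25G NRZ.
nrz_side = total_gbps(4, 25)

# Capacities match at 100G; only the lane structure differs, which is
# exactly the conversion the gearbox in the QSFP28 end performs.
print(pam4_side, nrz_side)
```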
400G QSFP-DD DAC and AOC options are both more limited and more complex than earlier generations of pluggable cable. Passive DAC cables are limited to 3m in length; beyond 3m, active copper or optical cables are required. However, technical challenges with power and the VCSEL lasers needed to support 400G breakout AOC cables have hindered the development of that option. The aforementioned 50G PAM4 electrical interface also multiplies the breakout options, and QSFP-DD DAC breakout cables currently call out several variants by network application and data rate.
PAM4 breakout cable configurations:
- QSFP-DD to 4 QSFP28 (2x50G PAM4 signalling)
- QSFP-DD to 2 QSFP28 (100G PAM4 signalling)
- QSFP-DD to 8 SFP56 (50G PAM4 signalling)
NRZ 200G breakout cable configurations:
- QSFP56 to 2 QSFP28 (4x25G NRZ signalling)
- QSFP56 to 4 QSFP28 (2x25G NRZ signalling)
- QSFP56 to 8 SFP28 (25G NRZ signalling)
While we are on the precipice of the 400G era, the impact of this new technology will be felt everywhere from the network core to the rack. Planning for the required interfaces, link distances, new standards and the practical considerations of the shift to PAM4 will avoid costly upgrades down the road. To keep up with growing demands and ensure data centres are prepared for the future, network operators need to deploy high-quality 400G solutions. These next-generation solutions will empower them to deliver the networks of tomorrow – today, without the price tag of immature new technologies.