Optical Networks in the Hyperscale Era


Hyperscale data centers are starting to crop up all over the world, providing both the scale and the resource density needed to handle the massive workloads of big data and the internet of things.

But while much of the attention surrounding hyperscale facilities is focused on their advanced server, storage and internal networking architectures, they will also require extreme bandwidth over the long haul – so extreme, in fact, that the industry is already looking to take the next step in fiber bandwidth.

According to Christine Young of IC designer Maxim Integrated, hyperscale is accelerating the development of optical solutions to the point that data rates are doubling every few years, rather than once a decade as in the past. That means today’s 25Gbit/s solutions will likely be upgraded to 50Gbit/s by the end of the decade, with 100Gbit/s becoming the norm in the early 2020s. But as rates increase, so does the challenge of maintaining stable signal integrity without increasing the cost or design complexity of the optical module. One way to do that is through integrated chipsets that incorporate physical components in silicon, allowing network operators to implement advanced architectures with the same ease as a low-bandwidth design.
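The pace Young describes can be sketched with a rough extrapolation. The three-year doubling period below is an illustrative assumption chosen to match the article's timeline, not a figure from Maxim:

```python
def projected_rate(base_rate_gbps, base_year, year, doubling_years=3):
    """Extrapolate a per-lane data rate forward from a known baseline,
    assuming it doubles every `doubling_years` years (illustrative)."""
    return base_rate_gbps * 2 ** ((year - base_year) / doubling_years)

# Starting from 25 Gbit/s in 2017:
for year in (2017, 2020, 2023):
    print(year, round(projected_rate(25, 2017, year)))
# 2017 25
# 2020 50
# 2023 100
```

With those assumptions, 50Gbit/s arrives at the end of the decade and 100Gbit/s in the early 2020s, consistent with the trajectory the article describes.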

One of the more critical applications for hyperscale optical infrastructure is the data center interconnect (DCI). This is the primary conduit for data exchanged between cloud providers and between providers and their enterprise clients. According to Dell’Oro Group, internet providers increased their DCI spending by 38 percent last year, topping $904 million. The firm says that by 2020, demand for 400Gbit/s links will be on the rise, due in large part to the accelerated refresh cycles that hyperscale providers have embarked upon. ADVA Optical Networking recently set a new benchmark for DCI bandwidth with the TeraFlex terminal that delivers 600Gbit/s over a single wavelength. In full duplex mode, that enables 3.6Tbit/s for a single rack unit, which allows organizations to double the density of current designs. 
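The density arithmetic behind those TeraFlex figures can be checked back-of-envelope. The carrier count is inferred from the quoted numbers, and the 1.8Tbit/s baseline for "current designs" is an assumption, not from ADVA's spec sheet:

```python
# Back-of-envelope math from the figures quoted in the article.
per_wavelength_gbps = 600   # single-wavelength rate (quoted)
rack_unit_tbps = 3.6        # aggregate per 1RU in full duplex (quoted)

# Implied number of 600G carrier-directions per rack unit:
carriers = rack_unit_tbps * 1000 / per_wavelength_gbps
print(carriers)  # 6.0

# "Double the density": vs. an assumed 1.8 Tbit/s-per-RU current design.
current_tbps = 1.8
print(rack_unit_tbps / current_tbps)  # 2.0
```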

Still, it doesn’t seem like hyperscalers’ insatiable demand for bandwidth is going to end any time soon. Already, providers like Google and Microsoft are starting to chafe at the limitations of existing solutions. Urs Hölzle, Google’s VP of technical infrastructure, told the recent Optical Fiber Communication conference in Los Angeles that with new offerings like serverless computing coming online, today’s technology will start to hit its practical capacity limits in as little as three years. Edge communications are particularly vulnerable to bottlenecks due to the cost and complexity of high-bandwidth modules like 100GbE. What’s needed, he says, are easily deployable modular solutions that enable flatter, more programmable network topologies.

Improved undersea communications are also a priority for Hölzle. At the moment, marine cables offer high capacity but are very expensive. A more effective approach would be multiple smaller cables that scale to both large and small workloads: instead of leasing three cables across the Pacific, Google could use 30, gaining greater network redundancy and the ability to divvy up international traffic over multiple links.
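The redundancy argument can be illustrated with simple availability arithmetic. The per-cable failure probability below is an invented figure for illustration, and failures are assumed independent:

```python
# Compare 3 large cables vs. 30 smaller ones of equal total capacity.
p = 0.01  # assumed, illustrative chance that any one cable is down

def capacity_lost_per_failure(n_cables):
    """Fraction of total capacity lost when a single cable fails."""
    return 1 / n_cables

def prob_total_outage(n_cables, p):
    """Probability every cable is down at once (independent failures)."""
    return p ** n_cables

for n in (3, 30):
    print(n, capacity_lost_per_failure(n), prob_total_outage(n, p))
```

With 3 cables, one failure removes a third of trans-Pacific capacity; with 30, the same failure removes about 3 percent, and the chance of a total outage becomes vanishingly small.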

Clearly, today’s fiber infrastructure will need a rapid upgrade in order to meet the challenging data environment of the next decade and beyond. With the hyperscale sector pushing for greater bandwidth and more flexibility, ordinary enterprises and cloud providers will need to up their fiber games as well. As with any other networking technology, optical fiber is only as effective as its weakest link.

And if the IT industry has learned anything over the past 50 years, it’s that if you give users more bandwidth they’ll simply find creative new ways to transmit more data.
