Data traffic on optical transport links jumped over the past month, driven in part by the global rise in teleworking and, of course, by increased consumption of streaming video. While this is expected to ease somewhat as the pandemic recedes, it won’t happen overnight, and most experts expect a permanent increase in the number of people working from home.
Upgrading network infrastructure is difficult, however, even in this age of virtualization. Adding raw bandwidth and throughput requires new hardware, which means integration, setup, performance testing, calibration and more. So it always makes sense to deploy the most robust technology with the longest shelf life.
Onward and upward
Currently, network providers are preparing their OTNs for the upgrade to 400Gbit/s, with expansion to 800Gbit/s once that technology becomes more readily available. Market analyst Premium Market Insights projects that the OTN market will nearly triple to $33.44 billion by 2025.
ADVA, however, recommends an expansion window up to 1200Gbit/s. Using the FSP 3000 TeraFlex™ terminal, which is available now, network providers can deliver 1200Gbit/s per channel in as little as 150GHz of spectrum. In this way, providers gain a 50% raw capacity boost over 800Gbit/s while at the same time benefitting from software-defined fractional QAM modulation and adaptive baud rates that enable deployment of a range of channel and client configurations. As well, the platform’s dual-core coherent engine improves spectrum utilization over planned single-core 800Gbit/s solutions, helping to ensure that none of that bandwidth goes to waste.
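As a rough sketch of how those software-defined knobs interact: for a dual-polarisation coherent carrier, the line rate is approximately 2 × baud rate × bits per symbol (per core), before FEC and framing overhead. The specific baud rates and modulation orders below are illustrative assumptions, not ADVA’s published TeraFlex™ specifications:

```python
def line_rate_gbps(baud_gbd: float, bits_per_symbol: float, cores: int = 1) -> float:
    """Approximate dual-polarisation coherent line rate in Gbit/s.

    2 polarisations x baud rate (Gbaud) x bits per symbol x number of cores,
    ignoring FEC and framing overhead.
    """
    return 2 * baud_gbd * bits_per_symbol * cores

# Illustrative settings (assumed for the example, not vendor specs):
print(line_rate_gbps(75, 4, cores=2))  # dual-core, 16QAM-class: 1200.0 Gbit/s
print(line_rate_gbps(70, 3.5))         # fractional QAM, 3.5 bit/symbol: 490.0 Gbit/s
```

Fractional QAM simply means `bits_per_symbol` need not be an integer, which is what lets a platform trade reach against capacity in fine steps rather than jumping between fixed modulation formats.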
In total, every TeraFlex™ terminal provides up to six channels, which gives it a maximum capacity of 7.2Tbit/s – not bad for a 1RU device.
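The headline numbers in the two paragraphs above can be checked with simple capacity arithmetic (the figures are taken from the text; the calculation itself is standard):

```python
CHANNEL_RATE_GBPS = 1200   # per-channel line rate quoted above (Gbit/s)
CHANNEL_WIDTH_GHZ = 150    # spectrum occupied per channel quoted above (GHz)
CHANNELS = 6               # channels per TeraFlex terminal

# Spectral efficiency: bits per second carried per hertz of spectrum.
spectral_efficiency = CHANNEL_RATE_GBPS / CHANNEL_WIDTH_GHZ  # 8.0 bit/s/Hz

# Raw capacity gain of a 1200G channel over an 800G channel.
gain_over_800g = (CHANNEL_RATE_GBPS - 800) / 800             # 0.5, i.e. 50%

# Total terminal capacity in Tbit/s.
terminal_tbps = CHANNELS * CHANNEL_RATE_GBPS / 1000          # 7.2 Tbit/s

print(spectral_efficiency, gain_over_800g, terminal_tbps)
```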
Why is this important? As mentioned above, teleworking has already produced a spike in traffic, particularly as video conferencing and the sharing of high-resolution images and graphics become more prevalent. At the same time, 5G and the IoT are on the cusp of mainstream acceptance, and they will still push a lot of data into core network infrastructure even though much of the day-to-day workload will remain on distributed processing and storage resources at the edge.
Prepping for the future
Second, it’s always best to build in the most extreme scale as early and as quickly as you can. The fact is that no one knows what OTN data loads will look like next year, let alone by mid-decade or beyond, given the uncertainty we find ourselves in at the moment. By building a higher bandwidth envelope now, providers will be better able to manage costs over the long term. With margins for data services likely to become even tighter, those who can delay the next hardware upgrade will be in a far better position to meet future needs than those who cannot.
And finally, better scalability allows the provider to offer more in the way of network segmentation and virtualization, which makes it easier to deliver the consumption models users need. Everyone is going to want more bandwidth at cheaper rates going forward, so the more scale a provider can deliver, the more options it has to customize network services in a wide variety of ways.
As a general rule, of course, it’s always smartest to deploy the latest, most advanced technology, particularly in such a competitive environment as networking. The name of the game, after all, is helping clients reach their maximum potential. The only way to do that is by pushing infrastructure to its maximum potential as well.