Wide area networks are in the midst of a dramatic transformation. Whereas once they were used almost exclusively for bulk data transfers between centralized and remote processing centers, they are quickly converting to fabric-style architectures supporting live data connections between the data center, the cloud and the IoT edge.
This shift is altering the way long-haul networks are monitored and managed. Live production data is far less predictable than bulk data, which means wide area connections must adapt to the bursty, small-packet, two-way delivery that has traditionally characterized local infrastructure. At the same time, network assurance is rising in importance on the wide area because many of the applications it supports are increasingly intolerant of interrupted or even diminished connectivity. Whether the application is autonomous or semi-autonomous vehicles, connected healthcare devices or simple utility monitoring, a lost connection can quickly become a life-or-death situation.
From reactive to proactive
Much of today’s wide area infrastructure consists of optical fiber, of course, which means the owners of that fiber, and even its lessees, are under the gun to replace slow, manual processes like fault isolation and reactive troubleshooting with proactive, automated and increasingly intelligent monitoring solutions. The ultimate goal is to prevent bottlenecks and other performance-degrading conditions before they impact the user experience, but the transition also benefits fiber users in a number of other ways.
For one thing, maintenance and operational costs tend to fall dramatically. A more proactive assurance solution limits the number of expensive truck rolls that fiber companies typically make to maintain high levels of service. By analyzing multiple traffic and performance metrics, and coupling that analysis with remote automation of virtualized network architectures, providers can make fiber links more reliable without laying hands on physical infrastructure. And even when a manual repair is required, it can usually be scheduled during off-peak periods or otherwise conducted with minimal service disruption. In the end, this lessens the burden on the operational budget, which in turn lowers user costs even as it boosts profitability for the provider.
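To make the idea of metric-driven proactive assurance concrete, here is a minimal sketch of the kind of check such a system might run: flag a fiber link for attention when a reading drifts well outside its recent baseline, rather than waiting for a hard fault. The function name, the optical-power example and the three-sigma threshold are all illustrative assumptions, not part of any specific monitoring product.

```python
# Illustrative sketch: flag a link when a metric deviates sharply
# from its rolling baseline. Names and thresholds are assumptions.
from statistics import mean, stdev

def needs_attention(history, current, sigmas=3.0):
    """Return True if `current` deviates more than `sigmas` standard
    deviations from the baseline formed by recent `history` readings."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return current != baseline
    return abs(current - baseline) > sigmas * spread

# Example: received optical power readings (dBm) on one link.
# A sudden drop suggests degradation worth investigating before
# users notice any impact.
readings = [-3.1, -3.0, -3.2, -3.1, -3.0, -3.1]
print(needs_attention(readings, -3.1))  # stable reading -> False
print(needs_attention(readings, -9.5))  # sharp drop -> True
```

A production system would of course track many metrics per link and feed alerts into automated remediation, but the core pattern, baseline plus tolerance band, is the same.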
Improved fiber monitoring also enhances the ability to scale network capacity and even tailor it to specific workloads, which in turn lowers capital expenditures by streamlining the addition of new equipment and the activation of new links. This will likely prove crucial in the coming years as new IoT and 5G services come online. While much of that data will reside on the edge, enough of it will make its way to centralized processing facilities to place even greater loads on wide area infrastructure. Through improved monitoring, organizations will be able to see exactly where their networks need to be upgraded and identify the most cost-effective, non-disruptive solutions.
And finally, there are the myriad ways in which improved monitoring helps reduce the errors and poor judgement calls that often lead to substantial service disruptions. In most cases, these events occur because management technicians lack the visibility and insight into disparate network architectures needed to make sound decisions. As POST Luxembourg found upon deploying the ADVA ALM fiber monitoring solution, proper monitoring is key to providing the 24/7 assurance that today’s digital users require, even those leveraging dark fiber. As an added bonus, the ALM solution uses standard, open interfaces to streamline integration into existing operational support systems – no need to install an entirely new management stack to gain next-generation monitoring capabilities.
The complexity of network architectures and the data volumes they support have already hit levels at which legacy management and monitoring systems are struggling to keep up. Going forward, more of this burden will fall on automated, autonomous solutions while human technicians focus on broader, more strategic concerns. But automation, particularly intelligent automation, is only as good as the data it receives, which is why today’s fiber service provider needs to start thinking seriously about ways to improve data collection across a wide range of operating parameters.
And the best time to do this is before data loads spike, not after.