We’ve been working on bringing the cloud into the network since way back in 2012. What have we accomplished, and what have we learned? Let’s take a look.
In the beginning there was NFV
The journey to edge cloud started way back in 2012 with the ETSI network functions virtualization (NFV) white paper. In it, a group of operators described how they wanted to use the model of the cloud to drive a set of benefits into their networks. Specifically, they wanted to break the lock of single-vendor products in their networks. Instead, they wanted to leverage the model of openness, best-of-breed software, low-cost hardware and rapid innovation that we see in the cloud.
Great idea, but it took some time to get going. And we had to get past some preconceived notions.
One of these was the assumption that data center technologies were directly suitable for deployment in the communications network and at customer sites. While there was some early success with centralized NFV being used to transform large, centralized functions into virtualized implementations, there are big differences between the data center and the edge of the communications network.
The next step was to optimize NFV for lightweight edge deployments. Enter universal CPE (uCPE).
Next came universal CPE
With uCPE we moved beyond the basics of replacing an appliance with software running on a server. We started to address some other critical requirements, such as:
- The need to support a wide variety of hardware platforms and software virtual network functions (VNFs)
- Enabling simple deployment at scale by leveraging automation to minimize operational overhead
- Providing operational capabilities to monitor, troubleshoot and administer the system
- Optimizing the system to use standard cloud computing models and tools
- And leveraging the capabilities above to address new use cases – including some that weren’t previously possible
I believe we have now met those requirements and can support deployments of uCPE with our carrier and enterprise customers. And that’s great, but there’s more. Once you have an open and software-centric system located on a customer’s site, what else can you do with it?
Enter edge cloud
With the growth in uCPE deployments, we have enabled the deployment of generalized cloud computing infrastructure at the edge of the network. The next logical step for managed service providers is to leverage that infrastructure to offer IaaS and PaaS solutions to end-customers. Doing so will allow end users to run their own applications on the same infrastructure that is supporting communications and security applications. We will then have a variety of applications that are managed by different groups sharing the same physical infrastructure and hosting software. In other words, we have an edge cloud.
While the benefits of edge cloud are clear, it brings its own set of characteristics and manageability requirements that must be met to scale edge cloud deployments successfully. Here’s an overview:
Options for embedded micro-cloud – The edge cloud should act like a centralized cloud. It should support standard management models and the ability to scale out the deployment. We believe the best way to achieve this is by running a local copy of the OpenStack controller, creating an embedded micro-cloud. Doing so provides numerous benefits in terms of security, scalability and manageability.
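The key benefit of an embedded controller is that lifecycle operations are served on-site rather than over the WAN. A minimal sketch of that idea (the `EdgeSite` class and its method names are illustrative, not part of any product API):

```python
from dataclasses import dataclass, field

@dataclass
class EdgeSite:
    """Illustrative model of an edge site with its own embedded
    micro-cloud controller (e.g. a local OpenStack control plane)."""
    name: str
    wan_up: bool = True                      # link to central management
    local_ops: list = field(default_factory=list)

    def dispatch(self, operation: str) -> str:
        # With a local controller, lifecycle operations (boot, migrate,
        # snapshot) are handled on-site, so they still succeed when the
        # WAN link back to the central management system is down.
        self.local_ops.append(operation)
        return f"{self.name}: '{operation}' handled by local controller"

site = EdgeSite("branch-042", wan_up=False)
print(site.dispatch("boot firewall-vnf"))
```

The point of the sketch is the routing decision: control-plane actions do not depend on `wan_up`, which is what makes the micro-cloud model resilient at remote sites.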
Zero-touch support – Ability to turn up edge clouds without a truck roll, especially in a post-Covid world.
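Zero-touch turn-up is usually a "call home" flow: the factory-fresh device identifies itself, and an activation service maps it to a site configuration. A hypothetical sketch (the inventory contents, serial format and field names are assumptions for illustration):

```python
# Assumed activation inventory, keyed by device serial number.
ACTIVATION_DB = {
    "SN-1001": {"site": "branch-042", "image": "ucpe-os-7.2", "mgmt_vlan": 100},
}

def call_home(serial: str) -> dict:
    """Return the bootstrap configuration for a device, or raise if the
    device was never registered for zero-touch activation."""
    config = ACTIVATION_DB.get(serial)
    if config is None:
        raise KeyError(f"device {serial} not registered for activation")
    return config

print(call_home("SN-1001")["site"])  # → branch-042
```

Because the mapping lives centrally, shipping a box to a site and powering it on is the only on-site step, which is what eliminates the truck roll.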
Centralized administration – Ability to manage all aspects of the edge cloud infrastructure from a centralized administrative interface. This includes managing updates of the NFV platform, VNF and CNF images, service chain edits, disaster recovery, troubleshooting and more.
End user control – This centralized administration also enables the service provider to open up the management of the private edge workspace to the end user. This would typically take the form of a customer portal that ties into the centralized administration.
Local VM/VNF image management – The VNF/VM images required to turn up services in an edge cloud are typically huge, often multiple gigabytes in size. Edge clouds are deployed at remote sites, and connectivity to those locations is sometimes unreliable and limited in bandwidth. These two limitations create challenges with software management and demand specialized solutions. An edge cloud solution must provide the ability to be deployed with pre-installed images to ensure rapid turn-up even when bandwidth is limited.
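The trade-off behind pre-installed images can be made concrete with a little arithmetic. A rough sketch, where the function name, image names and bandwidth figures are all illustrative:

```python
def resolve_image(required: str, local_store: set,
                  size_gb: float, link_mbps: float) -> str:
    """Decide how to source a VNF image at a remote edge site.

    Prefers an image pre-installed in the local store; otherwise
    estimates the transfer time over the (possibly thin) WAN link.
    """
    if required in local_store:
        return "use local copy"
    # GB -> megabits, divided by link rate in Mbps, converted to hours.
    hours = (size_gb * 8 * 1024) / (link_mbps * 3600)
    return f"download required (~{hours:.1f} h at {link_mbps} Mbps)"

print(resolve_image("vfw-3.1.qcow2", {"vfw-3.1.qcow2"}, 4.0, 10))
print(resolve_image("vrouter-2.0.qcow2", {"vfw-3.1.qcow2"}, 4.0, 10))
```

Even a modest 4 GB image takes close to an hour over a 10 Mbps link, which is why shipping devices with images pre-staged matters for turn-up time.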
Disaster recovery – Ability to rebuild cloud and services from snapshots in case of hardware failures, and the ability to support resilient clusters of compute nodes. The resource constraints typically present in edge clouds pose challenges requiring optimized solutions.
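The core of snapshot-based rebuild is selecting the most recent snapshot that is actually consistent. A small illustrative sketch (snapshot fields and IDs are hypothetical):

```python
from datetime import datetime

def latest_snapshot(snapshots: list) -> dict:
    """Pick the most recent consistent snapshot to rebuild from."""
    usable = [s for s in snapshots if s["consistent"]]
    if not usable:
        raise RuntimeError("no usable snapshot; full re-provision needed")
    return max(usable, key=lambda s: s["taken"])

snaps = [
    {"id": "snap-a", "taken": datetime(2021, 3, 1), "consistent": True},
    {"id": "snap-b", "taken": datetime(2021, 3, 8), "consistent": False},
    {"id": "snap-c", "taken": datetime(2021, 3, 5), "consistent": True},
]
print(latest_snapshot(snaps)["id"])  # → snap-c
```

Note that the newest snapshot is skipped because it failed its consistency check; on resource-constrained edge nodes, keeping even a short history of verified snapshots is what makes this selection possible.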
Security requirements – The edge cloud management architecture should use a connectivity model that reduces the attack surface of the solution. It should also support two-factor authentication, blacklisting and encrypted communication channels.
Dynamic service chains – The capability to dynamically alter a running service chain, also referred to as service edit. Unlike simple Heat-based models that disruptively rewrite the entire configuration, service edit enables insertion of a single VM to support a new customer application.
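The distinction between a full rewrite and a service edit comes down to the delta that has to be provisioned. A minimal sketch of the idea, with hypothetical VNF names and no claim about any particular orchestrator's API:

```python
def service_edit(chain: list, vnf: str, position: int):
    """Insert one VNF into an existing service chain.

    Returns the new chain plus the delta that actually needs to be
    provisioned: only the inserted VNF, not the whole chain. That delta
    is the point of service edit versus a full Heat-style stack rewrite,
    which would tear down and recreate every element.
    """
    new_chain = chain[:position] + [vnf] + chain[position:]
    return new_chain, [vnf]

chain = ["sd-wan", "firewall"]
new_chain, delta = service_edit(chain, "wan-optimizer", 1)
print(new_chain)  # → ['sd-wan', 'wan-optimizer', 'firewall']
print(delta)      # → ['wan-optimizer']
```

Because the existing VNFs in the chain are untouched, traffic through them is not disrupted while the new element is brought up.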
The result? Edge cloud!
With all that we’ve learned, we can now realize the dream that started in 2012. We can deploy standard servers on a customer site and use them to deploy managed connectivity and security services. We can layer on the ability to run local workloads administered by the end user. And we can do it all under standard management and control models to ensure availability and accountability. This is not a forward-looking dream; it’s real. And we’re now ready to use the edge cloud to unleash the next wave of innovation at the edge of the network.