Moving from hyperscale to hyper-localized
Hyperscale data centers are the bread-and-butter approach for today’s cloud computing applications. They provide scalable, on-demand computing. So why are edge computing and the edge cloud getting so much attention? Because they’re the only way to meet the new requirements for hyper-localization.
Moving everything to the cloud has become the mantra of enterprises. Cloud computing today is built on hyperscale data centers. And the centralized cloud has undeniable benefits. The biggest is scalable and on-demand compute resources, without the need to build infrastructure. And this benefit is reflected in the continued rapid growth of cloud suppliers like AWS, Azure and Google. But cloud computing using hyperscale data centers isn’t a panacea. There are tradeoffs to consider.
I recently attended the OSA 5G Summit webinar with Jason Hoffman from MobiledgeX. He described the current move from hyperscale to hyper-localized, using 5G infrastructure as his example: it must sit very close to the radio to meet latency requirements.
And we’re now seeing requirements across a broad set of applications that drive hyper-localization. Those requirements include security, low latency, standalone resilience and data sovereignty.
Hyperscale versus hyper-localized
Analysys Mason had a recent webinar titled “From cloud-native to edge-native computing: defining the cloud platform for new use cases.” It included the diagram below that compares edge cloud and centralized cloud.
The diagram compares hyperscale (“cloud-native infrastructure”) on the left with hyper-localized (“edge-native infrastructure”) on the right.
- Computing: The traditional hyperscale cloud is built on centralized, pooled resources, which enables effectively unlimited scalability. In contrast, compute at the edge has limited scalability and may require additional equipment to grow an application. But the initial cost at the edge is low and grows linearly with demand. That compares favorably with a hyperscale data center, whose initial cost may run to tens of millions of dollars.
- Location sensitivity and latency: Users of a hyperscale data center assume their workloads can run anywhere, and latency is not a major consideration. Hyper-localized applications, in contrast, are tied to a particular location. That might be due to data-sovereignty laws and regulations that require information not leave the premises or the country, or to latency restrictions, as with 5G infrastructure. Either way, shipping data to a remote hyperscale data center is not acceptable.
- Hardware: Modern hyperscale data centers are filled with row after row of server racks – all identical. That ensures good prices from bulk purchases, as well as minimal inventory requirements for replacements. The hyper-localized model is more complicated. Each location must be right-sized, and supply-chain considerations come into play for international deployments. There also may be a menagerie of devices to manage.
- Connectivity: Efficient use of hyperscale data centers depends on reliable, high-bandwidth connectivity. Some applications don’t have that, or must keep operating when connectivity is lost. An interesting example is data processing in space, where connectivity is slow and intermittent.
- Cloud stack: Both hyperscale and hyper-localized deployments can host VMs and containers. In addition, hyper-localized edge clouds can host serverless applications, which are ideal for small workloads.
- Security: Hyperscale data centers use a traditional perimeter-based security model: once you are in, you are in. Hyper-localized deployments can provide a zero-trust model. Each site is secured as in the hyperscale model, but each application can also be secured based on specific users and credentials.
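The placement criteria in the list above, latency bounds and data sovereignty in particular, can be sketched as a simple selection function. This is a minimal illustration, not any provider’s API; the `Site` fields and site names are hypothetical.

```python
# Hypothetical workload-placement helper: pick a site that satisfies
# a latency bound and/or a data-sovereignty (region) constraint.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Site:
    name: str
    region: str        # where the data physically stays
    latency_ms: float  # assumed round trip from the workload's users
    is_edge: bool

def place(sites, max_latency_ms=None, required_region=None) -> Optional[Site]:
    """Return the first site meeting all constraints, or None."""
    for site in sites:
        if max_latency_ms is not None and site.latency_ms > max_latency_ms:
            continue
        if required_region is not None and site.region != required_region:
            continue
        return site
    return None

sites = [
    Site("hyperscale-eu", region="eu", latency_ms=40.0, is_edge=False),
    Site("edge-berlin", region="de", latency_ms=3.0, is_edge=True),
]

# A 5G-style workload with a tight latency bound lands on the edge site;
# a workload bound only by an EU sovereignty rule can stay hyperscale.
print(place(sites, max_latency_ms=10).name)
print(place(sites, required_region="eu").name)
```

The same shape of check applies to the other criteria: resilience and connectivity constraints just become more fields on `Site` and more filters in `place`.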
You don’t have to choose upfront
So, which do you pick? Hyperscale or hyper-localized?
The good news is that you can use both as needed, if you make some good design choices.
- Cloud-native: Design for cloud-native portability. That means using technologies such as containers and a microservices architecture.
- Cloud provider supported edge clouds: Hyperscale cloud providers now support local deployments. These tools enable users to move workloads between sites based on the criteria discussed above. Examples include IBM Cloud Satellite, AWS Outposts, Google Anthos, Azure Stack and Azure Arc.
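Cloud-native portability in practice means keeping the workload itself location-agnostic: the same container image runs in a hyperscale region or on an edge site, with all site-specific details injected from the environment. Here’s a minimal sketch in that twelve-factor style; the `SITE_NAME` and `PORT` variable names are assumptions for illustration.

```python
# A location-agnostic service: nothing in the code knows whether it is
# running in a hyperscale region or on an edge site. Site-specific
# configuration arrives via environment variables (hypothetical names).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

SITE = os.environ.get("SITE_NAME", "unknown-site")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report which site served the request, which is handy when the
        # same image is deployed across hyperscale and edge locations.
        body = f"served from {SITE}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(port: int) -> HTTPServer:
    """Bind the service; port 0 asks the OS for any free port."""
    return HTTPServer(("127.0.0.1", port), Handler)

# In a real deployment:
#   make_server(int(os.environ["PORT"])).serve_forever()
```

Because the image carries no baked-in location, moving the workload between a centralized region and an edge site is a scheduling decision, not a code change.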
The hyper-localized edge isn’t the hyperscale data center
And the operational requirements are very different. Automated deployment and remote operation are essential. And a truly integrated solution should support both virtualized network functions and arbitrary user workloads.
That’s exactly what we created at ADVA with our Ensemble software suite. Ensemble includes our Connector network operating system for networking and hosting applications. And there’s our Ensemble management and orchestration (MANO) software. It provides the same pay-as-you-go model that you see in hyperscale clouds.
With Ensemble, you can support hyper-localized applications at customer sites or cell sites. And do so with a multi-vendor deployment. That means using standard servers and best-of-breed networking software.
Whether you work at a web-scale cloud provider, a traditional telco or an enterprise, you have applications that require hyper-localization. ADVA has the tools to help you deploy the edge compute to support them.