With Distributed Cloud Edge, Google finally gets the right edge computing strategy

Announced at the Google Cloud Next ’21 conference, Google Distributed Cloud (GDC) plays a critical role in the success of Anthos by making it relevant to telecom operators and enterprise customers. Google Distributed Cloud Edge, part of GDC, aims to make Anthos the foundation for running 5G infrastructure and modern workloads like AI and analytics.

Recently, Google announced the general availability of GDC Edge and shared details on the hardware configurations and requirements.

At launch, GDC Edge comes in two form factors – a rack-based configuration and the GDC Edge appliance. Let’s take a closer look at each.

Rack-based configuration for GDC Edge

This configuration is aimed at telecom operators and communications service providers (CSPs) for 5G core and radio access network (RAN) operations. CSPs can offer the same infrastructure to their end customers to run workloads that require ultra-low latency, such as AI inference.

The locations where the rack-based hardware runs are referred to as Distributed Cloud Edge Zones. Each zone runs on dedicated hardware provided, deployed, operated and maintained by Google. The hardware consists of six servers and two top-of-rack (ToR) switches that connect the servers to the local network. In terms of storage, each physical server comes with 4 TiB of hard drive capacity. The gross weight of a typical rack is 900 lbs (408 kg). The Distributed Cloud Edge Rack comes pre-configured with the hardware, network, and Google Cloud settings specified at the time of order.

Once a Distributed Cloud Edge (DCE) zone is fully configured, customers can group one or more servers from the rack into a NodePool. Each node in the NodePool acts as a Kubernetes worker node connected to the Kubernetes control plane running in the closest Google Cloud region.
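
For illustration, the sketch below shows how a cluster and NodePool might be created with the gcloud CLI once a zone is provisioned. The command group, names, and flags here are assumptions for illustration and may differ from the exact syntax in Google’s documentation.

    # Create a cluster whose control plane runs in the nearest Google Cloud region,
    # then group two of the rack's servers into a node pool (names and flags are illustrative).
    gcloud edge-cloud container clusters create edge-cluster-1 \
        --location=us-central1
    gcloud edge-cloud container node-pools create pool-1 \
        --cluster=edge-cluster-1 \
        --location=us-central1 \
        --node-location=us-central1-edge-den1 \
        --node-count=2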

This distributed topology gives Google the flexibility to update, patch, and manage the Kubernetes infrastructure with minimal disruption to customers’ workloads. This allows DCE to benefit from a secure and highly available control plane without consuming the processing capacity on the nodes.

Google took a unique approach to edge computing by moving the worker nodes to the edge while keeping the control plane in the cloud. This is very similar to how Google manages GKE, except that the worker nodes are part of a NodePool deployed at the edge.

The clusters running on DCE can be connected to Anthos for better control over deployments and configuration.
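
Once connected, configuration can be managed declaratively, for example by pointing Anthos Config Management at a Git repository. The snippet below is a minimal sketch with placeholder values; the exact resource fields should be verified against the Anthos documentation.

    # Minimal Anthos Config Management resource syncing the cluster from a Git repo (placeholder values).
    kubectl apply -f - <<EOF
    apiVersion: configmanagement.gke.io/v1
    kind: ConfigManagement
    metadata:
      name: config-management
    spec:
      sourceFormat: unstructured
      git:
        syncRepo: https://github.com/example-org/edge-cluster-config
        syncBranch: main
        secretType: none
    EOF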

A secure VPN tunnel connects the on-premises Distributed Cloud Edge infrastructure to a virtual private cloud (VPC) configured in Google Cloud. Workloads running at the edge can access Google Compute Engine resources deployed in the same VPC.
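
As a quick sanity check of that path, one could run a throwaway pod on the edge cluster and reach a Compute Engine VM over its internal IP through the tunnel; the address and port below are placeholders.

    # Launch a temporary pod at the edge and curl a Compute Engine VM's internal IP in the VPC (placeholder values).
    kubectl run vpc-check --rm -it --restart=Never \
        --image=curlimages/curl --command -- \
        curl -s http://10.128.0.10:8080/healthz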

The rack-based configuration requires connectivity to Google Cloud at all times. Because it runs in a controlled environment in a CSP facility, meeting this requirement is not a challenge.

Once the clusters are deployed on the DCE infrastructure, they can be treated like other Kubernetes clusters. It is also possible to deploy and run virtual machines based on KubeVirt within the same environment.
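
A KubeVirt-based VM is declared like any other Kubernetes object. The sketch below assumes the KubeVirt CRDs are already available on the cluster and uses a public container disk image as a placeholder.

    # Declare a small virtual machine on the edge cluster via KubeVirt (sizing and image are illustrative).
    kubectl apply -f - <<EOF
    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: edge-vm
    spec:
      running: true
      template:
        spec:
          domain:
            cpu:
              cores: 2
            resources:
              requests:
                memory: 4Gi
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
          volumes:
            - name: rootdisk
              containerDisk:
                image: quay.io/containerdisks/ubuntu:22.04
    EOF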

CSPs from the United States, Canada, France, Germany, Italy, the Netherlands, Spain, Finland and the United Kingdom can order rack-based infrastructure from Google.

GDC Edge Appliance

The GDC Edge Appliance is a Google Cloud-managed, secure, high-performance appliance for edge locations. It offers local storage, ML inference, data transformation and export capabilities.

According to Google, GDC Edge Appliances are ideal for use cases where bandwidth and latency limitations prevent organizations from processing the data from devices such as cameras and sensors in cloud data centers. These appliances simplify data collection, analysis and processing in remote locations where large amounts of data originating from these devices need to be processed quickly and stored securely.

The Edge Appliance is aimed at companies in the manufacturing, supply chain, healthcare and automation industries with low latency and high throughput requirements.

Each appliance features a 16-core CPU, 64 GB of RAM, an NVIDIA T4 GPU, and 3.6 TB of usable storage. It has a pair of 10 Gigabit and 1 Gigabit Ethernet ports. With the 1U rackmount form factor, it supports both horizontal and vertical orientation.
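
On the Kubernetes side, inference workloads can claim the T4 through the standard NVIDIA device plugin resource. A minimal pod spec might look like the sketch below; the serving image is a placeholder.

    # Request the appliance's T4 GPU for an inference container (image is illustrative).
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: t4-inference
    spec:
      containers:
        - name: triton
          image: nvcr.io/nvidia/tritonserver:23.10-py3
          resources:
            limits:
              nvidia.com/gpu: 1
    EOF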

The Edge Appliance is essentially a storage transfer device that can also run a Kubernetes cluster and AI inference workloads. With ample storage capacity, customers can use it as a cloud storage gateway.
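
For example, data landing on the appliance’s local disks could be periodically synced to a Cloud Storage bucket whenever connectivity allows; the local path and bucket below are placeholders.

    # Mirror locally collected data to a Cloud Storage bucket (placeholder paths).
    gsutil -m rsync -r /mnt/edge-data gs://example-edge-ingest/site-01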

In practical terms, the Edge Appliance is a managed device running Anthos clusters on bare metal. Customers follow the same workflow used to install and configure Anthos clusters in bare metal environments.
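
That workflow typically revolves around the bmctl tool, which generates a cluster configuration file that is then edited and applied. The commands below are a rough sketch; flags and the workspace layout should be verified against the Anthos clusters on bare metal documentation.

    # Generate a cluster config, describe the appliance's nodes in it, then create the cluster (illustrative names).
    bmctl create config -c edge-appliance-cluster --project-id=my-project
    # ... edit bmctl-workspace/edge-appliance-cluster/edge-appliance-cluster.yaml ...
    bmctl create cluster -c edge-appliance-cluster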

Unlike the rack-based configuration, the clusters run both the control plane and worker nodes locally on the appliance. However, they are registered with the Anthos management plane running in the closest Google Cloud region. This configuration allows the Edge Appliance to keep operating in disconnected environments with only intermittent connectivity to the cloud.
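
Registration with the fleet is a one-time step, after which the appliance only needs to reconnect periodically. A hedged sketch using the fleet membership command is shown below; the cluster name, kubeconfig path, and service account key are placeholders.

    # Register the appliance's local cluster with the fleet (placeholder values).
    gcloud container fleet memberships register edge-appliance-cluster \
        --kubeconfig=/path/to/kubeconfig \
        --context=edge-appliance-cluster-admin \
        --service-account-key-file=/path/to/connect-sa-key.json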

Analysis and takeaways

With Anthos and GDC, Google defined a comprehensive multicloud, hybrid and edge computing strategy. GDC Edge targets CSPs and enterprises with purpose-built hardware offerings.

Telecom operators need a reliable and modern platform to operate their 5G infrastructure. Google is positioning Anthos as a cloud-native, reliable platform for running the containerized network functions (CNFs) required for 5G core and radio access networks (RAN). By providing a combination of managed hardware (rack-based GDC Edge) and a software stack (Anthos), Google aims to enable CSPs to offer 5G Multi-Access Edge Computing (MEC) to enterprises. It has partnered with AT&T, Reliance Jio, TELUS, Indosat Ooredoo and, more recently, Bell Canada and Verizon to run 5G infrastructure.

Google’s approach to deploying 5G MEC differs from Amazon’s and Microsoft’s. Both AWS and Azure have 5G-based edge zones that serve as extensions of their data centers. AWS Wavelength and Azure Private MEC enable customers to run workloads at the closest edge location managed by a CSP. Both Amazon and Microsoft are working with telcos like AT&T, Verizon, and Vodafone to offer hyperlocal edge zones.

Google relies heavily on Anthos as the fabric to power 5G MEC. The company is working with leading telcos worldwide to help them build 5G infrastructure on top of its proven Anthos-powered cloud-native infrastructure. Although Google may have a competing offering for AWS Wavelength and Azure Private MEC in the future, its current strategy is to push GDC Edge as the preferred 5G MEC platform.

Google has finally responded to Azure Stack HCI and AWS Outposts with the GDC Edge Appliance. It targets organizations that need a modern, cloud-native platform to run data-driven, compute-intensive workloads at the edge. Unlike the rack-based configuration, the Edge Appliance can be deployed at remote sites with intermittent connectivity.

With Anthos as the cornerstone, Google’s distributed cloud strategy looks promising. It aims to gain both the enterprise advantage and the telecom advantage with purpose-built hardware offerings. Google finally has a viable competitor for AWS Wavelength, AWS Outposts, Azure Edge Zones, and Azure Stack.
