Building an Industrial-Scale 4G/5G Cloud Native Architecture and Platform

Over the past thirty years, the concept of a mobile core has evolved dramatically. From analog origins relying on circuit switching to the introduction of packet switching in the early 1990s, the first generation of mobile packet cores were vendor appliances with specialized hardware.  A great example of this is the Cisco ASR 5500, which tightly integrated hardware with software to deliver industry-leading reliability and performance.  Although the ASR 5500 performs admirably, the process of building, maintaining, and upgrading dedicated appliances is expensive, as each new generation requires new custom components such as data processing boards for higher performance.

Advances in off-the-shelf hardware and open-source software, such as 25/40G NICs, SR-IOV, DPDK, and VPP, have enabled the deployment of more cost-effective mobile packet cores that meet the performance demands of mobile network operators, and Cisco has led the industry in this area by developing the Cisco Ultra Packet Core for virtualized environments.  This network function virtualization (NFV) approach had a hardware cost advantage over traditional appliances but proved fragile due to the complex NFV deployment architectures required to deploy virtual network functions (VNFs).  As a result, NFV deployments often carry higher operational costs than traditional appliance-based models.

The transition to 5G presented an opportunity for the industry to leverage new technology developed to deploy applications across public and private clouds.  The 3GPP standards body encourages the use of cloud-native technologies and has emboldened the industry to focus on the decomposition of applications into composable microservices.  By embracing a cloud-native architecture, the industry is steering in a new direction, away from the unreliability and complexity issues that troubled the industry's initial attempt at the transition with virtualization.

Reliability, Operational Simplicity, and Scale

A Kubernetes-based cloud-native solution was the obvious choice for how we went about building our Converged Core. Embracing Kubernetes provides numerous benefits, such as rapid application development, new CI/CD delivery patterns, and better resiliency models.  While Kubernetes is helpful for managing the multitudes of containerized applications in this new cloud-native landscape, the pitfalls of reliability and complexity that plagued the early VNF deployments across the industry remained.  As promising as cloud-native software containers are, developing a converged core required marrying this new cloud-native approach with a comprehensive architecture, one that had yet to be defined.  When we began defining what a Converged Core architecture might look like, we wrestled with many choices:

Choice 1 – Bare Metal vs. Virtualized Deployments

In evaluating how we should deploy our new Converged Core, we considered the existing NFV architecture with Kubernetes embedded within the VNFs, or a bare metal deployment model.  Bare metal became the clear choice: it allowed us to simplify the solution and increase reliability by eliminating complex and failure-prone pieces of the previous NFV architecture.  Gone were the VNF manager, NFV orchestrator, VIM, hypervisor, and all the complexity and friction that came with those components.  What was left?  A hardened Linux OS running on top of UCS M5 hardware.

Choice 2 – The Cloud-Native Stack

The Cloud Native Computing Foundation (CNCF) landscape offers an abundance of options for building a platform stack, even providing a helpful map (https://landscape.cncf.io/) that engineers can use to visualize their choices when assembling a cloud-native stack.

Our priorities in developing a new architecture are rooted in simplicity and reliability, so we focused on adding only essential, mature CNCF components to the stack, such as Helm, containerd, etcd, and Calico.  Our guiding rule in development was to add only essential and mature features, aiming to maximize reliability and minimize complexity.  For example, to improve reliability the Converged Core uses only local storage volumes; as a result, we do not require any cloud-native storage add-ons.
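To make the local-storage point concrete, here is a minimal sketch (not Cisco's implementation) of the kind of node-local PersistentVolume a cluster can use instead of a networked or cloud storage add-on, written with the official Kubernetes Python client.  The node name, disk path, and sizes are hypothetical placeholders.

```python
# Illustrative only: a node-local PersistentVolume, the kind of volume that
# avoids any cloud-native storage add-on. Names and paths are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

local_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "cc-local-pv-0"},
    "spec": {
        "capacity": {"storage": "100Gi"},
        "accessModes": ["ReadWriteOnce"],
        "persistentVolumeReclaimPolicy": "Retain",
        "storageClassName": "local-storage",
        # Local volume: the data lives on a disk attached to a single node,
        # so no external storage driver or CSI add-on is required.
        "local": {"path": "/mnt/disks/ssd0"},
        # Local volumes must be pinned to the node that owns the disk.
        "nodeAffinity": {
            "required": {
                "nodeSelectorTerms": [{
                    "matchExpressions": [{
                        "key": "kubernetes.io/hostname",
                        "operator": "In",
                        "values": ["worker-node-0"],
                    }]
                }]
            }
        },
    },
}

client.CoreV1Api().create_persistent_volume(body=local_pv)
```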

Choice 3 – Managing Day-0 Installation and Day-N Upgrades

Managing day-0 installation and day-N upgrades of NFV architectures can be challenging, with multiple integration points into different orchestrators in the MANO stack, resulting in long integration times and a relatively fragile solution.  For the Cisco Converged Core team, a stable cloud-native stack was a critical component, as was automated lifecycle management for all layers, not just the application layer. Consequently, Cisco developed a cloud-native cluster management layer that ensures consistent software and tunings across all layers: BIOS settings, firmware, host OS, Kubernetes, and application versions.  This experience is so simple that upgrading the Cisco Converged Core has become a two-step operation: step one, select your new software version, and step two, commit it to the cluster.  To facilitate automation, the cluster management layer provides CLI, REST, and NETCONF interfaces.  Support for a wide range of interfaces enables seamless integration into a mobile service provider's existing automation solution, such as Cisco's Network Services Orchestrator (NSO).
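As a purely illustrative sketch of that two-step flow driven from an external automation system over REST: the base URL, endpoints, cluster name, version string, and credentials below are hypothetical placeholders, not Cisco's actual cluster-manager API.

```python
# Hypothetical example of the "select version, then commit" upgrade flow
# described above. Endpoints and payloads are placeholders, not a real API.
import requests

CLUSTER_MGR = "https://cluster-manager.example.net/api"  # hypothetical base URL
AUTH = ("admin", "example-password")                     # hypothetical credentials

# Step 1: select the target software version for the cluster.
requests.post(
    f"{CLUSTER_MGR}/clusters/core-site-01/software",
    json={"version": "2022.04.1"},
    auth=AUTH,
    timeout=30,
).raise_for_status()

# Step 2: commit, letting the management layer roll BIOS, firmware, host OS,
# Kubernetes, and application layers to the selected version.
requests.post(
    f"{CLUSTER_MGR}/clusters/core-site-01/commit",
    auth=AUTH,
    timeout=30,
).raise_for_status()
```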

Choice 4 – Managing Application Configuration

When developing a solution like the Cisco Converged Core, recognizing when to and when not to use new technology is important. Application configuration management is one of these tricky areas. Traditionally, mobile service providers have managed application configurations using NETCONF/REST or CLI.  With our new Converged Core, we could leverage existing SP interfaces or use cloud-native options like Kubernetes CRDs or ConfigMaps.  Our choice was the status quo, because maintaining a traditional management interface greatly simplifies integration into the mobile service provider's configuration automation solution.
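As a rough illustration of the status-quo approach, the sketch below pushes a configuration change over NETCONF using the open-source ncclient library.  The host, credentials, and XML payload are hypothetical and stand in for the product's published YANG models.

```python
# Illustrative only: driving a traditional NETCONF management interface from
# an automation system. Host, credentials, and payload are hypothetical.
from ncclient import manager

EDIT_CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <!-- hypothetical payload; a real deployment would use the product's
       published YANG models here -->
  <system xmlns="urn:example:converged-core">
    <profile><name>smf-profile-1</name></profile>
  </system>
</config>
"""

with manager.connect(
    host="converged-core.example.net",
    port=830,
    username="admin",
    password="example-password",
    hostkey_verify=False,
) as m:
    m.edit_config(target="running", config=EDIT_CONFIG)
```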

Putting it together

By focusing on simplicity, reliability, and scale, we have developed an architecture that allows service providers to manage hundreds of Kubernetes clusters across thousands of servers while serving millions of subscribers.


For More Information

To learn more about the Cisco Converged Core, visit our product pages. To learn more about T-Mobile and Cisco's launch of the World's Largest Cloud Native Converged Core Gateway, read the December 2022 press release.

