When you search online, use email, watch a video, or click on a recommended link, you may be using hyperscaler networks that support everything from cloud-hosted applications to neural networks for AI/ML. Applications are producing and ingesting data at tremendous rates, which means data centers are handling massive traffic loads. According to the International Energy Agency (IEA), for every bit of data that travels the network from a data center to end users, another five bits of data are transmitted within and among data centers ("Data Centres and Data Transmission Networks", November 2021). The IEA estimates that data centers use about 1% of all global electricity, with energy use growing 10%-30% per year in recent years (IEA 2022).
Challenges
To handle these massive demands, cloud providers add more servers with higher capacities, resulting in more data being pushed into the network, both inside and outside of the data center. Without properly scaled infrastructure, the network becomes a bottleneck, and that's when users post about their sub-par experiences.
Given the current environmental and geopolitical concerns, energy efficiency and achieving net zero carbon emissions are increasingly becoming top priorities for cloud providers. But as data centers need to scale and support more bandwidth-hungry applications, the question is: how much power, space, and cooling are needed while going green?
Throwing bandwidth at the problem might seem like an easy fix until the tradeoffs appear. Increasing capacity means more equipment, power, space, and cooling to avoid potential overheating, or the risk of running out of rack space. For example, scaling to over 25 Tbps of capacity in a leaf/spine network using 32x400G switches at 1 RU each would require six switches. That's roughly 3000 watts consuming 6 RU of space, not to mention the 36 fans needed for cooling.
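To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The per-switch power and fan counts are assumptions implied by the totals above, and the 4-leaf/2-spine layout is just one plausible topology consistent with the six-switch figure, not a stated design:

```python
# Back-of-the-envelope totals for ~25.6 Tbps of server-facing capacity built
# from 12.8T-class 32x400G switches. The per-switch power and fan figures are
# assumptions implied by the totals quoted above, and the 4-leaf + 2-spine
# layout is one plausible non-blocking two-tier topology, not a stated design.
leaves, spines = 4, 2
switch_power_w = 500   # assumed draw per 12.8T switch
switch_fans = 6        # assumed fans per switch
switch_ru = 1          # 1 RU per switch

# In a non-blocking fabric, each leaf dedicates half of its 32x400G ports to servers.
server_facing_tbps = leaves * 16 * 400 / 1000

switches = leaves + spines
print(f"Server-facing capacity: {server_facing_tbps} Tbps")  # 25.6 Tbps
print(f"Switches: {switches}")                               # 6
print(f"Power:    {switches * switch_power_w} W")            # 3000 W
print(f"Space:    {switches * switch_ru} RU")                # 6 RU
print(f"Fans:     {switches * switch_fans}")                 # 36
```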
Solution
But if we could build massive capacity in a small footprint, we could tip the cost and performance scales back in favor of the providers and help the environment. What might sound nearly impossible is now available and shipping with the newest member of the Cisco 8100 Series, the Cisco 8111-32EH, which delivers 25.6T of capacity in a compact 1 RU form factor (see press release). With ultra-fast QSFP-DD800 ports driven by a Silicon One G100 25.6T ASIC, the Cisco 8111 can support 64x400G ports in that same 1 RU form factor at roughly 700W.
That's up to a 77% reduction in power and an 83% reduction in space and number of fans compared to achieving the equivalent capacity with multiple 12.8T ASIC switches, based on internal lab studies¹ (see Figure 1).
Not only can cloud providers benefit from major operational cost savings and lower power bills, but this reduction also translates into significant carbon savings, with ~9000 kg CO2e/year in Greenhouse Gas (GHG) reduction (based on internal estimates). And the power savings could be used to add more revenue-generating servers that help cloud providers grow their business.
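As a rough illustration of where those percentages and the CO2e figure could come from, here is a hedged sketch. The baseline comes from the six-switch example above, and the grid emission factor is an assumed average rather than a Cisco figure:

```python
# Rough illustration of the savings math. The multi-switch baseline comes
# from the example above; the grid emission factor is an assumed average
# and will vary widely by region and energy mix.
baseline_power_w = 3000        # six 12.8T switches (example above)
baseline_space_ru = 6
new_power_w = 700              # Cisco 8111-32EH, ~700 W in 1 RU
new_space_ru = 1

power_reduction = 1 - new_power_w / baseline_power_w      # ~0.77
space_reduction = 1 - new_space_ru / baseline_space_ru    # ~0.83

# Annual energy saved, then a rough CO2e conversion.
hours_per_year = 24 * 365
kwh_saved = (baseline_power_w - new_power_w) * hours_per_year / 1000
grid_kg_co2e_per_kwh = 0.45    # assumed grid average, varies by region

print(f"Power reduction: {power_reduction:.0%}")      # ~77%
print(f"Space reduction: {space_reduction:.0%}")      # ~83%
print(f"Energy saved:    {kwh_saved:,.0f} kWh/year")  # ~20,148 kWh/year
print(f"CO2e avoided:    {kwh_saved * grid_kg_co2e_per_kwh:,.0f} kg/year")  # ~9,000 kg
```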

The massive energy savings are the result of our extensive investments, such as Cisco Silicon One. For example, using cutting-edge 7nm technology helps improve power efficiency, while 256x112G SerDes help deliver 25.6T in a single chip for major power/space/cooling reductions.
With high-density QSFP-DD800 modules, we're introducing new 2x400G-FR4 and 8x100G-FR optics that enable high-density breakouts to 400G and 100G interfaces, supporting 64x400G ports or 256x100G ports in just 1 RU.
These modules will enable higher radix for next-generation network designs and double the bandwidth density of the platform footprint, with efficient connectivity over copper, single-mode, and multi-mode fiber. The QSFP-DD800 form factor can also support next-generation pluggable coherent modules, which may require higher power dissipation, while still providing the needed efficiency and cooling capabilities.
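For reference, the port math behind those breakout figures, assuming the 32 QSFP-DD800 cages of the 1 RU system described above:

```python
# Port-count arithmetic for a 1 RU system with 32 QSFP-DD800 cages,
# each carrying 800G that can be broken out with the new optics.
qsfp_dd800_ports = 32
port_speed_gbps = 800

total_tbps = qsfp_dd800_ports * port_speed_gbps / 1000
ports_400g = qsfp_dd800_ports * 2    # 2x400G-FR4 breakout
ports_100g = qsfp_dd800_ports * 8    # 8x100G-FR breakout

print(f"Total capacity: {total_tbps} Tbps")   # 25.6 Tbps
print(f"400G ports:     {ports_400g}")        # 64
print(f"100G ports:     {ports_100g}")        # 256
```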
By delivering ground-breaking innovations, such as these for public cloud data centers, with much higher densities in compact form factors, we can help customers drastically reduce operational costs. Essentially, we're redefining the economics of cloud networking through cost-effective scale.
Innovations with the Cisco 8000 Series
The Cisco 8000 portfolio is used for mass-scale infrastructure solutions that deliver high performance and efficiency, including cloud networking for hyperscalers and web-scale customers adopting hyperscale architectures (see Enabling the Internet Evolution). The Cisco 8000 portfolio includes the following products and cloud use cases:
- Cisco 8100 Series products are fixed-port configurations in 1 RU and 2 RU form factors that are optimized for web-scale switching with TOR/leaf/spine use cases. This product line includes the 8101-32H, 8102-64H, 8101-32FH, and now the Cisco 8111-32EH. The 8100 can be offered as a disaggregated system using a third-party NOS, such as Software for Open Networking in the Cloud (SONiC), in addition to an integrated system with IOS XR.
- Cisco 8200 Series products are fixed-port configurations in 1 RU and 2 RU form factors, including the Cisco 8201 and Cisco 8202, which can be used for the Data Center Interconnect (DCI) use case to link data centers over IP transport. These are offered as integrated systems with IOS XR.
- Cisco 8800 Series are modular systems, and include the Cisco 8804, Cisco 8808, Cisco 8812, and Cisco 8818, which can be used in a variety of use cases such as super-spine, high-capacity DCI, and WAN backbone.
More details can be found in the Cisco 8000 data sheet.
The Cisco 8000 gives our customers the flexibility to choose from a range of form factors, speeds across 100G/400G/800G ports, and a variety of client optics, as well as integrated systems or disaggregated systems using SONiC for open-source networking use cases (see Rise of the Open NOS), all while leveraging the Cisco Automation portfolio.
At Cisco, we meet customers where they are, which means providing solution choices that match their use cases and requirements to enable the right customer outcomes.
Higher networking capacity is now possible without dramatically higher power bills and inefficient cooling solutions that rapidly expand the carbon footprint. Customers can save costs, help the environment, and reduce user frustration through better experiences. Instead of choosing between going green or scaling big, we're helping cloud providers do both with Mass-scale Infrastructure for Cloud. Find out more about the Cisco 8000 Series.
Open Compute Project (OCP) Global Summit
The Open Compute Project (OCP) Global Summit is meeting this week (Oct 18th – 20th) in San Jose, and this year's theme is "Empowering Open", which we fully support through open collaboration with the open-source community. Two years ago, at the OCP Global Summit, we first introduced the Cisco 8000 supporting SONiC on both fixed and modular systems, and we continue to collaborate with the OCP community to develop open solutions. For example, at the OCP 2021 Global Summit, Meta and Cisco introduced a disaggregated system, the Wedge400C, a 12.8 Tbps white box system utilizing Cisco Silicon One (see press release).
This year, we're showcasing our 8100 portfolio at OCP with the 8101-32H, 8102-64H, and 8101-32FH, along with the new 8111-32EH and QSFP-DD800 optics. We will also be showing SONiC demos covering different use cases, such as dual TOR and modular systems using the 8800. Cisco will also be speaking at the Executive Talk, featuring Rakesh Chopra on "Evolved Networking, the AI Challenge".
Visit our booth at the Open Compute Project (OCP) Global Summit this week to see our latest innovations.
¹ Source: Cisco internal lab test based on limited sample size and test run-time.