51.2 Tbps Switch ASICs to Drive the Migration to 800 Gbps and 1.6 Tbps Starting in 2H22

NVIDIA continues to drive products and solutions at a rapid pace, pushing boundaries and accelerating data center innovation through its recent GTC announcements. With the launch of Spectrum-4, we got NVIDIA's first Ethernet switch announcement and a look at what it has been developing since acquiring Mellanox.

Market Background

Over the next five years, 51.2 Tbps ASICs will be responsible for over $20B in data center switching revenue, about 2X the size of the 100 Gbps upgrade cycle and 4X the size of the 400 Gbps upgrade cycle. This class of ASIC is critical to allowing data centers to scale as more workloads become hardware accelerated. Hardware acceleration can come via the server, NIC, or DPU and causes networking bandwidth to double every year, compared to more traditional growth of 30-40%.

NVIDIA's 51.2 Tbps ASIC

NVIDIA's Spectrum-4 announcement included many new and first-to-market capabilities. It was the first 51.2 Tbps ASIC announced in the market and the first switch that can deliver 64 ports of 800 Gbps from a single ASIC. It also supports 1.6 Tbps speeds, another first and the preferred speed for the two most prominent hyperscalers. In addition, NVIDIA announced Spectrum-4 as a fully integrated switch that will sample later in 2022 and begin shipping in 2023. The chip will be produced at TSMC in the 4N process. NVIDIA did over $1B in Ethernet switch and NIC revenue in 2021.

It's All About The SERDES

NVIDIA will be using its own, homegrown, 112 Gbps SERDES. SERDES has always been a challenge in switch development, with many ASIC companies having to integrate 3rd-party SERDES.
However, we view vertical integration as a source of differentiation and expect it to become more common as ASICs increase in speed. SERDES will also be key to adding additional IP blocks and photonics to the networking ASIC as those markets evolve.

The Speeds And Feeds Race

NVIDIA's Spectrum-4 introduced several features that help it distinguish and route traffic classes, often described as elephant and mouse flows. The key at a market level is that AI and accelerated computing put a considerable burden on the network, so AI networking switches need to keep pace with this new class of 'elephant flow' traffic without slowing or dropping packets. At the same time, accelerated traffic is growing as a percentage of total traffic. Multiple 100G ports are already standard at the RFP level for AI workloads, and 12.8 Tbps and 25.6 Tbps ASICs cannot keep pace with a fully loaded AI cluster.

Beyond AI

While the announcement focused on NVIDIA examples, such as connecting Spectrum-4 to NVIDIA-Certified OVX OEM servers, we view the market as benefiting from NVIDIA taking networking seriously. Cloud customers have always requested multi-vendor options at the ASIC and system level, and the ability to collapse networking tiers with 51.2 Tbps will be a critical driver of adoption. We therefore expect Spectrum-4 to apply to the whole data center, not just the high end or NVIDIA-connected solutions. We are excited to see this switch in customers' hands in 2H22.
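The port-count and growth figures above are simple arithmetic, and can be sketched as follows. The 64 x 800 Gbps configuration comes from the announcement; the 400 Gbps and 1.6 Tbps breakouts are illustrative divisions of the 51.2 Tbps capacity, not confirmed SKUs.

```python
# Sketch: port/speed combinations that fully subscribe a 51.2 Tbps switch ASIC.
ASIC_CAPACITY_GBPS = 51_200  # 51.2 Tbps

def port_count(port_speed_gbps: int, capacity_gbps: int = ASIC_CAPACITY_GBPS) -> int:
    """Number of ports at a given speed that fully subscribe the ASIC."""
    return capacity_gbps // port_speed_gbps

for speed in (400, 800, 1_600):
    tbps = port_count(speed) * speed / 1_000
    print(f"{port_count(speed)} ports x {speed} Gbps = {tbps:.1f} Tbps")
# -> 128 ports at 400G, 64 ports at 800G, 32 ports at 1.6T

# The article's growth rates compounded over a five-year cycle:
accelerated = 2.00 ** 5   # bandwidth doubling yearly -> 32x
traditional = 1.35 ** 5   # ~30-40% yearly growth -> roughly 4.5x
```

The compounding illustrates why accelerated workloads outgrow traditional switch refresh cycles: a doubling cadence produces roughly 7x more bandwidth demand after five years than 35% annual growth.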
DPU Opportunity Increases with New Market Opportunities in RAN, Security, and Enterprise Workloads

9/1/2021

For many, the DPU is associated with the SmartNIC and the boundary between computing and networking, and therefore has a finite market size tied to the number of servers and appliances in the market. Historically, DPUs focused on offloading storage, basic security, and virtualization tasks from the processor, allowing a server's CPU cores to be 100% utilized for workloads rather than for tasks viewed as overhead. Today the market for DPUs is very different and is in the early stages of redefining many markets.

Today's DPU metrics are impressive and are altering the conversation. Two to three years ago, the goal was getting to 100% utilization across a handful of use cases. Today the DPU accelerator can do more, not only offloading basic tasks but accomplishing, in one card, processing that would previously require multiple servers. This is a game-changer and becomes a building block for future data centers that will look and act differently than the current paradigm.

The DPU, and accelerators in general, create new markets and are at the forefront of several technology innovation cycles. They are not only disrupting adjacent markets with new architectures but also developing new use cases and applications. The Telecom market was a natural adjacency for DPUs, and we highlight below a set of new announcements in several areas the DPU is taking over:

-- NVIDIA and Palo Alto Networks (May 2021) announce a high-end next-generation firewall (NGFW): Palo Alto's firewall, running on BlueField-2, opens up the entire firewall space to the DPU market. DPUs can replace custom silicon and achieve better performance. This becomes critical as the security market transitions to Cloud and as-a-Service models.
-- NVIDIA and Marvell (MWC 2021 and 2022) announce multiple edge computing, NFV, and RAN initiatives: Over the past two years, Marvell and NVIDIA have made significant strides in addressing almost the entire Telecom infrastructure hardware stack. For example, today a Telecom SP can run RAN baseband directly on a DPU-based system instead of on proprietary ASICs and hardware. In many cases, the DPU can achieve better cost, power, and performance than previous hardware versions offered by Ericsson, Huawei, and Nokia. The DPU also lays the hardware building block for Open RAN.

There are many other examples of the DPU moving beyond basic offload and compute tasks. The above examples highlight new markets for the DPU that are direct replacements for legacy products. To date, much of 2021 has highlighted Telco markets and been SP-focused. That brings us to the enterprise, which has several more forward-looking opportunities ahead for DPUs:

-- AI/ML workloads in the enterprise: This market remains in its infancy, mostly around basic inference in a few verticals. However, all enterprises will go through a digital transformation around AI or go out of business. AI in the enterprise will span many new applications, ranging from business intelligence and customer interaction to the design of new products.

-- Server architectures are changing: Server designs in the enterprise, and the overall march toward disaggregated pools of computing, storage, memory, and networking resources, will define the next decade. The DPU will play an essential role in this transformation, and it will appear in many parts of the new architecture.

Overall, the market for DPUs is poised for growth as new workloads and adjacent markets become prime use cases for the technology.

By Alan Weckel, Founder and Technology Analyst at 650 Group.