This week's MWC Barcelona 2021 had several themes; the most important was that outsiders to the telecom industry were ever-present. The new entrants, the party-crashers, included Starlink, Microsoft Azure, Amazon Web Services, Google Compute, and NVIDIA. These new players are forcing change through economics, new technology, new regulatory frameworks, or combinations thereof. In this article, we touch on the importance of these crashers and then circle back to a few ongoing themes that remain relevant.
Satellite broadband, while not exactly a mobile technology, will catalyze significant changes in the mobile industry. Low Earth Orbit (LEO) satellite service is evangelized today by SpaceX-owned Starlink, which announced plans to spend as much as $30B building out its constellation over its lifespan; in return, it will reach users across the globe. Elon Musk said Starlink is in beta in 12 countries and plans to have half a million users within the next 12 months. The billionaire highlighted that Starlink's ability to reach rural populations is unlike that of terrestrial players, and we think that rural reach is precisely why Starlink will be so important. Musk's pitch to the mobile industry was one of partnership: he said Starlink is partnering with 5G MNOs to offer satellite backhaul and rural broadband services. We view satellite broadband, and later 3GPP satellite, as critical components of the telecommunications industry, which is why we chose to write about satellite first in this article.
All three hyperscalers, Azure, AWS, and GCP, made a splash at MWC21. As a group, these infrastructure providers have already changed the way telcos operate. In fact, the hyperscalers' architectures were the inspiration behind the decade-old telco push for Network Functions Virtualization (NFV). But these days, hyperscalers' operations are more than an inspiration to the telcos: MNOs are now moving some of their workloads onto hyperscaler infrastructure. This migration is proceeding in three phases: phase 1, the back office; phase 2, the telecom core; and phase 3, the access layer. In the weeks leading up to MWC21, we saw progress on all three, including on the mobile RAN. Incoming AWS CEO Adam Selipsky said at MWC that AWS is talking to "virtually every telecom operator."
Examples of announcements made around the MWC show include the following:
With Open RAN capabilities comes the possibility that MNOs can source RAN components from multiple vendors. Rakuten has already demonstrated multi-vendor sourcing (Altiostar baseband with Nokia and NEC radios). Beyond system-level multi-vendor interoperability, multiple semiconductor companies have been bolstering their RAN offerings in recent years (Marvell, Qualcomm, EdgeQ). Marvell crashed previous MWCs (MWC19 and MWC20) and is now a RAN supplier to Samsung and Nokia. At MWC21, we saw yet another entrant into the RAN chip market: NVIDIA, which has received public endorsements from Ericsson, Fujitsu, Mavenir, and Radisys. NVIDIA's current offering is called "AI-on-5G" and starts in 2021 in an "on a server" form. Its next offering, expected in the 2022-2023 timeframe, will be "on a card"; after 2024, NVIDIA plans an "on a chip" offering.
On April 21, 2021, DISH, the fourth wireless operator in the US market, and hyperscaler Amazon Web Services (AWS) announced plans to work together: DISH will leverage AWS infrastructure and services to build a cloud-based 5G Open Radio Access Network. The announcement is important because this is the first 5G radio/hyperscaler deal (or second, if you count Rakuten as a hyperscaler). We are encouraged by the DISH/AWS deal and think it represents a big step for the industry. What is so important is that two of the three major Radio Access Network (RAN) functions, the Centralized Unit (CU) and the Distributed Unit (DU), will be running on AWS. We see the DU running on the AWS service called Outposts as the most critical part of this announcement, because historically the DU has been delivered as a proprietary hardware system using proprietary semiconductors from vendors such as Ericsson, Nokia, and Huawei. Thus, AWS' involvement in the DISH network serves as a reminder of the opportunity for RAN vendors to deploy cloud-native RAN in future cellular network deployments.
DISH describes its approach with a term it calls a "Capital Light" model, which reduces the capital spending required to build out its planned national network. Key to achieving this model is leveraging the capital spending already done by AWS, in what some might call an OPEX-oriented model. DISH plans to launch live cellular service in Las Vegas, NV first; its 5G network will then cover 20% of the US population by June 2022, 70% by June 2023, and 75% by June 2025, after which it will continue its build to "match competitors beyond 2025." The company also plans to begin building enterprise-focused 5G networks in 2021.
In our follow-up inquiries with the AWS and DISH teams, we learned that DISH is exercising an option to run O-RAN on AWS Graviton hardware plus Amazon Elastic Kubernetes Service (EKS). Additionally, DISH has the option to use Intel-based COTS hardware in parts of its network. Thus, DISH has the flexibility to deploy baseband systems on AWS or in its own network, and can use Graviton or Intel systems. We have seen AWS engage in contracts with other parties that include minimum usage rates or dollar commitments. We are not sure whether this is the case for the DISH deal, but AWS says it expects to deliver "thousands of site specific hardware," while DISH expects that by mid-2023 it will have built out "15,000 cellular sites."
We wanted to share some insights on how this relationship appears to be structured. Many scenarios seem to have been envisioned for how the relationship may evolve, and we think both parties have built in contract terms that allow flexibility in achieving each company's goals. We did not review the contract between the two companies, but in a webinar presentation held April 30, 2021, DISH executives hedged their bets somewhat on the relationship with AWS in ways we found interesting.
In a briefing with Rakuten Mobile today, we learned two neat things: it is experimenting with 3GPP on satellite, and it hopes to announce a full-stack Rakuten Communications Platform (RCP) customer as early as next quarter. The company also shared plans to improve coverage to 96% by the end of summer '21, and it believes it has a 50% total cost of ownership advantage for its 5G infrastructure versus a traditional network operator.
So, what's so important about "3GPP on satellite"? If satellites are able to communicate with standard cell phones and other cellular devices, coverage could be enabled in places where macro base stations would otherwise have been required. If satellites provide coverage in sparse areas, or perhaps along highway routes, then a future cellular operator might be able to build its network with far fewer macro towers and rely on a "barbell" approach: small cells providing high throughput in busy areas and satellites providing coverage between them. This would reduce demand for 5G base stations. Rakuten expects that its satellite partner, AST, may offer satellite coverage for Japan at the end of 2023 or the beginning of 2024 - that is a ways off. But it means that in three or so years, the need for base stations may be considerably reduced.
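As a rough illustration of the barbell economics, the sketch below estimates how many rural macro sites satellite coverage could displace. Every input here is our own assumption for illustration only, not a figure from Rakuten or AST:

```python
import math

# All inputs are illustrative assumptions, not operator data.
coverage_area_km2 = 300_000      # assumed sparse/rural area an operator must cover
macro_cell_radius_km = 5.0       # assumed rural macro cell radius
satellite_fraction = 0.6         # assumed share of sparse area handed to satellite

# Area one macro site covers, modeled as a simple circle.
cell_area_km2 = math.pi * macro_cell_radius_km ** 2

# Tower counts with and without the satellite layer.
towers_all_macro = math.ceil(coverage_area_km2 / cell_area_km2)
towers_barbell = math.ceil(coverage_area_km2 * (1 - satellite_fraction) / cell_area_km2)

print(towers_all_macro, towers_barbell)  # → 3820 1528
```

Under these assumed inputs, handing 60% of the sparse area to satellite cuts the macro tower count by the same 60%, which is the mechanism behind the claim that base station demand could fall considerably.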
Also, Rakuten's Tareq Amin said he thinks it is possible that Rakuten may announce its first RCP customer as early as next quarter. We published about RCP in November 2020, around when the team first made RCP known to the public. This means a division of a mobile operator, Rakuten Mobile, may be selling its know-how, technology, and services to another telecom operator, presumably outside Japan. This is a big deal because most operators buy from vendors and systems integrators, not from others in the same business. It is also a big deal because cloud companies like Amazon, Microsoft, and Google all want to sell their cloud services to operators, too. If RCP gets there first and sells its full stack (radio, core, billing, orchestration, OSS), it would represent the first-ever full-stack services deal.
AWS (Amazon Web Services) grew nearly 30% Y/Y, a remarkable result for a business running at $10B per quarter. 650 Group enterprise interviews indicate that IaaS is the preferred platform for new application development in the new-normal COVID-19 world.
We do expect that at some point enterprises will move some of these workloads back on premises, but we don't expect this to be a headwind; it is more of a normalization, and any premises-based moves in 2021 will occur just as AI workloads add a new leg of growth for IaaS providers.
AWS's custom and semi-custom ASIC development includes many projects beyond Annapurna's SmartNIC. Separately, Amazon's $10B investment in Project Kuiper, a low Earth orbit satellite constellation in direct competition with SpaceX's Starlink, will make the company's cloud platform even more popular. Even in the best case for minimal impact, in which satellite connectivity serves only media (which we see as unlikely), the way consumers connect their devices over the next decade is going to go through a significant transformation.
Cloud Revenue Differs Greatly Between Search and IaaS as 2Q20 Results Affirm 650 Group Forecast Projections
Over the next five days, we will highlight each of the US hyperscalers and their results during 1H20 and 2Q20. Today we start with the overall trends in the market. US hyperscaler revenue grew 20% in 2Q20 compared to a year ago, setting a new record.
US trade war activities, mainly against Huawei, caused significant lead-time increases in many critical components for cloud data center build-outs during the quarter, as the 5G battle against China has ripple effects on the cloud supply chain. Custom ASIC and semi-custom ASIC development in the cloud continues to expand, with multiple new initiatives underway around AI/ML, SmartNICs, accelerators, and CPUs. That is not to mention Amazon Web Services (AWS) getting into low Earth orbit satellite connectivity with a $10B investment in Project Kuiper, in direct competition with SpaceX's Starlink. There are over 50 custom ASIC projects in the cloud, each with implications for the supply chain and the immediate potential to shift market share among the cloud providers.
Our overall projections for data center spend on switching, servers, and storage remain relatively unchanged since our previous forecast. Current results affirm our forecasts as we shift to vendor performance over the next two weeks, which we expect to depend on each company's vertical and enterprise exposure.
Federated Wireless announced a managed service for enterprises that plan to operate private cellular networks (both 4G and 5G). For companies to use Citizens Broadband Radio Service (CBRS) spectrum (3.5 GHz) in the US market, a Spectrum Access System (SAS) provider is required, and Federated is a pioneer in the SAS market. What the company announced today, though, is that beyond SAS services it will now offer discovery, planning, design, build, operations, and support services that let enterprises get the benefit of cellular coverage in their facilities.
Another very interesting facet of Federated Wireless's entry into managed services is that it has also announced selling partnerships with Amazon Web Services and Microsoft Azure. In summary, customers can visit the AWS or Azure sites and click a few buttons, and Federated will then show up to build and operate a cellular network enabling services such as critical communications (like employee-to-employee communications), mobility services (such as trucks moving onsite), Wi-Fi backhaul (without the need to install new conduit and wires), IoT sensor deployment, and many other uses.
Federated will be an enabler for companies that don't want to work with traditional mobile network operators to expand cellular coverage at their corporate locations. In the US market, companies may contact AT&T, Verizon, or T-Mobile to get licensed cellular service, but now they can contact Federated Wireless to get their own shared-spectrum (in this case CBRS) network that carries only their traffic.
Today Intel held a major data center event in San Francisco: a multi-hour announcement showcasing all the different products Intel is launching or will launch.
Some interesting background Intel shared: only 2% of the world's data has been analyzed, and 5G will be a major driver moving compute to the edge. Intel also touted that over 50% of AI workloads are inference and run best on Intel (x86). Noticeably absent at the beginning of the presentation was Intel's work on training.
We found the most interesting parts of the announcement to be AWS discussing custom versions of Intel's CPUs and an up-to-14x inference improvement in Xeon processors since just July 2017. Overall, there is a 30% gen-over-gen improvement in Xeon, the biggest jump in five years. While staying at 14nm, Intel is able to continue to squeeze performance gains out of the server.
As we hit the limits of process geometry, it is important that everything be accelerated, especially given Intel's view of AI workloads, and that is what we saw from Intel: new Optane memory, new persistent memory, and faster adapter cards (which will lead to more SmartNIC announcements), plus a 10nm FPGA. It was clear at the event that large cloud providers like AWS, Azure, and Tencent are looking at all avenues to increase performance and reduce power consumption via software and hardware advancements.
Some interesting highlights included AWS touting over 100 unique instance types that leverage Intel processors, with more SAP instances running on AWS than anywhere else, and Formula 1 using 65 years of historic race data to train models that make real-time race predictions.
Western Digital Corporation, also known as WDC, held its investor meeting yesterday. We highlight four important disclosures: (a) it expects HDD to survive, despite flash's advantages; (b) its plans to use RISC-V cores and launch a new memory interface; (c) an update on current business trends; and (d) its plans for its storage systems business.
First, since the company owns both Hard Disk Drive (HDD) and flash divisions, it has an interest in keeping investors informed about the relative competitiveness of the two. Two and a half years ago, WDC closed its acquisition of flash pioneer SanDisk. Around that time, the company asserted, and continues to anticipate, that HDDs will retain at least a 10x advantage over flash on a cost-per-bit ($/GB) basis. At yesterday's meeting, the company reiterated its view that this 10x advantage will continue at least through 2022.
The 10x $/GB differential is important because, if true, it means HDDs will not go away for many applications, especially storage of big data at hyperscalers. Consider this: storage systems vendors such as Pure Storage are announcing software that allows premises-based flash storage to be seamlessly moved to the cloud for longer-term storage, which means cloud hyperscalers like Amazon Web Services (AWS) are likely to be HDD customers for a very long time. Nevertheless, flash will keep taking share from HDD. The company has been reducing its HDD manufacturing capacity, with significant layoffs and factory shutdowns over the past several years. In fact, it plans significant ongoing cost cutting in HDD manufacturing, in the range of a 15-25% Y/Y decline next year.
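To make the 10x differential concrete, here is a simple cost comparison. The per-GB prices and archive size below are hypothetical figures we chose only to preserve the 10x ratio WDC cites; they are not WDC's numbers or actual market prices:

```python
# Hypothetical prices illustrating the claimed 10x $/GB gap; not actual market data.
hdd_price_per_gb = 0.02    # assumed nearline HDD price, $/GB
flash_price_per_gb = 0.20  # assumed enterprise flash price, 10x the HDD figure

archive_pb = 100                     # a hypothetical hyperscaler cold-data tier, in PB
archive_gb = archive_pb * 1_000_000  # 1 PB = 1,000,000 GB (decimal units)

hdd_cost = archive_gb * hdd_price_per_gb
flash_cost = archive_gb * flash_price_per_gb
print(f"HDD: ${hdd_cost:,.0f}  Flash: ${flash_cost:,.0f}")
# → HDD: $2,000,000  Flash: $20,000,000
```

At this scale, an $18M gap per 100 PB of cold data is why a durable 10x differential would keep HDDs in hyperscaler archive tiers even as flash wins the performance tiers.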
The company also announced an ambitious plan to transition from licensed CPU cores to RISC-V-based cores, for which it expects to pay no licensing or royalty fees. WDC says it ships over 1 billion CPU cores per year, so this is a significant shift. The company also plans to introduce an open-source memory interface called the OmniXtend memory fabric, pitting it against Intel (with DDR4, etc.), which has historically launched its own interfaces. WDC is now a very big player in storage, built through organic growth and acquisitions; with that market might, these new initiatives have a better chance of launching successfully than they would have in the past.
Additionally, the company said its near-term business trends are under pressure due to cyclical market fluctuations. The memory semiconductor industry has historically endured a boom/bust cycle, and the company explained it has now entered the bust part of that cycle. Since its earnings call on October 25, 2018, it says conditions have deteriorated somewhat further, partly because hyperscalers continue to reduce inventories and partly because mobile phone companies are still cutting demand forecasts. The company expects 2019 flash demand to be below the historical range and hyperscalers to return to growth in 2H19.
Lastly, the company's Data Center Solutions group, which employs a vertically integrated strategy to compete with traditional storage systems companies such as Dell EMC, NetApp, HPE, Pure Storage, and others, just experienced a record quarter on a revenue basis and is approaching break-even in its operations. The company's goal is to become a top-5 player in data center solutions, which we take to mean it plans to take share from the current players. The group has grown revenue 17x from Q1FY16 to Q1FY19, according to the presentation (of course, acquisitions bolster this number), has shipped 3 exabytes so far in calendar 2018 (which isn't over yet), and has shipped 8,500 systems and platforms since inception. The company targets "double digit" revenue growth rates for this unit going forward.
There were two main announcements: a new relationship with Google Cloud Platform and a new flash device, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability in the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers; it already has relationships in place with Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. New flash hardware from NetApp, which, besides impressive density and low-latency capabilities, supports NVMe over Fibre Channel. The product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that is what its customers are asking for to accelerate on-premises workloads such as databases (Oracle), ERP (SAP), and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet for the company - this makes sense given its foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is happening in the hyperscalers' architectures to adapt to AI/deep learning workloads. NetApp executives explained that AI workloads differ from traditional workloads: they are random, low-latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached to Fibre Channel. From NetApp's perspective, customers who want to run AI workloads fastest would likely do so on-premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may instead work with vendors such as NetApp to develop additional software capabilities, on top of IP/Ethernet infrastructures, to address changing AI/ML/GPU workloads in the future.