In a briefing with Rakuten Mobile today, we learned two neat things: it is experimenting with 3GPP on satellite, and it hopes to announce a full-stack Rakuten Communications Platform (RCP) customer as early as next quarter. The company also shared plans to improve coverage to 96% by the end of summer 2021, and said it believes it has a 50% total cost of ownership advantage for its 5G infrastructure versus a traditional network operator.
So, what's so important about "3GPP on satellite"? If satellites can communicate with ordinary cell phones and other cellular devices, coverage could be enabled in places where operators would otherwise need to place macro base stations. If macro base stations are no longer needed everywhere because satellites provide coverage in sparse areas, or even along highway routes, then a future cellular operator might build its network with far fewer macro towers and rely instead on a "barbell" approach: small cells providing high throughput in busy areas and satellites providing coverage in between. Rakuten expects that its satellite partner, AST, may offer satellite coverage for Japan at the end of 2023 or the beginning of 2024 - still a ways off. But it means that in three or so years, demand for 5G macro base stations may be considerably reduced.
Also, Rakuten Mobile CTO Tareq Amin said he thinks Rakuten may announce its first RCP customer as early as next quarter. We first wrote about RCP in November 2020, around when the team began making RCP known to the public. This means that a division of a mobile operator, Rakuten Mobile, may be selling its know-how, technology, and services to another telecom operator, presumably outside of Japan. This is a big deal in that most operators buy from vendors and systems integrators, not from others in the same business. It is also a big deal because cloud companies like Amazon, Microsoft, and Google all want to sell their cloud services to operators, too. If RCP gets there first and sells its full stack (radio, core, billing, orchestration, OSS), it would represent a first-ever full-stack services deal.
Disaggregation has been around in the networking space for nearly a decade, especially in hyperscaler data center Ethernet switches. The insatiable demand for bandwidth, control, and cost reduction drove the white box from a Google science project to the dominant consumption model for Ethernet switching among the hyperscalers. The high-scale SP router market has lagged. Broadcom's launch of Jericho gave the industry its first truly merchant high-scale routing ASIC, with companies like DriveNets providing the operating system, networking services, and orchestration to create a compelling solution and ecosystem.
Traditional Routing Doesn't Cut It
Up through and including the current generation of 100 Gbps-based routers, the western market had to choose among three options (Cisco, Juniper, and Nokia), all vertically integrated, with little flexibility other than a service provider deploying multiple vendors to keep pricing pressure and innovation moving. Cloud providers might have done the same thing; however, the price points, port density, and availability of products were not where they needed to be. Can you imagine the Cloud today with 4-8 ports of 100G per box and $10,000 price points per port across millions of ports? It simply would not have worked financially or space/power-wise to continue down that path. The Cloud, unburdened by legacy infrastructure, became a market dominated by ASIC capabilities and software, and branded system vendors have been playing catch-up ever since.
High-Scale Network Transformation to Disaggregation
As Telco SPs become more cloud-like in design and procurement and we enter the next wave of product availability, there is a tremendous opportunity for the high-scale router market to go through its own disaggregation transformation. Our end-user interviews indicate a strong preference to move in this direction during the next two product upgrades (400 Gbps with 56 Gbps SERDES, then 400/800 Gbps with 112 Gbps SERDES).
While hyperscaler Cloud providers will fully embrace SP router disaggregation with the 400 Gbps upgrade cycle starting next year, the rest of the Cloud and traditional SP industry is not that far behind. Disaggregated routers with OS, ASIC, and optics purchased individually, and potentially from multiple suppliers, will become more common - both 1RU routers used at the edge and larger high-scale routers used for core and aggregation that are based on clusters of white boxes.
Disaggregated Routing Growth
We expect the market for Disaggregated Routing to grow substantially during the next several years as RFPs and plans turn into production traffic and revenue (Figure 1), replacing traditional routers. Disaggregated routing, and the consumption model associated with it, will become a significant portion of the router market over the next several years and create a sizable opportunity for vendors that embrace the transformation.
Disaggregated Routing Market Players
Several vendors, including Cisco, have entered the disaggregated routing space. Another contender is DriveNets, a startup whose software is used by AT&T to build a disaggregated core network. The highest-scale backbone in the US is now running on a disaggregated network model.
Facebook revenue grew robustly in 2Q20, and its 2020 CAPEX guidance remained consistent with the revisions made in the previous two quarterly results.
Facebook's results ran counter to our expectations that many advertisers would pull back spending due to COVID-19, whether because of a lack of supply (no need to advertise consumer staples) or because consumer spending was put on pause (cars that people don't need while sheltering at home). We believe Facebook benefited from more time spent on the platform and from targeting ads at areas of discretionary spending that did grow, like sports equipment (good luck finding a bike, kayak, or other social-distancing sports gear) and work-from-home (consumers spending more on their residences while WFH, or simply making the home more comfortable given extended hours in it).
Facebook, like Google, is under government scrutiny for its scale and size. We are closely monitoring the trends in government oversight from the US government as well as other countries like Australia, which is forcing Facebook to pay for news, along with Microsoft's potential purchase of TikTok (as of this writing over the weekend, Microsoft was still pursuing the deal). It seems the duopoly here is not preferred by most governments at this stage, and we expect election results to polarize the losing party against social media companies into 2021.
Google, the largest US Hyperscaler by revenue, reported Search and Social results that declined Y/Y for the first time, while IaaS revenue grew nearly $1B Y/Y. We were a little surprised at Facebook's robust growth compared to Google's. Google's results were in line with our overall expectations for Search and Social declines in 2020 as consumers and advertisers reset to the new normal. We expect more targeted ads throughout 2020 as consumers live and work from home, and many students live and study from home during the fall semester.
Google has made big bets and investments in IaaS, and we continue to see AI as an area where it will attack AWS and Azure. It is unclear whether IaaS is compatible with the culture within Google, which could put an upper limit on the verticals and companies Google can sell to. During 1H20, Google was surpassed by Amazon in our supply chain interviews as the company with the most influence on the technological direction of industry-wide future products.
We see a passing of the guard, as AWS CAPEX is now much higher than Google's and the supply chain sees Amazon as having greater revenue potential. We expect this change to reverberate throughout the supply chain, primarily based on how each Cloud provider uses custom or semi-custom semiconductors in its data center infrastructure. This is something we are happy to talk about as we prepare our 2Q20 results and our fall readouts.
-- Alan Weckel, Founding Analyst, 650 Group
Cloud Revenue Differs Greatly Between Search and IaaS as 2Q20 Results Affirm 650 Group Forecast Projections
Over the next five days, we will highlight each of the US Hyperscalers and the results they had during 1H20 and 2Q20. Today we will start with the overall trends in the market. US Hyperscaler revenue grew 20% in 2Q20 compared to a year ago, setting a new record.
US trade war activities, mainly against Huawei, caused significant lead-time increases in many critical components for Cloud data center build-outs during the quarter, as the 5G battle against China has ripple effects into the Cloud supply chain. Custom ASIC and semi-custom ASIC development in the Cloud continues to expand, with multiple new initiatives around #AI, #ML, #SmartNICs, accelerators, and #CPUs underway. This is not to mention Amazon Web Services (AWS) getting into #6G with a $10B investment in project #Kuiper for low-earth-orbit satellites, in direct competition with @SpaceX Starlink. There are over 50 custom ASIC projects in the Cloud. Each one has implications for the supply chain and the immediate potential to shift market share among the Cloud providers.
Our overall projections for data center spend in switching, servers, and storage remain relatively unchanged since our previous forecast. Current results affirm our forecasts as we shift to vendor performance over the next two weeks, which we expect to be dependent on each company’s vertical and enterprise exposure.
We attended two separate presentations made by Ciena last week and have reflected on the comments made by the company. In summary, Ciena advocated using both pluggables like ZR and ZR+ and high-performance optical transport systems (its main business) together to construct cost-effective networks. The mix-and-match recommendation serves Ciena well, in that substantially all of its revenues come from high-performance systems, and coherent pluggables are a substitute threat to its business. If its customer base wanted to adopt pluggables but continued buying systems from Ciena, it would be logical for those customers to consider both systems and pluggables. Ciena also argued that its pluggables would be superior to competitors', highlighting its unique DSP, PIC, and packaging as best in class. We find the pitch could be compelling if Ciena's pluggables are indeed better, and it would play to Ciena's advantages.
Much of Ciena's recent growth has come from cloud hyperscalers, which currently use Ciena's systems equipment for the Data Center Interconnect (DCI) use case - connecting one data center to another. We have forecast that hyperscaler DCI networks will move rapidly toward coherent pluggables, once available, substituting for high-performance systems. In its presentation, Ciena agreed with our assessment that short-haul DCI is the first place pluggables will be put to work, displacing optical transport systems. Our view is that, at the market level, metro optical transport systems revenue will decline in 2021 and beyond as DCI networks transition to pluggables. Ciena is wisely hedging its bets by offering both pluggables and systems. But we don't think pluggables-related revenues will offset the potential loss of systems revenue, especially if the move toward pluggables is fast. One thing Ciena has in its favor during this transition is that it took first revenues on its 800 G class of systems equipment in its April 2020 fiscal quarter; the early 2020 launch could put Ciena back in the driver's seat with customers who demand very high-performance optical links. With the inclusion of 800 G systems, Ciena's systems offerings will be more competitive than they were entering 2020, and more competitive than the 400 G class of pluggables that are the primary topic of this article. At present, Infinera is the only Ciena competitor that has planned a 2020 launch of 800 G class equipment.
Here are some specifics from Ciena's two presentations last week. Ciena's view is that single-span Data Center Interconnect (DCI) and High-Capacity Access (Metro) are the most likely markets to adopt 400 Gbps ZR or ZR+ optical modules. It says that multi-span metro may have some use for ZR/ZR+, and that long-haul and subsea won't leverage these pluggables in the near to medium term. We agree with this assessment. Ciena will offer coherent pluggables in two ways: as part of its packet networking and optical systems portfolio, and through its Microsystems business for use in third-party equipment.
Ciena shared a capacity-versus-reach comparison of its pluggables (56 GBd) and its high-performance systems (95 GBd). In the capacity graph the company shared during its presentations, Ciena's tests show that coherent pluggables generally have half the reach or half the speed. The company advocates "mixing bauds," which means that for networks more complex than simple point-to-point DCI - for instance, ones with lots of ROADMs - it makes sense to use high-performance metro DWDM systems as well as switches/routers with coherent pluggables. By "mixing bauds," Ciena says it expects to achieve 100% coverage of complex metro/regional networks (typical of telcos).
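A quick back-of-envelope check of the baud figures Ciena shared. We make the simplifying assumption that capacity at a fixed modulation scales roughly with symbol rate (real-world reach also depends on modulation format and FEC), so the two baud numbers alone tell much of the story:

```python
# Rough symbol-rate comparison of the coherent pluggable (56 GBd) and the
# high-performance system (95 GBd), using figures from Ciena's presentation.
# Simplifying assumption: capacity at a fixed modulation scales with baud.
pluggable_baud = 56   # GBd
system_baud = 95      # GBd

ratio = pluggable_baud / system_baud
print(f"Pluggable carries ~{ratio:.0%} of the system's symbols per second")
# Holding reach constant, that shortfall shows up as lower speed; holding
# speed constant forces denser modulation, which cuts reach. Either way it
# is broadly consistent with the "half the reach or half the speed" result.
```

The ratio comes out to roughly 59%, which squares with Ciena's test data once modulation penalties are layered on.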
The company explained that it uses four major components in making 400 G coherent pluggables:
At NetApp's analyst meeting today, CEO George Kurian said he sees opportunity in selling HCI (introduced in F2Q18, 4-5 months ago), AFA, share-taking in SAN, and public cloud software and services. Every large customer NetApp talks to, according to Kurian, is using multiple cloud service providers and/or SaaS services, and most are using the hybrid cloud, meaning they run workloads both on the customer premises and in the public cloud. According to the company's marketing and sales executives, the sales and marketing strategy is focused on leveraging the company's entrance into the cloud services software market.
Substantial announcements about future plans made by NetApp:
Summary of presentations
Joel Reich, EVP Products and Operations, discussed trends in data center flash:
• NVMe over Fabrics
• Storage-class memory as cache
• Persistent memory in server
• Quad level cell NAND
Reich made some interesting comments:
Brad Anderson, SVP and GM, Cloud Infrastructure BU, said that NetApp's "Converged" FlexPod business (selling NetApp storage with non-NetApp servers) is now at a $2B run rate with more than 4,000 PB shipped. The company recently initiated a Fujitsu partnership, on March 26, 2018. Anderson also said that NetApp's hyperconverged product, which has been selling for only the past 4-5 months, hit its financial targets in its first full quarter of shipments. He said the HCI product is based upon recently acquired SolidFire technology and conceded that the company is hiring people with virtualization expertise to further augment the product line. HCI customers discussed during the presentation were ConsultelCloud (an Australian SaaS company) and Imperva (a security company).
Anthony Lye, SVP Cloud Data Services BU, joined a year ago and is responsible for the company's efforts to build software that runs on and with public cloud services. He describes this software as operating above the storage layer, allowing customers to manage their data whether it sits in the cloud, in SaaS applications, or on premises. It offers backup, disaster recovery, and data security, and then binds those services and data in the context of applications and business policy using an orchestration engine. The product name is OnCommand. The underlying technology NetApp uses is called ONTAP Data Management, which Mr. Lye explained was separated from its engineered systems (hardware) and ported to public clouds five years ago. We remember when NetApp announced its plans to separate ONTAP as software for the cloud at its analyst meeting a few years ago, when Kurian took over as CEO.
Lye explained that "later this year" NetApp will release a cloud-based OnCommand performance management/monitoring tool to manage workloads in hybrid cloud environments.
Henri Richard, EVP Worldwide Field and Customer Operations, said "Cloud is soon to be GA," referring to the company's "Cloud Volume" product. Richard explained that what is new this year is the hyperscaler relationships, which start a demand creation engine for the sales organization.
Jean English, SVP and Chief Marketing Officer, said the company will focus on "cloud first" to reach new "global" buyers (e.g., multinational organizations) and will lead with HCI and Cloud when selling to enterprises.
Ron Pasek, EVP and CFO, explained that FY18 is almost over and the company is beating its FY18 plan (low-single-digit growth), driven by flash. The CFO said that the impact of the new accounting rule, ASC 606, on guidance will be immaterial to the P&L, though it will result in slightly higher product revenue recognition.
Additionally, Pasek said that a year ago he guided to "low-single-digit" revenue growth (FY18-20) and is now saying "mid-single-digit growth," driven by flash, HCI, and cloud data services. Pasek said that in FY19, cloud data services will represent one point of growth. (As an aside, we calculate FY19 cloud data services revenue, using the "one point of growth" metric, at roughly $60M: the latest quarter of total revenues, F3Q18, was $1.52B, multiplied by 4, then by 1%.) So, cloud services revenue is expected to reach roughly $60M in FY19 on the way to FY21 targets of $400-600M. The company declined to state its FY18 cloud data services revenue when asked by the audience, so we take it to be small.
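For transparency, the arithmetic behind our $60M estimate can be sketched as follows (annualizing F3Q18 by simple multiplication is our simplification; NetApp did not disclose the figure):

```python
# Back-of-envelope version of the "one point of growth" calculation.
f3q18_revenue_b = 1.52                      # F3Q18 total revenue, in $B
annualized_b = f3q18_revenue_b * 4          # naive annualization of one quarter
cloud_point_m = annualized_b * 0.01 * 1000  # 1 point of growth, in $M

print(f"Annualized revenue: ${annualized_b:.2f}B")
print(f"Implied FY19 cloud data services revenue: ~${cloud_point_m:.1f}M")
```

The calculation lands at about $60.8M, which we round to $60M in the text above.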
The OCP Summit 2018 hit record attendance, and we can summarize the theme as continued disaggregation of network/server functions. Examples of demonstrations, presentations, and proposals associated with disaggregation are as follows:
Apple Inc. announced plans to accelerate spending in the United States, citing $350 billion of spending over the next five years. The company cited recent tax rules and its status as the largest US taxpayer. It specifically earmarked "over $10 billion" for "investments in data centers across the US." We estimate that this will add about $2 billion more per year than the company was already spending; that prior spending, the company says, has resulted in datacenters in seven US states, including North Carolina, Oregon, Nevada, Arizona, and a planned project in Iowa. Based on these estimates, we believe Apple's US datacenter spending rate will now challenge the capital spending rate of Facebook. The company also announced plans to build a Reno, Nevada datacenter.
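Our ~$2 billion-per-year figure follows from spreading the earmarked amount evenly across the five-year window, which is our own assumption since Apple did not disclose a spending schedule:

```python
# Even spread of Apple's "over $10 billion" datacenter earmark across the
# five-year plan (an assumption; the actual schedule was not disclosed).
earmarked_b = 10.0   # $B over the plan, a floor given the "over" qualifier
years = 5
per_year_b = earmarked_b / years
print(f"~${per_year_b:.0f}B per year of incremental datacenter investment")
```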
This capital spending acceleration on datacenters has been timed with the completion of its Cupertino-based mega-campus, which was a significant capital expenditure.
With Apple's datacenter plans clearly accelerating, it is poised to tap suppliers for more datacenter equipment. We expect that the main suppliers of network equipment will be fighting hard for Apple's business. Examples of such suppliers competing for the new capital spending plan will likely be, in optical equipment, Nokia, Ciena, and Finisar; in routing, Nokia and Cisco; and in switching, Cisco, Broadcom, and Arista. It is possible that with Apple's increasing scope of datacenter building, it may seek to bring more equipment design in-house, more like the larger datacenter operators, including Facebook, Microsoft, Amazon, and Google. Additionally, as the datacenters become more numerous and larger, Apple will almost certainly need to implement different network architectures.
The market is in a period of rapid adoption of higher speeds, led by the hyperscalers. The industry used 2016 and 2017 to adopt 25 Gbps and 100 Gbps port speeds based on 25 Gbps SERDES technology. As we enter 2018, those same hyperscalers are about to adopt 50 Gbps, 200 Gbps, and 400 Gbps port speeds based on 50 Gbps SERDES at a record-shattering pace. In the data center alone, there are now eight unique port speeds, with countless more unique variations of form factor and pluggable distance.
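The relationship between SERDES generations and port speeds above comes down to lane counts: a port's speed is the SERDES lane rate times the number of lanes it aggregates. A minimal sketch, using the standard 1/4/8-lane configurations and nominal rates (ignoring encoding overhead):

```python
# Port speed = lanes x SERDES lane rate (nominal, ignoring encoding overhead).
def port_speed_gbps(lanes: int, serdes_gbps: int) -> int:
    return lanes * serdes_gbps

# 25 Gbps SERDES generation (the 2016-2017 adoption wave)
assert port_speed_gbps(1, 25) == 25    # 25G server ports
assert port_speed_gbps(4, 25) == 100   # 100G switch ports

# 50 Gbps SERDES generation (the wave starting in 2018)
assert port_speed_gbps(1, 50) == 50    # 50G ports
assert port_speed_gbps(4, 50) == 200   # 200G ports
assert port_speed_gbps(8, 50) == 400   # 400G ports
```

The same lane math explains why each SERDES generation unlocks a whole family of port speeds at once rather than a single new speed.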
The market will need additional bandwidth beyond what is available today, and several enabling technologies were highlighted at the OIF Forum conference. 100 Gbps SERDES will help drive the industry toward that goal. Looking forward, 100 Gbps SERDES will help drive wave two of 400 Gbps, which will help Ethernet extend its reach well beyond short-reach data center distances. At the same time, it will have a long life, with use cases ranging from enterprise to service provider.
The big question often asked is: after the market took so many years to adopt 10 Gbps, why will we suddenly see a more rapid pace of adoption going forward?
There are many reasons why, but we should look at a few things that are different this time. First, the hyperscalers are a new type of customer. Hyperscalers truly bring a new scale to networking and compute, in a way that makes the traditional SPs look small. Second, SDN: the hyperscalers have done something unique here that often gets overlooked and is occurring right now, in the second half of this decade. Hyperscalers are increasing the utilization rate of their compute and networking resources. For compute, utilization is approaching 100%, so the industry is in a period where hyperscalers, using SDN, are able to grow network bandwidth at a pace faster than the CPU is scaling.
This more rapid pace will not continue forever, but it is one of the reasons why innovation over the next several years will occur more rapidly than historic norms, and why it will be important for the industry to think about how to invest across speeds and technologies to better leverage existing investments. If not, the pace of innovation will simply be too much to recoup investment in the compressed timelines we are currently in.