Today, Amazon announced that it will acquire eero, a consumer mesh WiFi equipment company that held a 13% revenue share as of 3Q18. In 3Q18, the consumer mesh WiFi market measured just over $150M, up just over 34% Y/Y. NETGEAR was the number one player by revenue in 3Q18, followed very closely by Google, which had held the number one spot for the five quarters before 3Q18. Now, with Amazon's acquisition of eero, just three players will control well over three-quarters of the consumer mesh WiFi market. What's interesting here is that two Internet titans, Google and Amazon, are attempting to disrupt a consumer networking market that until 2015 was dominated by hardware players such as NETGEAR, Linksys, TP-Link and D-Link (consumer WiFi vendors) and adjacent players such as Technicolor, Arris, Huawei, ZTE and Nokia (broadband Customer Premises Equipment vendors).
So, what does it mean that now both Amazon and Google are battling for primacy in the home networking market?
It is complementary to their interactive speaker business. Both Amazon and Google have introduced various hardware products for the home, but the most successful have been their interactive speaker products, which for Amazon are the Echo and Dot and for Google is Home. These speakers generally operate in an "always-on" mode, which allows them to listen to nearby sounds and also means they are almost always connected to the home's WiFi. By always being connected, these speakers consume a meaningful share of the available WiFi bandwidth in the home, degrading the spectrum available to other devices. One obvious solution, being made available by wireless chip giant Qualcomm, is to integrate WiFi chips with speaker chips. That's the direction both Amazon and Google may pursue - integrating Home with Google WiFi and Echo with eero. This would mean that multiple WiFi mesh devices would also serve as multiple interactive speakers in the home, all while combating the overuse of WiFi spectrum in the home.
These Internet giants can, and probably will, attempt to overwhelm the market with low prices subsidized by their primary businesses. We already see that Google's price for a 3-pack is 37% lower than eero's comparable system. Our working theory is that Google has been selling at close to no margin while eero has been earning a gross margin in the 30s. This is probably not good news for companies that have, or that we assume have, gross margins above 30%, such as NETGEAR, TP-Link, D-Link, and others mentioned above.
Western Digital Corporation, also known as WDC, held its investor meeting yesterday. We highlight four important disclosures: (a) it expects HDD to survive despite flash's advantages, (b) its plans to adopt RISC-V and launch a new memory interface, (c) an update on current business trends, and (d) the company's plans for its storage systems business.
First, since the company owns both Hard Disk Drive (HDD) and flash divisions, it has an interest in keeping investors informed about the relative competitiveness of the two. Two and a half years ago, WDC closed its acquisition of flash pioneer SanDisk. Around that time, the company asserted - and anticipated it would remain the case - that HDDs would retain at least a 10x advantage over flash on a cost-per-bit ($/GB) basis. At yesterday's meeting, the company reiterated its view that this 10x advantage will persist at least until 2022.
The 10x $/GB differential is important because, if true, it means HDDs will not go away for many applications, especially for storage of big data at hyperscalers. Consider this - storage systems vendors such as Pure Storage are announcing software that allows premises-based flash storage to be seamlessly moved to the cloud for longer-term storage, which means that cloud hyperscalers like Amazon Web Services (AWS) are likely to be HDD customers for a very long time. Nevertheless, flash will keep taking share from HDD. The company has been reducing its HDD manufacturing capacity, with significant layoffs and factory shutdowns over the past several years. In fact, it plans significant ongoing cost cutting in its HDD manufacturing, in the range of a 15-25% Y/Y decline next year.
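As a rough illustration of what a 10x $/GB gap means at hyperscaler scale, the sketch below compares the raw media cost of an exabyte of cold storage. The absolute $/GB prices are hypothetical placeholders of our own choosing; only the 10x ratio comes from WDC's claim.

```python
# Hypothetical media prices: only the 10x ratio reflects WDC's claim;
# the absolute $/GB figures are illustrative placeholders.
FLASH_USD_PER_GB = 0.10
HDD_USD_PER_GB = FLASH_USD_PER_GB / 10  # WDC's claimed 10x advantage

GB_PER_EXABYTE = 1_000_000_000  # 1 EB = 10^9 GB (decimal units)

def media_cost_usd(gb: float, usd_per_gb: float) -> float:
    """Raw media cost of storing `gb` gigabytes, ignoring power/servers."""
    return gb * usd_per_gb

flash_cost = media_cost_usd(GB_PER_EXABYTE, FLASH_USD_PER_GB)
hdd_cost = media_cost_usd(GB_PER_EXABYTE, HDD_USD_PER_GB)

print(f"Flash: ${flash_cost:,.0f} per EB")  # $100,000,000 per EB
print(f"HDD:   ${hdd_cost:,.0f} per EB")    # $10,000,000 per EB
```

At this scale the media-cost gap is tens of millions of dollars per exabyte, which is why hyperscalers would keep buying HDDs for cold data even as flash takes the performance tiers.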
The company also announced an ambitious plan to transition from licensed CPU cores to RISC-V-based cores, for which it expects to pay no licensing or royalty fees. WDC says it ships over 1 billion CPU cores per year, so this is a significant shift. The company also plans to introduce an open-source memory interface called the OmniXtend memory fabric, which will pit it against Intel (with DDR4, etc.), which has historically launched its own interfaces. WDC is now a very big player in storage, having built its position through organic growth and acquisitions. It has more market might than in the past, so these new initiatives have a better chance of succeeding.
Additionally, the company said that its near-term business trends are under pressure due to cyclical market fluctuations. The memory semiconductor industry has historically endured a boom/bust cycle, and the company explained it has now entered the bust part of that cycle. Since its earnings call on October 25, 2018, conditions have deteriorated somewhat more, partly because hyperscalers continue to reduce inventories and partly because mobile phone companies are still cutting demand forecasts. The company expects flash demand in 2019 to be below the historical range and hyperscalers to return to growth in 2H19.
Lastly, the company's Data Center Solutions group, which employs a vertically integrated strategy to compete with traditional storage systems companies such as Dell EMC, NetApp, HPE, Pure Storage and others, just had a record quarter on a revenue basis and is approaching break-even in its operations. The company has the goal of becoming a top-5 player in data center solutions, which we take to mean it is planning to take share from the current players. The group has experienced 17x revenue growth from Q1FY16 to Q1FY19, according to the presentation (of course, the group has made acquisitions that bolster this number), has shipped 3 exabytes so far in calendar 2018 (which isn't over yet) and has shipped 8,500 systems and platforms since inception. The company targets "double digit" revenue growth rates for this unit going forward.
There were two main announcements: a new relationship with Google Cloud Platform and a new flash device, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability in the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. New flash hardware from NetApp which, besides having impressive density and low-latency capabilities, supports NVMe over Fibre Channel. Of course, the product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that's what its customers are asking for in order to accelerate on-premises workloads such as database (Oracle), ERP (SAP) and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures as they adapt to AI/Deep Learning workloads. NetApp executives explained that AI workloads are different from traditional workloads; they are random, low-latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached to Fibre Channel. From NetApp's perspective, if customers want to run AI workloads fastest, they would likely do so on premises, using Fibre Channel. Yet, many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may work with vendors such as NetApp to develop additional software capabilities to address changing AI/ML/GPU workloads in the future - on top of IP/Ethernet infrastructures.
At NetApp's analyst meeting today, CEO George Kurian said he sees opportunity in selling HCI (introduced in F2Q18, 4-5 months ago) and AFA, taking share in SAN, and selling public cloud software and services. Every large customer NetApp talks to, according to Kurian, is using multiple cloud service providers and/or SaaS services, and most are using the hybrid cloud, meaning they run workloads both on customer premises and in the public cloud. According to the company's marketing and sales executives, the sales and marketing strategy is focused on leveraging the company's entrance into the cloud services software market.
NetApp also made substantial announcements about future plans, reflected in the presentation notes below.
Summary of presentations
Joel Reich, EVP Products and Operations discussed trends in data center flash:
• NVMe over Fabrics
• Storage-class memory as cache
• Persistent memory in server
• Quad level cell NAND
Brad Anderson, SVP and GM, Cloud Infrastructure BU, said that NetApp's "Converged" FlexPod business (selling NetApp storage with non-NetApp servers) is now at a $2B run rate with more than 4,000 PB shipped. The company initiated a partnership with Fujitsu on March 26, 2018. Anderson also said that NetApp's hyperconverged product, which has only been selling for the past 4-5 months, hit its financial targets in its first full quarter of shipments. He said the HCI product is based on recently acquired SolidFire technology and conceded that the company is hiring people with virtualization expertise to further augment the product line. HCI customers discussed during this presentation were ConsultelCloud (an Australian SaaS company) and Imperva (a security company).
Anthony Lye, SVP Cloud Data Services BU, joined a year ago and is responsible for the company's efforts to build software that runs on and with public cloud services. He describes this software as operating above the storage layer, allowing customers to manage their data whether it lives in the cloud, in SaaS applications or on premises. It offers backup, disaster recovery and data-security services, and then binds those services and data in the context of applications and business policy using an orchestration engine. The product name is OnCommand. The underlying technology NetApp uses is called ONTAP Data Management, which Lye explained was separated from the company's engineered systems (hardware) and ported to public clouds five years ago. We remember when NetApp announced its plans to separate ONTAP as software for the cloud at its analyst meeting a few years ago, when Kurian took over as CEO.
Lye explained that "later this year," NetApp will release a cloud-based OnCommand performance management/monitoring tool to manage workloads in hybrid cloud environments.
Henri Richard, EVP Worldwide Field and Customer Operations, said "Cloud is soon to be GA," referring to the company's "Cloud Volumes" product. Richard explained that what is new this year is the hyperscaler relationships, which start a demand-creation engine for the sales organization.
Jean English, SVP and Chief Marketing Officer, said the company will focus on "cloud first" to reach new "global" buyers (e.g. multinational organizations) and will lead with HCI and cloud when selling to enterprises.
Ron Pasek, EVP and CFO, explained that FY18 is almost over and the company is beating its FY18 plan (low-single-digit growth), driven by flash. The CFO said the impact of the new accounting rule, ASC 606, on guidance will be immaterial to the P&L, though it will result in slightly higher product revenue recognition.
Additionally, Pasek said that a year ago he guided to "low-single digit" revenue growth (FY18-20) and is now guiding to "mid-single digit" growth, driven by flash, HCI and cloud data services. Pasek said that in FY19, cloud data services will represent one point of growth. (As an aside, using the "one point of growth" metric, we calculate FY19 cloud data services revenue at roughly $60M: the latest quarter of total revenue, F3Q18, was $1.52B; multiplied by 4 and then by 1%.) So, cloud services revenue is expected to grow to roughly $60M in FY19 and reach the FY21 target of $400-600M. The company declined to state its FY18 cloud data services revenue when asked by the audience, so we take it that it is small.
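Our back-of-envelope estimate can be reproduced in a few lines; the only inputs are the F3Q18 revenue figure and the "one point of growth" comment from the meeting.

```python
# Back-of-envelope estimate of NetApp's FY19 cloud data services revenue,
# using the "one point of growth" comment and F3Q18 total revenue.
F3Q18_REVENUE = 1.52e9                   # latest reported quarter, USD
annualized_revenue = F3Q18_REVENUE * 4   # naive annualization: $6.08B
growth_contribution = 0.01               # "one point" = 1% of revenue

cloud_revenue_fy19 = annualized_revenue * growth_contribution
print(f"~${cloud_revenue_fy19 / 1e6:.0f}M")  # ~$61M, i.e. roughly $60M
```

Note that the naive annualization (4x a single quarter) ignores seasonality, so this is only a rough scale check against the $400-600M FY21 target.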
Broadcom joined both Innovium and Nephos in publicly announcing 12.8 Tbps fabrics with its Tomahawk 3 product line. We love new data center silicon from all vendors; it is something we track closely, and we see these parts as disruptive technologies for the networking ecosystem and an enabler of next-generation cloud architectures. There will be many more such announcements in 2018. Here are some of our takeaways as we enter 2018.
More rapid innovation cycle – As noted in Broadcom's Tomahawk 3 press release, we see the demand requirements of the hyperscalers driving a more rapid silicon cycle over the next couple of generations. Tomahawk 3 is being introduced in less than the typical 24 months that have separated prior generations of data center fabric semiconductors. This will put significant pressure on parts of the supply chain, especially on optics vendors. Optics vendors are still ramping for 100 Gbps and now must support both OSFP and QSFP-DD for 400 Gbps, essentially doubling their product diversity needs. Not only are there more form factors, but there are also different variations of distance and specifications that increase the complexity.
What next – We see two waves of 400 Gbps, the first based on 56 Gbps SERDES, the second coming in the 2020 timeframe based on 112 Gbps SERDES. We believe 800 Gbps is not far off on the horizon as hyperscalers like Amazon and Google continue to grow. We note that the hyperscalers are about to be 3-4 generations ahead of the enterprise. This type of lead and technology expertise really changes the conversation around the cloud. We saw this at Amazon re:Invent with its Annapurna NIC: the cloud is doing things that just aren't possible in the enterprise, especially around AI, machine learning, and other new applications that take advantage of the hyperscalers' scale.
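The lane math behind these waves is straightforward. The sketch below uses nominal payload rates as a simplifying assumption (a 56 Gbps SERDES carries roughly 50 Gbps after encoding overhead, a 112 Gbps SERDES roughly 100 Gbps).

```python
# Port arithmetic for a 12.8 Tbps switch fabric, using nominal payload
# rates: ~50 Gbps per 56G SERDES lane, ~100 Gbps per 112G SERDES lane.
FABRIC_BPS = 12.8e12

LANE_WAVE1_BPS = 50e9    # first 400G wave: 8 x 50G lanes per port
LANE_WAVE2_BPS = 100e9   # ~2020 wave: 4 x 100G lanes per port

total_lanes_wave1 = FABRIC_BPS / LANE_WAVE1_BPS  # 256 SERDES lanes
ports_400g = FABRIC_BPS / 400e9                  # 32 x 400 Gbps ports

print(f"{int(total_lanes_wave1)} lanes -> {int(ports_400g)} x 400G ports "
      f"(8 lanes/port in wave 1, 4 lanes/port in wave 2)")
```

Halving the lane count per 400G port in the second wave is also what lets the same 12.8 Tbps fabric be configured as, say, 128 x 100 Gbps ports for denser spine layers.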
2018, the Year of 200 Gbps and 400 Gbps – In 2018 we will see commercial shipments of both 200 Gbps and 400 Gbps switch ports, and we expect significant vendor share changes as a result. Simply put, the cloud, and especially the hyperscalers, will be that much bigger by the end of 2018, and they buy a different class of equipment than everyone else. This will continue to cause the vendor landscape to evolve.