We attended the Qualcomm Wi-Fi 6 event held in San Francisco today. Representatives from partner companies in attendance included HPE Aruba, Cisco, CommScope, Boingo, Netgear, Rivet Networks, AMD, and Microsoft. The principal announcement was Qualcomm's Networking Pro Series platforms, Wi-Fi 6-focused chip systems now in the initial stages of availability and expected to appear in shipping systems over the coming months and quarters. The new Networking Pro Series chips hit four price points, segmented generally by the number of antennas (more at the higher end), and are targeted primarily at the enterprise market, though we learned that some high-end consumer ("prosumer") vendors plan to use these chips as well. The Networking Pro products add features not found in Qualcomm's previous Wi-Fi 6 chips, including upstream MU-MIMO and upstream OFDMA, and the design claim is that they can support 1,500 simultaneous users both upstream and downstream.
In the past, it could be said that Wi-Fi and cellular compete in some markets. We found it interesting that Qualcomm said that it expects that Wi-Fi 6 mesh products will be the way to get 5G millimeter-wave signals indoors. Several Qualcomm executives echoed the message that Wi-Fi and cellular are complementary, even though many Qualcomm service provider and cellular equipment partners do not subscribe to this point of view.
Qualcomm shared some impressive numbers. It ships Wi-Fi device chips at a run rate of approximately 1B per year; it has shipped over 4B Wi-Fi chips since 2015, versus a cumulative 0.5B by 2010. It has shipped Wi-Fi chips with MU-MIMO capabilities into a total of 0.75B client devices. Qualcomm also claims it has found that Target Wake Time (TWT) can improve cell phone battery life by as much as 60%.
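Qualcomm did not share its measurement methodology, but the mechanics behind a TWT battery claim come down to duty-cycle arithmetic: a client that sleeps between negotiated wake windows draws far less average current. A minimal sketch with illustrative numbers of our own choosing (not Qualcomm data):

```python
# Illustrative duty-cycle model of Target Wake Time (TWT) battery savings.
# All numbers below are assumptions for illustration, not Qualcomm data.

def battery_hours(capacity_mah, duty_cycle, active_ma, sleep_ma):
    """Battery life given the fraction of time the Wi-Fi radio is awake."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma

CAPACITY = 3000.0   # mAh, typical phone battery (assumed)
ACTIVE   = 200.0    # mA while the radio is awake (assumed)
SLEEP    = 5.0      # mA while the radio sleeps (assumed)

# Legacy power save: radio awake 10% of the time to catch beacons/traffic.
legacy = battery_hours(CAPACITY, 0.10, ACTIVE, SLEEP)
# TWT: wake windows negotiated with the AP cut awake time to 5%.
twt = battery_hours(CAPACITY, 0.05, ACTIVE, SLEEP)

print(f"legacy: {legacy:.0f} h, TWT: {twt:.0f} h, gain: {twt/legacy - 1:.0%}")
```

With these assumed numbers the model yields roughly a 66% gain, the same order of magnitude as Qualcomm's claim; real-world savings depend heavily on traffic patterns and the negotiated wake intervals.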
HPE Aruba President Keerti Melkote presented and shared that Aruba had won the Wi-Fi project for the nearby Chase Center, where the NBA's Warriors play, and that it should be operating soon. Melkote also emphasized that Aruba had recently begun shipping its price-competitive Instant On product and that take-up has been strong. Cisco SVP of Engineering Anand Oswal primarily discussed Cisco's OpenRoaming initiative, which focuses on seamless and secure public Wi-Fi onboarding; it was interesting that Cisco did not focus its comments on Wi-Fi 6. Morgan Kurk, CTO of CommScope and acting President of Ruckus Networks, spoke about the benefits of Wi-Fi 6 to venues and to primary and secondary educational institutions, including AR/VR, 1:1 computing, and online assessment use cases. Derek Peterson, CTO of Boingo, a Wi-Fi/cellular venue services provider, shared that Boingo now serves 1B consumers per year. Its goal is to deliver 100 MHz of spectrum to each user, which it expects to reach by using all available spectrum, licensed and unlicensed. Peterson also shared observations on the benefits of Wi-Fi 6 from Boingo's trial at John Wayne Airport in Irvine, CA, which began in April 2019. Morgan Teachworth, Head of Hardware Platforms at Cisco Meraki, shared observations from several events Meraki has supported, including the 2019 US Open at Pebble Beach, where, to its surprise, uplink traffic exceeded downlink traffic. David Henry, SVP of Connected Home Products at Netgear, hinted that Netgear plans to introduce a Wi-Fi 6 mesh product, saying to wait for details next week. We also learned that Netgear would leverage the highest-end Qualcomm Networking Pro chips intended for enterprise-class devices.
VK Jones, VP of Technology, Qualcomm Atheros, spoke about future products and standards work. He said that by 2020 we should expect 6 GHz operation, and by 2022, 802.11ax Release 2 features, including scheduling and spatial re-use intended to improve performance even in the presence of older devices. Notably, 6 GHz operation is expected to require a third-party coordination service that manages which frequencies each access point uses.
We witnessed two significant OCP Summit announcements:
a) open sourcing of semiconductor design
b) improved server system security.
First, Microsoft announced a new compression standard called Project Zipline. The intellectual property for this compression is being made available as open-source Verilog (RTL), which will allow others to program semiconductors such as FPGAs. The claim is that Project Zipline outperforms existing compression by more than 20%. This is the first member contribution of a semiconductor design to OCP.
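Zipline itself is hardware RTL, so we cannot exercise it here, but "outperforms by more than 20%" is a compression-ratio comparison. As a hedged sketch of how such comparisons are typically measured, using stock zlib levels purely as stand-ins for a baseline and an improved compressor:

```python
# Measuring compression-ratio improvement between two compressors.
# zlib stands in for both the baseline and the "improved" algorithm;
# Project Zipline itself is hardware RTL and is not exercised here.
import zlib

def ratio(data: bytes, level: int) -> float:
    """Compressed size as a fraction of original size (lower is better)."""
    return len(zlib.compress(data, level)) / len(data)

# Repetitive sample data, loosely mimicking log/telemetry payloads.
sample = b"ts=1552000000 host=web01 status=200 bytes=5120\n" * 2000

baseline = ratio(sample, 1)   # fast, weaker compression
improved = ratio(sample, 9)   # slower, stronger compression
gain = (baseline - improved) / baseline

print(f"baseline ratio {baseline:.3f}, improved ratio {improved:.3f}, "
      f"size reduction {gain:.0%}")
</imports>

The exact percentages depend entirely on the data set, which is why Microsoft's 20%+ claim is ultimately about its own cloud workloads rather than a universal figure.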
Second, and continuing a theme started at last year’s OCP show, Microsoft announced improvements to Project Cerberus, a system-level design and specification to improve server security. Last year, the project addressed boot-time security for the CPU. This year’s announcement extends coverage to other server subsystems, such as accelerators, storage, and NICs. We see this project as a reaction to the data center hardware-compromise concerns that have come to light publicly over the last couple of years.
There were two main announcements, a new relationship with Google Cloud Platform and a new flash device - the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability within the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. This new flash hardware from NetApp combines impressive density and low-latency capabilities with support for NVMe over Fibre Channel; the product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that is what its customers are asking for in order to accelerate on-premises workloads such as databases (Oracle), ERP (SAP), and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that its Fibre Channel business is growing faster than Ethernet - which makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures as they adapt to AI/deep learning workloads. NetApp executives explained that AI workloads differ from traditional workloads: they are random, low-latency workloads feeding GPUs. This type of workload, we were told by NetApp, works very well over Fibre Channel. From NetApp's perspective, customers who want to run AI workloads fastest would likely do so on premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol over an underlying Ethernet infrastructure. We have always been skeptical that the hyperscalers would adopt Fibre Channel. We believe they may instead work with vendors such as NetApp to develop additional software capabilities - on top of IP/Ethernet infrastructure - to address the changing AI/ML/GPU workloads in the future.
The OCP Summit 2018 hit record attendance, and we can summarize the theme as continued disaggregation of network and server functions. Examples of demonstrations, presentations, and proposals associated with disaggregation are as follows:
It was great to catch up with old friends and make new friends at OCP this year. The show was highly successful, with crowds at the Facebook and Microsoft booths so large that it was difficult to move around. On the switch side, most of the announcements were incremental to the market, but with new chips on the horizon and a delay in 100 Gbps because of supply constraints, we see this as a temporary pause ahead of what will likely be some bigger announcements in 2018.
There were many highlights at OCP, but three things caught our eye while walking the show floor on both days.
• Microsoft’s Project Olympus server is about to transition Microsoft away from high-density servers and toward rack servers, more in line with what other Tier 1 cloud providers are doing. We note that the SmartNIC is still a multichip solution, one that could be further consolidated in future generations. Microsoft also announced ARM-based servers and joined Facebook in announcements on machine learning and AI-optimized compute. We see this change in cloud architectures as a good sign for the industry: the market is quickly moving into more use cases, which will help drive growth beyond simply moving workloads away from the on-premises market.
• The white-box vendors were out in force at the show. Edgecore showed various fixed and modular form factors. We note that some of these boxes are modified for larger cloud customers with the inclusion of large SSDs or additional memory. We have a pretty good sense of what these additions are used for, but that is a topic for a more detailed report. We also saw Quanta and Delta with large presences on the show floor.
• This year we saw many software announcements around OCP. Arista announced its containerized EOS operating system (cEOS). We saw Apstra and Cumulus active at the show as well, and ran into many other software vendors in attendance. OCP has done a good job of straddling the hardware/software boundary, but the software needed to run these networks is clearly evolving quickly as well.