We witnessed two significant OCP Summit announcements:
a) the open sourcing of a semiconductor design
b) improved server system security.
First, Microsoft announced a new compression standard called Project Zipline. The intellectual property for this compression is being made available as open-source Verilog (RTL), which will allow others to program semiconductors such as FPGAs. The claim is that Project Zipline outperforms existing compression schemes by more than 20%. This is the first member contribution of a semiconductor design to OCP.
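Project Zipline's Verilog RTL is hardware, so we cannot exercise it here. But as a software-only illustration of how a ">20% better compression" claim is typically measured (ratio of input size to output size, compared against a baseline), here is a minimal sketch using Python's standard-library zlib as a stand-in compressor; the sample payload and effort levels are our own illustrative assumptions, not Zipline data.

```python
import zlib

def compression_ratio(data: bytes, level: int = 6) -> float:
    """Return original_size / compressed_size for one zlib pass."""
    compressed = zlib.compress(data, level)
    return len(data) / len(compressed)

# Illustrative, highly repetitive payload, typical of the logs and
# telemetry where compression gains tend to be largest.
sample = b"timestamp=2018-03-20 status=OK sensor=42\n" * 1000

baseline = compression_ratio(sample, level=1)  # fast, low-effort pass
tuned = compression_ratio(sample, level=9)     # higher-effort pass

# A ">20% better" style claim is this relative improvement in ratio.
improvement_pct = (tuned - baseline) / baseline * 100
print(f"baseline: {baseline:.1f}x, tuned: {tuned:.1f}x, "
      f"improvement: {improvement_pct:.1f}%")
```

The same arithmetic applies whether the compressor is software or a hardware block; what Zipline changes is that the compression engine itself ships as RTL that can be synthesized into an FPGA or ASIC.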
Second, and continuing a theme started at last year's OCP show, Microsoft announced improvements to Project Cerberus, a systems-level design and specification to improve server security. Last year, the project addressed boot-time security for the CPU. This year's announcement extends coverage to other server subsystems, such as accelerators, storage and NICs. We see this project as a reaction to concerns about data center hardware compromises that have come to public light over the last couple of years.
There were two main announcements: a new relationship with Google Cloud Platform and a new flash array, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability within the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. New flash hardware from NetApp that, besides impressive density and low-latency capabilities, supports NVMe over Fibre Channel. The product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that is what its customers are asking for in order to accelerate on-premises workloads such as databases (Oracle), ERP (SAP) and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that its Fibre Channel business is growing faster than its Ethernet business - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures as they adapt to AI/deep learning workloads. NetApp executives explained that AI workloads are different from traditional workloads; they are random, low-latency workloads connecting to GPUs. This type of workload, NetApp told us, works very well when attached over Fibre Channel. From NetApp's perspective, customers who want to run AI workloads at maximum speed would likely do so on premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may instead work with vendors such as NetApp to develop additional software capabilities - on top of IP/Ethernet infrastructures - to address the changing AI/ML/GPU workloads of the future.
The OCP Summit 2018 hit record attendance, and we can summarize the theme as continued disaggregation of network/server functions. Examples of disaggregation appeared throughout the demonstrations, presentations and proposals at the show.
It was great to catch up with old friends and make new ones at OCP this year. The show was highly successful; the crowds at the Facebook and Microsoft booths were so large that it was difficult to move around. On the switch side, most of the announcements were incremental to the market, but with new chips on the horizon and a delay in 100 Gbps deployments because of supply constraints, we see this as a temporary pause ahead of what will likely be some bigger announcements later in 2018.
There were many highlights at OCP, but three things caught our eye while walking the show floor on both days.
• Microsoft’s Project Olympus server is about to transition Microsoft away from high-density servers and towards rack servers, more in line with what other Tier 1 cloud providers are doing. We note that the smart NIC is still a multi-chip solution, one that could be further consolidated in future generations. Microsoft also announced ARM-based servers and joined Facebook in announcements on machine learning and AI-optimized compute. We see this change in cloud architectures as a good sign for the industry. The market is quickly moving into more use cases, which will help drive growth beyond simply moving workloads away from the on-premises market.
• The white-box vendors were out in force at the show. Edgecore showed various fixed and modular form factors. We note that some of these boxes are modified for larger cloud customers with the addition of large SSDs or memory. We have a pretty good sense of who is using these additions, but that is a topic for a more detailed report. We also saw Quanta and Delta with large presences on the show floor.
• This year we saw many software announcements around OCP. Arista announced its containerized EOS operating system (cEOS). We saw Apstra and Cumulus active at the show, and we ran into many other software vendors in attendance. OCP has done a good job of straddling the hardware/software boundary, but clearly the software needed to run these networks is evolving quickly as well.