There were two main announcements: a new relationship with Google Cloud Platform and a new flash device, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability within the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. This is new flash hardware from NetApp that, besides offering impressive density and low-latency capabilities, supports NVMe-over-Fibre Channel. The product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that is what its customers are asking for in order to accelerate their on-premises workloads such as databases (Oracle), ERP (SAP), and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures as they adapt to AI/Deep Learning workloads. NetApp executives explained that AI workloads are different from traditional workloads: they are random, low-latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached via Fibre Channel. From NetApp's perspective, customers who want to run AI workloads fastest would likely do so on-premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may instead work with vendors such as NetApp to develop additional software capabilities - on top of IP/Ethernet infrastructures - to address the changing demands of AI/ML/GPU workloads in the future.
We attended the Deutsche Bank Tech conference this week and met with a ton of companies. It is always interesting to see the difference in questions from investors vs. those directly in the industry. During the conference, each company put its own spin on Data Center Interconnect (DCI), offering a definition tailored to its specific portfolio. This is very similar to the early Cloud days, when every vendor and component manufacturer said they sold into the Cloud. Fast forward to today, and very few vendors sell to the Cloud. We see a similar end game here, with many suppliers being squeezed out of the DCI market as it matures.
The lack of clarity created confusion among investors as they went from session to session, which we think is a short-term negative for the market.
We are very excited to have holistic DCI coverage - coverage that looks at both legacy approaches built around Optical and the newer approach of using switching and routing. We are hopeful that the market will move toward one consistent definition of DCI, as that would be better for the market itself and for the suppliers in it. But we see that as unlikely, since many vendors seem to be digging into definitions that are self-serving and more focused on legacy products than on what customers will want in the future.
We look forward to many future conversations on DCI.
Tomorrow, at 8:30 AM, we are presenting at the Flash Memory Summit 2017 and will share our views on the storage infrastructure market. We expect growth in segments such as hyperconverged, All Flash Arrays, and SDS. We expect growth from customer groups such as Cloud Service Providers, as well as Telecom Service Providers, while traditional enterprises are expected to experience declines.
From a technology standpoint, we are bullish on NVMe technology as well as 3D XPoint, and we expect that hard-drive-based systems will experience long, slow declines.
For those in attendance at Flash Memory Summit (#FMS2017), we will be presenting slides. If you are interested in learning more about our views on the storage infrastructure market, please contact us.
Aerohive, a leading enterprise-class WLAN vendor, announced changes to its product and services pricing that are intended to get its foot in the door with more customers. Here is what it has done:
In our interview with management about the new product (AP122) and new service (Connect) announcement, we learned that the company expects many of its prospective customers will opt to choose the "Select" service level over time, because it offers more features than Connect. Additionally, we learned that, measured across its entire product line, the company will be charging somewhat more for its services and software and somewhat less for its hardware. We see this change as consistent with the price aggressiveness it just announced for the "low end" of its product line, namely the AP122 and AP130.
We expect that Aerohive's pricing moves will have an impact on the industry. Certainly, other well-featured products with aggressive price points have done well in the marketplace in recent years.