We attended the Wi-Fi NOW conference in Redwood City, CA this week and saw some interesting presentations. Below are our observations and notes from the Google, Quantenna, Mist Systems and Mojo Networks presentations.
Google Station presentation. "GOOGLE STATION: PUBLIC WI-FI TO CONNECT THE NEXT BILLION INTERNET USERS." Monica Garde and Erika Wool made an interesting presentation. The gist of the presentation, from our viewpoint, is that Google is partnering with service providers and enabling them to monetize the Wi-Fi network through a revenue-sharing system based primarily upon advertising. The company shared some statistics, which we have in the accompanying slide.
Quantenna presentation. James Chen, VP Product Line Management, presented "GREAT INNOVATIONS PART ONE: MASSIVE MIMO & DUAL-BAND 802.11AX". Chen made the case that 8x8 Wi-Fi (which Quantenna calls Massive MIMO) outperforms 4x4 systems. For instance, in its tests, at 85 RSSI and through a wall, performance was 1.6x greater using 8x8 compared to 4x4. The company also made the case that Massive MIMO has greater throughput than non-Massive-MIMO systems; it has demonstrated >1 Gbps throughput in a typical home. The company showed that Massive MIMO alleviates the "sticky client" problem using a 1x1 Samsung Galaxy Tab Active2 device. Unfortunately, the company did not talk about 802.11ax, other than to say that 8x8 is relevant for 802.11ax as well.
Mojo Networks presentation. Mojo CEO Rick Wilmer made the point that simply enabling cloud-managed Wi-Fi has been done already, implying this is cloud 1.0 and that this message is boring. The company explained that its cloud architecture is cloud 2.0 because it takes advantage of the capabilities in the cloud and enables what it calls Cognitive Wi-Fi. Cognitive Wi-Fi, as far as Mojo is concerned, has to do with big data (storing key client parameters and running ML algorithms) and smart edge APs. The company didn't go into the deep science of ML/AI, but explained the ML workflow: (1) data collection, (2) training the classifier model, (3) the trained model in action, (4) the result.
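The four-step workflow Mojo described can be sketched in a few lines of Python. This is our own illustrative sketch, not Mojo's implementation: the client metric, the labels and the trivial one-feature threshold "classifier" are all assumptions chosen to make the steps concrete.

```python
# Sketch of the four-step ML workflow Mojo described:
# (1) data collection, (2) training a classifier model,
# (3) the trained model in action, (4) the result.
# The retry-rate metric and labels are illustrative assumptions.

# (1) Data collection: per-client samples of the kind an AP might
# pre-process and ship to the cloud, labeled from known outcomes.
samples = [
    {"retry_rate": 0.05, "healthy": True},
    {"retry_rate": 0.10, "healthy": True},
    {"retry_rate": 0.40, "healthy": False},
    {"retry_rate": 0.55, "healthy": False},
]

# (2) Training: find the retry-rate threshold that best separates
# healthy from unhealthy clients (a one-feature decision stump).
def train(data):
    best_t, best_err = None, len(data) + 1
    for t in sorted(s["retry_rate"] for s in data):
        # A sample is misclassified when "retry_rate >= t" (predicted
        # unhealthy) disagrees with its label.
        err = sum((s["retry_rate"] >= t) == s["healthy"] for s in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

threshold = train(samples)

# (3) Trained model in action: classify a newly observed client.
def classify(sample, t):
    return sample["retry_rate"] < t  # True means "healthy"

new_client = {"retry_rate": 0.48}

# (4) Result: this is where an inference engine could trigger
# automatic remediation instead of just printing.
if not classify(new_client, threshold):
    print("unhealthy client - trigger automatic remediation")
```

In a real deployment the feature set would be far richer and the model more sophisticated, but the data-to-action loop is the same shape.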
Mojo explained that it has lots of data to perform machine learning on: it has half a million APs deployed. The company shared that one week of data from a subset of customers across only four verticals (enterprise, education, manufacturing, and retail & hospitality) contained 237K clients, 31M associations, and 400+ applications. Separately, in a press release, the company said it obtains 50M associations per week. A significant amount of the data delivered to the cloud has been pre-processed in the Mojo APs; the APs cache two days of data. The point of these statistics, according to Mojo, is that it has more data than other Wi-Fi vendors on which to train its machine learning system.
According to Mojo, its inference engine automatically fixes everything it can. Wilmer says this makes interacting with the user interface less necessary, because the system takes care of problems automatically. Was Mojo serious or joking when it said, "the UI may disappear as we know it"? Time will tell.
The company shared some other interesting information in its slides as well.
Mist Systems. Bob Friday of Mist made a presentation on May 17, 2018. In addition to the content from his presentation, we interviewed other Mist personnel at the show. The company claims it is focused on, and having success in, selling to large enterprises. We learned that Mist uses Broadcom Wi-Fi chips and has a custom-designed Bluetooth antenna array (shown at the show). The company highlights its location services as a unique capability, drawing upon its Bluetooth capabilities to deliver location. However, the company's main message is its AI capabilities; in some ways, it has become the poster child for AI amongst startups in the networking industry. Mist's presentation at the show reiterated the same point - that it is an AI company.
Stepping back, Mist has been shipping commercially for a year now. In our observation and research, the company's efforts to take share from competitors have landed it on the map - over the past two quarters, its larger competitors have taken notice of Mist and see it competing at large enterprise accounts.
During the Q&A portion of the presentation, Bob Friday, Mist CTO and founder, was asked something that we found very interesting: what kinds of algorithms does Mist use in its system, and do they all need to learn? The answer was to the effect that many different types of algorithms are used - linear optimization, decision tree analytics, neural networks, etc. Friday made the case that there are certain things you just know about how a Wi-Fi network will and should work, so why have a machine learn something you already know? This raises the question of how necessary AI is in the first place, especially if the vendor and its IT workers or VARs have gobs of experience and can design and implement a Wi-Fi network right the first time. Looking at the problem differently, this means some vendors may have different backgrounds than their competitors and can design Wi-Fi systems that know how to work under a variety of conditions. Friday was also asked another question: given that Mist is focusing so much on AI, does this mean that far fewer IT workers will be employed? His answer was diplomatic, but probably true - no, we'll need the same number of workers in the near term, and AI Wi-Fi will simply allow the same number of IT workers to make better decisions. Still, the question makes it clear that the audience is concerned about job loss as AI works its way into the IT industry.
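Friday's point that some Wi-Fi behavior is simply known, rather than learned, can be illustrated with a rule-based diagnostic that needs no training at all. The rules and thresholds below are our own illustrative assumptions, not Mist's logic.

```python
# Rule-based checks of the kind Friday alluded to: some Wi-Fi
# behavior is domain knowledge, so a model need not learn it.
# The thresholds below are illustrative assumptions, not Mist's.

KNOWN_RULES = [
    # (finding, predicate on a client-metrics dict)
    ("weak signal - client should roam",
     lambda m: m["rssi_dbm"] < -75),
    ("channel congested - change channel",
     lambda m: m["channel_utilization"] > 0.8),
]

def diagnose(metrics):
    """Return every finding that fires; no training required."""
    return [finding for finding, rule in KNOWN_RULES if rule(metrics)]

print(diagnose({"rssi_dbm": -80, "channel_utilization": 0.3}))
# -> ['weak signal - client should roam']
```

A practical system would likely layer learned models on top of such rules, reserving ML for the patterns that engineers cannot write down in advance.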
Earlier this year, Aerohive issued a press release about its A3 software system. A3 is what we categorize as an Enhanced Network Access Control (ENAC) system; we calculate market share statistics on this market in our Security report series. The company is now getting ready to bring the product to market and is blitzing the media, so to speak. We were briefed and learned more about the product. To summarize, it checks the boxes necessary for us to include it in our ENAC report, and we like that it has a common user interface that allows customers to perform device profiling, authentication/registration, compliance/remediation, device management, billing integration and network access control.
Our outlook for ENAC is positive and, today, three main vendors account for the majority of share: Cisco (with ISE), Aruba (with ClearPass) and ForeScout. We learned that the company has more aggressive pricing than these market leaders. If you look at why a product like A3 is important to Aerohive, recall that when Aruba introduced ClearPass back in 2012/2013, it used it as a selling tool to get into its competitors' accounts (it also got high-margin sales from ClearPass). Aerohive's A3 is positioned similarly - it operates with its competitors' equipment (including gear from Cisco, Ruckus, Extreme and others). So, Aerohive has developed another means of selling to customers, by offering A3 to customers using non-Aerohive equipment.
Tomorrow, 650 Group's Alan Weckel will be a featured speaker on the NBASE-T hosted webinar, entitled "Growth of NBASE-T, Market Trends and Forecast." In this webinar, we will review Ethernet trends that relate not only to Campus Switch ports, but also to WLAN, computing and other devices. The NBASE-T ecosystem continues to expand, most recently with broadband modem devices. We are excited about this market and hope you will attend.
There were two main announcements from NetApp: a new relationship with Google Cloud Platform and a new flash device, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability in the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on-premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to go to commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. This is new flash hardware from NetApp which, besides having impressive density and low-latency capabilities, supports NVMe over Fibre Channel. Of course, the product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that's what its customers are asking for in order to accelerate on-premises workloads such as database (Oracle), ERP (SAP) and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures to adapt to AI/deep learning workloads. NetApp executives explained that AI workloads are different from traditional workloads; they are random, low-latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached via Fibre Channel. From NetApp's perspective, customers who want to run AI workloads fastest would likely do so on-premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may instead work with vendors such as NetApp to develop additional software capabilities, on top of IP/Ethernet infrastructures, to address the changing AI/ML/GPU workloads in the future.
Today Arista announced its entry into the campus market, launching several products at the core and aggregation layers for campus switching. For years there has been a blurry line in how one defines the campus core, partially driven by the utility of the Cat 6500 installed base. But this has been changing as the Cat 6500 installed base gets refreshed with purpose-built boxes.
For the most part, when an enterprise has two separate networks for the campus (user connectivity) and the data center (server and compute access), the campus core is counted in campus. When campus and data center are one network, or the location is smaller, the campus core is usually counted as a portion of data center. These differing deployment scenarios have caused confusion about the exact size of the campus core market.
In our research we look at both use cases to better understand the two unique deployment models, as businesses truly look at them differently depending on their networking heritage and IT expertise.
What is happening now is that campus switching is changing. It is transforming from a user connectivity role to an infrastructure role, both to support the change in how users connect and to support IoT, which started with just cell phones and tablets but is about to explode. With campus connectivity changing, so is the core, and this is allowing enterprises to rethink their campus core.
Many customers will continue to see campus as a separate network, but many others, especially those building a hybrid cloud data center, are looking at blurring the line between server compute and campus connectivity. We will see this architecture change at the same time the market moves toward MultiGig in the access layer and toward 25 Gbps and 100 Gbps.