We attended the Wi-Fi NOW conference in Redwood City, CA this week and sat in on some interesting presentations. Here are our observations and notes from the Google, Quantenna, Mist Systems and Mojo Networks presentations.
Google Station presentation. "GOOGLE STATION: PUBLIC WI-FI TO CONNECT THE NEXT BILLION INTERNET USERS." Monica Garde and Erika Wool made an interesting presentation. The gist of the presentation, from our viewpoint, is that Google is partnering with service providers and enabling them to monetize the Wi-Fi network through a revenue-sharing system based primarily on advertising. The company shared some statistics, which we have in the accompanying slide.
Quantenna presentation. James Chen, VP of Product Line Management, presented "GREAT INNOVATIONS PART ONE: MASSIVE MIMO & DUAL-BAND 802.11AX". Chen made the case that 8x8 Wi-Fi (which Quantenna calls Massive MIMO) outperforms 4x4 systems. For instance, in its tests, at an RSSI of 85 and through a wall, performance was 1.6x greater using 8x8 compared to 4x4. The company also made the case that Massive MIMO delivers greater throughput than non-Massive-MIMO systems; it has demonstrated >1 Gbps throughput in a typical home. The company showed that Massive MIMO alleviates the "sticky client" problem, using a 1x1 Samsung Galaxy Tab Active2 device. Unfortunately, the company did not talk about 802.11ax in any depth, other than to say that 8x8 is relevant for 802.11ax as well.
Mojo Networks presentation. Mojo CEO Rick Wilmer made the point that simply enabling cloud-managed Wi-Fi has been done already, implying that this is cloud 1.0 and that this message is boring. The company explained that its cloud architecture is cloud 2.0 because it takes advantage of the capabilities of the cloud and enables Cognitive Wi-Fi. Cognitive Wi-Fi, as far as Mojo is concerned, has to do with big data (storing key client parameters and running ML algorithms) and smart edge APs. The company didn't go deeply into the science of ML/AI, but explained the ML workflow: 1) data collection, 2) training the classifier model, 3) the trained model in action, 4) the result.
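That four-step workflow is the standard supervised-learning loop. As a toy illustration (our own sketch, not Mojo's code - the client data, labels and nearest-centroid classifier are all invented for the example), the pattern looks like this:

```python
# Toy illustration of the four-step ML workflow Mojo described
# (data collection -> training -> trained model in action -> result).
# The samples and "good"/"poor" labels are made up for this example.

# 1) Data collection: (RSSI dBm, retry rate) per client, with labels.
samples = [
    ((-45, 0.02), "good"), ((-50, 0.05), "good"), ((-55, 0.04), "good"),
    ((-80, 0.30), "poor"), ((-85, 0.40), "poor"), ((-78, 0.25), "poor"),
]

# 2) Training: compute the mean feature vector (centroid) per class.
def train(samples):
    sums, counts = {}, {}
    for (rssi, retry), label in samples:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += rssi
        s[1] += retry
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl])
            for lbl, s in sums.items()}

# 3) Trained model in action: classify a new client by nearest centroid.
def classify(model, rssi, retry):
    def dist(c):  # squared distance; retry rate scaled up to RSSI range
        return (rssi - c[0]) ** 2 + (100 * (retry - c[1])) ** 2
    return min(model, key=lambda lbl: dist(model[lbl]))

# 4) Result: a prediction the system could act on automatically.
model = train(samples)
print(classify(model, rssi=-82, retry=0.35))  # -> poor
```

A production system would obviously use far richer features and models (Mojo mentioned neural networks elsewhere), but the collect/train/apply/act loop is the same.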
Mojo explained that it has lots of data to perform machine learning on. It has half a million APs deployed. The company shared that one week of data from a subset across only four verticals (enterprise, education, manufacturing, and retail & hospitality) yielded 237K clients, 31M associations and 400+ applications. Separately, in a press release, the company said it obtains 50M associations per week. A significant amount of the data delivered to the cloud has been pre-processed in the Mojo APs; the APs cache two days of data. The point of these statistics, according to Mojo, is that it has more data than other Wi-Fi vendors on which to train its machine learning system.
According to Mojo, its inference engine automatically fixes everything it can. Wilmer says this makes interacting with the user interface less necessary because the system takes care of problems automatically. Was Mojo serious or joking when it said, "the UI may disappear as we know it"? Time will tell.
The company shared some other interesting information as well.
Mist Systems. Bob Friday of Mist made a presentation on May 17, 2018. In addition to the content from his presentation, we interviewed other Mist personnel at the show. The company claims it is focusing on, and having success in, selling to large enterprises. We learned that Mist uses Broadcom Wi-Fi chips and has a custom-designed Bluetooth antenna array (shown at the show). The company highlights its location services as a unique capability, and it draws upon its Bluetooth capabilities to deliver location. However, the company's main message is its AI capabilities; in some ways, it has become the poster child for AI among startups in the networking industry. Mist's presentation at the show reiterated the same point - that it is an AI company.
Stepping back, Mist has now been shipping commercially for a year. By our observation and research, the company's efforts to take share from competitors have put it on the map - over the past two quarters, its larger competitors have taken notice of Mist and see it competing at large enterprise accounts.
During the Q&A, Bob Friday, Mist CTO and founder, was asked something we found very interesting: what kinds of algorithms does Mist use in its system, and do they all need to learn? The answer was to the effect that many different types of algorithms are used - linear optimization, decision tree analytics, neural networks, etc. Friday made the case that there are certain things you simply know about how a Wi-Fi network will and should work, so why have a machine learn them when you already know them? This raises the question of how necessary AI is in the first place, especially if the vendor and its IT workers or VARs have gobs of experience and can design and implement a Wi-Fi network right the first time. Looking at the problem differently, this means that some vendors may have different backgrounds than their competitors and can design Wi-Fi systems that know how to work under a variety of conditions.

Friday was also asked another question: given that Mist is focusing so much on AI, does this mean that far fewer IT workers will be employed? Bob's answer was diplomatic, but probably true - he said no, we'll need the same number of workers in the near term, and AI Wi-Fi will simply allow the same number of IT workers to make better decisions. Still, the question makes one thing clear - the audience is concerned about job loss as AI works its way into the IT industry.
NetApp made two main announcements: a new relationship with Google Cloud Platform and a new all-flash array, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability in the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premises. This relationship with Google rounds out NetApp's relationships with the main hyperscalers - it already has relationships in place with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to reach commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. This new flash hardware from NetApp, besides having impressive density and low-latency capabilities, supports NVMe over Fibre Channel. The product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that's what its customers are asking for in order to accelerate on-premises workloads such as databases (Oracle), ERP (SAP) and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked what is going on with the hyperscalers' architectures as they adapt to AI/deep learning workloads. NetApp executives explained that AI workloads are different from traditional workloads: they are random, low-latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached via Fibre Channel. From NetApp's perspective, customers who want to run AI workloads fastest would likely do so on premises, using Fibre Channel. Yet many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may instead work with vendors such as NetApp to develop additional software capabilities, on top of IP/Ethernet infrastructure, to address the changing AI/ML/GPU workloads of the future.
Keynotes at the NFV World & Zero-Touch Congress in San Jose, California were very interesting today. We share our observations on the main themes from interesting presentations by Nokia, NEC/Netcracker, Google and CenturyLink. The main theme of these presentations, we think, is this: NFV/SDN is now deeply in the deployment and commercial phase, whereas 3-4 years ago it was just a concept.
Nokia. The company announced that its Airframe server platform, an OCP-based design, is available with either embedded acceleration or pluggable acceleration. This includes its software acceleration as well. The company explained that its ReefShark chipset can be equipped on the Airframe server and performs better than a non-accelerated server.
In explaining functions that an Airframe with ReefShark can perform, the company gave a good example: massive MIMO beamforming can be assisted by the machine learning capabilities.
NEC/Netcracker. Enrique Gracia presented several use cases from NEC/Netcracker customers related to NFV/SDN. He explained that 16 customers have deployed one or more of these use cases.
Full Stack OSS/BSS/MANO. A customer deployed this system in 12 weeks to launch a VNF. The system managed both physical and virtual devices.
Expanding to a new territory using VNFs from the home region. A customer now delivers services to customers outside its home territory by hosting the software and service at its home-region network location. In this particular case, NEC/Netcracker and its customer share revenue, and the VNFs include SD-WAN, a virtual firewall and others. The service provider expects to expand its addressable customer base by 40%, mainly targeting small/medium businesses in this non-home region. The system uses MANO, OSS, BSS and the marketplace. The company says that in this case, time to revenue for deploying new VNFs is expected to be 50% shorter in the future.
uCPE (Universal Customer Premises Equipment) deployment instead of branded hardware. NEC/Netcracker worked with a service provider to enable uCPE to be deployed as an alternative to gear from Cisco, Juniper and others.
Google Cloud. Vijoy Pandey, representing Google Cloud, presented on using AI/ML to reconfigure Google's data center systems. The company's cloud data center architecture has been evolving continuously since it was first introduced. Currently, the company is using its own AI/ML system to learn from current network traffic patterns in order to design its future network architecture.
CenturyLink. The company has deployed Broadcom-based Ethernet switches running its own network OS. These switches do their own packet forwarding. Additionally, the company has built its own orchestration system, called VICTOR, which draws upon Ansible and NETCONF, uses the service logic interpreter from ONAP, and uses parts of OpenDaylight. The company plans to open source this development, and spokesperson Adam Dunstan said, perhaps jokingly, that it might be called ONAP-lite.
The OCP Summit 2018 hit record attendance, and we can summarize the theme as that of continued disaggregation of network/server functions. Examples of demonstrations, presentations and proposals associated with disaggregation are as follows:
Broadcom joined both Innovium and Nephos in publicly announcing 12.8 Tbps fabrics, with its Tomahawk 3 product line. We love new data center silicon from all vendors; it is something we track closely, and we see these parts as disruptive technologies for the networking ecosystem and enablers of next-generation cloud architectures. There will be many more such announcements in 2018. Here are some of our takeaways as we enter 2018.
More rapid innovation cycle – As noted in Broadcom's Tomahawk 3 press release, we see the demand requirements of the hyperscalers driving a more rapid silicon cycle over the next couple of generations. Tomahawk 3 is being introduced in less than the typical 24 months we have seen between prior generations of data center fabric semiconductors. This will put significant pressure on parts of the supply chain, especially on optics vendors. Optics vendors are still ramping for 100 Gbps and now must support both OSFP and QSFP-DD for 400 Gbps, essentially doubling their product diversity needs. Not only are there more form factors, but there are also different variations of distance and specifications that increase the complexity.
What next – We see two waves of 400 Gbps: the first based on 56 Gbps SERDES, the second coming in the 2020 timeframe based on 112 Gbps SERDES. We believe 800 Gbps is not far off on the horizon as hyperscalers like Amazon and Google continue to grow. We note that the hyperscalers are about to be 3-4 generations ahead of the enterprise. This type of lead and technology expertise really changes the conversation around the Cloud. We saw this at Amazon re:Invent with the Annapurna NIC: the Cloud is doing things that just aren't possible in the enterprise, especially around AI, machine learning and other new applications that take advantage of the hyperscalers' scale.
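The relationship between SERDES generation, fabric capacity and port speed is simple lane arithmetic. A quick back-of-the-envelope sketch (the lane counts are the commonly cited figures for this class of 12.8 Tbps switch silicon, not vendor-confirmed specs):

```python
# Back-of-the-envelope lane math for 12.8 Tbps switch fabrics.
# A 56 Gbps SERDES lane carries roughly 50 Gbps of usable bandwidth
# after PAM4/FEC encoding overhead, so a 12.8 Tbps fabric needs:
lanes = 12_800 // 50            # 256 SERDES lanes

# A first-wave 400 GbE port is 8 x 50G lanes, giving 32 ports per chip:
ports_400g = lanes // 8         # 32 x 400 GbE

# The second 400 Gbps wave, on 112 Gbps SERDES (~100 Gbps usable),
# halves the lanes needed per port:
lanes_per_port_gen2 = 400 // 100  # 4 lanes per 400 GbE port

print(lanes, ports_400g, lanes_per_port_gen2)
```

The same arithmetic is why 112 Gbps SERDES makes 800 Gbps ports (8 x 100G lanes) a natural next step.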
2018, the Year of 200 Gbps and 400 Gbps – In 2018 we will see commercial shipments of both 200 Gbps and 400 Gbps switch ports, and we see significant vendor share changes because of this. Simply put, the Cloud, especially the hyperscalers, will be that much bigger by the end of 2018, and they buy a different class of equipment than everyone else. This will continue to cause the vendor landscape to evolve.