MACSec Helps Pave the Way to End-to-End Data Security
As consumers and businesses put more data in the Cloud, the importance of securing that data increases. In just the last year, we have seen advanced threats and attacks by various entities attempting to compromise that data, and hyperscalers pushing back with both public and private mechanisms. Securing that data goes beyond basic encryption or securing a server; the network plays a critical role in better protecting data.
Many cloud customers are looking at providing end-to-end security to ensure, as best they can, that data cannot be compromised. MACSec will play an important role in how networks talk to each other and in the secure transmission of data between locations. Security is especially important at 400 Gbps: as Cloud providers adopt 400 Gbps, it is being used not only for transmission within the data center but also for Data Center Interconnect (DCI). Cloud workloads will increasingly require secure connectivity between data centers.
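MACSec's role can be made concrete by looking at what it adds to each Ethernet frame. Below is a minimal illustrative sketch (ours, not taken from any vendor implementation) that packs the IEEE 802.1AE SecTAG, the header MACSec inserts after the source MAC address; real deployments also encrypt the payload and append a 16-byte integrity check value (ICV) computed with AES-GCM, which this sketch omits.

```python
import struct

MACSEC_ETHERTYPE = 0x88E5  # IEEE 802.1AE EtherType

def build_sectag(an: int, packet_number: int, sci: bytes, short_length: int = 0) -> bytes:
    """Pack a simplified MACSec SecTAG with the SCI present.

    TCI bits set here: SC=1 (SCI included), E=1 and C=1 (payload is
    encrypted/changed); V, ES, and SCB are left at 0. The low 2 bits
    carry the association number (AN).
    """
    assert 0 <= an <= 3 and len(sci) == 8
    tci_an = 0b0010_1100 | an  # SC | E | C, plus the 2-bit AN
    # EtherType (2B) + TCI/AN (1B) + SL (1B) + packet number (4B) + SCI (8B)
    return struct.pack("!HBBI", MACSEC_ETHERTYPE, tci_an,
                       short_length & 0x3F, packet_number) + sci

tag = build_sectag(an=1, packet_number=42, sci=bytes(8))
print(len(tag))  # 16 bytes of per-frame overhead before the ICV
```

The packet number shown is what gives MACSec its replay protection: receivers drop frames whose number falls outside the replay window.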
Looking at the Ethernet switch and router markets, we project the percentage of ports shipping with MACSec will increase significantly over our forecast horizon. We expect vendors will continue to offer versions with and without MACSec, but as more purpose-built offerings for the hyperscalers emerge, some products will ship only with MACSec.
The additional features and functionality included in Ethernet switches and routers are positive for the industry. They not only add features that help grow ASPs and revenues, but they also increase the number of Ethernet ports shipped by expanding the number of use cases. 400 Gbps DCI is a great example of feature and addressable-market expansion.
By Alan Weckel, Founding Analyst, 650 Group.
There were two main announcements: a new relationship with Google Cloud Platform and a new flash device, the AFF A800. Also, in our interviews with NetApp, we learned about the future of Fibre Channel at the hyperscalers.
Google. Google Cloud Platform now integrates NetApp Cloud Volumes as a drop-down menu capability as part of the Google console. This allows enterprise customers, for instance, to use Cloud Volumes to manage their data on Google's cloud service while simultaneously managing their data on premise. This relationship with Google now rounds out the NetApp relationships with the main hyperscalers - it already has in place relationships with both Amazon (AWS) and Microsoft (Azure). NetApp Cloud Volumes on Google Cloud Platform is currently available as a "preview" capability (sign up at www.netapp.com/gcppreview) and is expected to go to commercial status by the end of 2018. Customers will pay Google for the use of NetApp Cloud Volumes.
AFF A800. New flash hardware from NetApp which, besides having impressive density and low-latency capabilities, supports NVMe over Fibre Channel. Of course, the product also supports 100 Gbps Ethernet. From a historical standpoint, it is interesting that NetApp, a company whose heritage was driven by storage over Ethernet, is touting Fibre Channel. But that is what its customers are asking for in order to accelerate on-premise workloads such as database (Oracle), ERP (SAP) and other mission-critical enterprise workloads. In our interviews with NetApp, we were told that Fibre Channel is growing faster than Ethernet - this makes sense given the company's foray in recent years into flash and low-latency workloads.
Fibre Channel at the hyperscalers? We asked about what is going on with the hyperscalers' architecture to adapt to AI/Deep Learning workloads. NetApp executives explained that AI workloads are different from traditional workloads; they are random, low latency workloads connecting to GPUs. This type of workload, we were told by NetApp, works very well when attached to Fibre Channel. From NetApp's perspective, if customers want to run AI workloads fastest, they would likely do so on-premise, using Fibre Channel. Yet, many customers run their workloads on hyperscalers, all of which use Internet Protocol and the underlying Ethernet infrastructure. We have always been skeptical that hyperscalers would adopt Fibre Channel. We believe the hyperscalers may work with vendors such as NetApp to develop additional software capabilities to address the changing workloads relating to AI/ML/GPU workloads in the future - on top of IP/Ethernet infrastructures.
The market is in a period of rapid adoption of higher speeds, led by the hyperscalers. The industry used 2016 and 2017 to adopt 25 Gbps and 100 Gbps port speeds based on 25 Gbps SERDES technology. As we enter 2018, those same hyperscalers are about to adopt 50 Gbps, 200 Gbps, and 400 Gbps port speeds based on 50 Gbps SERDES at a record-shattering pace. In the data center alone, there are now eight unique port speeds, with countless more unique variations of form factor and pluggable distance.
The market will need additional bandwidth beyond what is available today. Several of the enabling technologies were highlighted at the OIF Forum conference. 100 Gbps SERDES will help drive the industry toward that goal. Looking forward, 100 Gbps SERDES will drive wave two of 400 Gbps, which will help Ethernet extend its reach well beyond short-reach data center distances. At the same time, it will also have a long life, with use cases ranging from enterprise to service provider.
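The pattern behind these speed families is simple arithmetic: a port speed is the SERDES lane rate multiplied by the number of lanes in the port. A small sketch makes the generations above explicit (the lane counts listed are common configurations we describe, not an exhaustive set):

```python
# Port speed = SERDES lane rate (Gbps) x number of lanes per port.
SERDES_LANES = {
    25: (1, 2, 4),   # 25G lanes  -> 25/50/100 Gbps ports (2016-2017 wave)
    50: (1, 4, 8),   # 50G lanes  -> 50/200/400 Gbps ports (2018 wave)
    100: (1, 4, 8),  # 100G lanes -> 100/400/800 Gbps ports (wave two of 400G)
}

def port_speeds(serdes_gbps: int) -> list[int]:
    """Return the port speeds a given SERDES generation yields."""
    return [serdes_gbps * lanes for lanes in SERDES_LANES[serdes_gbps]]

for rate in SERDES_LANES:
    print(f"{rate}G SERDES -> {port_speeds(rate)} Gbps ports")
```

Note how 400 Gbps appears twice: first as 8 lanes of 50 Gbps, then as a denser 4 lanes of 100 Gbps, which is what makes the second wave cheaper per bit and longer-reach.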
The big question often asked: after it took the market so many years to adopt 10 Gbps, why will we suddenly see a more rapid pace of adoption going forward?
There are many reasons why, but a few things are different this time. First, the hyperscalers are a new type of customer. Hyperscalers truly bring a new scale to networking and compute, in a way that makes the traditional SPs look small. Second, SDN: the hyperscalers have done something unique here that often gets overlooked and that is occurring right now, in the second half of this decade. Hyperscalers are increasing the utilization rates of their compute and networking resources. With compute utilization approaching 100%, the industry is in a period where hyperscalers, using SDN, are able to grow network bandwidth at a pace faster than the CPU is scaling.
This more rapid pace will not continue forever, but it is one of the reasons why innovation over the next several years will occur more rapidly than historic norms, and why it will be important for the industry to think about how to invest across speeds and technologies in order to better leverage existing investments. If not, the pace of innovation will simply be too much to recoup investment in the compressed timelines we are currently in.
Broadcom joined both Innovium and Nephos by publicly announcing 12.8 Tbps fabrics with its Tomahawk 3 product line. We love new data center silicon from all vendors; it is something we track closely, and we see these chips as disruptive technologies for the networking ecosystem and an enabler of next-generation cloud architectures. There will be many more such announcements in 2018. Here are some of our takeaways as we enter 2018.
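As a quick sanity check on what a 12.8 Tbps fabric implies at the port level, the capacity divides cleanly across the current speed grades (simple division; real products will vary in radix and configuration):

```python
# Express a 12.8 Tbps switch fabric as port counts at each speed grade.
FABRIC_GBPS = 12_800  # Tomahawk 3 class capacity

def max_ports(port_gbps: int) -> int:
    """Upper bound on ports of a given speed a fabric of this size can serve."""
    return FABRIC_GBPS // port_gbps

for speed in (100, 200, 400):
    print(f"{max_ports(speed)} x {speed} Gbps ports")
```

The 32 x 400 Gbps case is the headline configuration; the same silicon doubling also explains why 128 x 100 Gbps boxes become possible in a single chip.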
More rapid innovation cycle – As noted in Broadcom's Tomahawk 3 press release, we see the demand requirements of the hyperscalers driving a more rapid silicon cycle over the next couple of generations. Tomahawk 3 is being introduced in less than the typical 24 months we have seen separating prior generations of data center fabric semiconductors. This will put significant pressure on parts of the supply chain, especially on optics vendors. Optics vendors are still ramping for 100 Gbps and now must support both OSFP and QSFP-DD for 400 Gbps, essentially doubling their product diversity needs. Not only are there more form factors, but there are also different variations of distance and specifications that increase the complexity.
What next – We see two waves of 400 Gbps: the first based on 56 Gbps SERDES, the second coming in the 2020 timeframe based on 112 Gbps SERDES. We believe 800 Gbps is not that far off on the horizon as hyperscalers like Amazon and Google continue to grow. We note that the hyperscalers are about to be 3-4 generations ahead of the enterprise. This type of lead and technology expertise really changes the conversation around Cloud. We saw this at Amazon re:Invent with the Annapurna NIC; the Cloud is doing things that just aren't possible in the enterprise, especially around AI, machine learning, and other new applications that take advantage of the hyperscalers' scale.
2018, the Year of 200 Gbps and 400 Gbps – In 2018 we will see commercial shipments of both 200 Gbps and 400 Gbps switch ports. We see significant vendor share changes because of this. Simply put, the Cloud, especially the hyperscalers, will be that much bigger by the end of 2018, and they buy a different class of equipment than everyone else. This will continue to cause the vendor landscape to evolve.
New CTO – It was great to see and hear Juniper's new CTO, Bikash Koley, at NXTWORK. The message was very clear that new networks need to be built, not only from the speeds-and-feeds technology point of view, but from the operator point of view. Scale and simplicity are very important and only going to become more critical as billions of IoT devices dump traffic and data on the network. Juniper is looking to take the networks it is building for some of the hyperscalers and service providers and help tier 2/3 Cloud as well as enterprise customers adopt a cloud architecture for the future.
Contrail – The big announcement was OpenContrail joining the Linux Foundation. What we saw in talking to customers and listening to the talks was that Contrail has significant commercial adoption, definitely larger than perceived in the marketplace. We see the largest hyperscalers and service providers as having their own controllers, but we see Juniper leveraging its expertise in this area with smaller Cloud and service providers. This will give Juniper the opportunity to build, operate, and transfer for many customers.
We were impressed with the caliber of customers that Juniper has. Juniper has premier Cloud, Service Provider, and Enterprise customers; we enjoyed listening to Twitter's VP of engineering on stage, as an example. We see Juniper as one of just a handful of vendors that can support a customer base with this breadth and complexity. We note Juniper mentioned complexity and simplicity throughout the sessions; we only see networking getting more complex, especially as we move beyond 100 Gbps. It is up to vendors to help humans scale with that complexity, and we saw Juniper give many good examples during NXTWORK 2017.
Today Nokia announced its new FP4 ASIC and 7750 SR router. Playing the leapfrogging game on speeds, we saw 36 400 Gbps ports in a 2RU box that looks awfully similar to a spine switch, and the further blurring of what a next-gen router and switch really look like, especially in the Cloud.
We heard continued confusion over winning Cloud scale accounts. We note that a customer like Apple buys from multiple vendors and for multiple reasons. What Apple builds for their own consumption is not what they will deploy in a telco provider or peering location.
The debate between merchant silicon and custom ASICs continues to come up. While we are slightly in favor of merchant silicon, we note that the Cloud providers do not fear custom ASICs, they merely want to have standard APIs to control that equipment.
We note the Nokia ports are QSFP-DD and not OSFP, so we do not have a clear answer on form factor either. We now wait for the next product announcement, with the only clear answer being that we are in a phase of rapid innovation in order to keep up with the network traffic demands of the Cloud.
Today Broadcom announced Trident 3, the company's third major release of the chip that drove the merchant silicon revolution in the data center and started the white box movement in the Cloud. With Trident 3, all of Broadcom's data center switching ASICs now support speeds of at least 3.2 Tbps per chip.
Trident 3 is impressive, but a few things about it really caught my eye. First, Trident 3 will offer five different SKUs, two of which are focused on campus switching. One could see a 48-port 2.5 Gbps switch built on the X3 version of Trident 3 next year. We believe the Trident family moving into the campus will be significant for the industry once products begin to ship.
Second, native 25 Gbps ports. Trident is the most popular of Broadcom's ASICs, especially in the enterprise, and with Trident 3, we expect the market to quickly move away from 10 Gbps/40 Gbps products and towards 25 Gbps/100 Gbps products. This aligns well with our forecasts for the transition, which we are excited to be publishing shortly. We still don't see a bandwidth need in most enterprises for 25 Gbps, but the ability to future-proof at the customer level and to consolidate SKUs at the vendor level will make this compelling.
Third, we see a potential for both switch vendors and customers to benefit from one family of ASICs from the campus all the way to the data center. While it is too early to know the impact of this right after the announcement, we look forward to conducting interviews over the next few months to define this impact.
It was great to catch up with old friends and make new friends at OCP this year. The show was highly successful with attendance at the Facebook and Microsoft booths so large that it was difficult to move around. On the switch side, most of the announcements were incremental to the market, but with new chips on the horizon, and a delay in 100 Gbps because of supply constraints, we see this as a temporary pause ahead of what will likely be some bigger announcements in 2018.
There were many highlights at OCP, but three things caught our eye while walking the show floor on both days.
• Microsoft’s Project Olympus server is about to transition Microsoft away from high-density servers and towards rack servers, more in line with what other Tier 1 cloud providers are doing. We note the smart-NIC is still a multichip solution, one that could be further reduced in future generations. Microsoft also announced ARM-based servers and joined Facebook in announcements around machine learning and AI-optimized compute. We see this change in Cloud architectures as a good sign for the industry. The market is quickly moving into more use cases, which will help drive growth beyond just moving workloads away from the on-premise market.
• The white box vendors were out in force at the show. Edgecore showed various fixed and modular form factors. We note that some of these boxes are modified for larger Cloud customers with the inclusion of large SSDs or memory. We have a pretty good sense of what these additions are used for, but that is a topic for a more detailed report. We also saw Quanta and Delta with large presences on the show floor.
• This year we saw many software announcements around OCP. Arista announced its containerized EOS operating system (cEOS). We saw Apstra and Cumulus active at the show as well, and we ran into many other software vendors in attendance. OCP has done a good job of straddling the hardware/software boundary, but clearly the software needed to run these networks is quickly evolving as well.