The Infrastruggle

The Need for Ethernet Speed

The move to 100GbE is happening, and sooner than you might think.

As a space exploration aficionado and long-time NASA fan, I have to admit that one of the disappointments of my life was the fall-off in interest in manned space flight in the later years of the Apollo program and in the later missions of the Space Shuttle program. The diminished interest in Apollo after No. 11 (the first to land on the moon), and in the Space Shuttle almost immediately after its very first mission, was documented in an interesting paper published recently that tracked mentions of missions in publications like the New York Times. As mission numbers clicked forward, news editors and readers began to treat them as commonplace and routine. In the world of "if it bleeds, it leads," stories of numerically later missions were buried.

The same thing, it seems, has happened with Ethernet. Long a stalwart of contemporary distributed computing, the world's most pervasive networking standard has undergone many changes over the years, most of them speed improvements. The latest are 40GbE and 100GbE.

Standards Fatigue
Originally described by the IEEE in 2010 as 802.3ba, notable as the first standard to cover two speeds of Ethernet in a single document, the 40GbE/100GbE spec has since been refined in 802.3bg (2011), 802.3bj (2014) and last year's 802.3bm, and each refinement elicited scarcely a yawn in the trade press. Like space shuttle missions, it seemed that "standards fatigue" had set in.

Perhaps it's the way 100GbE is framed. Companies are just now digesting the previous "next big thing" in Ethernet: 10GbE. Chipsets supporting 10GbE on the motherboard, along with 10GbE switch ports, have proliferated this year, mainly to support the growing network traffic density brought about by server virtualization. With up to 75 percent of workloads now virtualized, virtualization was bound to reshape the capacity and bandwidth demands of the networks connecting "edge devices" and switches to network aggregators: "distribution" switches and "core" switches. So, after so many years, 10GbE has finally gone mainstream.

Suddenly, the switch and NIC peddlers are turning up the volume on the urgent need to deploy 40GbE and 100GbE technologies. Again, the argument (changing networking requirements) is obvious to anyone steeped in the black art of wires and cables and interconnects. With all of that 10GbE traffic at the edge (within subnets and between servers), and with the proliferation of cloud switching (traffic passing through distribution switches to and from cloud service providers), extra bandwidth is just the thing to keep the wheels turning.
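
To put rough numbers on that bandwidth argument, here is a back-of-the-envelope sketch. The port counts and uplink counts are my own illustrative assumptions, not figures from any vendor or from this column; the point is simply how quickly a rack full of 10GbE-connected servers swamps 10GbE uplinks, and how 40GbE or 100GbE at the aggregation layer closes the gap.

    #include <stdio.h>

    int main(void) {
        /* Hypothetical top-of-rack scenario; all numbers are illustrative assumptions. */
        const double edge_ports = 48.0;          /* 10GbE server-facing ports */
        const double edge_speed_gbps = 10.0;
        const double uplinks = 4.0;              /* uplink ports to the distribution layer */
        const double uplink_speeds_gbps[] = { 10.0, 40.0, 100.0 };

        double downstream = edge_ports * edge_speed_gbps;   /* 480 Gb/s of potential edge demand */
        for (int i = 0; i < 3; i++) {
            double upstream = uplinks * uplink_speeds_gbps[i];
            printf("4 x %.0fGbE uplinks: %.0f Gb/s up vs. %.0f Gb/s down -> %.1f:1 oversubscription\n",
                   uplink_speeds_gbps[i], upstream, downstream, downstream / upstream);
        }
        return 0;
    }

Run it and 4 x 10GbE uplinks work out to 12:1 oversubscription against a fully loaded edge, 4 x 40GbE to 3:1, and 4 x 100GbE to roughly 1.2:1. That is the whole pitch in three lines of output.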

Selling the Sizzle, Not the Steak
Only I sense a definite lull in the interest level of the front office. How is this space shuttle mission really any different from the prior one? How is this moon landing going to collect rocks that are meaningfully different from those collected in the previous sampling mission?

Truth be told, organizations will need the additional bandwidth. That's the steak on the menu, what you are mainly ordering 100GbE to get. What's missing is the all-important "sizzle": the sexy application that makes this upgrade so important.

When I survey the horizon, I really see memory as the killer app. I'm not talking about flash; it has had its moment and shot its shot. I am talking about the new memory bus architectures being pushed to the fore by Intel and others, which promise to support in-memory databases beyond the smallish analytics variety. OLTP in memory is what SAP is going for, as are Oracle and Microsoft. New buses will enable new support for these beasties. Plus, when you look at what some of the software-defined storage ISVs are doing right now, the need for 100GbE support becomes obvious.

Soon, it won't be enough just to have the Non-Volatile Memory Host Controller Interface Specification (better known as NVM Express, or NVMe) on your server motherboard. You are going to want to extend the operational surface of the architecture across multiple server heads to provide the biggest and most scalable hosting platform for Big Data analytics and OLTP processing outside of a mainframe. That means tightly coupling servers with the biggest, fattest interconnect pipe you can buy, one capable of iSER or iWARP server memory coupling. A poor man's supercomputer.
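
If you want to kick the tires on that idea, the first question is whether your NICs expose RDMA verbs at all, since both iSER and iWARP ride on them. What follows is a minimal sketch, assuming a Linux host with the rdma-core package (libibverbs) installed; it merely enumerates RDMA-capable devices and prints a couple of their limits. It is a capability check, not an implementation of iSER or iWARP.

    /* Minimal RDMA capability check. Assumes Linux with rdma-core (libibverbs) installed.
       Typical build (an assumption about your packaging): gcc rdma_check.c -o rdma_check -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);
        if (!devs || num_devices == 0) {
            fprintf(stderr, "No RDMA-capable devices found; iSER/iWARP coupling is off the table.\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0) {
                /* Report the registrable memory region limit and the queue pair ceiling. */
                printf("%s: max memory region %llu bytes, max queue pairs %d\n",
                       ibv_get_device_name(devs[i]),
                       (unsigned long long)attr.max_mr_size,
                       attr.max_qp);
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }

If the list comes back empty, you are shopping for new adapters before you are shopping for 100GbE switch ports.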

No Buck Rogers in the 21st Century?
That's the sizzle that goes with the 100GbE steak. Without it, you are just selling more bandwidth. And you will have about as much chance of holding management's interest as of perpetuating a manned space flight program.

No interest, no bucks. No bucks, no Buck Rogers.

About the Author

Jon Toigo is a 30-year veteran of IT, and the Managing Partner of Toigo Partners International, an IT industry watchdog and consumer advocacy firm. He is also the chairman of the Data Management Institute, which focuses on the development of data management as a professional discipline. Toigo has written 15 books on business and IT and published more than 3,000 articles in the technology trade press. He is currently working on several book projects, including The Infrastruggle (for which this blog is named), which he is developing as a blook.
