The Cranky Admin

Life on the Edge

Edge computing is becoming a buzzword in IT, with good reason.

It is generally assumed that readers of Virtualization (and Cloud!) Review know what public cloud computing is. Amazon launched Amazon Web Services (AWS) more than a decade ago, and it has become popular enough that when most people say "the cloud" in tech, they mean the public cloud offerings of the big three cloud vendors.

A decade is a long time, and now we must get ready for the edge.

"The edge" is the latest bit of buzzword bingo, and is likely to become as big a deal and as common a term in IT as the cloud. (Though if the publishers of Virtualization & Cloud Review decide we need to become Virtualization & Cloud & Edge Review, I may have to start a protest movement.)

The edge is exactly what it sounds like: a solution in which compute and storage are offered hyperlocally. It's being driven by the proliferation of Internet of Things (IoT) devices. Many proposed use cases for IoT devices rely on the ability to make quick decisions based on sensor data, decisions that may take too long if that data has to be shipped off to a centralized data farm and the answer trucked back again.

The canonical example of an IoT device requiring decisions in timeframes frustrated by the speed of light is the driverless car. There are several factors driving what will eventually become the edge.

Connected Evolution
In Wired's excellent introductory article on the edge, 5G connectivity is mentioned only briefly. This is a shame, as the very same design principles that underlie many of the discussions about 5G are related to -- and indeed, inextricable from -- the edge.

Function dictates form, and form dictates function. Just as the public cloud is the result of the form of the data networks that existed at the turn of the millennium, the existence of the public cloud is driving the design of the 5G networks. In turn, the capabilities and design of the 5G networks are already influencing discussions and design considerations for edge services.

When the public cloud was born, organizational connectivity was monolithic. Even large enterprises usually had only a few locations with a big pipe to the Internet. Most smaller sites made do with lesser connectivity, often relying on upjumped consumer broadband solutions or dated ISDN lines.

Connectivity was largely asymmetrical. Upstream bandwidth cost a lot of money, but downstream bandwidth could be had relatively inexpensively. Putting a large amount of storage and compute capacity on the Internet's backbone made a lot of sense.

If your applications and data can be located there, where network connectivity is effectively unlimited, you maximize the likelihood that customers and employees alike will be able to access your IT resources in an acceptable timeframe. For bonus points, the public cloud provider can add capacity as required with minimal effort on your part, and you don't even have to worry about setting up new datacenters or trunking in more connectivity. It's all Someone Else's Problem.

As a consequence, businesses sprang up around the globe to take advantage of all this compute and storage, parked on top of the fastest Internet pipes in the world. Streaming video quickly overwhelmed fixed and mobile networks alike.

Fibre-to-the-curb and fibre-to-the-premises gained support in industrialized nations, and replacement of the aging copper lines began. As broadband capabilities increased, knowledge workers began working from home in greater numbers. Smaller businesses became able to push their IT up to the cloud, too. We collectively became used to the freedom this allowed, and started to take our computing with us.

Smartphones became a gateway drug. Soon they were far more than an MP3 player, cellular phone and Internet browser rolled into one. With a simple add-on they became credit card readers. They were augmented reality devices. Smartphones livestreamed the revolution on Periscope, and combined to form a planetary real-time traffic network.

This led to the requirement for hyperlocality in the design of 5G networks.

Bandwidth
This high-capacity mobile network in turn unlocks capabilities that simply wouldn't exist with the previous network design. Like connected driverless cars.

Connected driverless cars talk among themselves. Ultimately, they'll talk to everything. The lamp post will tell them "Hey, I'm a lamp post, please don't crash into me." The traffic lights will tell them how many seconds remain until the signal changes. State-sponsored sensor networks will relay information about vehicle and pedestrian positioning throughout the fabric, allowing cars to see around corners and get data even on non-connected entities.
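To make that chatter concrete, here's a minimal sketch of what such broadcasts might look like as data structures. Real vehicle-to-everything message sets (SAE J2735, for instance) are far richer; every type and field below is an illustrative assumption, not an actual standard.

```python
from dataclasses import dataclass

@dataclass
class RoadsideBroadcast:
    sender_type: str       # e.g. "lamp_post" or "sensor_network" -- hypothetical labels
    latitude: float
    longitude: float

@dataclass
class SignalPhaseBroadcast(RoadsideBroadcast):
    seconds_to_change: float  # "the light changes in 12 seconds"

# A fixed obstacle announcing itself, and a traffic light announcing its countdown:
lamp = RoadsideBroadcast("lamp_post", 53.5461, -113.4938)
light = SignalPhaseBroadcast("traffic_signal", 53.5444, -113.4909, 12.0)
print(lamp)
print(light)
```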

All these machines will be talking among themselves, and to our smartphones, and helping other machines make decisions. The edge is all about making available hyperlocal storage and compute capacity in exactly the same way that 5G networks make available hyperlocal cells, and for many of the same reasons.

It's easy to see why a car might have some decisions it needs to make that are latency-sensitive. The difference between 15ms and 100ms can be a lot if you're trying to decide what to do in order to avoid crashing into something. On the face of it, then, the edge is about bringing the compute required to make those decisions correctly closer to the devices that need it.
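The arithmetic behind that claim is simple enough to sketch. Assuming a car at highway speed (the numbers are illustrative, not measurements):

```python
# How far does a car travel while it waits for an answer?
# Speeds and latencies are illustrative assumptions.

def metres_travelled(speed_kmh: float, latency_ms: float) -> float:
    """Distance covered, in metres, during a given round-trip latency."""
    speed_m_per_s = speed_kmh * 1000 / 3600  # km/h -> m/s
    return speed_m_per_s * (latency_ms / 1000)

for latency_ms in (15, 100):
    print(f"At 100 km/h, {latency_ms} ms of latency = "
          f"{metres_travelled(100, latency_ms):.2f} m travelled")

# At 100 km/h, 15 ms of latency = 0.42 m travelled
# At 100 km/h, 100 ms of latency = 2.78 m travelled
```

Nearly three metres of blind travel versus less than half a metre; that's the difference those milliseconds buy.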

It will be some time before a car has the compute capacity to make those kinds of decisions while factoring in the maximum amount of available information. I'm sure that national standards boards will require that driverless cars not be sold without the ability to make good decisions based only on their own sensor data, but cars could be making better decisions if they were also able to crunch all the data that these other machines are screaming into the void.

The closer you can get the compute capacity for that decision-making to the car itself, the quicker the decision. That's the speed of light in action. But by bringing the decision close to the car, you're also eliminating the need for the back-and-forth to more centralized datacenters like the public cloud.
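Here's a rough sketch of that speed-of-light argument, assuming light in fibre moves at roughly two-thirds of c and ignoring every router and queue along the way; the distances are illustrative assumptions:

```python
# Best-case round-trip propagation delay over fibre.
# Light in glass travels at roughly 200,000 km/s (about 2/3 of c).

FIBRE_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time in milliseconds, before any processing."""
    return 2 * distance_km / FIBRE_KM_PER_S * 1000

for label, km in (("edge node across town", 10),
                  ("cloud datacenter a region away", 2_000)):
    print(f"{label} ({km} km): {round_trip_ms(km):.2f} ms minimum")

# edge node across town (10 km): 0.10 ms minimum
# cloud datacenter a region away (2000 km): 20.00 ms minimum
```

Real-world numbers will be worse on both counts, but the ratio is what matters: physics alone hands the nearby node a two-orders-of-magnitude head start.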

This also frees up capacity on the backhaul network, and less congested networks help lower latency. This means that cars which aren't near an edge compute node can call out for decision-making help and get an answer back more quickly than if the network were saturated.

The Local Computer You Don't Own
The question, of course, is where do we put these edge nodes? How close to IoT devices do they really have to be? In the case of driverless cars, which will have some pretty heavy-duty onboard compute capacity of their own, the edge nodes they'd need to farm out to would be the sort of heavy lifters that need considerable rack space. Think of one or two datacenters per city as being "hyperlocal" in this case.

When it comes to other IoT devices, however, hyperlocal likely will mean what it says on the tin. When we get down to ambient-radiation-harvesting picosensors networked using ultra-low-power connectivity barely better than smoke signals, these units aren't exactly going to be making any decisions of their own.

Here, I expect the first wave of edge devices to be boxes about the size of a broadband router that connect over the local WiFi to an application running in the cloud. These units will offer local decision-making to sensors in range, aggregate the results and fire them back up to the hivemind. Once the first few standards wars have been fought, the inevitable monopoly has risen and then been toppled by open source something-or-other (10-15 years, give or take), expect this to eventually be built into common "always on" devices like an actual broadband router, or even a TV.
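A minimal sketch of that aggregate-and-forward pattern is below, assuming a hypothetical sprinkler-control application; the sensor values, decision rule and upstream endpoint are all stand-ins rather than any real product or API.

```python
import json
from statistics import mean

def decide_locally(temperature_c: float) -> str:
    """The immediate, latency-sensitive decision made in the edge box."""
    return "sprinkler_on" if temperature_c > 30.0 else "sprinkler_off"

def push_summary(readings: list[float]) -> None:
    """Stand-in for the report back up to the cloud application."""
    summary = {"count": len(readings), "mean": round(mean(readings), 2)}
    print("POST /hivemind", json.dumps(summary))  # real code would send HTTP here

readings: list[float] = []
for reading in (28.5, 32.1, 29.8):  # stand-in for sensors in WiFi range
    print(decide_locally(reading))  # act locally, right now
    readings.append(reading)

push_summary(readings)  # report the aggregate upstream at leisure
```

The split is the whole point: the decision that can't wait happens in the box, while the bulk reporting rides the slow path up to the cloud.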

The key to the edge is that you won't own it. Oh, you may have to pay for it. You probably will have to provide at least some of it with power and networking for free. You may even be legally responsible for what other people do using the compute capacity it provides. But like the public cloud, you won't own it or be able to control any of the underlying infrastructure.

How local the edge gets depends on how our technology evolves. The edge will inevitably range from smaller, task-specific datacenters handling latency-sensitive workloads like connected cars, all the way down to sensor aggregation and command units powering everything from security systems to sprinklers.

The edge will be the extension of the subscription-fee-driven cloud, not the resurgence of organization-owned on-premises IT. It is already taking shape, piece by piece.

Hello, Alexa.

About the Author

Trevor Pott is a full-time nerd from Edmonton, Alberta, Canada. He splits his time between systems administration, technology writing, and consulting. As a consultant he helps Silicon Valley startups better understand systems administrators and how to sell to them.
