Edge and the Cycle of Cloud Innovation

By Christopher (Kip) Turco, CEO of StackPath.

Computing innovation has always followed a cycle.

It began with workloads centralized and processed by “big iron” mainframe computers in the 1950s – 1970s; shifted to decentralized computing, split between servers and local desktop workstations, in the 1980s – 1990s; then re-centralized in the 2000s, with workloads processed in “big cloud” data centers.

Well, another 20 years have passed, and we are back to decentralized computing. The Edge. Instead of being processed in data centers out in the middle of nowhere, workloads are now split between core and edge providers over wired and wireless networks.

Some may have been surprised by this shift, but we weren’t. After all, it is a cycle, and cycles, by their very definition, repeat. In 2015, StackPath began building a cloud computing platform at the edge of the Internet to help our customers and partners build their edge.

And we weren’t alone. “Edge” is the new black. Everywhere you turn, providers, users, journalists, analysts, and others are buzzing about “edge computing” and the changes it offers for all kinds of businesses – gaming, media and entertainment, healthcare, retail, fintech – you name it.

Unfortunately, all this buzz is more noise than signal right now. Even with 5G, IoT, and the impact of COVID-19, edge computing is not yet taking off as it should. We can’t even agree on what “edge” means. One article talks about edge computing as though it were just another term for locally processed workloads. Another says edge computing is any processing, storage, and networking that happens at the telco. One company’s self-described edge “platform” is a SaaS product deployed in a handful of hyperscale public cloud data centers, while another’s is a colocation solution providing ping, power, and pipe in micro-sized footprints.

All this inconsistency springs from a fundamental misconception: that the edge is a “where”. As if you could draw a line on a diagram of the Internet – maybe between public cloud providers and Internet Exchanges, or around the base of a cell tower – and say that line is “where” the edge is. This misconception leads to yet another: that the edge is owned by the “whos” that own those “wheres.”

But the edge is not defined by “where” or “who.” The edge is, or should be, defined by “when.” And I do not mean an era, as in the 2020s. By “when” I mean the first or last step data takes when traveling between an end device and an Internet-connected workload.

That means the “where” and “who” of the edge are relative: the physical location of data’s first or last step will vary from user to user, device to device, and even moment to moment. For example, when I’m using my smartphone at the office, a Wi-Fi access point is the edge. And when I’m using that same smartphone in the car, a cell tower is the edge.
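To make that concrete, here is a minimal sketch (my own illustration, not a StackPath tool) that asks the standard traceroute utility to report only the first hop, i.e., the edge, from wherever you happen to be connected. It assumes a Unix-like system with traceroute installed:

# first_hop.py: show that the "edge" (the first step data takes)
# depends entirely on the network you are connected to right now.
import subprocess

def first_hop(destination: str = "example.com") -> str:
    """Return traceroute output for the first hop toward `destination`.

    `-m 1` caps the probe at one hop, so the output shows only the edge:
    the office Wi-Fi gateway on one network, a carrier gateway on LTE.
    """
    result = subprocess.run(
        ["traceroute", "-m", "1", destination],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout

if __name__ == "__main__":
    # Run this at the office, then on a phone hotspot in the car:
    # the first hop (the edge) will be a different device each time.
    print(first_hop())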

All of this might seem like a distinction without a difference, but it has enormous implications for how edge computing can – and should – reshape the Internet and the cloud. Thinking of the edge as a fixed “where” or “who” leads the industry to innovate ways to segment and shift workloads from one point on a map to another. But understanding that the edge is relative should inspire far greater innovation: enabling workloads to optimize across all points on a map, from moment to moment and user to user.
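As a hedged sketch of what “optimizing across all points on a map” could look like in practice: rather than pinning every user to one fixed location, an application can measure each candidate edge location from the user’s current vantage point and pick the best one at that moment. The PoP hostnames below are hypothetical placeholders, not real endpoints:

# nearest_pop.py: pick the lowest-latency edge location as measured
# from wherever this code runs, at the moment it runs.
import socket
import time

# Hypothetical edge locations (placeholders, not real endpoints).
CANDIDATE_POPS = [
    ("pop-nyc.example.net", 443),
    ("pop-dfw.example.net", 443),
    ("pop-sea.example.net", 443),
]

def connect_latency(host: str, port: int) -> float:
    """Measure one TCP connect time in seconds (inf if unreachable)."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=2):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def nearest_pop() -> str:
    """Return whichever candidate is closest from here, right now."""
    host, _port = min(CANDIDATE_POPS, key=lambda hp: connect_latency(*hp))
    return host

if __name__ == "__main__":
    print("Best edge location from this vantage point:", nearest_pop())

Run from the office and again from a car, the winner can change, because the measurement, like the edge itself, is relative to where the data starts.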

And the next wave of applications – next-gen gaming, the Metaverse, smart buildings, smart cities, autonomous vehicles – will require near-instant responses to events, along with platforms that provide highly reliable, high-performance, and secure infrastructure closer to end users.

We are at the onset of the next computing innovation cycle and, although only time will tell, I think we’re going to be here for a while. It’s time to take it to the edge.