Edge computing is a new type of infrastructure that has been gaining traction in the tech world because it helps cut network latency and data transfer costs. In this article, we investigate how and why the edge can be used with Kubernetes, the world's most popular container orchestration platform.
In the race for customer attention, seconds can mean the difference between winning and losing. Businesses seeking a head start are exploring the edge, a new type of infrastructure proliferating in the tech world. It links on-site and cloud computing, increasing processing speed and reducing latency.
As infrastructure, the edge needs software to run. Our suggestion? Kubernetes. Not only is it free, open-source, and robust, but also battle-tested and widely adopted by enterprises.
In this article, we will explain in further detail what Kubernetes and the edge are, how businesses can and do benefit from using them in combination, and why we think it’s a match made in heaven.
By now most of you probably know this, so we’ll keep it short and simple. Kubernetes, Greek for helmsman or pilot, is a container orchestration platform developed by Google and released as open source in 2014. Inspired by its predecessor Borg, it quickly gained traction in the community. Between January 2015 and July 2016, the number of commits made to Kubernetes rose from about 7,000 to over 30,000 (Fig. 1). It was evidently filling a gap in the market.
Today, Kubernetes reigns supreme as the leading container management software in the world. Think about it. Can you name an alternative? The numbers back it up: in a survey by Red Hat, 88% of respondents declared they were using Kubernetes for container management.
At Clurgo, we understand the power and impact of Kubernetes and have employed it in several projects: optimizing the electronic delivery system of a leading mail-handling company; upgrading bonds management for a bank; or working on a Kubernetes-operated customer loan system that is going to become a standard in a large part of Scandinavia.
Suffice it to say that Google’s open-source helmsman is here to stay. Not only that, it’s already expanding to a new and exciting area: edge computing.
When you wait for a web page to load, how long does it take before you get antsy? A minute? Probably less than that. According to HubSpot, the first five seconds are the most important: every additional second within that range raises the likelihood of the user leaving the site by 5%. Beyond ten seconds, the likelihood of abandonment more than doubles compared to an instant load.
This goes to show that internet users don’t like to be kept waiting. Unfortunately, modern infrastructure doesn’t exactly indulge them. Even though most services are hosted in the cloud, the “cloud” is ultimately a physical data center that has to be located somewhere. The farther it is from your computer, the longer the signal takes to travel there and back, and the longer the page takes to load.
Edge computing aims to solve this issue by reducing the distance between users’ devices and servers (Fig. 2). The idea is to introduce an extra layer of computing units (or nodes), parallel to the cloud but physically closer to the user. These nodes can perform service delivery or computation, as well as provide immediate storage and caching. They do not replace the cloud but work in tandem with it. In fact, you can think of them as the cloud’s envoys, representing its kingdom in the neighboring lands – at, as it were, the edge of its territory.
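To get a feel for why distance matters, here is a rough back-of-the-envelope sketch. The distances and the fiber-speed figure are illustrative assumptions, not measurements – real latency also includes routing, queuing, and processing delays on top of this physical floor:

```python
# Light in optical fiber travels at roughly 2/3 the speed of light in vacuum,
# i.e. about 200,000 km/s. This gives a hard lower bound on network latency.
SPEED_IN_FIBER_KM_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time for a given one-way distance, in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# Illustrative distances: a data center on another continent vs. an edge
# node in the same metro area as the user.
cloud_rtt = round_trip_ms(6000)  # distant cloud region
edge_rtt = round_trip_ms(50)     # nearby edge node

print(f"cloud: {cloud_rtt:.1f} ms, edge: {edge_rtt:.2f} ms")
```

Even in this best case, the distant round trip costs tens of milliseconds before any processing happens, while the nearby edge node stays well under a millisecond – which is exactly the gap edge computing exploits.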
The customer’s attention span, important as it is for business, doesn’t top the list of reasons for using the edge. Safety does. Think of green-powered smart homes, electric vehicles, or even modern aircraft. Loaded with electronics that communicate with the cloud, they can put human life and well-being at risk if that communication is delayed by even a few milliseconds. That’s why edge computing is a crucial element in the design of devices that augment human operations by processing and computing input from the environment in real time. The edge ensures minimal to no latency and instantaneous feedback between human and machine, keeping users safe and sound.
There are 10 million electric cars on the roads today. Most, if not all, are equipped with some kind of technology that connects to the cloud. And we’re only talking about cars. The advent of the Internet of Things means there will soon be tens of billions of devices online. Imagine the strain on the wireless and cloud infrastructures, as well as on businesses in terms of transfer costs, if all those devices kept sending out all the data they collected. The edge takes the strain off, decreasing the costs of maintenance and service fees that would otherwise skyrocket.
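As a hypothetical illustration of how an edge node takes that strain off, the sketch below aggregates a window of raw sensor readings into one small summary record before anything is sent to the cloud. The field names and readings are invented for the example; the point is simply that one summary crosses the network instead of the whole raw stream:

```python
from statistics import mean

def summarize(readings: list[float]) -> dict:
    """Collapse a window of raw readings into a single summary record.

    Only this record leaves the edge node; the raw stream stays local.
    """
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": mean(readings),
    }

# e.g. one minute of temperature readings collected at the edge
raw = [21.4, 21.5, 21.3, 21.6, 21.5]
summary = summarize(raw)  # one small dict instead of the full stream
```

Scaled to billions of devices, this kind of edge-side aggregation is one of the main levers for cutting transfer costs.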
Now that we can see the numerous advantages of edge computing, let’s look at some examples of using it in concert with Kubernetes.
As we’ve established, Kubernetes means “pilot”. More appropriately, however, we should refer to it as “autopilot”. Because what would be the use of having millions, or even billions, of autonomous computing units if they still needed to be handled by people? A robust orchestration platform must work hand in hand with the edge so all the savings made on infrastructure maintenance and other areas don’t go to waste. Kubernetes is a great fit for that role not only because it’s free, open-source, and widely tested in many production environments, but also because it’s easily scalable. 10 million electric cars? More IoT devices than there are people on Earth? No problem for our helmsman.
Companies from such diverse industries as telecommunications, aviation, and smart electronics are recognizing the potential of Kubernetes to run at the edge. Deutsche Telekom uses Kubernetes to operate over ten thousand of its edge nodes. Next to no human intervention is necessary because the platform takes care of all the updates and syncing. The US Department of Defense installed Kubernetes on a U-2 “Dragon Lady” spy plane to run machine learning algorithms on the aircraft’s onboard computers, in true edge-computing fashion. The spy plane flies at high altitudes, collecting espionage data and sending it to military centers in near real time. Finally, Tesla employs Kubernetes to handle edge units embedded in the devices connected to its renewable power grid. In cooperation with the cloud, the units ensure the right balance between the supply and demand of energy in Tesla’s grid, so that there are no blackouts or overloads. Because of the intermittent and fluctuating nature of renewable energy, the process requires careful and continuous orchestration of data from a multitude of devices in real time.
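For a taste of what targeting edge nodes in plain Kubernetes can look like, here is an illustrative sketch: a DaemonSet with a nodeSelector runs one copy of a workload on every node carrying an “edge” label. The label and image below are hypothetical, not taken from any of the deployments mentioned above:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-agent
spec:
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: "true"  # hypothetical label marking edge nodes
      containers:
        - name: agent
          image: example.org/edge-agent:latest  # placeholder image
```

Once the manifest is applied, the control plane automatically schedules the agent onto every matching edge node, present or future – the kind of hands-off rollout that makes fleets of thousands of nodes manageable.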
The significance of Kubernetes and edge computing will likely increase with time for all those industries that process large quantities of data in real-time. Apart from the ones mentioned above, these may include retail, automotive, space, streaming, and travel businesses. As the Internet of Things becomes more and more widespread and the edge more and more indispensable for it, Kubernetes may be well set to dominate yet another type of infrastructure.
Automating business and technology is an alluring prospect, promising gains in profits and efficiency. But even the most advanced robot needs an expert to install, program, maintain, and oversee it. If we have managed to convince you of the merits of Kubernetes and you would like to implement it in your business, whether in the cloud, on-premises, or at the edge, do not hesitate to drop us a line. Clurgo would love to look at your case and help you leap into the future.
As Kubernetes definitely deserves more attention than just one article, this post marks the start of our series on the platform and its applications in business. Stay tuned, the next articles are coming up soon.