What Is Edge Computing and Why It Is the Future

We are in the cloud computing era, but we are also being pushed towards another technology called the “edge.”

Those unfamiliar with edge computing may wonder what it is and how it will reshape data and networks.

 

What is Edge Computing

The idea of edge computing is to do the computing or processing near the data source. Data is simply processed as close as possible to the point where it is collected, so that it becomes actionable sooner.

Instead of relying on the cloud at one of a dozen data centers to do all the work, edge computing relies on a distributed network of machines. The word "edge" in this context means literal geographic distribution.

So does this mean that the future will be cloudless? On the contrary, it simply means that the cloud is coming to you.

While cloud computing has become increasingly prevalent, this year we expect to see more edge computing come to the fore. Companies will put their technology in the field, next to their equipment, and use IoT to connect the cloud and the physical world.

This can be used for image analysis and recognition, for example to identify problems, quality issues, or security threats.

One of the biggest contributors to the rise of edge computing is the ongoing growth of the Internet of Things (IoT). 2019 promises to be the year that edge computing becomes a reality. A recent report from Grand View Research predicts the global edge computing market will reach $3.24 billion by 2025, with a 41% compound annual growth rate.

Edge computing offers several benefits, and the most important ones are the following.

 

Benefits of Edge Computing

The demand for edge computing is growing rapidly, driven by several considerations. Let's discuss them in detail below.

Latency

Data transmission is faster when the source and the destination sit on the same network. One great driver for edge computing is the speed of light: the brief pause after you click a link, before your web browser starts to actually show anything, is in large part down to the speed of light.

If Computer A needs to ask Computer B, half a globe away, before it can do anything, that delay is perceived as latency by the user of Computer A.
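To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The distances, and the assumption that light travels at roughly two thirds of its vacuum speed in fiber, are illustrative rather than measurements of any real network:

```python
# Back-of-the-envelope propagation delay: how long a signal spends in transit
# for a nearby edge node versus a far-away data center. All figures are
# illustrative assumptions, not measurements.

SPEED_OF_LIGHT_VACUUM_KM_S = 300_000                       # ~3 x 10^5 km/s
SPEED_IN_FIBER_KM_S = SPEED_OF_LIGHT_VACUUM_KM_S * 2 / 3   # light is slower in glass

def round_trip_ms(one_way_km: float) -> float:
    """Best-case round-trip time in milliseconds, ignoring routing and processing."""
    return 2 * one_way_km / SPEED_IN_FIBER_KM_S * 1000

print(f"Edge node 100 km away:       {round_trip_ms(100):7.2f} ms")
print(f"Data center 15,000 km away:  {round_trip_ms(15_000):7.2f} ms")
```

Even in this best case, with no routing, queuing, or processing at all, the long-haul round trip costs on the order of 150 ms, while the nearby edge node answers in about a millisecond.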

Multiplayer video games implement numerous elaborate techniques to mitigate the true and perceived delay between you shooting at someone and you knowing, for certain, that you missed.

Voice assistants typically need to resolve your requests in the cloud, and the roundtrip time can be very noticeable.

Your Echo has to process your speech and send a compressed representation of it to the cloud.

The cloud has to decompress that representation and process it, which might involve pinging another API somewhere, maybe to figure out the weather, adding yet more speed-of-light-bound delay.

And then the cloud sends your Echo the answer, and finally, you can learn that today you should expect a high of 85 and a low of 42.
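Summing an imaginary budget for that round trip makes the point. Every figure below is a made-up but plausible estimate for illustration only, not a measurement of any real assistant:

```python
# Illustrative latency budget for a cloud-resolved voice request.
# All stage names and durations are assumptions for illustration.
pipeline_ms = {
    "on-device wake word and speech capture": 50,
    "compress and upload the audio":           40,
    "cloud speech recognition":               120,
    "call a third-party weather API":          80,
    "generate and download the answer":        60,
}

for stage, ms in pipeline_ms.items():
    print(f"{stage:<42} {ms:4d} ms")
print(f"{'total':<42} {sum(pipeline_ms.values()):4d} ms")
```

Moving even one or two of those stages onto the device itself takes a visible chunk out of the wait.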

By solving the proximity problem, you solve the latency problem. The on-device processing approach ensures that only non-critical data is sent over the network and that critical data can be acted upon immediately.

That is important for latency-sensitive applications, such as autonomous vehicles, where even a few milliseconds of waiting may be untenable.
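A minimal sketch of that split might look like the following. The sensor names, the temperature threshold, and the "act locally" response are all hypothetical stand-ins:

```python
import json
import time
from dataclasses import asdict, dataclass

# Hypothetical rule: anything at or above this temperature is "critical"
# and must be handled at the edge without waiting for the cloud.
CRITICAL_TEMP_C = 90.0

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float
    timestamp: float

cloud_batch: list[dict] = []   # non-critical data waits here for a later upload

def handle_locally(reading: Reading) -> None:
    # Critical data is acted upon immediately at the edge (e.g. trip a breaker).
    print(f"ALERT: {reading.sensor_id} at {reading.temperature_c} C -> local shutdown")

def process(reading: Reading) -> None:
    if reading.temperature_c >= CRITICAL_TEMP_C:
        handle_locally(reading)              # act now, no round trip
    else:
        cloud_batch.append(asdict(reading))  # send later, in bulk

process(Reading("boiler-7", 95.2, time.time()))
process(Reading("boiler-7", 71.4, time.time()))
print("queued for the cloud:", json.dumps(cloud_batch))
```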

 

Security & Data Sovereignty

In geographies where compliance and data residency are critical, it can be a requirement to keep data local.

And with IoT data often representing important IP of an enterprise, it can be attractive to keep it at the edge rather than move it to the cloud or a remote data center.

It might be weird to think of it this way, but the security and privacy features of an iPhone are well accepted as an example of edge computing.

Simply by doing encryption and storing biometric information on the device, Apple offloads a ton of security concerns from the centralized cloud to its diasporic users’ devices.

But the other reason this feels like edge computing and not personal computing is that while the computing work is distributed, the definition of the computing work is managed centrally.

You didn’t have to cobble together the hardware, software, and security best practices to keep your iPhone secure. You just paid $999 at the cellphone store and trained it to recognize your face.

The management aspect of edge computing is hugely important for security. Think of how much pain and suffering consumers have experienced with poorly managed Internet of Things devices.

When all of your data must eventually feed to its cloud analyzer through a single pipe, the critical business and operating processes that rely on actionable data are highly vulnerable.

As a result, a single DDoS attack can disrupt entire operations for a multinational company. When you distribute your data analysis tools across the enterprise, you distribute the risk as well.

While it can be argued that edge computing expands the potential attack surface for would-be hackers, it also diminishes the impact on the organization as a whole. Another inherent truth is that when you transfer less data, there is less data that can be intercepted.

The proliferation of mobile computing has made enterprises much more vulnerable because company devices are now transported outside of the protected firewall perimeter of the enterprise.

Analyzing data locally means it remains protected by the security blanket of the on-premises enterprise. Edge computing also helps companies overcome the issues of local compliance and privacy regulations as well as the issue of data sovereignty.

 

Bandwidth

Keeping the data close to where it is generated and performing local computation on it avoids the need to provision (and pay for) the substantial network bandwidth to move it up to the cloud.

Security isn’t the only way that edge computing will help solve the problems IoT introduced. The other hot example mentioned a lot by edge proponents is the bandwidth savings enabled by edge computing.

For instance, if you buy one security camera, you can probably stream all of its footage to the cloud. If you buy a dozen security cameras, you have a bandwidth problem.

But if the cameras are smart enough to only save the “important” footage and discard the rest, your internet pipes are saved.
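A sketch of that filtering idea, with simple frame differencing standing in for "smart enough" and a print statement standing in for the real uplink; the motion threshold is an arbitrary assumption:

```python
import numpy as np

MOTION_THRESHOLD = 2.0  # hypothetical mean per-pixel change that counts as "important"

def upload_to_cloud(frame: np.ndarray) -> None:
    # Placeholder for whatever uplink a real camera would use.
    print(f"uploading frame of shape {frame.shape}")

def filter_frames(frames: list[np.ndarray]) -> None:
    """Send only frames that differ noticeably from the previous one; discard the rest."""
    previous = None
    for frame in frames:
        if previous is not None:
            change = float(np.mean(np.abs(frame.astype(float) - previous.astype(float))))
            if change > MOTION_THRESHOLD:
                upload_to_cloud(frame)   # "important" footage goes to the cloud
            # quiet frames never leave the device
        previous = frame

# Two identical frames followed by one with motion in a corner.
rng = np.random.default_rng(0)
still = rng.integers(0, 255, (480, 640), dtype=np.uint8)
moving = still.copy()
moving[100:200, 100:200] = 255
filter_frames([still, still, moving])
```

Only the frame that actually changed is uploaded; the quiet footage never touches the network.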

Almost any technology that’s applicable to the latency problem is applicable to the bandwidth problem. Running AI on a user’s device instead of all in the cloud seems to be a huge focus for Apple and Google right now.

But Google is also working hard at making even websites more edge-y. Progressive Web Apps typically have offline-first functionality.

That means you can open a “website” on your phone without an internet connection, do some work, save your changes locally, and only sync up with the cloud when it’s convenient.
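The pattern itself is simple, even though a real PWA implements it with a service worker and IndexedDB. The Python sketch below only illustrates the "write locally first, sync later" idea; `push_to_cloud` is a placeholder for whatever sync endpoint actually exists:

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")   # stand-in for the device's local storage
db.execute("CREATE TABLE pending (id INTEGER PRIMARY KEY, payload TEXT)")

def save_change(change: dict) -> None:
    """Always write locally first, whether or not the network is up."""
    db.execute("INSERT INTO pending (payload) VALUES (?)", (json.dumps(change),))
    db.commit()

def push_to_cloud(payload: str) -> bool:
    print("synced:", payload)      # pretend this is an HTTP request
    return True

def sync_when_convenient(online: bool) -> None:
    if not online:
        return                     # offline: nothing is lost, nothing is sent
    for row_id, payload in db.execute("SELECT id, payload FROM pending").fetchall():
        if push_to_cloud(payload):
            db.execute("DELETE FROM pending WHERE id = ?", (row_id,))
    db.commit()

save_change({"note": "edited while on the subway"})
sync_when_convenient(online=False)  # still offline, the change just waits
sync_when_convenient(online=True)   # back online, the change reaches the cloud
```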

Google is also getting smarter at combining local AI features for the purposes of privacy and bandwidth savings.

For instance, Google Clips keeps all your data local by default and does its magical AI inference locally.

It doesn’t work very well at its stated purpose of capturing cool moments from your life. But, conceptually, it’s quintessential edge computing.

 

Applications

Edge computing can be considered for use cases where devices have poor connectivity. In a world full of IoT devices, unreliable networks are often the norm.

Other use cases have to do with latency-sensitive information processing. Edge computing is ideal for situations where real-time processing is a requirement.

The most telling use case is self-driving cars, which drive through locations not covered by any network while requiring real-time processing.

How to implement it?

There are some standards for implementing edge computing. The most famous is fog computing.

The term was coined by Cisco in 2014. Fog computing gives the edge computing concept a repeatable structure.

Therefore, enterprises can push computing out of centralized systems or clouds for better and more scalable performance. Please refer to the Cisco document for further details.

In most cases, edge computing cannot do without the cloud.

The cloud still plays a critical role where significant computing power is required to effectively manage the vast volumes of data coming from machines.