What is Edge Compute?

Edge computing is a distributed computing framework that brings applications closer to the point of data creation, such as IoT devices (cameras, microphones, and sensors), smartphones, or computers. This proximity to the data source can deliver strong business and application benefits, including faster insights, improved response times, and better bandwidth availability.

In simpler terms, edge computing shifts computing from the cloud to places that are closer to the application or device. Another way to look at this: an app requesting compute augmentation, such as a user’s computer or IoT device, runs that compute on an edge server located at a nearby cell tower. Shifting compute to a closer location minimizes the amount of long-distance communication that has to happen between a client and server.
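To see why distance matters, consider the physics alone. The sketch below estimates best-case round-trip propagation delay over fiber; the distances and the fiber-speed constant are illustrative assumptions, not measurements of any real network.

```python
# Rough propagation-delay estimate: illustrative numbers only.
# Light in fiber travels at roughly two-thirds the speed of light
# in a vacuum, i.e. about 200 km per millisecond.
FIBER_SPEED_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber, ignoring
    routing, queuing, and processing overhead."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# A distant cloud region vs. an edge server at a nearby tower
# (both distances hypothetical).
print(round_trip_ms(2000))  # 20.0 ms just in transit
print(round_trip_ms(20))    # 0.2 ms
```

Real-world delays are larger once routing and queuing are added, but the floor set by distance is exactly what moving compute to the edge attacks.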

What differentiates edge computing from other computing models? The first computers were large, bulky machines that could only be accessed directly or via terminals that were essentially extensions of the computer. With the invention of personal computers, computing could take place in a much more distributed fashion. For a time, personal computing was the dominant computing model. Applications ran and data was stored locally on a user’s device, or sometimes within an on-premises data center.

Cloud computing, a more recent development, offered a number of advantages over locally based, on-premises computing. Cloud services are centralized in a vendor-managed “cloud” (or collection of data centers) and can be accessed from any device over the Internet. However, cloud computing can introduce latency because of the distance between users and the data centers where cloud services are hosted.

Edge computing moves computing closer to end users to minimize the distance that data has to travel, while still retaining the centralized nature of cloud computing.

To summarize:

  • Early computing: Centralized applications running on a single isolated computer
  • Personal computing: Decentralized applications running locally
  • Cloud computing: Centralized applications running in data centers
  • Edge computing: Centralized applications running close to users, either on the device itself or on the network edge

What are the benefits of edge computing?

Edge computing helps minimize bandwidth use, which is finite and costly. As IoT devices such as cameras become smaller and higher definition, the amount of data they produce is increasing dramatically. This growth, combined with people working from home and streaming video calls, is taxing internet connectivity. On the business side, every camera is effectively a live video stream, and we have all experienced bandwidth bottlenecks when too many people in the house are streaming video. In terms of industry reports, Statista predicts that by 2025 there will be over 75 billion IoT devices installed worldwide.
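Some back-of-the-envelope arithmetic shows how quickly raw camera traffic adds up, and how much uplink is saved if the video is analyzed at the edge and only small event summaries are sent onward. All figures here (camera count, bitrates) are hypothetical.

```python
# Hypothetical figures: aggregate uplink needed to ship every camera's
# raw feed to the cloud, vs. sending only event metadata after the
# video is analyzed at the edge.
CAMERAS = 100
STREAM_MBPS = 4.0   # assumed bitrate of one 1080p camera stream
EVENT_KBPS = 8.0    # assumed size of metadata after edge analysis

cloud_uplink_mbps = CAMERAS * STREAM_MBPS
edge_uplink_mbps = CAMERAS * EVENT_KBPS / 1000  # kbps -> Mbps

print(cloud_uplink_mbps)  # 400.0 Mbps of raw video
print(edge_uplink_mbps)   # 0.8 Mbps of metadata
```

Under these assumptions, edge-side processing cuts backhaul traffic by a factor of 500; the exact ratio depends on the bitrates, but the shape of the saving is the point.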

Another significant benefit of moving processes to the edge is reduced latency. Augmented reality, virtual reality, and real-time applications demand lower-latency compute. Every time a device needs to communicate with a distant server, it creates a delay that can range from annoying to unacceptable.

For example, consider two coworkers in the same city collaborating on a video call. They would notice delays or service outages if their video packets were sent over the open internet and interference occurred. If the video application runs at the cell tower and routes packets locally within the city, the application responds faster and experiences fewer network congestion issues. The duration of these delays varies with available bandwidth and the location of the remote server, but much of the delay can be avoided by bringing processing to the edge.

Secure, localized data processing is another feature of edge computing. Processing data near its source reduces the risk that comes with transporting it over the open internet. Additionally, users who need to authenticate to a physical location can authenticate against a CBRS radio that has a known, fixed physical location.
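The data-minimization idea can be sketched in a few lines: raw data stays on the local device, and only a small derived summary ever crosses the network. The frame format and summary fields below are invented for illustration.

```python
# Sketch of edge-side data minimization (hypothetical data format):
# raw frames never leave the site; only the summary is uploaded.
def summarize_frames(frames):
    """Pretend analysis: count frames containing motion and return
    only the small summary that would be sent to the cloud."""
    motion = sum(1 for f in frames if f.get("motion"))
    return {"frames_seen": len(frames), "motion_events": motion}

frames = [{"motion": True}, {"motion": False}, {"motion": True}]
print(summarize_frames(frames))  # {'frames_seen': 3, 'motion_events': 2}
```

A few bytes of metadata are exposed to the network instead of the full video, shrinking both the bandwidth bill and the attack surface.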

To recap, the key benefits of edge computing are:

  • Decreased latency
  • Decreased bandwidth use
  • Increased data, application, and user security

What are Edge Compute Use Cases?

Edge computing can be incorporated into a wide variety of applications, products, and services. A few possibilities include:

  • Security system monitoring: Facial recognition via a large number of cameras for access control. All recognition runs on site, and response times are near-instantaneous.
  • IoT devices: Smart devices that connect to the edge can provide more efficient user interactions.
  • Self-driving cars: Autonomous vehicles need to react in real time, without waiting for instructions from a server.
  • More efficient caching: By running code on a CDN edge network, an application can customize how content is cached to more efficiently serve content to users.
  • Medical monitoring devices: It is crucial for medical devices to respond in real time without waiting to hear from a cloud server.
  • Video conferencing: Interactive live video takes quite a bit of bandwidth, so moving backend processes closer to the source of the video can decrease lag and latency.
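The caching use case above can be made concrete with a small sketch. The rules here are invented for illustration and do not reflect any particular CDN's API; the idea is simply that code running at the edge can choose a cache lifetime per request instead of applying one global policy.

```python
# Minimal sketch (hypothetical rules, not a real CDN API) of per-request
# cache-lifetime decisions made by code running at the edge.
def cache_ttl_seconds(path: str) -> int:
    """Longer TTLs for static assets, none for dynamic API responses."""
    if path.endswith((".js", ".css", ".png", ".jpg")):
        return 86400   # static assets: cache for a day
    if path.startswith("/api/"):
        return 0       # dynamic API data: never cache
    return 300         # everything else: five minutes

print(cache_ttl_seconds("/logo.png"))  # 86400
print(cache_ttl_seconds("/api/user"))  # 0
```

In a real deployment the returned TTL would typically be emitted as a `Cache-Control: max-age=...` response header by the edge runtime.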
