Real-Time Edge Data Insights: How Edge Computing Solves for Latency at Scale

by SWIM Team, on Nov 28, 2017 12:11:38 PM


Latency is a primary concern for most industrial applications. Timely delivery of insights about impending part failures, misrouted assets, misplaced tools, and other operational issues can have a significant impact on business operations. Insights that are delivered too late lead to inefficiencies such as equipment failures and costly downtime. Last year, French energy management and automation giant Schneider Electric went so far as to declare that “latency is the enemy.” But Schneider also proposed a solution for “how Edge Computing can combat” latency in industrial environments. In this post, we’ll explore the ways latency can affect industrial applications and how Edge Computing optimizes for, and in some cases eliminates, the causes of latency that can bog down traditional cloud applications.

What is latency?

There’s a reason why Schneider pointed to Edge Computing as the solution to industrial latency concerns, and it has to do with how latency is incurred by applications. PC Magazine defines computing latency as “any delay or lapse in time... between initiating a request in the computer and receiving the answer.” It classifies latency into four categories: Data latency, Disk latency, Channel latency, and Network latency. Data latency refers “to the time between a query and the results arriving at the screen or the time between initiating a transaction that modifies one or more databases and its completion.” Data latency can occur locally, or be distributed between local machines and the cloud (e.g. updating a cloud database). In this way, Data latency can be considered a composite of Disk, Channel, and Network latency.


According to PC Magazine, “disk latency is the time it takes for the selected sector to be positioned under the read/write head.” Disk latency is typically minimal, on the order of a few milliseconds. “Channel latency is the time it takes for a computer channel to become unoccupied in order to transfer data,” which depends on how many other processes are simultaneously using shared resources. Lastly, “network latency is the delay introduced when a packet is momentarily stored, analyzed and then forwarded” over a network. Both Channel and Network latency are affected by activities such as multiplexing, and show up as symptoms like buffering delays. Network latency, specifically, is the type of latency that Schneider has declared the enemy.
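To make the composite concrete, here is a minimal back-of-envelope model in Python. All of the component figures are illustrative assumptions, not measurements; the point is simply that end-to-end data latency is the sum of its parts, and the channel and network terms are counted twice because a request must also wait for its response.

```python
# Back-of-envelope model: data latency as a composite of its parts.
# All figures below are illustrative assumptions, not measurements.
disk_ms    = 5    # positioning the sector under the read/write head
channel_ms = 20   # waiting for a shared channel to become unoccupied
network_ms = 80   # one hop over the wide-area network

# Disk work happens once at the server; channel and network delays are
# paid on both the request and the response legs of the round trip.
data_latency_ms = disk_ms + 2 * (channel_ms + network_ms)
print(f"round-trip data latency: {data_latency_ms} ms")  # 205 ms
```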

Disk latency is a function of read/write times, so regardless of where compute occurs, disk latency is roughly the same and comparatively negligible. However, the proximity of compute to where data is generated can greatly affect data, channel, and network latency. In a traditional cloud application, data is generated and forwarded to the cloud for central processing. Along the way, it incurs channel latency (e.g. multiple data sources feeding into a single IoT gateway device before transmitting to the cloud) and network latency, first via a local protocol (LoRa, WiFi, etc.) and then again via an internet connection (broadband, 4G, etc.). Once central processing has occurred, the response is subject to the same latencies on the return trip. Consequently, total data latency can stretch to seconds, minutes, or even hours, depending on network performance, queue lengths, and the distance between compute and the data source.
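The sketch below models that round trip in Python. The hop delays and processing time are assumed values simulated with time.sleep, chosen only for illustration, and the threshold check stands in for a real insight computation; this is a way to see how the two paths compare, not anyone's actual implementation.

```python
import time

# Illustrative one-way delays, simulated with time.sleep (assumed, not measured).
LOCAL_HOP_MS  = 10   # sensor to gateway over a local protocol (LoRa, WiFi, ...)
WAN_HOP_MS    = 80   # gateway to cloud over broadband or 4G
CLOUD_WORK_MS = 5    # central processing time

def cloud_path(reading: float) -> float:
    """Model the full round trip: hops out, central compute, hops back."""
    start = time.perf_counter()
    time.sleep((LOCAL_HOP_MS + WAN_HOP_MS) / 1000)  # request leg
    time.sleep(CLOUD_WORK_MS / 1000)                # central processing
    time.sleep((WAN_HOP_MS + LOCAL_HOP_MS) / 1000)  # response leg
    return (time.perf_counter() - start) * 1000

def edge_path(reading: float) -> float:
    """Model the same insight computed where the data is generated."""
    start = time.perf_counter()
    _ = reading > 90.0  # a threshold check standing in for the insight
    return (time.perf_counter() - start) * 1000

print(f"cloud round trip: {cloud_path(97.2):.1f} ms")
print(f"edge check:       {edge_path(97.2):.4f} ms")
```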

How does Edge Computing solve for latency?

Edge Computing circumvents this roundabout process by providing the resources to compute real-time edge data insights at the data source, which keeps channel and data latency minimal and eliminates the need to incur network latency before insights are computed. This has a profound impact on overall data latency, which can now be measured in milliseconds, orders of magnitude less than in traditional cloud architectures. Channel latency improves because each edge device is responsible for processing only the data unique to that device, which prevents (or, in the case of multiple sensors on the same device, reduces) contention among data sources for shared compute resources. Furthermore, by processing raw data at the edge, only reduced, structured data is ever forwarded to the cloud. In this way, Edge Computing delivers a better distribution of compute resources, freeing up channel cycles both at the edge and further up the application stack.
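As a concrete illustration of that data reduction, the following Python sketch collapses a window of raw samples into a small structured summary before anything leaves the device. The sensor data and field names here are hypothetical; the pattern, not the specifics, is the point.

```python
import statistics

def summarize(window: list[float]) -> dict:
    """Reduce a window of raw samples to a small structured record;
    only this summary, not every sample, is forwarded to the cloud."""
    return {
        "count": len(window),
        "mean": statistics.fmean(window),
        "min": min(window),
        "max": max(window),
    }

# One second of 1 kHz samples (a stand-in for real sensor data)
# collapses to a four-field record before leaving the device.
raw = [(0.02 * i) % 1.0 for i in range(1000)]
print(summarize(raw))
```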

However, the biggest benefit of Edge Computing architectures is reduced network latency. Edge Computing applications don’t need to incur network latency before reacting to data insights, because there is no need to transmit raw data to the cloud for processing. Instead, edge devices are empowered to respond to their own real-time edge data insights locally, without waiting for responses from the cloud. Time-critical processes can be handled at the edge, while further analysis is performed in the cloud for processes that are not time-bound. This significantly decreases overall data latency and leads to a more efficient division of labor between local and cloud-based compute resources. By optimizing for proximity to the data source in time-critical use cases, Edge Computing can achieve significant latency advantages.
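One common way to realize that division of labor is a two-path loop: act on time-critical readings immediately at the edge, and queue everything else for the cloud off the critical path. The sketch below assumes a hypothetical upload() client and a simple threshold rule; it is a minimal pattern illustration, not SWIM's architecture.

```python
import queue
import threading
import time

cloud_queue: "queue.Queue[dict]" = queue.Queue()

def act_locally(reading: float) -> None:
    """Time-critical path: decide and act at the edge, with no network in the loop."""
    if reading > 90.0:                         # assumed threshold rule
        print("local action: throttle motor")  # stand-in for a real actuator call

def uploader() -> None:
    """Background path: ship records to the cloud whenever the link allows."""
    while True:
        record = cloud_queue.get()
        # upload(record)  # hypothetical cloud client; its latency is off the critical path
        cloud_queue.task_done()

threading.Thread(target=uploader, daemon=True).start()

for reading in [42.0, 95.5, 61.2]:  # stand-in sensor stream
    act_locally(reading)            # handled locally, no waiting on the cloud
    cloud_queue.put({"t": time.time(), "v": reading})  # non-blocking hand-off

cloud_queue.join()  # let the background uploads drain before exiting
```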

Learn More
Learn how SWIM uses Edge Computing to deliver real-time edge data insights with millisecond latency for industrial and other real-time applications.

Topics: Machine Learning, SWIM Software, Edge Analytics, Edge Computing
