Four Ways Edge Computing Helps Distribute Risk
by Brad Johnson, on Oct 27, 2017 3:15:00 PM
Edge Your Bets
When it comes to operations, downtime is an ugly word. Downtime means lost productivity, lost revenue, and costly schedule delays. That’s why it’s especially frustrating for operations professionals when technology systems are the cause of unplanned downtime. For example, in the pharmaceutical industry, massive volumes of operational data from RFID and other Supply Chain equipment can overload ERP systems, shutting down production lines and costing millions of dollars in lost productivity. New Edge Computing architectures can help reduce this data at the source. By distributing compute power across already-deployed devices, operators can offload the burden of data deduplication and refinement from centralized processors, reducing the risk of overwhelming business systems.
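As a rough illustration of reducing data at the source (a minimal sketch, not SWIM's actual implementation), an edge node might suppress duplicate RFID reads before anything is forwarded to the ERP system. The function name and the sliding-window approach below are hypothetical:

```python
from collections import deque

def reduce_at_edge(readings, window=5):
    """Suppress duplicate RFID reads seen within a sliding window,
    forwarding only novel readings to the central system."""
    recent = deque(maxlen=window)   # last few (tag, location) pairs seen
    forwarded = []
    for tag_id, location in readings:
        key = (tag_id, location)
        if key not in recent:
            forwarded.append(key)   # novel event: forward upstream
        recent.append(key)          # duplicates within the window are dropped
    return forwarded

# A burst of duplicate reads from the same tag collapses to one event.
raw = [("TAG-1", "dock-A")] * 4 + [("TAG-2", "dock-A"), ("TAG-1", "dock-B")]
print(reduce_at_edge(raw))
# → [('TAG-1', 'dock-A'), ('TAG-2', 'dock-A'), ('TAG-1', 'dock-B')]
```

Six raw reads become three events, so the central system sees only state changes rather than every scan.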
In this post, we discuss four of the primary benefits of deploying an Edge Computing architecture in industrial or Smart City environments, and describe how adopting an edge strategy can reduce risk for operations professionals.
One clear advantage of Edge Computing is that the proximity of compute to field devices minimizes the latency of insight delivery. Because data generation and processing are co-located, real-time edge insights can be acted on without incurring network latency. No network latency means that automated responses are triggered faster and human operators are alerted sooner. What does this look like in the real world? Consider a pharmaceutical manufacturing facility, where an edge-based Machine Learning instance might observe product temperatures and use its model to predict when they will exceed a safe threshold. Because both the observation and the prediction occur at the edge, the system can immediately notify an operator or trigger an automated corrective action before products are damaged. Edge computing architectures are highly responsive, enabling the fastest possible response to anomalies and ensuring maximum uptime.
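The observe-predict-act loop above can be sketched in a few lines. This is a simplified stand-in for a learned model (a linear trend projection rather than real Machine Learning), and the threshold, horizon, and function names are assumptions for illustration:

```python
def predict_breach(samples, threshold, horizon=3):
    """Fit a simple trend to recent temperature samples and predict
    whether the threshold will be exceeded within `horizon` steps."""
    if len(samples) < 2:
        return False
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    projected = samples[-1] + slope * horizon
    return projected > threshold

def on_sample(history, new_temp, threshold=8.0):
    """Runs locally on the edge node for every new reading."""
    history.append(new_temp)
    if predict_breach(history[-5:], threshold):
        return "ALERT: corrective action triggered"
    return "ok"

history = []
for t in (5.0, 5.5, 6.1, 6.8, 7.4):   # warming trend toward the 8.0 limit
    status = on_sample(history, t)
print(status)
# → ALERT: corrective action triggered
```

Because both `predict_breach` and `on_sample` run on the device itself, the alert fires without a round trip to a central server.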
Similarly, Edge Computing architectures are better equipped to deal with fluctuating device counts and configurations. As devices are added to the system, digital twins are created and synced with the system automatically, based on relevant context. As devices are removed, their digital twins become dormant. As devices are replaced, whether for upgrades or maintenance, the replacements can sync with an existing digital twin and retain the context of the device they replaced. In all of these situations, the central business system adapts autonomously, unaffected by changes in the real world. This makes Edge Computing ideal for dynamic environments like warehouses and distribution centers, where individual assets are tracked continuously. As assets flow in, out, and around a facility, edge computing ensures that centralized business systems can keep up.
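The twin lifecycle described above (create on first contact, go dormant on removal, retain context across a replacement) can be sketched as a small registry. The class and method names here are hypothetical, not part of any particular product's API:

```python
class TwinRegistry:
    """Minimal sketch of digital-twin lifecycle management."""

    def __init__(self):
        self.twins = {}

    def device_seen(self, twin_id, state):
        # Created on first contact; a replacement device re-activates
        # the existing twin and inherits its accumulated context.
        twin = self.twins.setdefault(twin_id, {"active": True, "state": {}})
        twin["active"] = True
        twin["state"].update(state)
        return twin

    def device_removed(self, twin_id):
        # The twin goes dormant rather than being deleted.
        if twin_id in self.twins:
            self.twins[twin_id]["active"] = False

reg = TwinRegistry()
reg.device_seen("pallet-scanner-7", {"zone": "B", "reads": 120})
reg.device_removed("pallet-scanner-7")          # unit pulled for maintenance
twin = reg.device_seen("pallet-scanner-7", {})  # replacement unit syncs
print(twin["active"], twin["state"]["zone"])
# → True B
```

The central application only ever talks to the registry, so a swapped scanner looks identical to the original.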
Edge Computing enables individual devices to function as separate units, which prevents the failure of a single device from affecting the performance of the whole system. In other words, because edge nodes are decoupled from the application itself, edge architectures are inherently fault tolerant. Should an anomaly take down a node in the network, the rest of the system continues to operate unaffected. This makes edge architectures ideal for environments that require high resiliency. Fluctuations in data volume, dropped network connections, crosstalk, and other issues can affect the performance of devices on a network. By dealing with each device individually, and by having the application interface with a digital twin instead of a physical device, Edge Computing architectures overcome the logistical challenges of many devices and device states operating on the same network at the same time.
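A minimal sketch of this decoupling, under the assumption that the application reads twin state rather than polling devices directly: one device dropping its connection does not halt the sweep, and the application keeps serving the twin's last-known value. All names below are illustrative:

```python
def poll_devices(devices, twins):
    """Handle each device independently; a failing device neither stops
    the sweep nor removes its twin's last-known state."""
    for name, read in devices.items():
        try:
            twins[name] = {"value": read(), "stale": False}
        except ConnectionError:
            if name in twins:
                twins[name]["stale"] = True   # keep serving last-known state

devices = {
    "sensor-a": lambda: 21.5,
    "sensor-b": lambda: (_ for _ in ()).throw(ConnectionError("link down")),
    "sensor-c": lambda: 19.8,
}
twins = {"sensor-b": {"value": 20.1, "stale": False}}
poll_devices(devices, twins)
print(twins["sensor-b"])
# → {'value': 20.1, 'stale': True}
```

The application consuming `twins` never sees the `ConnectionError`; it only sees a twin marked stale, while sensors a and c update normally.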
Because each edge instance, the “piece” of software deployed on an individual device, is a granular unit, adding a new device is as simple as connecting it to the network. Adding more devices does not increase the complexity of an edge-based application. Instead, each device processes its own data locally and updates a digital twin. Centralized applications can then use digital twins to build a digital model of real-world devices, and can compose aggregate data streams for functional units or individual regions on the fly. As traditional database deployments scale, bottlenecks cause buffer queues to grow, slowing total processing times. With Edge Computing architectures, by contrast, the system is distributed and processing occurs in parallel. This means that latency does not increase as the system scales, eliminating buffer times and reducing the risk that high data volumes adversely affect performance.
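The scaling argument above can be made concrete with a toy sketch: each "device" reduces its own raw samples in parallel (here simulated with a thread pool), and the central application composes a regional aggregate from the resulting twins on the fly. The device names and the averaging scheme are assumptions for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(device_id, readings):
    """Local per-device reduction: raw samples collapse to one summary."""
    return device_id, sum(readings) / len(readings)

# Each edge instance reduces its own data; adding a device adds a worker,
# not a longer central queue.
raw = {f"dev-{i}": [i, i + 1, i + 2] for i in range(4)}
with ThreadPoolExecutor() as pool:
    twins = dict(pool.map(lambda kv: summarize(*kv), raw.items()))

# Compose an aggregate stream for a region on the fly from twin state.
region = ["dev-0", "dev-1"]
region_avg = sum(twins[d] for d in region) / len(region)
print(twins["dev-0"], region_avg)
# → 1.0 1.5
```

Because the reduction runs per device, doubling the device count adds parallel work rather than lengthening a shared buffer queue.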
Learn how SWIM uses Machine Learning to power analytics at the edge, enabling a resilient and highly-available streaming data analytics engine for industrial applications.