What does a Real-World Edge Computing Deployment Look Like?
by Brad Johnson, on Dec 19, 2017 2:05:18 PM
Edge Computing is at a stage where discussions about how it can solve business problems still happen in the abstract. Vendors tout their benchmark statistics and system integrators roll out new capabilities, but enterprises are left to guess how technologies like Edge Computing will work with their machines and other business objects. In this post, we’ll use the example of a manufacturing facility to demonstrate the concrete differences between Edge Computing/Hybrid and cloud-only OT applications. We look at the Component, Subsystem, and System levels to see how Edge Computing affects how applications interact with the entire hierarchy of an industrial deployment.
1. Edge Computing at the Component Level
It makes sense to start at the most granular level when discussing Edge Computing, as this is precisely the benefit of adopting an Edge Computing solution: data from an individual part sensor can be processed locally. For example, a valve in a cooling pump may be monitored by an edge-enabled OT application. The sensor responsible for this valve transmits state updates about the valve (e.g. “open” or “closed”). With Edge Computing, local compute enables the machine that contains this valve to “know” the state of that valve. If the machine expects the valve to be open, but it is not, then the machine can act. Any sensed anomalous behavior, or any indication that the cooling pump is about to fail, can trigger a notification to a local operator or prompt the machine to shut down, preventing a larger catastrophe. This is only possible due to the proximity and latency benefits of Edge Computing.
If a central cloud application were responsible for monitoring that same valve, the machine containing the part wouldn’t “know” anything. Data from the sensor would be pushed to the cloud, analyzed, and the results returned to an operator or application. Because all raw data is sent over the network, much more bandwidth is consumed and much more latency is incurred. This leads to a less efficient and significantly more costly application, if it is even feasible to monitor all parts in a given system using a cloud-only architecture.
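The local decision loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: the names `ValveMonitor`, `on_state_update`, and the anomaly callback are all hypothetical.

```python
# Minimal sketch of local valve monitoring at the edge. All names here
# (ValveMonitor, on_state_update, the on_anomaly callback) are
# hypothetical, not taken from any specific product API.

class ValveMonitor:
    """Runs on the edge device, next to the machine it protects."""

    def __init__(self, expected_state, on_anomaly):
        self.expected_state = expected_state
        self.on_anomaly = on_anomaly  # local action, e.g. operator alert or shutdown

    def on_state_update(self, state):
        # The decision is made locally: no round trip to a central cloud.
        if state != self.expected_state:
            self.on_anomaly(state)

alerts = []
monitor = ValveMonitor("open", lambda s: alerts.append(f"valve reported '{s}', shutting down"))
monitor.on_state_update("open")    # matches expectation: no action taken
monitor.on_state_update("closed")  # anomaly: local shutdown path triggered
```

The key design point is that only the anomaly (not every raw reading) ever needs to leave the machine, which is where the bandwidth and latency savings come from.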
2. Edge Computing at the Subsystem Level
The automatic distribution of edge services is beneficial at the Subsystem level. Edge applications can more efficiently compose multiple part sensors into aggregated streams that deliver insights for an entire subsystem. For example, a single coolant pump may comprise multiple valves. Data streams from the individual sensors for each valve are processed in parallel, while edge services representing the coolant pump subsystem further process the refined valve sensor streams. The result is a real-time streaming data architecture that can be forked and drilled down dynamically at runtime, providing a consistent stream of data to OT applications.
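The composition step might look something like the sketch below, where refined per-valve states are rolled up into one pump-level record. The function and field names are illustrative assumptions, not part of any real framework.

```python
# Sketch of composing per-valve streams into a subsystem-level stream.
# Function and field names are hypothetical.

def pump_summary(valve_states):
    """Aggregate refined per-valve readings into one coolant-pump record."""
    open_count = sum(1 for state in valve_states.values() if state == "open")
    return {
        "valves_total": len(valve_states),
        "valves_open": open_count,
        "healthy": open_count == len(valve_states),  # assume all valves should be open
    }

# Latest refined reading from each valve sensor, processed in parallel upstream.
pump_stream = pump_summary({"valve_1": "open", "valve_2": "open", "valve_3": "closed"})
```

An OT application can subscribe to the pump-level record for a subsystem view, or drill down to the individual valve streams that feed it.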
3. Edge Computing at the System Level
By following the same pattern of modeling real-world systems within an edge architecture, it becomes possible to model entire systems, such as manufacturing plants, logistics facilities, and refineries. Just as multiple sensor streams can be composed into an aggregated subsystem datastream, multiple subsystems can be composed into a datastream representing the entire system. For example, the streams from subsystems such as cooling pumps, HVAC systems, and manufacturing equipment can be aggregated locally or in the cloud to provide a single stream representing an entire system or facility. All of this occurs in parallel, so overall latency, even for the aggregate stream of the entire system, remains in the low milliseconds. In this way, edge architectures are ideal for distributed real-time applications.
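Applying the same pattern one level up, subsystem records can themselves be composed into a single facility-wide record. As before, this is a hypothetical sketch; the names and record shapes are assumptions for illustration only.

```python
# Sketch of system-level composition: subsystem streams are themselves
# aggregated into one facility-wide stream. Names are hypothetical.

def system_summary(subsystem_streams):
    """Compose per-subsystem records into a single system-level record."""
    return {
        "subsystems": len(subsystem_streams),
        "healthy": all(s["healthy"] for s in subsystem_streams.values()),
    }

# Each subsystem record would itself be the output of a subsystem-level
# aggregation (e.g. a coolant pump's composed valve streams).
facility = system_summary({
    "coolant_pump": {"healthy": True},
    "hvac": {"healthy": True},
})
```

Because each level only consumes the already-refined output of the level below it, the aggregations run in parallel and the facility-wide view stays close to real time.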
Learn how SWIM uses Edge Computing to deliver real-time edge data insights with millisecond latency for industrial and other real-time applications.