Five Ways Real-Time Edge Analytics Can Save the Day

by SWIM Team, on Sep 28, 2017 2:28:14 PM

Swim ESP™ provides edge analytics capabilities for brownfield industrial environments, enabling businesses to extend existing sensor networks into the real world and leverage the compute power of already-deployed devices. Swim applies Machine Learning at the network edge, taking advantage of computing resources on deployed devices to perform real-time edge analytics. Swim automatically builds a distributed, secure, resilient fabric that enables connected devices to share insights with each other, spreading knowledge through the fabric much as knowledge spreads in the real world.


Significantly, this spread of knowledge can occur with or without a connection to the internet. This makes Edge Computing architectures an ideal candidate for remote locations, or other settings where an internet connection is unavailable or unreliable. According to IDC, by 2018, 40% of IoT data will be stored, processed, analyzed, and acted upon at the edge of the network.

Here are some examples where employing an Edge Computing architecture can ensure a system stays up, even when the internet is down:

1. Resilient and Available, Even without Internet Access

External factors like the recent natural disasters in Mexico and the Caribbean can bring down entire communications systems for prolonged periods of time. According to a statement from the Georgia Institute of Technology, Edge Computing architectures provide “a new way of gathering and sharing information during natural disasters that does not rely on the internet.” The ability to continue gathering insights from connected devices, even without an internet connection, “will give a huge advantage to first responders.” This is because, regardless of whether an internet connection is available, Edge Analytics systems remain resilient and available.

Edge Computing is suited to conditions where the internet is unavailable or spotty because, by running analytics at the edge, field systems and devices can continue to perform their normal functions locally. Devices are able to maintain internal consistency within the edge network, and virtual twins of devices can update cloud systems as internet access becomes available, however frequently that occurs. The key advantage of Edge Computing in these situations is that the edge devices themselves can perform the “aggregation of information and filtering [that] lowers the data overhead requirements” for environments with limited internet access.
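
To make the store-and-forward idea concrete, here is a minimal Python sketch (not Swim ESP’s actual API; the `uplink` object and its `is_up()`/`send()` methods are hypothetical stand-ins) showing how an edge node might aggregate readings locally and drain summaries whenever a link appears:

```python
import time
from statistics import mean

class EdgeBuffer:
    """Aggregates raw readings into per-window summaries and uploads
    them only when a network link is available (store-and-forward)."""

    def __init__(self, uplink):
        self.uplink = uplink  # hypothetical object with is_up()/send()
        self.pending = []     # summaries awaiting upload
        self.window = []      # raw readings in the current window

    def record(self, reading):
        self.window.append(reading)

    def close_window(self):
        # Aggregation and filtering lower the data overhead: one
        # summary per window instead of every raw reading.
        if self.window:
            self.pending.append({"t": time.time(),
                                 "mean": mean(self.window),
                                 "max": max(self.window),
                                 "n": len(self.window)})
            self.window = []

    def flush(self):
        # Devices keep working offline; summaries drain whenever the
        # internet becomes available, however frequently that occurs.
        while self.pending and self.uplink.is_up():
            self.uplink.send(self.pending.pop(0))
```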

2. Higher Resolution Data Means Full Context for Operators

Often, the limits of a network can govern which data is sent to business systems and how frequently those systems are updated. This can lead to important operational information being lost in the wash: critical insights are either obscured in a sea of irrelevant raw data or omitted from the samples being transmitted to centralized systems.

By performing intelligent analytics at the edge, Machine Learning can be applied to all operational data, regardless of network restrictions. This allows edge devices to filter operational data intelligently. Because Swim ESP-enabled devices can “share insights” throughout the network, Swim’s Machine Learning always has access to the full context of all the devices on a network, and can apply this context to the filtering of sensor data. By performing Machine Learning on the entire dataset, without needing to forward all raw data over the network, Swim ESP ensures that all critical insights reach business systems.
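
Swim’s actual Machine Learning is more sophisticated, but a simple rolling-baseline filter sketches the idea of intelligent filtering at the edge: learn normal behavior locally and forward only readings that deviate significantly from it (everything here is illustrative, not part of Swim ESP):

```python
import random
from collections import deque
from statistics import mean, stdev

def sensor_stream(n=1000):
    # Simulated sensor: steady readings with one injected spike.
    for i in range(n):
        yield 20 + random.gauss(0, 0.5) + (15 if i == 500 else 0)

class SignificanceFilter:
    """Stand-in for edge-side intelligent filtering: learns a rolling
    baseline and flags only readings that deviate sharply from it."""

    def __init__(self, window=100, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        significant = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            # Flag values more than `threshold` std devs from baseline.
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                significant = True
        self.history.append(value)
        return significant

# Only significant readings cross the network; the rest stay local.
f = SignificanceFilter()
events = [v for v in sensor_stream() if f.observe(v)]
print(f"{len(events)} significant event(s) forwarded")
```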

3. Reduce Strain on Network and Business Systems

For businesses in highly regulated industries such as Pharmaceuticals, Food & Beverage, and Defense, compliance obligations often require the preservation of edge device data for audit and regulatory purposes. This prevents the use of random sampling or aggressive filtering of raw edge device data to lessen the burden on networks and centralized applications. Due to the high volumes of industrial sensor data, business systems (e.g., ERP systems) that maintain compliance obligations can be overwhelmed, potentially halting operations until a centralized system has caught up.

Real-time edge analytics provides an alternative to transmitting all raw data over a network, allowing for intelligent filtering of raw sensor data at the edge and enabling businesses to retain full time-series data and context without needing to transmit irrelevant or “noisy” data that can obscure significant events. Because Edge Analytics systems intelligently filter data for significant events, the burden on the network is lessened and centralized business systems receive full sensor logs without needing to sort through redundant or irrelevant “noise” data.
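
As a rough sketch of this split (with stubs standing in for real ERP connectivity; the `forward` callable and `is_significant` predicate are hypothetical), an edge node might append every raw reading to a local audit log while forwarding only significant events:

```python
import json
import time

class AuditingForwarder:
    """Retain the full time-series locally for audit purposes while
    forwarding only significant events to central business systems."""

    def __init__(self, log_path, forward, is_significant):
        self.log = open(log_path, "a")        # append-only audit log
        self.forward = forward                # ships an event upstream
        self.is_significant = is_significant  # event-detection predicate

    def ingest(self, sensor_id, value):
        record = {"t": time.time(), "sensor": sensor_id, "value": value}
        # Nothing is sampled away: every raw reading is preserved
        # locally, satisfying data-retention obligations.
        self.log.write(json.dumps(record) + "\n")
        self.log.flush()
        # Centralized systems only see the events that matter.
        if self.is_significant(value):
            self.forward(record)

# Example wiring with stubs in place of real ERP connectivity.
fwd = AuditingForwarder("audit.log", print, lambda v: v > 100)
fwd.ingest("temp-3", 21.4)   # logged locally only
fwd.ingest("temp-3", 130.0)  # logged locally and forwarded
```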

4. Provide a “Local Cloud” for Remote Locations

In some environments, establishing a reliable internet connection is simply not feasible for geographic, environmental, or other practical reasons. These remote or challenging environments are ideal use cases for performing analytics at the edge. Because Edge Analytics is not reliant on maintaining a consistent internet connection, deployed field devices can operate their own “local cloud” application, sharing data and context amongst each other, and autonomously responding to context based solely on local information.

This creates an environment in which devices can observe and react to continuous insights at the edge, even if cloud or other business applications are only updated in bursts over cellular networks, or manually synced by a field technician. Furthermore, this means that multiple remote edge-cloud deployments can operate in parallel (with or without internet access), while asynchronously updating a central management system whenever internet access is available at each edge location.
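
A minimal in-process sketch shows the sharing pattern (a real deployment would exchange state over the local network; `LocalCloud` and its methods are illustrative, not Swim ESP’s API):

```python
from collections import defaultdict

class LocalCloud:
    """In-process stand-in for a 'local cloud': devices publish state
    to one another and react using only local context, with no
    dependency on an internet connection."""

    def __init__(self):
        self.subscribers = defaultdict(list)
        self.state = {}  # shared local context: (topic, device) -> value

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, device_id, value):
        self.state[(topic, device_id)] = value
        for callback in self.subscribers[topic]:
            callback(device_id, value)

# Example: a valve controller reacts to a pump's readings locally,
# whether or not the wider internet is reachable.
cloud = LocalCloud()
cloud.subscribe("pressure", lambda dev, v: print(f"valve sees {dev}={v}"))
cloud.publish("pressure", "pump-7", 82.5)
```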

5. Even When an Edge Node Fails, the Rest Keep Working

Industrial systems are hyperconnected, and in many instances the failure of one node can bring an entire system to a halt. However, Edge Analytics systems employ a “virtual twin” model, in which the state of any given device is available regardless of the availability of the physical device. In other words, even when a physical device (for example, an RFID reader) is unavailable, other devices can still communicate with the failed device’s virtual twin.

This means that although the physical system is not operating as a whole, as far as the rest of the system is concerned, all necessary context is available and working systems can continue operating uninhibited. When a failed device is repaired or replaced, it can resync with the virtual twin of its predecessor and ensure continuity within the system, as if there had never been an interruption in the first place.
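
Here is a minimal sketch of the virtual twin pattern (illustrative only; the `restore` method on the replacement device is a hypothetical hook, not Swim ESP’s API):

```python
import time

class VirtualTwin:
    """Mirrors a device's last-known state so peers can keep reading
    it while the physical device is down, then resyncs a repaired or
    replacement device on recovery."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {}
        self.last_seen = None

    def report(self, state):
        # Called whenever the physical device is healthy and reporting.
        self.state.update(state)
        self.last_seen = time.time()

    def query(self):
        # Peers read the twin whether or not the device is up, so the
        # rest of the system keeps operating uninhibited.
        return dict(self.state)

    def resync(self, device):
        # A repaired or replacement device inherits its predecessor's
        # state for continuity. `restore` is a hypothetical device hook.
        device.restore(self.query())
```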

Learn More

Learn how SWIM uses Machine Learning to power analytics at the edge, enabling a resilient and highly-available streaming data analytics engine for industrial applications.
Topics: Machine Learning, Stateful Applications, SWIM Software, Industrial IoT, Edge Analytics, Digital Twin
