Machine Learning Makes IoT Edge Data Useful for Predictive Maintenance

by Brad Johnson, on Jan 3, 2018 12:10:43 PM

One of the biggest challenges in monitoring the health of industrial equipment is transforming torrents of raw, unstructured machine data into useful insights about maintenance needs, system performance, and anomalies. The traditional approach of “store everything now, ask questions later” can overwhelm networks and create bottlenecks that delay the discovery of insights until it is too late. To capture insights about machine health and react to them in a timely manner, machine data must be processed at the edge.


Using Machine Learning to identify insights within raw, streaming machine data allows edge instances to reduce the amount of data that must be sent over the network. Data volume is reduced at the source, yet downstream applications still receive the full informational value of the IoT sensor data. Processing machine data at the edge lightens the burden on networks and, more importantly, ensures that the data is transmitted and stored in a denser, structured format. Storing machine data in this denser format greatly reduces the total storage capacity needed to operate enterprise applications.
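
As a rough, hypothetical sketch (not a description of SWIM's implementation; the record fields and function names are invented for illustration), the idea is to collapse a window of raw readings into a single structured summary at the edge, so only the dense record travels over the network:

```python
from dataclasses import dataclass
from statistics import mean, pstdev
from typing import List

@dataclass
class SensorSummary:
    """Dense, structured record sent upstream instead of every raw sample."""
    sensor_id: str
    window_start: float  # epoch seconds
    window_end: float
    sample_count: int
    mean_value: float
    std_dev: float
    max_value: float

def summarize_window(sensor_id: str, timestamps: List[float],
                     values: List[float]) -> SensorSummary:
    """Collapse a window of raw readings into one structured summary record."""
    return SensorSummary(
        sensor_id=sensor_id,
        window_start=timestamps[0],
        window_end=timestamps[-1],
        sample_count=len(values),
        mean_value=mean(values),
        std_dev=pstdev(values),
        max_value=max(values),
    )

# Example: three raw vibration readings become one compact record.
summary = summarize_window("bearing-7", [0.0, 0.1, 0.2], [1.02, 1.05, 1.10])
print(summary)
```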

Solutions that process equipment health data at the edge can handle streaming data from many sensors in parallel, far more efficiently than traditional cloud systems. This makes edge architectures well suited to predictive maintenance and an ideal fit for real-time applications.
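
To make the parallelism concrete, here is a minimal, illustrative sketch (the sensor names, sample rate, and threshold are invented for the example) in which each sensor stream is handled by its own lightweight task on the edge node:

```python
import asyncio
import random

async def sensor_stream(sensor_id: str, n_samples: int = 100):
    """Simulated raw readings from one sensor (a stand-in for real edge input)."""
    for _ in range(n_samples):
        yield random.gauss(50.0, 5.0)
        await asyncio.sleep(0.01)

async def process_sensor(sensor_id: str, threshold: float = 65.0) -> None:
    """Process one sensor's stream independently; runs concurrently with others."""
    async for reading in sensor_stream(sensor_id):
        if reading > threshold:
            print(f"{sensor_id}: reading {reading:.1f} exceeds threshold")

async def main() -> None:
    sensors = [f"pump-{i}" for i in range(4)]
    # One task per sensor; all streams are processed in parallel on the edge node.
    await asyncio.gather(*(process_sensor(s) for s in sensors))

asyncio.run(main())
```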

How do Machine Learning algorithms “know” which data points are useful?

Most of the time, the sensors that monitor industrial equipment are “dumb” in the sense that they are “unaware” of the data they forward to intelligent downstream systems. In traditional cloud-centric architectures, this means that all data generated by sensors is forwarded over the network for central processing. Because the sensors are dumb, they must forward everything to the cloud, and only there is the data processed to remove redundant or irrelevant readings.

In edge architectures, however, these insights are computed locally. Computing locally lets machines react to insights the instant they are computed, without incurring network latency. Operators are alerted to maintenance needs in milliseconds instead of minutes or even hours. Furthermore, in cases of impending failure, edge-based systems can shut equipment down before a costly malfunction occurs. But how do “intelligent” edge-based systems “know” which data to process at the edge?
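
One way to picture this (a simplified sketch only; the window size, threshold, and reaction step are assumptions, not SWIM's method) is a rolling statistical check that runs entirely on-device, so the alert or shutdown decision never waits on the network:

```python
from collections import deque
from statistics import mean, pstdev

class EdgeAnomalyMonitor:
    """Scores each new reading against a rolling baseline, entirely on-device."""

    def __init__(self, window: int = 200, z_threshold: float = 4.0, warmup: int = 30):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold
        self.warmup = warmup

    def observe(self, value: float) -> bool:
        """Return True if the reading looks anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= self.warmup:
            mu, sigma = mean(self.history), pstdev(self.history)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.z_threshold
        if not anomalous:
            self.history.append(value)  # only normal readings update the baseline
        return anomalous

# React locally, with no round trip to the cloud.
monitor = EdgeAnomalyMonitor()
for reading in [50.1, 50.3, 49.8] * 20 + [95.0]:
    if monitor.observe(reading):
        print("Anomaly detected: alert operator and begin safe shutdown")
```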

There is a common misconception that storing every bit of data generated is critical, because key insights may be buried in the mountains of data. This ignores the fact that not all data is created equal. The vast majority of industrial sensor data is “noise”: duplicate, redundant, corrupt, or otherwise unuseful readings. Machine Learning algorithms can trim this first layer of data at the edge, freeing up vast amounts of bandwidth in local networks and immense amounts of compute capacity in cloud and other database systems.
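
As a first-layer example (the dead-band value below is purely illustrative), even a simple dead-band filter at the edge can drop duplicate and near-redundant readings before they ever touch the network:

```python
def first_layer_filter(readings, dead_band=0.5):
    """Drop exact duplicates and readings within a dead band of the last
    forwarded value, a simple stand-in for the first layer of edge trimming."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > dead_band:
            forwarded.append(value)
            last_sent = value
    return forwarded

raw = [20.0, 20.0, 20.1, 20.2, 20.9, 21.0, 21.0, 25.3]
print(first_layer_filter(raw))  # only meaningful changes are forwarded
```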

Learn More
Learn how SWIM uses Edge Computing to deliver real-time edge data insights with millisecond latency for industrial and other real-time applications.

Topics: SWIM Software, Machine Learning, Edge Analytics, Edge Computing
