It’s Not Edge vs. Cloud, It’s Fast vs. Slow

by Brad Johnson, on Jul 3, 2018 5:16:51 PM

I’ve read (and written) a lot about edge computing over the past year, and it seems like the edge story is continually pitched as being in competition with cloud technologies. For years, cloud providers have touted connectivity and big data analytics as a panacea for building distributed applications. So it’s only natural that edge computing, a highly decentralized alternative to cloud’s centralized approach, would be positioned as antithetical to the cloud. But in reality, it’s not a zero-sum game.

Far more often, distributed applications have both cloud and edge components. Which makes sense! Computing at the edge delivers real-time insights where data is generated. These insights can inform maintenance strategies, optimize business processes, monitor machine health, and provide context for operators. Meanwhile, cloud analytics can perform big data analysis on historical records to identify inefficiencies in aggregate. The reality is that not all data should be treated the same, and it makes sense to process different data in different places. The key to deciding whether edge or cloud computing is the appropriate solution is to identify whether the problem set calls for fast or slow insights. So why are slow insights sufficient for some problems, but not for others?
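To make that split concrete, here's a minimal sketch in Python of how an application might divide sensor readings between a fast local path and a slow cloud path. The names here (SensorReading, is_time_sensitive, route) are entirely hypothetical, not part of any SWIM API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorReading:
    machine_id: str
    metric: str      # e.g. "vibration", "runtime_hours"
    value: float

def is_time_sensitive(reading: SensorReading) -> bool:
    """Fast path: signals that lose their value if acted on late."""
    return reading.metric in {"vibration", "temperature"}

def route(reading: SensorReading,
          edge_handler: Callable[[SensorReading], None],
          cloud_batch: List[SensorReading]) -> None:
    """Handle urgent readings locally; everything also flows to the cloud."""
    if is_time_sensitive(reading):
        edge_handler(reading)       # real-time, local insight
    cloud_batch.append(reading)     # batched for historical, "slow" analysis
```

The point isn't the routing rule itself; it's that the same reading can serve two purposes at two different speeds.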

Let’s use predictive maintenance as an example. Typically, maintenance schedules are planned ahead of time. The schedules are derived from a multitude of data sources, including logs of machine usage, past maintenance schedules, manufacturer recommendations, etc. These are all historical data sources, where insights are derived from looking into the past. Big data analytics are ideal for applications dealing with historical data, as latency isn’t a factor in deriving useful intelligence.
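For a feel of what this slow, historical analysis looks like, here's a toy Python example that derives a maintenance interval from past failure records. The log format, the field names, and the 80%-of-MTBF rule are all made-up assumptions for illustration, not anyone's production method:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical historical failure log, e.g. pulled from a data warehouse.
failure_log = [
    {"machine_id": "pump-1", "failed_at_hours": 1200},
    {"machine_id": "pump-1", "failed_at_hours": 2500},
    {"machine_id": "pump-1", "failed_at_hours": 3650},
]

def mean_time_between_failures(log):
    """Group failures by machine and average the gaps between them."""
    by_machine = defaultdict(list)
    for event in log:
        by_machine[event["machine_id"]].append(event["failed_at_hours"])
    mtbf = {}
    for machine, times in by_machine.items():
        times.sort()
        gaps = [later - earlier for earlier, later in zip(times, times[1:])]
        if gaps:
            mtbf[machine] = mean(gaps)
    return mtbf

# Schedule maintenance at, say, 80% of the observed MTBF.
schedule = {m: 0.8 * hours
            for m, hours in mean_time_between_failures(failure_log).items()}
```

Nothing here cares about latency; the analysis is just as useful run tonight as run right now. That's what makes it a natural fit for the cloud.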

However, there are other, real-time data sources which may supersede insights generated from historical data. For example, suppose there are process indicators or anomalies (such as from a vibration sensor) which suggest a given machine or part is about to fail. These insights are time sensitive; after the part fails, it’s no longer possible to prevent catastrophe. As soon as such an anomaly is observed, immediate action can be taken to repair or replace the anomalous part in order to prevent unplanned downtime. The big data-derived maintenance schedule, no matter how many variables are accounted for, may not be able to predict such a failure. However, because edge computing enables data to be processed locally, these insights are surfaced before the data is sent to the cloud for post hoc analysis. This is a far more efficient means of handling sensor data, as insights are continuously identified and operators can be notified in situ. Fast data scenarios such as these are precisely the class of data analytics problems where edge computing provides significant new value.
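And here's the fast side of the story: a sketch of an edge process watching a vibration stream and flagging deviations from its local baseline the moment they appear. The rolling z-score rule is just one illustrative detection strategy, not how SWIM EDX specifically works:

```python
from collections import deque
from statistics import mean, stdev

class VibrationMonitor:
    """Toy edge-side anomaly detector for a single machine's vibration feed."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # small, bounded local history
        self.threshold = threshold            # z-score that counts as anomalous

    def observe(self, value: float) -> bool:
        """Return True (and alert) if the reading deviates from the baseline."""
        anomalous = False
        if len(self.readings) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
                # In practice this would notify an operator in situ;
                # printing stands in for that hypothetical hook.
                print(f"ALERT: vibration {value:.2f} deviates from baseline")
        self.readings.append(value)
        return anomalous
```

Note that the monitor keeps only a small bounded window of recent readings, which is what makes it cheap enough to run on or near the device itself, with no round trip to the cloud before the operator hears about the problem.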

Learn More

Learn how SWIM EDX enables edge computing to power predictive maintenance applications for industrial machinery and other connected systems.

Topics: SWIM EDX, Edge Computing, Stateful Applications, Machine Learning, Industrial IoT, Edge Analytics, Manufacturing, Asset Tracking
