Top 3 Innovations Shaping Edge Computing in 2018

by Brad Johnson, on Feb 15, 2018 12:33:39 PM

Enterprises and public sector organizations are drowning in real-time data from equipment, assets, suppliers, employees, customers, and city and utility data sources. The hidden insights buried within this data have the potential to optimize production, transform efficiency, and streamline flows of goods and services, but finding those insights cost-effectively remains a challenge. The key is to learn on data at the “edge”, as it is produced, without breaking the budget. Enterprises need an architecture for learning on time-series data at the edge, one that uses commodity hardware to create an efficient fabric of edge devices.

This approach flies in the face of the received wisdom of cloud-based, big-data answers to ML problems. These solutions typically rely on complex, cloud-hosted ML pipelines that are expensive, slow, and unsuited to real-time data. But there is more than enough compute resource at “the edge” to cost-effectively analyze, learn, and predict from streaming data on the fly, avoiding the need to transport, store, clean, and label it in the cloud. As big data systems struggle with data overload, edge computing architectures can filter and transform data locally, creating an intelligent bridge to centralized cloud systems. This post highlights three technology innovations that will shape edge computing projects in 2018.
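
As a minimal, hypothetical illustration of what learning at the edge can look like (not a description of any particular product), the Python sketch below keeps a running per-sensor baseline using Welford's online algorithm and forwards only readings that deviate sharply from it, turning a high-volume raw stream into a low-volume stream of insights. The class name, warm-up length, and threshold are illustrative assumptions.

```python
import math

class EdgeAnomalyFilter:
    """Learns a per-sensor baseline online (Welford's algorithm) and
    forwards only readings that deviate sharply from that baseline."""

    def __init__(self, z_threshold=3.0):
        self.z_threshold = z_threshold      # illustrative threshold
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                       # running sum of squared deviations

    def update(self, value):
        """Incorporate one reading; return it only if it is anomalous."""
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)

        if self.n < 30:                     # warm-up: learn before judging
            return None
        std = math.sqrt(self.m2 / (self.n - 1))
        if std > 0 and abs(value - self.mean) / std > self.z_threshold:
            return value                    # high-value insight: forward to the cloud
        return None                         # low-value reading: drop locally


# Example: only anomalous readings leave the edge device.
sensor_filter = EdgeAnomalyFilter()
readings = [20.1, 20.3, 19.9, 20.0] * 10 + [35.7]   # synthetic data
alerts = [r for r in readings if sensor_filter.update(r) is not None]
print(alerts)   # -> [35.7]
```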

Edge computing will be shaped by these 3 innovations in 2018:

  1. Machine Learning: Improvements in edge computing technologies have made it cost-effective to perform machine learning locally, at the edge. Furthermore, advances in self-training, unsupervised machine learning approaches are ideal for edge use cases. Local edge computing can transform large volumes of low-value data into small volumes of high-value insights, saving bandwidth and avoiding unnecessary storage and cloud processing.
  2. Digital Twins: Used in application frameworks from Erlang to Orleans, the distributed actor model enables real-world objects to be represented as actors, or “digital twins”. Digital twins make it possible to model complex real-world systems efficiently while keeping application state consistent throughout the application. A digital twin can learn locally, on its own real-world data, to predict future performance (a minimal sketch follows this list).
  3. Streaming APIs: REST-based architectures are not optimized for real-time data streams. With REST, significant development effort goes into slowing edge data down with queues and buffers so that downstream systems can process it. High volumes of streaming data make it difficult for REST-based applications to reconcile state, leading to inconsistencies and stale insights. Streaming links to data sources enable real-time architectures that maintain consistent state across the application, ideal for edge deployments (see the second sketch after this list).
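
As a rough sketch of the digital twin idea, and not the actor runtime used by Erlang, Orleans, or SWIM, each real-world asset could be mirrored by a small in-memory object that consumes the asset's own observations and extrapolates a short-term prediction from them. The class, its fields, and the linear-trend predictor below are illustrative assumptions.

```python
from collections import deque

class DigitalTwin:
    """In-memory stand-in for one real-world asset. It holds the asset's
    recent state and fits a simple linear trend to local observations."""

    def __init__(self, asset_id, history=20):
        self.asset_id = asset_id
        self.observations = deque(maxlen=history)   # (timestamp, value) pairs

    def observe(self, timestamp, value):
        """Update twin state from the asset's own data stream."""
        self.observations.append((timestamp, value))

    def predict(self, future_timestamp):
        """Least-squares linear extrapolation from recent local history."""
        if len(self.observations) < 2:
            return None
        ts = [t for t, _ in self.observations]
        vs = [v for _, v in self.observations]
        n = len(ts)
        t_mean, v_mean = sum(ts) / n, sum(vs) / n
        cov = sum((t - t_mean) * (v - v_mean) for t, v in zip(ts, vs))
        var = sum((t - t_mean) ** 2 for t in ts)
        slope = cov / var if var else 0.0
        return v_mean + slope * (future_timestamp - t_mean)


# Example: a twin for one pump learns from its own temperature readings.
pump = DigitalTwin("pump-017")
for t, temp in enumerate([70.0, 70.5, 71.1, 71.4, 72.0]):
    pump.observe(t, temp)
print(round(pump.predict(10), 1))   # extrapolated temperature at t=10
```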
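
To make the REST-versus-streaming contrast concrete, here is a hedged sketch built only on Python's standard asyncio rather than any particular streaming API: an in-process queue stands in for a streaming link, and the edge consumer processes each reading the moment it is pushed, instead of polling an endpoint and reconciling buffered batches later.

```python
import asyncio

async def sensor_stream(queue):
    """Stand-in for a streaming link: pushes readings as they occur,
    rather than waiting to be polled over REST."""
    for reading in [20.1, 20.4, 35.7, 20.2]:
        await queue.put(reading)
        await asyncio.sleep(0.01)           # simulated arrival gap
    await queue.put(None)                   # end-of-stream marker

async def edge_consumer(queue):
    """Processes each reading as it arrives, keeping local state current
    instead of reconciling batches after the fact."""
    latest = None
    while True:
        reading = await queue.get()
        if reading is None:
            break
        latest = reading                    # state stays consistent with the stream
        print(f"processed {reading}, latest state = {latest}")

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(sensor_stream(queue), edge_consumer(queue))

asyncio.run(main())
```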

Learn More

Learn how SWIM uses edge intelligence to deliver real-time insights from the dark data generated by connected enterprise systems.

Topics: Machine Learning, Stateful Applications, SWIM Software, Industrial IOT, Edge Analytics, Manufacturing, Digital Twin, SWIM AI, Edge Computing, Fog Computing
