Companies in a variety of industries are attempting to leverage data generated at the edge. Whether it relates to equipment, motion, position, the environment, or other types of information, the data is often vast and generated at speed. The traditional approach has been to manage connectivity at the edge while performing any complex analysis or learning in the cloud. Along the way to being useful in a cloud application, the data must be collected, transmitted, transformed, processed, filtered, stored, and analyzed. This creates a variety of challenges, especially if the goal is not merely to react to the data, but to make decisions in real time.
Today’s cloud solutions are logistically ill-suited and insufficient for supporting real-time, data-rich edge applications. There are four primary challenges for supporting edge applications:
- Scalability: Databases must scale to account for state changes from new devices. As device growth outpaces the ability of databases to scale, performance diminishes and operating costs become prohibitive.
- Consistency: Message brokers must scale to account for the increased message volume created by new devices. As message volume outpaces the ability of the message broker to scale, queues form and buffering occurs, causing state information in the database to become stale and the application to become internally inconsistent.
- Application latency: Edge data must be filtered and transformed into a consumable format for downstream applications. Furthermore, this data transformation must occur in a timely manner to be useful. Multiple network hops, message buffering, and database queues introduce significant latency, causing insights to grow stale before they are ever uncovered.
- Network availability: Edge applications may include multiple proximity networks, each comprising several “edge” devices. To support these complex systems, cloud solutions backhaul data to the cloud for any intelligent operations. If a network fails and cloud solutions become disconnected from edge devices, “dumb” devices become unable to perform their designated operations, causing costly downtime.
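The latency and consistency challenges above stem from backhauling every raw sample to the cloud. One common mitigation is to filter and aggregate at the edge, transmitting a compact summary plus only anomalous readings. The sketch below illustrates the idea in Python; the function name, payload fields, and threshold are hypothetical, not part of any specific product API.

```python
import statistics

def summarize_readings(readings, threshold=75.0):
    """Edge-side reduction (illustrative): instead of backhauling every
    raw sample, emit a small summary payload plus only the readings that
    exceed an application-defined anomaly threshold."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),        # how many samples the summary covers
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "anomalies": anomalies,        # only the interesting raw values
    }

# 1,000 raw samples collapse into one small payload for the uplink.
raw = [70.0] * 997 + [80.0, 82.5, 90.0]
payload = summarize_readings(raw)
```

Here the uplink carries a handful of numbers instead of a thousand, so message-broker volume and database write load grow with the number of *interesting* events rather than the raw sample rate.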
Solving these challenges requires a different approach, one that optimizes for latency and scale by processing data where it is generated. Edge computing is not a panacea. It often makes sense to leverage both edge computing and cloud computing, taking advantage of the strengths of each, in order to more efficiently manage high volumes of opaque, unstructured edge data. But having a strategy for edge computing can help solve the challenges of data-rich environments by delivering applications that remain consistent and performant at scale, and that stay available even without an internet connection. When combined with downstream applications for further post-hoc analysis, edge computing can help companies maximize the value derived from their edge data.

Learn More
Learn how SWIM uses edge intelligence to deliver real-time insights from the dark data generated by connected enterprise systems.