Simon Crosby: Takeaways from the O'Reilly AI Conference
by Simon Crosby, on Sep 12, 2018 12:14:13 PM
Last week I presented SWIM.AI Edge Intelligence at the O’Reilly AI conference in San Francisco. It was a terrific opportunity to learn from real users about their challenges delivering value from their AI projects.
My pitch was simple and well received: SWIM makes delivering edge intelligence easy because it lets streaming data build the model on-the-fly – at the edge – avoiding the complexity and infrastructure cost of traditional solutions. Here’s a simple comparison:
The top line shows what’s wrong with the current approach. Users I spoke to are stuck, and don’t know how to make progress:
- The first problem is a reliance on the stateless REST service model for processing streaming data from “things” at the edge. Each time an event arrives it is processed – requiring a database read of the old state, processing of old and new, and a database write of the result. Each round trip takes tens to hundreds of milliseconds – orders of magnitude slower than CPU and memory speeds at the edge.
- The second problem is over-reliance on big-data infrastructure. Streaming data is typically only ephemerally valuable: storing it long-term is expensive, forces analysis into a batch-style processing model, and yields ever-diminishing returns. It is better to analyze such data on-the-fly.
- The final challenge is the “mythical data scientist / DevOps team”. Few organizations have the skills to build data processing pipelines, clean data, design and train models, select analytical tools, and deploy and manage them. Worse still, each deployment (each different process, factory, or city) likely needs its own complex data pipeline and model.
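The first problem above can be made concrete with a small sketch. The class names and methods below are hypothetical illustrations, not SWIM’s API: the stateless handler pays a store round trip on every event (simulated here with an in-process map standing in for a remote database), while the stateful twin keeps its state in memory next to the processing.

```java
import java.util.HashMap;
import java.util.Map;

public class StatefulVsStateless {

    // Stateless REST pattern: every event requires a read of old state,
    // processing, and a write of the result back to the store.
    static int handleStateless(Map<String, Integer> store, String id, int reading) {
        int old = store.getOrDefault(id, 0); // simulated database read
        int updated = old + reading;         // process old + new
        store.put(id, updated);              // simulated database write
        return updated;
    }

    // Stateful twin: state lives in memory with the logic, so each
    // event is processed at CPU and memory speed – no round trip.
    static class Twin {
        private int total = 0;
        int onEvent(int reading) {
            total += reading;
            return total;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> store = new HashMap<>();
        Twin twin = new Twin();
        for (int reading : new int[] {3, 5, 7}) {
            handleStateless(store, "sensor-1", reading);
            twin.onEvent(reading);
        }
        // Both arrive at the same result; only the cost per event differs.
        System.out.println(store.get("sensor-1") + " " + twin.onEvent(0));
    }
}
```

With a real database in the loop, the stateless path adds network and storage latency to every single event – the cost the stateful model avoids.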
SWIM.AI addresses each of these challenges:
- Our user defines the entities in their environment (e.g., traffic intersections, compressors, assembly robots) that deliver sensor data. Along with the definition of each entity is a simple programmatic description of the analysis to be performed on its data. This definition is as simple as defining object types in Java, with methods for data transformation.
- When the solution is deployed, SWIM dynamically builds a digital twin model of the real world from the data, linking twins based on simple rules derived from the real world – to build an accurate, stateful representation of a complex environment. The model represents each real-world entity as an in-memory digital twin that statefully processes updates from its real-world sibling, including any required analysis, learning or prediction. It also represents the relationships between entities as links that allow twins to share data.
Note that the same solution will work without changes in a different deployment (process, factory, or city) because what matters is the entities and their relationships, which are automatically derived from the data.
- The algorithms or packages used for analysis/learning are up to you to define: SWIM uses whatever you specify – any of the built-in algorithms, open source (e.g., Spark, TensorFlow) or even proprietary packages. At its simplest, one could simply de-noise data and perform stateful reduction from raw data to semantically relevant events.
- SWIM’s stateful edge processing model is fast and affordable – requiring a tiny fraction of the resources of a big-data solution, delivering results in real-time, and without the complexity of legacy set-ups.
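To make the first point above concrete: defining an entity really can look like an ordinary Java class. The sketch below is a hypothetical illustration (the class and method names are assumptions, not SWIM’s API) of a traffic-intersection twin that de-noises raw vehicle counts with a moving average and statefully reduces them to a semantically relevant congestion event.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TrafficIntersection {
    private final Deque<Double> window = new ArrayDeque<>();
    private static final int WINDOW_SIZE = 5;
    private static final double CONGESTION_THRESHOLD = 40.0; // assumed units: vehicles/min

    // Stateful reduction: smooth noisy readings over a sliding window,
    // then reduce the stream to a meaningful event.
    public String onSensorReading(double vehiclesPerMinute) {
        window.addLast(vehiclesPerMinute);
        if (window.size() > WINDOW_SIZE) {
            window.removeFirst();
        }
        double avg = window.stream()
                           .mapToDouble(Double::doubleValue)
                           .average()
                           .orElse(0.0);
        return avg > CONGESTION_THRESHOLD ? "CONGESTED" : "FREE_FLOWING";
    }
}
```

The twin carries its own state (the sliding window), so each new reading is processed in memory – no pipeline, database, or batch job required.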
That’s it. SWIM builds a situation-specific, real-time, stateful model from data, at the edge. A digital twin is created dynamically to process data for each entity, and simple rules let twins link and share updates and analyses. Complex analytics and learning can be applied to individual twins or across them – enabling them to collaborate to analyze data streams or learn on-the-fly, as needed – for example, using past performance to predict future trends. Each twin delivers its analysis via a streaming API or to real-time browser “cards”.
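The cross-twin analysis described above can be sketched in the same hypothetical style (again, assumed names, not SWIM’s API): a simple rule such as “intersections on the same road link to a corridor twin” lets linked twins share state, so an analysis can span many entities.

```java
import java.util.ArrayList;
import java.util.List;

public class Corridor {

    // A minimal stand-in for a linked intersection twin's shared state.
    static class IntersectionTwin {
        boolean congested;
        IntersectionTwin(boolean congested) { this.congested = congested; }
    }

    private final List<IntersectionTwin> links = new ArrayList<>();

    // A link, derived from a simple real-world rule, lets twins share data.
    void link(IntersectionTwin twin) {
        links.add(twin);
    }

    // Cross-twin analysis: the fraction of linked intersections
    // currently reporting congestion.
    double congestionRatio() {
        if (links.isEmpty()) {
            return 0.0;
        }
        long congested = links.stream().filter(t -> t.congested).count();
        return (double) congested / links.size();
    }
}
```

Because the relationships are derived from the data, the same corridor logic would apply unchanged in a different city – only the links differ.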
In short, SWIM bypasses the dev, ops, and data science challenges of edge intelligence, effectively turning devices into data scientists – or at least, building data science twins for entities in the real world.
Learn more about how SWIM can improve the performance of your distributed, real-time streaming applications.