Swim ESP™ is a streaming data analytics engine that reasons on the fly, taking advantage of a distributed learning architecture to gain efficiency, ensure security and high availability, and permit rapid and reliable dissemination of learned insights. Swim ESP is self-training: it operates in real time at the edge, training and learning continually from live data streams.
Swim ESP offers a simple approach to Machine Learning for applications that require real-time response based on learned insights. Swim applies learning at the network edge, where it is embedded in the real world – for example on Unmanned Aerial Vehicles (UAVs), where virtual twins learn from their inputs and context, and share what they have learned with other virtual components using an efficient pub-sub protocol. Swim automatically builds a distributed, secure, resilient fabric that enables virtual twins to share insights with each other, spreading knowledge through the fabric in a way that mirrors how knowledge and influence spread in the real world.
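The insight-sharing pattern described above can be sketched with a minimal in-process publish-subscribe model. All names here (`Broker`, `Twin`, the `obstacles` topic) are illustrative assumptions for the sketch, not part of the Swim ESP API, and a real fabric would route messages across the network rather than in one process:

```python
# Minimal in-process sketch of virtual twins sharing learned insights
# via publish-subscribe. Broker, Twin, and the topic names are
# hypothetical; they are not the Swim ESP API.
from collections import defaultdict

class Broker:
    """Routes insight messages from publishers to topic subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, insight):
        for callback in self.subscribers[topic]:
            callback(insight)

class Twin:
    """A virtual twin that learns locally and shares insights with peers."""
    def __init__(self, name, broker):
        self.name = name
        self.broker = broker
        self.peer_insights = []  # insights received from other twins

    def listen(self, topic):
        self.broker.subscribe(topic, self.peer_insights.append)

    def share(self, topic, insight):
        self.broker.publish(topic, {"from": self.name, "insight": insight})

broker = Broker()
uav_a, uav_b = Twin("uav-a", broker), Twin("uav-b", broker)
uav_b.listen("obstacles")
uav_a.share("obstacles", "power line at grid (4, 7)")
print(uav_b.peer_insights)  # uav-b now holds the insight learned by uav-a
```

The point of the sketch is the decoupling: a twin publishes what it has learned without knowing which peers consume it, which is what lets knowledge spread through the fabric.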
- Swim sifts through noisy, bursty data streams in real-time, right where the data is generated. It analyzes and enriches the data, and learns insights that can be forwarded to applications, reducing the data rate by orders of magnitude while enhancing semantic relevance and preserving context.
- Swim can learn on commodity device hardware, taking advantage of Moore’s Law to deliver powerful insights on the fly. By way of example, Swim ESP can learn and predict the queue length at the traffic lights in a small city on a device that costs about $100.
- Swim is self-training. Training helps a Machine Learning system to classify data from the real world; it typically involves presenting the Machine Learning algorithm(s) with data representative of the real world that has been labeled by an expert. Because it continually analyzes the data stream, Swim continuously tests hypotheses about the relationship of historical data to the present, and makes projections for the evolution of the monitored environment. If the current hypothesis (the trained state of the Machine Learning algorithm) is valid, the observed state of the system will closely match the hypothesis and the model is reinforced. If not, the hypothesis is adjusted accordingly.
- Swim deconstructs the learning problem into sub-problems that are embedded in the physical topology of the real world. Knowledge of the physical topology (context) simplifies the learning problem because locality is often fundamental to correlation. Swim learns the contextual details that matter, helping to train the Machine Learning model and making local learning more accurate.
- Local learning is key to a fast, contextual response. Swim uses context and stateful real-time learning to ensure rapid response from the learning system.
- Local learning is efficient: Because Swim deconstructs the global learning problem into many small, efficient, distributed (local) learning problems, training is simplified. Swim helps each digital twin (each light) to predict its own traffic conditions for each part of each day – a substantial reduction in complexity that allows the system to learn on “full resolution data” – high rate, time-series linked contextual feeds – and to cheaply store valuable insights rather than high rate data of questionable value. Moreover, Swim can observe the accuracy of its own predictions, to trigger re-training if needed.
- Swim can include context in its learning. In the traffic prediction scenario, for example, Swim can use known time-series knowledge such as a scheduled local sports event, both for learning and for prediction.
- Swim learns about itself using learning on data gathered through introspection. It can identify security issues or abnormal traffic levels on the network, and self-configure to minimize the complexity of solution deployment.
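The self-training loop described in the list above – each twin holding a hypothesis, comparing it against what it actually observes, reinforcing it when it holds and re-training when it fails – can be sketched as follows. The exponential moving average, the error threshold, and the "reset toward the observation" retraining step are illustrative assumptions for this sketch, not Swim's actual algorithm:

```python
# Hedged sketch of per-twin self-training: compare the current
# hypothesis (a predicted queue length) against each observation,
# reinforce on a close match, re-train on a large miss. The update
# rule and threshold values are illustrative, not Swim's.

class SelfTrainingTwin:
    def __init__(self, threshold=5.0, alpha=0.2):
        self.prediction = 0.0        # current hypothesis, e.g. queue length
        self.threshold = threshold   # max error before re-training
        self.alpha = alpha           # reinforcement step size
        self.retrain_count = 0

    def observe(self, actual):
        error = abs(actual - self.prediction)
        if error > self.threshold:
            # Hypothesis invalidated: re-train (here, reset to the observation).
            self.prediction = actual
            self.retrain_count += 1
        else:
            # Hypothesis holds: reinforce with a small corrective update.
            self.prediction += self.alpha * (actual - self.prediction)
        return error

light = SelfTrainingTwin()
for queue_len in [2, 3, 3, 4, 12, 11, 12]:  # traffic burst starting at 12
    light.observe(queue_len)
print(light.retrain_count, round(light.prediction, 1))
```

Because every twin runs this loop on its own local stream, the global learning problem decomposes into many small ones, and the accuracy check that triggers re-training comes for free from the comparison the twin is already making.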
Learn how Swim ESP implements Machine Learning at the edge to power a streaming data analytics engine.