"Edge AI" Isn't a Market, But...
by Simon Crosby, on May 28, 2021 8:45:00 AM
“Edge AI” is no more of a market than “Edge Linear Regression”. AI at the network edge (or anywhere else) is useless unless it’s applied to a specific problem, which means that while ML is a powerful tool, it will be delivered to “edge” users in the form of smarter things - from phones to complex machinery. And of course the major edge device vendors are smart - they know how to put better algorithms on the CPUs and GPUs of their equipment.
AI (it’s just different math) in SwimOS is one of many types of algorithms a developer can use to analyze a stream of data, with the goal of learning and predicting on the fly. But crucial to any solution are questions like “What are you trying to predict?”, “How is the model trained, deployed, run, and managed?”, and “What’s its accuracy and robustness?”. The answers inexorably lead toward a discussion of the specific application, the processes that generate the measured variables, their statistical properties, and the workflow to manage the ML lifecycle. ML is a set of analytical algorithms, and you should expect every CS-trained engineer to understand its uses and limitations.
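To make “learning and predicting on the fly” concrete, here is a minimal sketch in plain Python (not the SwimOS API - the class name and parameters are illustrative assumptions) of a streaming predictor: an exponentially weighted moving average that updates its estimate incrementally as each new value arrives, with no stored history.

```python
class StreamingPredictor:
    """Predicts the next value in a stream as an exponentially
    weighted moving average (EWMA) of the values seen so far.
    Illustrative only - not part of SwimOS."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = None    # current prediction; None until the first value

    def predict(self):
        # The current estimate doubles as the prediction for the next value.
        return self.estimate

    def update(self, value):
        # Learn on the fly: O(1) work per observation, no history kept.
        if self.estimate is None:
            self.estimate = value
        else:
            self.estimate += self.alpha * (value - self.estimate)
        return self.estimate

# Usage: consume a stream one reading at a time.
p = StreamingPredictor(alpha=0.5)
for reading in [10.0, 12.0, 11.0, 13.0]:
    p.update(reading)
```

The point of the design is the one that matters at the edge: the model is trained, run, and updated in a single pass over the data, so nothing needs to be batched up and shipped elsewhere before a prediction is available.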
Hardware acceleration to support model training or inference is sometimes needed – so a market for these capabilities exists, just as it does for CPUs and memory. Expect future generations of every widget to be smarter and more attuned to its environment and users.
But this is very different from value-centric end-user needs like “predict the future state of every light at an intersection” or “optimize my assembly line”. And even though there is a component market – say, for TPUs – that isn’t the only way a solution could use ML. For example, a vendor might embed a CPU and GPU in a machine controller, a vendor of smart edge gateways could incorporate a TPU, or a vendor whose software runs “near the data” could simply use CPU cycles on that device. In every case, the user and vendor are trading off the value of embedding better algorithms in their edge solution against naively sending all data to “the cloud” for storage, training and analysis – often to solve a bigger problem. “Optimize traffic flow in the city” requires continuous analysis and learning from thousands or millions of data streams from different things: in-road loops, lights, buttons, cameras, etc. This is where Swim shines. You can easily see the limitations of a smarter camera here - it can’t help with what it can’t see.
Whether or not the “edge” is an appropriate locus for smarter solutions is a tradeoff. Smarter things at the edge can help solve small problems quickly. But system-level optimization requires analysis of data from many different sources, often devices from many vendors - and that is where deeper insights can be found. For me, the “edge” is simply where your application’s countless streams of data originate, and we want Swim to be the first to get its hands on that data, because Swim will ensure real-time analysis, learning and prediction from that point on. Almost every Swim customer deploys the product in a hybrid cloud.
Back to ML: Solution vendors of all types will incorporate ML in their products because it makes them more useful, but remember that end-users want outcomes: a manufacturer wants a more efficient assembly line built from many smarter components. Using ML in smarter equipment might yield a 10% saving, but shipping data from many different components at the edge to the cloud (or a private cloud), where deeper analysis and training can improve the models, might yield 20%.
I suspect that future smart devices will come with cloud services - even services optimized (say) for a particular factory. The next wave of industrialization will therefore likely be a major driver for industrial (private) 5G. We ought to expect future “edge applications”, like optimized inventory management, to include both edge-based data gathering and cloud-based analysis, training, and perhaps data storage. The future of Edge <anything> includes a cloud.