Enterprise Computing at the Edge

by Simon Crosby, on Apr 10, 2018 1:46:33 PM

Enterprises increasingly look to the cloud to deliver on their mainstream infrastructure and app needs: SaaS and IaaS make sense because they are cost-effective, flexible and often more secure than their on-prem equivalents.

So, the “cloud first” approach might seem reasonable for another use case: collecting and processing high volumes of data from the enterprise “edge” – data from IT and OT assets, customers, partners, employees and IoT devices. It may be seductive to think that big-data and cloud solutions can address every data need, but the story just doesn’t add up. The cloud is the right place to process data that’s delivered to cloud-native apps (e.g., Netflix, Facebook). But if you’re drowning in data from equipment or assets that you operate or monitor, and you want insights that can help you quickly up your operational game, a cloud-first approach won’t cut it.

Here's why:

  1. Moving masses of data to the cloud is expensive
  2. Data storage sounds cheap, but costs add up – and you don’t know how long to keep it
  3. Cleaning, labeling and filtering data requires effort – and may be impossible if your data is “gray” – poorly understood, noisy, or repetitive
  4. Analysis and learning are typically post-facto, require an iterative approach, and yield results on a batch time-scale unrelated to the real-time nature of the data, limiting their value

Cloud-based analysis is fine for data from cloud-native apps. For the rest – industrial data, smart city data for use by citizens, data from on-prem IT assets and systems, external data delivered to on-prem systems, data from production systems that need a quick local response, and sensitive data subject to regulations – we must solve the problem differently.


Perhaps an example will help: traffic lights and sensors for 250 intersections in a small city generate about 4TB per day. Cloud costs to clean, analyze and learn on that data exceed $5,000 per month. But even at this expense the analysis still can’t help with prediction: will a given intersection be green or red in one minute (it’s raining, Memorial Day, Tuesday, at 10:07:33, and the basketball team lost on Saturday)?
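A quick back-of-envelope calculation shows how fast this adds up. The intersection count and daily volume come from the example above; the storage rate is an illustrative assumption (typical object-storage list pricing), not a quoted cloud price, and it covers storage only – before any compute, egress, or analysis charges.

```python
# Back-of-envelope cost check for the traffic example.
# INTERSECTIONS and DATA_PER_DAY_TB come from the article; the
# storage rate below is an illustrative assumption, not a quoted price.

INTERSECTIONS = 250
DATA_PER_DAY_TB = 4
DAYS_PER_MONTH = 30
GB_PER_TB = 1000

# Assumed object-storage rate (varies by provider, region, and tier):
STORAGE_USD_PER_GB_MONTH = 0.023

per_intersection_gb_day = DATA_PER_DAY_TB * GB_PER_TB / INTERSECTIONS
monthly_gb = DATA_PER_DAY_TB * GB_PER_TB * DAYS_PER_MONTH
storage_cost = monthly_gb * STORAGE_USD_PER_GB_MONTH  # one month retained

print(f"{per_intersection_gb_day:.0f} GB/day per intersection")
print(f"~${storage_cost:,.0f}/month to store a single month of data")
```

Under these assumptions, storing just one month of data runs to roughly $2,760 – and that figure grows every month you retain the data, before you spend anything on cleaning, analysis, or learning.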

Sadly, many orgs make the mistake of believing an on-prem big-data solution can help. It can’t, for many of the reasons above. And IT is burdened with the care and feeding of the big-data infrastructure, and with finding data scientists to make sense of the data. Many orgs take a shortcut, cobbling together blobs of open source with help from consultants. Inevitably, in our experience, these attempts fail: they are too expensive, the solutions are too fragile, and they take too long to deliver insights of marginal value.

SWIM transforms fast data into big insights at the enterprise edge, solving complex analysis and learning problems on the fly so you don’t have to build a big-data stack. SWIM helps IT deliver real-time insights to business stakeholders as a service – via automatically generated UIs or APIs – so they can respond immediately.

SWIM solves edge data problems on standard edge devices using a new form of edge learning and the SWIM fabric. But how? My next blog posts will discuss these in more detail.

Learn More

Learn how SWIM EDX uses edge intelligence to deliver real-time insights from the dark data generated by connected enterprise systems.

