Scaling the Edge: Building Enterprise Applications that Scale Automatically

by Brad Johnson, on Feb 9, 2018 10:37:17 AM

Businesses are rapidly adding new devices, tags, sensors, automation, software, and compute elements to help track and optimize business performance, but they are rarely able to act on the data those elements generate. Centralized cloud systems struggle to scale when enterprise applications aggregate data from multiple local networks and must then sync state throughout the application. Enterprises want to improve efficiency, cut costs, prevent failures, track assets, and improve safety, which drives a need to analyze and act on streams of data from their assets, customers, users, infrastructure, and operations. However, the cloud-first paradigm is insufficient to deliver maximum value from data analytics investments. The time and cost of shipping data to a central location, storing it, and writing new applications to analyze it before a decision can be made mean that only a small subset of new data is ever made available, and the resulting insights often arrive too late to act on.

To ensure that data is processed efficiently and that insights can be acted on in a timely manner, application intelligence must move closer to the data source. Edge computing takes advantage of the compute resources at the network edge by distributing application logic to each edge node. Processing occurs in parallel on each edge device as data is generated, eliminating the buffering and network backhaul common in cloud-first applications. And because data is filtered at the edge, far less traffic travels upstream, which also significantly reduces application latency.
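To make this concrete, here is a minimal sketch of edge-side filtering in Python. The stream source, threshold, and sink are illustrative assumptions rather than any particular product's API; the point is that each reading is evaluated as it arrives on the node, and only significant values are ever forwarded downstream.

```python
import random
from typing import Iterable, Iterator

def sensor_stream(n: int = 1000) -> Iterator[float]:
    # Hypothetical stand-in for a local sensor feed; a real edge node
    # would read from attached devices or the local network instead.
    for _ in range(n):
        yield random.random()

def filter_at_edge(readings: Iterable[float],
                   threshold: float = 0.95) -> Iterator[float]:
    # Evaluate each reading as it is generated, on the edge node itself,
    # so the raw stream is never buffered or backhauled to the cloud.
    for value in readings:
        if value >= threshold:
            yield value  # only significant readings move downstream

if __name__ == "__main__":
    forwarded = list(filter_at_edge(sensor_stream()))
    print(f"forwarded {len(forwarded)} of 1000 readings downstream")
```

Run against a uniform random stream with a 0.95 threshold, roughly 5% of readings survive the filter; the other 95% never leave the local network.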


Edge Computing Scales Real-Time Enterprise Data Applications

Edge computing solves the scalability issues present in cloud-first enterprise applications. As more edge nodes are added, a cloud-first analytics platform must scale to accommodate the increased data volume, which means that backend components like Kafka, Cassandra, and MongoDB must each be scaled independently while still working together. Given that many enterprise IT and OT teams lack expertise in cloud, big data, and analytics architectures, this presents a major challenge. Edge computing, by contrast, distributes compute horizontally across the edge devices themselves. Scaling to accommodate new devices is as simple as distributing the relevant application logic to the new edge node, even at runtime. Edge nodes automatically filter edge data, using machine learning to identify relevant insights for further processing by downstream systems. These machine learning instances can be self-training, eliminating the need for enterprises to hire skilled data scientists to make sense of dark edge data.
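As one illustration of what a self-training edge filter can look like (a sketch under assumptions, not any vendor's implementation), the class below learns a running mean and variance of its local stream using Welford's online algorithm and flags readings more than k standard deviations from the mean. It needs no labeled data and no offline training step:

```python
import math

class OnlineAnomalyFilter:
    # Self-training edge filter: maintains running statistics of the
    # local stream and flags outliers, with no labeled training data.

    def __init__(self, k: float = 3.0):
        self.k = k        # standard deviations that count as anomalous
        self.n = 0        # number of readings seen so far
        self.mean = 0.0   # running mean
        self.m2 = 0.0     # running sum of squared deviations

    def update(self, x: float) -> bool:
        # Flag x against the model learned so far, then fold it in.
        anomalous = False
        if self.n > 1:
            std = math.sqrt(self.m2 / (self.n - 1))
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Welford's online update of mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous
```

Scaling out then mirrors the description above: bringing a new edge node online means instantiating the same filter there, where it trains itself on that node's local stream; no central model, database resizing, or data science effort is required.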

This concept proves extremely useful in real-world deployments. In the automotive industry, for example, edge intelligence can filter vehicle data at the edge to analyze usage, identify key events, and perform other telematics functions, so that significantly less data needs to be sent over cellular networks. The result is that vehicle manufacturers can track more data points about vehicle usage while transmitting lower volumes of data, cutting a major cost of vehicle telematics. The same benefits apply in manufacturing, supply chain and logistics, oil and gas, or any other environment where high volumes of data are generated. By intelligently filtering data at the edge, business insights can be acted on sooner, both locally and by downstream systems. Given these cost and performance advantages, edge computing is the logical path to scaling real-time enterprise data applications.
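As a rough sketch of that kind of filtering (the deceleration threshold and one-second sample rate are illustrative assumptions, not a production telematics specification), an edge node could reduce a continuous speed trace to a handful of harsh-braking events before anything touches the cellular link:

```python
from typing import Iterable, Iterator, Tuple

def harsh_brake_events(speeds_kmh: Iterable[float],
                       dt_s: float = 1.0,
                       decel_threshold: float = 7.0) -> Iterator[Tuple[int, float]]:
    # Yield (sample index, deceleration in m/s^2) for harsh-braking
    # events; only these few events, not the full speed trace, need
    # to be transmitted over the cellular network.
    prev = None
    for i, v in enumerate(speeds_kmh):
        if prev is not None:
            decel = (prev - v) / 3.6 / dt_s   # km/h per sample -> m/s^2
            if decel >= decel_threshold:
                yield i, decel
        prev = v

if __name__ == "__main__":
    trace = [60, 60, 58, 30, 28, 27]   # km/h, one sample per second
    for i, decel in harsh_brake_events(trace):
        print(f"harsh braking at t={i}s: {decel:.1f} m/s^2")
```

Here six speed samples collapse to a single event message, and the same ratio holds at fleet scale: the cellular link carries events, not raw telemetry.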

Learn More

Learn how SWIM uses edge intelligence to deliver real-time insights from the dark data generated by connected enterprise systems.

Topics: Machine Learning, Stateful Applications, SWIM Software, Industrial IoT, SWIM Inc., Edge Analytics, Manufacturing, Asset Tracking, Asset Management, SWIM AI, Edge Computing, Fog Computing
