The default architecture in most IIoT deployments sends everything to the cloud: sensor reads a value, publishes over MQTT, broker forwards to cloud ingest, processing pipeline runs, alert fires, notification arrives. Under good network conditions, that round trip takes somewhere between 800 milliseconds and 3 seconds depending on your cloud region, broker configuration, and processing pipeline design.
For trend dashboards and maintenance planning, 3 seconds is fine. For a safety interlock on a press with a 50ms reaction time requirement, it's not even in the same conversation. The question isn't whether to use cloud processing — it's which decisions need to be made locally versus remotely.
The Latency Stack in Practice
Let's be specific about where time goes. A sensor reading on a Modbus TCP network takes 5-15ms to poll, depending on network conditions and response size. A wireless sensor on 900MHz ISM band has a transmission latency of 10-50ms depending on mesh hop count and channel congestion. MQTT publish from a gateway to an on-premises broker: 1-5ms. MQTT publish to a cloud broker over a WAN connection: 15-80ms depending on geographic distance and link quality.
Cloud-side processing — ingestion, rule evaluation, alert generation — adds another 50-500ms depending on your pipeline architecture. Alert delivery to a mobile notification: another 500-2000ms for push delivery. Total: you're realistically looking at 1-3 seconds from sensor trigger to human notification under normal operating conditions.
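To see how those components stack up, here is a back-of-envelope sketch in Python summing the wired-sensor path. The ranges are the illustrative figures quoted above, not measurements from any particular deployment.

```python
# Illustrative latency budget for the wired (Modbus TCP) path, in milliseconds.
# Ranges are the figures quoted above, not measurements from a real deployment.
STAGES = {
    "modbus_poll":      (5, 15),     # sensor read over Modbus TCP
    "mqtt_wan_publish": (15, 80),    # gateway -> cloud broker over WAN
    "cloud_pipeline":   (50, 500),   # ingest, rule evaluation, alert generation
    "push_delivery":    (500, 2000), # mobile push notification
}

best = sum(lo for lo, _ in STAGES.values())
worst = sum(hi for _, hi in STAGES.values())
print(f"sensor-to-human: {best}ms best case, {worst}ms worst case")
# -> sensor-to-human: 570ms best case, 2595ms worst case
```

The wireless path swaps the Modbus poll for a 10-50ms mesh hop, which doesn't change the conclusion: push delivery and cloud-side processing dominate the budget.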
That's the latency budget for a "someone should know about this" scenario. For a "the machine needs to stop now" scenario, you're working with hard real-time constraints that cloud architectures can't meet. PLCs and safety systems handle these cases directly, as they should. But there's a middle tier — scenarios that need faster response than cloud latency allows but don't require PLC-level hard real-time — where edge processing is the right answer.
What Edge Processing Actually Means
An edge gateway in an IIoT deployment is typically a ruggedized industrial computer — ARM or x86, fanless, DIN-rail mounted — running in the same control cabinet or field panel as the PLCs it interfaces with. It has direct Ethernet or serial connectivity to field devices, and WAN connectivity to the cloud or data center.
Processing on the edge means running alert evaluation locally on the gateway before data leaves the facility. A vibration sensor threshold crossing can trigger a local relay output in under 50ms without the data ever leaving the building. A temperature rising-edge alert can sound a local alarm through the gateway's digital output in under 100ms. None of this requires network connectivity.
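As a concrete illustration, here is a minimal sketch of that local evaluation loop. The read_vibration_rms and set_relay functions are hypothetical stand-ins for whatever sensor-read and digital-output APIs your particular gateway exposes, and the threshold is illustrative.

```python
import random
import time

THRESHOLD_G = 4.0   # vibration RMS alarm threshold in g (illustrative)
RELAY_PIN = 7       # hypothetical digital output wired to a local relay

def read_vibration_rms() -> float:
    # Stand-in for the gateway's local sensor read (Modbus register, ADC, ...).
    return random.uniform(0.5, 6.0)

def set_relay(pin: int, energized: bool) -> None:
    # Stand-in for whatever digital-output API the gateway OS exposes.
    print(f"relay {pin} -> {'ON' if energized else 'OFF'}")

while True:
    rms = read_vibration_rms()
    # The rule is evaluated here, on the gateway: no broker, no WAN round trip.
    set_relay(RELAY_PIN, rms > THRESHOLD_G)
    time.sleep(0.01)  # 100Hz loop leaves wide margin inside a 50ms budget
```

The same loop can also publish each reading upstream for dashboards; the point is that the actuation path never depends on that publish succeeding.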
Edge processing also handles a second problem: what happens when the WAN link goes down. A cloud-dependent architecture has zero visibility and zero alerting capability during an outage. An edge-first architecture maintains full local monitoring and alerting indefinitely — the cloud side just stops receiving data until connectivity is restored, at which point the gateway uploads buffered readings to fill the gap.
Local Preprocessing: Edge Compute as a Filter
High-frequency sensor data is expensive to transmit and store at full resolution. A vibration sensor sampling at 1kHz generates 86.4 million data points per day. Across 200 vibration sensors, that's over 17 billion points per day to store and query, which is a significant infrastructure challenge.
Edge preprocessing addresses this. Instead of transmitting raw samples, the edge gateway computes summary statistics — RMS, peak, crest factor, frequency-domain features — and sends those upstream at a much lower rate. You might sample at 1kHz locally, compute a 10-second window RMS, and publish one value every 10 seconds to the cloud: a 10,000:1 reduction, cutting storage and bandwidth by four orders of magnitude. The raw samples are retained locally in a circular buffer for post-event investigation when an anomaly is detected.
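Here is a minimal sketch of that windowing pattern, assuming samples arrive through some local acquisition callback. The publish_summary function is a hypothetical stand-in for the upstream MQTT publish.

```python
import math
from collections import deque

SAMPLE_RATE_HZ = 1000              # local acquisition rate
WINDOW_N = SAMPLE_RATE_HZ * 10     # 10-second summary window

# Raw samples kept locally (last 5 minutes here) for post-event investigation.
raw_buffer = deque(maxlen=SAMPLE_RATE_HZ * 60 * 5)
window = []

def publish_summary(rms: float, peak: float, crest: float) -> None:
    # Stand-in for the upstream MQTT publish: one value per 10s window.
    print(f"rms={rms:.3f} peak={peak:.3f} crest={crest:.2f}")

def on_sample(x: float) -> None:
    raw_buffer.append(x)
    window.append(x)
    if len(window) >= WINDOW_N:
        rms = math.sqrt(sum(v * v for v in window) / len(window))
        peak = max(abs(v) for v in window)
        publish_summary(rms, peak, peak / rms if rms else 0.0)
        window.clear()

# Demo: one window of a synthetic 50Hz tone (RMS ~0.707, crest ~1.41).
for n in range(WINDOW_N):
    on_sample(math.sin(2 * math.pi * 50 * n / SAMPLE_RATE_HZ))
```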
The same pattern applies to analog inputs. A 4-20mA pressure sensor connected to a 16-bit ADC sampling at 10Hz generates 10 readings per second. Most of those readings are nearly identical under stable operating conditions. Edge-side deadband filtering — only transmitting a value when it changes by more than a configured threshold — can reduce transmission volume by 80-95% without losing meaningful process information.
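A deadband filter is only a few lines of code. This sketch is a generic report-by-exception filter, not any particular vendor's implementation:

```python
class DeadbandFilter:
    """Report-by-exception: forward a reading only when it moves more than
    `deadband` away from the last transmitted value."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent = None

    def filter(self, value: float):
        if self.last_sent is None or abs(value - self.last_sent) > self.deadband:
            self.last_sent = value
            return value        # transmit this reading
        return None             # suppress: within deadband of last sent value

# Example: a pressure signal in bar with a 0.05 bar deadband.
f = DeadbandFilter(deadband=0.05)
for reading in (4.02, 4.03, 4.04, 4.31, 4.30):
    if (out := f.filter(reading)) is not None:
        print("transmit", out)   # -> 4.02, then 4.31
```

Production implementations typically add a max-interval heartbeat so a reading is still sent every few minutes even when the value is flat; otherwise a stale sensor and a stable process look identical upstream.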
The Offline Resilience Problem
Industrial environments have unreliable connectivity. Cellular backhaul in remote facilities cuts out during storms. Corporate WAN links go down for maintenance. VPN tunnels time out and don't reconnect cleanly. A sensor data pipeline designed around constant cloud connectivity will lose data and alerting capability whenever the network is impaired.
The right architecture assumes the WAN link will fail and designs accordingly. Local persistent storage on the gateway — typically a small SSD — buffers data during outages. When connectivity is restored, the gateway delivers buffered data in order, filling gaps in the cloud-side time series. Alert history is maintained locally so that events during the outage are not lost.
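One common way to implement this is a store-and-forward queue backed by SQLite on the gateway's local storage. This is a simplified sketch under assumed names: the database path and schema are illustrative, and a production version would batch commits and prune delivered rows.

```python
import sqlite3
import time

# Minimal store-and-forward queue on the gateway's SSD. Readings are always
# written locally first; the uploader drains them in timestamp order whenever
# the WAN link is up, so an outage only delays cloud delivery.
db = sqlite3.connect("/var/lib/gateway/buffer.db")  # illustrative path
db.execute("""CREATE TABLE IF NOT EXISTS readings (
    ts REAL, sensor TEXT, value REAL, sent INTEGER DEFAULT 0)""")

def record(sensor: str, value: float) -> None:
    db.execute("INSERT INTO readings (ts, sensor, value) VALUES (?, ?, ?)",
               (time.time(), sensor, value))
    db.commit()

def drain(publish) -> None:
    # `publish` is a stand-in for the MQTT/HTTPS upload call; it should
    # return False on failure so unsent rows stay queued for the next pass.
    rows = db.execute("SELECT rowid, ts, sensor, value FROM readings "
                      "WHERE sent = 0 ORDER BY ts").fetchall()
    for rowid, ts, sensor, value in rows:
        if not publish(ts, sensor, value):
            break               # link is down again; stop and retry later
        db.execute("UPDATE readings SET sent = 1 WHERE rowid = ?", (rowid,))
        db.commit()
```

On a healthy link, the uploader calls drain on a short interval and the table stays nearly empty; during an outage, rows simply accumulate until connectivity returns.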
We size edge gateway storage based on the expected outage duration, data volume, and acceptable data loss. For a 200-sensor deployment sampling at 1Hz, buffering 7 days of data at full resolution requires approximately 35GB of local storage — modest by any measure. For high-frequency deployments with preprocessed summaries, the buffer requirement is much smaller.
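The arithmetic behind that figure, with the per-reading size made explicit as an assumption:

```python
# Back-of-envelope buffer sizing for the scenario above. The per-reading size
# is an assumption (timestamp, sensor ID, value, plus index overhead); swap in
# the record size of your actual on-disk format.
sensors = 200
rate_hz = 1
days = 7
bytes_per_reading = 290   # assumed; roughly what the ~35GB figure implies

readings = sensors * rate_hz * 86_400 * days
print(f"{readings:,} readings, ~{readings * bytes_per_reading / 1e9:.0f}GB")
# -> 120,960,000 readings, ~35GB
```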
Where Cloud Processing Still Wins
None of this means cloud processing is wrong. For cross-facility analytics — comparing energy consumption patterns across twelve plants, identifying fleet-level anomalies, building predictive models on historical data from thousands of assets — cloud infrastructure is clearly the right tier. You need centralized storage, flexible compute, and tooling for data scientists that doesn't make sense to deploy on every edge node.
The model we'd recommend: edge processing handles all time-critical decisions and all local alerting. Cloud processing handles analytics, reporting, model training, and anything that requires cross-facility context. Data flows up from edge to cloud continuously when the link is healthy, and the cloud layer is a consumer, not a dependency, for critical operations.
Need offline-resilient sensor monitoring?
SensorVault's edge agent runs full alert evaluation locally and buffers up to 7 days of data during outages. Everything works with or without cloud connectivity.
See the Architecture