The marketing sheets for wireless IoT sensors almost always lead with battery life. "5-year battery life!" is a more compelling claim than "adequate sampling rate for most applications." But in practice, that 5-year figure typically assumes a sample interval of 15 minutes or longer — which is fine for average temperature monitoring in a warehouse and completely inadequate for bearing fault detection on a high-speed spindle.
The tradeoff is real and it's physics-based. You can't engineer your way around it entirely. But you can make much smarter decisions about it than most deployment teams do.
Where the Power Actually Goes
A wireless sensor node has three main power consumers: the sensor itself, the microcontroller, and the radio. Their relative contributions depend heavily on the sensor type and sampling frequency.
For a simple temperature/humidity sensor using an I2C interface, the sensor draw is negligible — typically 10-100 microamps during measurement, completing in under 10ms. A low-power microcontroller (a Nordic nRF52-series SoC, for example) draws around 2 microamps in sleep mode. The radio is the expensive part: a single LoRa transmission at +20dBm output power draws approximately 125 mA during the transmission window, which might last 50-500ms depending on payload size and spreading factor.
With a 15-minute sample interval and 250ms transmission time, the LoRa radio is active for 250ms out of every 900 seconds — a duty cycle of 0.028%. The average current draw from the radio across the full cycle works out to about 35 microamps. Total node average current including sensor and microcontroller: around 50 microamps. With a 3.6V lithium thionyl chloride primary cell rated at 3,600 mAh, the calculated lifetime is approximately 8.2 years.
Change the sample interval to 1 minute: duty cycle goes to 0.42%, average radio current rises to about 525 microamps, total node average around 540 microamps. Battery life: approximately 9 months. Same sensor, same radio, same battery — just 15x more frequent sampling, and you've gone from 8 years to under a year.
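The duty-cycle arithmetic above is easy to fumble in a spreadsheet, so here is a minimal sketch of it in Python. The 15 µA idle figure is an assumption lumping MCU sleep, sensor, and regulator overhead (measure your own board), and the helper names are ours, not from any vendor SDK:

```python
def avg_current_ua(sample_interval_s, tx_time_s=0.25, tx_current_ma=125.0,
                   idle_current_ua=15.0):
    """Average node current (microamps) for a duty-cycled LoRa node.

    idle_current_ua is an assumed ~15 uA covering MCU sleep, sensor,
    and regulator overhead between transmissions."""
    duty_cycle = tx_time_s / sample_interval_s
    radio_ua = tx_current_ma * 1000.0 * duty_cycle  # mA -> uA, scaled by duty
    return radio_ua + idle_current_ua


def battery_life_years(capacity_mah, average_ua, derating=1.0):
    """Ideal lifetime from capacity / average draw; derating < 1 would
    account for self-discharge and cold-temperature capacity loss."""
    hours = capacity_mah * 1000.0 * derating / average_ua
    return hours / (24 * 365)
```

With the numbers from the text, `battery_life_years(3600, avg_current_ua(900))` lands near 8.3 years, while switching to `avg_current_ua(60)` drops the lifetime to well under a year — matching the back-of-envelope figures above.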
Vibration Sensors Are a Different Problem
The analysis above applies to slow-changing process variables. Vibration monitoring is a fundamentally different challenge because the physics of fault detection require much higher sampling rates.
Bearing faults generate characteristic defect frequencies that depend on the bearing geometry and shaft speed. For a typical 6205 bearing on a 1750 RPM shaft, the ball pass frequency (outer race) is approximately 104 Hz — roughly 3.6× the shaft rotation frequency. To detect this frequency reliably, the Nyquist criterion requires sampling at more than twice it — so about 210 Hz minimum, and in practice 1kHz or higher to get clean frequency domain analysis.
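For readers recomputing defect frequencies for their own bearings, here is a sketch of the standard ball-pass-outer-race formula. The 6205 geometry used (9 balls, 7.94 mm ball diameter, 39 mm pitch diameter) is nominal — check it against the bearing datasheet:

```python
import math


def bpfo_hz(shaft_rpm, n_balls, ball_d_mm, pitch_d_mm, contact_angle_deg=0.0):
    """Ball pass frequency, outer race: (N/2) * fr * (1 - (Bd/Pd) * cos(phi))."""
    fr = shaft_rpm / 60.0  # shaft rotation frequency in Hz
    ratio = (ball_d_mm / pitch_d_mm) * math.cos(math.radians(contact_angle_deg))
    return (n_balls / 2.0) * fr * (1.0 - ratio)


# Nominal 6205 geometry on a 1750 RPM shaft
f_bpfo = bpfo_hz(1750, n_balls=9, ball_d_mm=7.94, pitch_d_mm=39.0)
min_sample_hz = 2.0 * f_bpfo  # Nyquist floor; real designs sample much faster
```

The same function covers the inner-race frequency by flipping the sign of the ratio term, which is why parameterizing the geometry beats hard-coding a multiplier.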
At 1kHz sampling with a MEMS accelerometer like the STMicroelectronics IIS3DWB (rated to 6kHz bandwidth), the accelerometer itself draws approximately 1.1 mA in normal operating mode. You need to sample for long enough to get a representative FFT — at minimum 1 second, and ideally 10-30 seconds to average out noise. Transmitting 10,000 samples at 16-bit resolution means moving about 20 kB, which at LoRaWAN data rates takes many seconds.
The practical solution is to sample at 1kHz locally for a burst window (say, 10 seconds), compute the frequency domain features on-device using a fixed-point FFT, and transmit only the summary — RMS, peak, crest factor, and the amplitudes at a few characteristic defect frequencies. This reduces the data transmission from 20 kB to perhaps 50 bytes. Battery impact drops from prohibitive to manageable.
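A sketch of that feature-extraction step, in floating-point NumPy for clarity — production firmware would use a fixed-point FFT (e.g., a CMSIS-DSP routine). The function name and the ±2 Hz search band around each defect frequency are our assumptions:

```python
import numpy as np


def burst_features(samples, fs_hz, defect_freqs_hz, band_hz=2.0):
    """Condense a burst of accelerometer samples into a ~50-byte summary."""
    x = np.asarray(samples, dtype=float)
    rms = float(np.sqrt(np.mean(x ** 2)))
    peak = float(np.max(np.abs(x)))
    # Single-sided amplitude spectrum of the burst
    amps = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    # Strongest bin within +/- band_hz of each characteristic defect frequency
    defect_amps = [float(amps[np.abs(freqs - f) <= band_hz].max())
                   for f in defect_freqs_hz]
    return {"rms": rms, "peak": peak, "crest": peak / rms,
            "defect_amps": defect_amps}
```

Feeding in a 10-second, 1 kHz burst containing a pure tone at a defect frequency returns an RMS near amplitude/√2 and a defect-band amplitude near the tone's amplitude, which is the sanity check worth running before trusting the fixed-point port.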
Even so, vibration sensors running hourly burst samples typically see battery lives of 2-3 years on a pair of D-cell lithium primary cells, rather than the 5-8 years achievable with simpler sensor types. That's a realistic expectation that deployment teams should plan maintenance intervals around.
Adaptive Sampling: The Partial Solution
One approach to the tradeoff is adaptive sampling: sample slowly under normal conditions and switch to higher frequency when something interesting is happening. An accelerometer with a wake-on-motion threshold can monitor for vibration exceeding a setpoint — say, 1.5 g in any axis — and trigger a burst sampling window when the threshold is crossed, while sleeping at minimal power the rest of the time.
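The mode-switching logic itself is simple; a sketch follows, with a hysteresis band added so the node doesn't thrash between modes near the threshold. Both threshold values are illustrative, not from any particular part's datasheet:

```python
def next_mode(mode, peak_g, wake_g=1.5, sleep_g=1.0):
    """Wake-on-motion state machine: sleep until vibration exceeds wake_g,
    burst-sample until it falls back below sleep_g. The gap between the
    two thresholds (hysteresis) prevents rapid mode flapping near 1.5 g."""
    if mode == "sleep" and peak_g >= wake_g:
        return "burst"  # threshold interrupt fired: start burst sampling
    if mode == "burst" and peak_g < sleep_g:
        return "sleep"  # transient over: back to minimal-power monitoring
    return mode
```

In real firmware the "sleep" side of this loop is handled by the accelerometer's hardware wake-on-motion interrupt, so the MCU never runs this code until the event has already started.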
This works well for detecting events that have a clear onset — impact events, shock loads, start/stop transients. It works less well for detecting gradual bearing degradation, where the fault signature develops slowly over weeks or months. A bearing in early-stage wear might not trigger a wake-on-motion threshold until it's already past the point where a planned replacement would have been cost-effective.
The right answer for bearing monitoring is usually scheduled burst sampling — hourly or twice-daily — rather than event-triggered sampling, precisely because the faults you're looking for are gradual rather than sudden.
Protocol Choice Matters for Battery Life
LoRaWAN's spread spectrum modulation gives it exceptional range at low power — up to 15 km in open terrain, 2-5 km in industrial environments. But the low data rates (250 bps to 50 kbps depending on spreading factor) mean large payloads take a long time to transmit, consuming more energy per byte than a faster protocol.
Bluetooth Low Energy 5.x has a much higher raw data rate (up to 2 Mbps) and very low overhead for short-range applications. For sensors within 30-50 meters of a gateway, BLE often results in lower energy per byte than LoRa because the transmission window is so much shorter, even though the radio power draw during transmission is similar. The tradeoff is range — if your gateway density isn't sufficient for BLE coverage, you end up burning more power with retransmissions or losing data.
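A rough energy-per-byte comparison makes the protocol tradeoff concrete. The sketch below ignores preambles, coding overhead, connection events, and retransmissions, and the header size, data rates, and currents are illustrative assumptions rather than measured figures:

```python
def energy_per_byte_mj(payload_bytes, bitrate_bps, tx_current_ma,
                       supply_v=3.6, overhead_bytes=13):
    """Transmit energy spread over the payload: (mA * V * s) / bytes = mJ/byte."""
    airtime_s = (payload_bytes + overhead_bytes) * 8.0 / bitrate_bps
    return tx_current_ma * supply_v * airtime_s / payload_bytes


# 50-byte payload: a slow LoRa data rate vs. BLE at 1 Mbps (illustrative)
lora_mj = energy_per_byte_mj(50, bitrate_bps=1000, tx_current_ma=125.0)
ble_mj = energy_per_byte_mj(50, bitrate_bps=1_000_000, tx_current_ma=10.0)
```

Even if the two radios drew identical peak current, the three-orders-of-magnitude difference in airtime would dominate — which is the point of the paragraph above: within gateway range, the faster radio wins on energy per byte.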
Cellular-based sensors (LTE-M, NB-IoT) have the highest power draw per transmission but don't require any infrastructure beyond a cellular carrier. For remote sensors where installing a local gateway is impractical, cellular is often the only option — and the power budget just becomes a constraint you design around, typically with a larger battery or a small solar panel.
What to Spec When You're Designing a Deployment
The decision process we'd recommend: start from the fault detection requirement. What is the fastest-changing parameter you need to catch? What's the maximum time you can tolerate between a fault developing and being detected? Work backward from that to your minimum effective sampling rate. Then size the battery for a maintenance interval you can operationally commit to — 1 year, 2 years, 5 years — and choose the sensor hardware that meets both constraints. If no product on the market meets both, you've identified a real tradeoff that requires a design decision, not a marketing compromise.
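That decision process can be sketched as a pair of sizing helpers. The 5× margin over Nyquist and the 0.8 capacity derating are rules of thumb we're assuming here, not standards:

```python
def min_sample_rate_hz(max_defect_freq_hz, margin=5.0):
    """Nyquist floor (2x) times an engineering margin for clean spectral peaks."""
    return 2.0 * max_defect_freq_hz * margin


def required_capacity_mah(avg_current_ua, maintenance_years, derating=0.8):
    """Battery capacity needed to honor a committed maintenance interval.
    derating < 1 covers self-discharge and temperature-related capacity loss."""
    hours = maintenance_years * 365.0 * 24.0
    return avg_current_ua * hours / 1000.0 / derating
```

For example, a defect frequency around 100 Hz implies roughly a 1 kHz sample rate, and the ~50-microamp node from the earlier example needs on the order of 2,700 mAh to commit to a 5-year maintenance interval.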
Sizing wireless sensors for your deployment?
SensorVault's team has worked through this tradeoff across dozens of facility types. We'll help you match sensor specs to your actual monitoring requirements.
Talk to the Team