Every Sensor, Every Tick: InfluxQL Analytics on IoT Data at Scale
Millions of IoT sensors sending high-frequency data. Real-time dashboards querying across thousands of devices. This is where InfluxDB v1 starts to buckle—and where HyperbyteDB runs without compromise.
The IoT Query Problem
Your factory has 50,000 sensors. Each reports temperature, pressure, and vibration every second. Your monitoring team needs real-time dashboards showing per-device trends, rolling averages across the last hour, and instant alerts when any sensor deviates from its normal range.
InfluxDB v1 handles ingestion fine—until you try to query it. High cardinality explodes your memory. Aggregations across thousands of devices time out. Your team starts downsampling data, losing the granularity that made the sensors valuable in the first place.
What Changes With HyperbyteDB
HyperbyteDB uses Parquet storage with an embedded ClickHouse query engine. Your Telegraf configs, line protocol, and InfluxQL queries work unchanged. The difference is under the hood:
- Parquet columnar storage means queries on millions of rows scan only the columns they need
- ClickHouse query engine executes DERIVATIVE(), moving averages, and subqueries on billions of points in milliseconds
- High-cardinality tag sets are handled by Parquet and ClickHouse without schema rewrites or in-memory index limits
Your IoT deployment stays on InfluxQL. No PromQL rewrites. No downsampling pipelines. No new query language to learn.
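Because line protocol is unchanged, a single point from the factory scenario above writes exactly as it would to InfluxDB v1 (measurement, tag, and field names here are illustrative):

```text
sensor_readings,device_id=dev-0042,facility_id=plant-a temperature=21.4,pressure=101.3,vibration=0.031 1700000000000000000
```

The trailing value is a nanosecond-precision timestamp, the default for line protocol writes.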
Queries That Work on Full-Resolution Data
These InfluxQL patterns run directly on your raw sensor data:
```sql
-- Rolling average across each device
SELECT mean(temperature)
FROM sensor_readings
WHERE time > now() - 1h
GROUP BY device_id, time(1m)

-- Anomaly detection: derivative spike
SELECT derivative(value)
FROM vibration_readings
WHERE time > now() - 15m
GROUP BY device_id

-- Cross-device aggregation (InfluxQL's PERCENTILE takes the field first, then N)
SELECT percentile(temperature, 95)
FROM sensor_readings
WHERE time > now() - 1h
GROUP BY facility_id, device_type
```

These queries run on full-resolution data—not downsampled approximations. Your operators see actual sensor behavior, not averaged estimates that hide the spikes that matter.
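The subquery support mentioned earlier composes with these patterns. As one hedged sketch (the measurement name and threshold are illustrative, not from a real deployment), this counts devices whose vibration derivative spiked in the last 15 minutes:

```sql
-- Inner query computes per-device derivatives;
-- outer query keeps only spikes above the threshold and counts them
SELECT count(d)
FROM (
  SELECT derivative(value) AS d
  FROM vibration_readings
  WHERE time > now() - 15m
  GROUP BY device_id
)
WHERE d > 5
```

This is the kind of two-stage aggregation that tends to time out on a high-cardinality InfluxDB v1 instance.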
What Actually Changes
Migration is a URL swap:
- Update your Telegraf output URLs from `http://influxdb:8086` to `http://hyperbytedb:8086`
- Grafana dashboards connect to the same InfluxQL endpoint
- Historical data exports from InfluxDB v1 can be loaded into HyperbyteDB if you need them
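In a standard Telegraf InfluxDB output plugin, the swap is a one-line change (the database name is a placeholder):

```toml
[[outputs.influxdb]]
  # Was: urls = ["http://influxdb:8086"]
  urls = ["http://hyperbytedb:8086"]
  database = "iot"
```

Everything else in the output configuration stays as it was.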
Master-master replication is included. Your IoT data writes to both nodes simultaneously. If one goes offline, ingestion continues without gaps.
What Stays the Same
- Line protocol format for sensor ingestion
- Telegraf collector configs
- Grafana data source configuration
- InfluxQL query syntax
- Retention policy definitions
Your team writes IoT data the same way. Your dashboards query the same way. The infrastructure just scales.
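Retention policies, for example, keep their familiar InfluxQL DDL (the database and policy names below are illustrative):

```sql
-- Keep 90 days of raw readings as the default policy
CREATE RETENTION POLICY "raw_90d" ON "iot" DURATION 90d REPLICATION 1 DEFAULT
```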