One URL to Drop In: InfluxDB v1 Compatibility in HyperbyteDB
Our strongest signal for product-market fit isn't a benchmark or a feature list—it's that teams drop in HyperbyteDB by changing one URL in their existing Telegraf configs. Here's what we validated and what to watch.
I've been running time-series databases for years. The usual migration story goes: export this, transform that, rewrite your writers, pray. With HyperbyteDB, the migration story is "change the URL and test." That's it.
The InfluxDB v1 wire format is the product
HyperbyteDB speaks InfluxDB line protocol natively on `/write` and `/query`. You don't need a special SDK, a translation layer, or a compatibility shim. The same HTTP endpoints your Telegraf agents, custom writers, and existing dashboards already hit are exposed unchanged.
To move from InfluxDB OSS to HyperbyteDB:
```toml
# Before (telegraf.conf)
urls = ["http://localhost:8086"]

# After
urls = ["http://localhost:8086"]  # Same URL: HyperbyteDB listens on 8086
```
Run `CREATE DATABASE mydb` via the v1 `/query` endpoint, then point your writers at it. Retention policies, measurements, tags, and fields work as you expect.
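For a concrete picture, here's a minimal sketch of that flow in Python with `requests`, assuming HyperbyteDB is reachable at `http://localhost:8086` with authentication disabled; the measurement, tag, and field names are placeholders for illustration:

```python
import time

import requests

BASE = "http://localhost:8086"

# Create the database through the v1 /query endpoint (InfluxQL).
resp = requests.post(f"{BASE}/query", params={"q": "CREATE DATABASE mydb"})
resp.raise_for_status()

# Write one point in line protocol: measurement,tags fields timestamp.
# The timestamp is in nanoseconds, the v1 default precision.
point = f"cpu,host=web01,region=us-east usage_idle=97.1 {time.time_ns()}"
resp = requests.post(f"{BASE}/write", params={"db": "mydb"}, data=point)
resp.raise_for_status()  # the v1 API signals success with 204 No Content
```

Those two requests are the whole write path for a custom writer; Telegraf's InfluxDB output plugin does the equivalent batching for you.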
What we validated
We've tested this against:
- Telegraf — The standard InfluxDB output plugin works without modification. Batch sizes, write intervals, and timestamps transfer directly.
- Custom HTTP writers — Any tool that POSTs line protocol to `/write?db=<name>` works. We tested with Python `requests`, Go `net/http`, and bash `curl` scripts.
- Query compatibility — `SELECT`, `GROUP BY`, `LIMIT`, and InfluxQL functions work on the v1 `/query` endpoint. Grafana datasources pointing at InfluxDB v1 switch over by changing the URL (see the query sketch after this list).
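To make the query side concrete, here's a sketch of the kind of InfluxQL a Grafana panel issues against `/query`, again in Python, assuming the `mydb` database and the placeholder `cpu` measurement from the earlier example; the `results`/`series` layout shown is the standard InfluxDB v1 JSON response, so verify it against your own data:

```python
import requests

BASE = "http://localhost:8086"

# InfluxQL over the v1 /query endpoint: SELECT, GROUP BY, and LIMIT,
# the same shape of query a Grafana InfluxDB v1 datasource sends.
q = (
    "SELECT MEAN(usage_idle) FROM cpu "
    "WHERE time > now() - 1h GROUP BY time(5m), host LIMIT 12"
)
resp = requests.get(f"{BASE}/query", params={"db": "mydb", "q": q})
resp.raise_for_status()

# Standard v1 response layout: results -> series -> columns/values.
for series in resp.json()["results"][0].get("series", []):
    print(series["name"], series.get("tags"), len(series["values"]), "rows")
```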
The storage layer is different—ClickHouse under the hood, not TSM files—but the wire protocol is the surface area. We kept it identical.
Ingestion benchmark numbers
These are Criterion measurements on Linux x86_64 (April 2026), 1000-point batches. Run cargo bench on your hardware—we provide these as reference, not guarantees.
Line protocol path
| Benchmark | Throughput | What it measures |
|---|---|---|
| `parse_1000` | ~1.84 M points/s | Line protocol parsing only (no I/O) |
| `parse_plus_metadata_1000` | ~1.56 M points/s | Parse + RocksDB metadata preparation |
| `metadata_plus_wal_append_1000` | ~1.05 M points/s | Parse + metadata + WAL append to RocksDB |
Columnar MessagePack path
| Benchmark | Throughput | What it measures |
|---|---|---|
| `metadata_plus_wal_append_1000` (Point path) | ~2.0 M points/s | Parse + metadata + WAL |
| `fast_metadata_plus_wal_append_1000` | ~2.6 M points/s | Fast metadata path + WAL |
| `decode_to_parquet_1000` | ~9.9 M points/s | MessagePack decode + Parquet conversion |
All numbers above exclude HTTP server overhead, authentication, replication, and query processing. They're the ChinFlux library layer only, measured with Criterion on temp directories. Real-world throughput via HTTP will differ based on network, concurrency, and cluster configuration.
What to watch when cutting over
A few things we've seen in practice:
- Batch sizing — HyperbyteDB handles batches similarly to InfluxDB OSS, but if you're coming from InfluxCloud or a heavily tuned InfluxDB Enterprise setup, you may have different sweet spots. Start with your current batch size and tune from there.
- Time precision — Nanosecond precision is supported. If you're using second or millisecond precision in InfluxDB, that transfers directly.
- Retention policies — RP creation via InfluxQL works, but the underlying Parquet storage behaves differently than TSM compaction. RPs are honored on reads, but data lands in columnar Parquet files instead of TSM segments.
- Existing data — You'll need to export from your current InfluxDB and import into HyperbyteDB; we don't migrate TSM files directly. A re-import sketch follows this list.
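Here's a minimal re-import sketch in Python, assuming you already have a line-protocol dump (for example from `influx_inspect export` on the old instance) and HyperbyteDB at `http://localhost:8086`; the file name, database, and batch size are placeholders to tune:

```python
import requests

BASE = "http://localhost:8086"
DB = "mydb"
BATCH_SIZE = 5000  # points per POST; tune alongside your Telegraf batch size

def flush(batch: list[str]) -> None:
    """POST one batch of line-protocol points to /write."""
    resp = requests.post(
        f"{BASE}/write", params={"db": DB}, data="\n".join(batch)
    )
    resp.raise_for_status()

def import_lines(path: str) -> None:
    """Replay a line-protocol export file in fixed-size batches."""
    batch = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and the comment headers an export file may contain.
            if not line or line.startswith("#"):
                continue
            batch.append(line)
            if len(batch) >= BATCH_SIZE:
                flush(batch)
                batch = []
    if batch:
        flush(batch)

import_lines("influxdb_export.lp")  # hypothetical export file name
```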
Try it yourself
The fastest path to testing is Docker:
```bash
docker pull ghcr.io/hyperbyte-cloud/hyperbytedb:latest

docker run -d \
  --name hyperbytedb \
  -p 8086:8086 \
  -v hyperbytedb-data:/var/lib/chinflux \
  -e CHINFLUX__SERVER__BIND_ADDRESS=0.0.0.0 \
  ghcr.io/hyperbyte-cloud/hyperbytedb:latest

curl -sSf http://localhost:8086/health
```
Then point Telegraf or your custom writer at http://localhost:8086.
To run the ingestion benchmarks locally:
```bash
# Line protocol path
cargo bench -p chinflux --bench ingestion_line_protocol

# Columnar path (MessagePack)
cargo bench -p chinflux --features columnar-ingest --bench ingestion_columnar
```
Reproducible benchmarks are part of the product. We think you should be able to validate claims, not take them on faith.
The short version
HyperbyteDB accepts your InfluxDB v1 line protocol, stores it in Parquet via ClickHouse, writes through a RocksDB WAL, and exposes the same HTTP API. In most setups the only configuration change is the URL. If you're evaluating storage backends and have existing tooling, this is the lowest-friction cutover we know how to offer.