Before you trust our numbers, run this.

Benchmark numbers mean nothing without context. Here's how to run HyperbyteDB's own ingestion benchmarks, what each stage actually measures, and the environment metadata you need before comparing anything.

Marketing benchmarks are theater. They run on undisclosed hardware, cherry-pick favorable metrics, and assume you'll never check. We're not going to ask you to take our word for it.

Instead, HyperbyteDB ships with Criterion-based ingestion benchmarks you can run yourself in under five minutes.

The two benchmarks

The standard benchmark covers the InfluxDB v1 line protocol path, which is what you'll hit if you're doing a URL-swap migration from Telegraf or any other v1-compatible client:

cargo bench -p chinflux --bench ingestion_line_protocol

If you enable the columnar-ingest feature, you also get the optional MessagePack columnar path:

cargo bench -p chinflux --features columnar-ingest --bench ingestion_columnar
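Criterion supports named baselines, which makes before/after comparisons on your own hardware straightforward. A sketch (`before` is an arbitrary baseline name):

```shell
# Save a baseline from the current run...
cargo bench -p chinflux --bench ingestion_line_protocol -- --save-baseline before
# ...then, after changing something, compare against it
cargo bench -p chinflux --bench ingestion_line_protocol -- --baseline before
```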

Line protocol: reference numbers

These are from one Linux x86_64 run (April 2026, rustc 1.94.x). Re-run on your hardware before comparing anything.

Benchmark                        Time       Throughput          What it measures
parse_1000                       ~544 µs    ~1.84 M points/s    Line protocol → Vec<Point>, no I/O
metadata_plus_wal_append_1000    ~950 µs    ~1.05 M points/s    Parse + RocksDB metadata + WAL append
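For reference, a single point in InfluxDB v1 line protocol (measurement, tag set, field set, nanosecond timestamp) looks like this; the tag and field names are illustrative, not taken from the benchmark fixtures:

```shell
# measurement,tag_set field_set timestamp
echo 'cpu,host=web01,region=us-east usage_idle=98.2,usage_user=1.1 1465839830100400200'
```

parse_1000 converts 1000 such lines into a Vec<Point> per iteration.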

Columnar: reference numbers

The columnar path avoids Point expansion until necessary. It hits different numbers:

Benchmark                            Time       Throughput         What it measures
metadata_plus_wal_append_1000        ~509 µs    ~2.0 M points/s    Parse + metadata + WAL (Point path)
fast_metadata_plus_wal_append_1000   ~378 µs    ~2.6 M points/s    Fast metadata + WAL append
decode_to_parquet_1000               ~101 µs    ~9.9 M points/s    Decode + columnar batch → Parquet

What batch size 1000 means

All throughput figures are for 1000 rows per batch. This is configurable in benches/ingestion_line_protocol.rs (adjust the BATCH constant) and defaults to 1000 in the HTTP soak test as well. Larger batches amortize per-request overhead; smaller batches reduce latency. Your mileage depends on your write pattern.
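As a sanity check, the throughput column is just batch size divided by batch time. For parse_1000:

```shell
# 1000 points per batch at ~544 µs per batch
awk 'BEGIN { printf "%.2f M points/s\n", 1000 / 544e-6 / 1e6 }'
```

The same arithmetic reproduces every row in both tables.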

What's excluded

These benchmarks isolate the chinflux ingest path itself: parse → metadata → WAL against a temporary RocksDB instance. They do not include:

  • HTTP layer (Axum)
  • Authentication
  • Cluster replication
  • ClickHouse chDB query work

They also don't reflect headline rates from systems with different storage engines or write pipelines. Run them to understand our overhead, not to project end-to-end system throughput.

Record your environment

Before comparing numbers—against us, against yourself, against anything—capture this:

# Git SHA
git rev-parse HEAD

# Rust compiler
rustc -V

# CPU
lscpu | grep 'Model name'
# or
grep 'model name' /proc/cpuinfo | head -1

# RAM
grep MemTotal /proc/meminfo

# Disk type (if WAL I/O matters; ROTA=1 means a spinning disk)
lsblk -d -o NAME,ROTA,TYPE

Save this output alongside your benchmark results. Without it, comparisons are noise.
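The commands above can be bundled into one capture step; a sketch, where `bench-env.txt` is an arbitrary filename:

```shell
# Capture environment metadata next to the benchmark results
{
  echo "commit: $(git rev-parse HEAD)"
  rustc -V
  lscpu | grep 'Model name'
  grep MemTotal /proc/meminfo
  lsblk -d -o NAME,ROTA,TYPE
} > bench-env.txt 2>&1
```

Commit the file (or attach it to the results) so every number stays tied to the machine and toolchain that produced it.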

Why this matters for adoption

HyperbyteDB's InfluxDB v1 compatibility means your Telegraf config, Grafana datasource, and existing scripts need only a URL change. That's the pitch, and it's a real one—but it only closes if you can verify the underlying throughput before committing.
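As a sketch of what that URL swap amounts to, here is a plain v1-style write, assuming HyperbyteDB listens on the InfluxDB v1 default port 8086; the host, database name, and point are placeholders:

```shell
# Same /write API a Telegraf output or any v1 client would hit
curl -i -XPOST 'http://localhost:8086/write?db=telegraf' \
  --data-binary 'cpu,host=web01 usage_idle=98.2 1465839830100400200'
```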

With these benchmarks, proof is one cargo bench away. No contact form. No sales call. No PDF with undisclosed methodology.

Clone the repo, run the bench, decide.