InfluxDB v1 Compatibility in HyperbyteDB: What Works, What Doesn't, and Why

How HyperbyteDB's InfluxDB v1 compatibility works end-to-end: line protocol, /write, /query, InfluxQL, Prometheus metrics—and where things diverge.

If you're already running Telegraf, Grafana, or any InfluxDB v1 client, HyperbyteDB is designed to slot in by changing one URL. This post walks through what that means precisely—what works out of the box, what requires minor adjustments, and what the underlying architecture means for your workloads.

The Compatibility Surface

HyperbyteDB exposes the InfluxDB v1 HTTP API on port 8086. The endpoints match what your existing tooling expects:

  • POST /write – ingest line protocol, supports precision and gzip
  • GET/POST /query – execute InfluxQL, supports epoch and multiple statements
  • GET /ping and GET /health – liveness and health checks
  • GET /metrics – Prometheus scrape endpoint on the same port

Your Telegraf output config, your Grafana datasource, your custom scripts—none of those need rewrites. Point them at http://your-hyperbytedb:8086 instead of your InfluxDB host, and the protocol handles the rest.
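
To make the swap concrete, here is a minimal round-trip against the v1 API using only Python's standard library. The host name and the telemetry database are placeholders; substitute your own:

import json
import urllib.parse
import urllib.request

BASE = "http://your-hyperbytedb:8086"  # placeholder host, as in the prose above

# Liveness: the v1 API answers /ping with 204 No Content
with urllib.request.urlopen(BASE + "/ping") as resp:
    assert resp.status == 204

# Write one line-protocol point into a placeholder database
req = urllib.request.Request(
    BASE + "/write?" + urllib.parse.urlencode({"db": "telemetry"}),
    data=b"cpu,host=server1,region=us-east value=42\n",
    method="POST",
)
urllib.request.urlopen(req)  # 204 on success; raises HTTPError otherwise

# Read it back with InfluxQL via /query
params = urllib.parse.urlencode({"db": "telemetry", "q": "SELECT * FROM cpu LIMIT 5"})
with urllib.request.urlopen(BASE + "/query?" + params) as resp:
    print(json.loads(resp.read())["results"])

urlopen raises on non-2xx responses, so a clean run means both the write and the query succeeded.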

Line Protocol

HyperbyteDB accepts standard InfluxDB line protocol verbatim:

cpu,host=server1,region=us-east value=42 1700000000000000000
sensor,device=foo temp_c=23.5,humidity=65.1

The precision parameter on /write (u, ms, s, etc.) works as documented. Timestamps are parsed and stored in ClickHouse-native format for query performance.
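
If your timestamps are not in nanoseconds, pass precision on /write and the server scales them for you. A short sketch, again with placeholder host and database names:

import urllib.parse
import urllib.request

BASE = "http://your-hyperbytedb:8086"  # placeholder host

# Same point as above, but the timestamp is in seconds; precision=s tells the
# server how to scale it. "telemetry" is a placeholder database name.
query = urllib.parse.urlencode({"db": "telemetry", "precision": "s"})
point = b"cpu,host=server1,region=us-east value=42 1700000000\n"
req = urllib.request.Request(BASE + "/write?" + query, data=point, method="POST")
urllib.request.urlopen(req)  # 204 No Content on success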

InfluxQL Coverage

The query layer covers the commands most telemetry pipelines depend on:

  • DDL: CREATE DATABASE, CREATE USER, SHOW USERS, retention policy management
  • SELECT with aggregates: mean, count, sum, percentile, derivative, non_negative_derivative, moving_average, and more
  • Time grouping: GROUP BY time(), FILL modes, ORDER BY, LIMIT
  • SHOW commands: SHOW MEASUREMENTS, SHOW TAG KEYS, SHOW TAG VALUES, SHOW SERIES
  • Continuous queries: CREATE CONTINUOUS QUERY, DROP CONTINUOUS QUERY

Grafana's InfluxQL query builder will generally work without modification. Telegraf's output plugin writes directly to /write and needs no changes.
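
As a concrete example of the coverage above, here is a sketch of the kind of query Grafana's builder generates: a windowed mean with fill and millisecond epochs. Host and database names are placeholders:

import json
import urllib.parse
import urllib.request

BASE = "http://your-hyperbytedb:8086"  # placeholder host

# Windowed mean over the last hour, 5-minute buckets, millisecond epochs:
# the shape Grafana's InfluxQL builder typically emits.
q = "SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(5m) fill(null)"
params = urllib.parse.urlencode({"db": "telemetry", "q": q, "epoch": "ms"})  # placeholder db
with urllib.request.urlopen(BASE + "/query?" + params) as resp:
    results = json.loads(resp.read())["results"]
for series in results[0].get("series", []):
    print(series["name"], series["values"][:3])  # first few [time, mean] rows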

Prometheus Metrics

HyperbyteDB exposes Prometheus metrics at GET /metrics on the same port as the HTTP API—no separate port to configure in Prometheus scrape configs. Node-level metrics (CPU, memory, disk) plus HyperbyteDB-specific internals (query latency, WAL throughput, replication lag) are available out of the box.
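
Because the endpoint lives on the API port, inspecting it needs nothing beyond a GET. A minimal sketch (placeholder host; the exact metric names you'll see depend on your build):

import urllib.request

BASE = "http://your-hyperbytedb:8086"  # placeholder host

# /metrics speaks the standard Prometheus text exposition format, so a plain
# GET shows what a scrape would collect.
body = urllib.request.urlopen(BASE + "/metrics").read().decode()
samples = [line for line in body.splitlines() if line and not line.startswith("#")]
print(f"{len(samples)} samples exposed; first few:")
print("\n".join(samples[:5]))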

Known Differences

Compatibility is broad, but the storage engine is architecturally different. Here's where behavior may diverge:

  • Storage format: HyperbyteDB uses Parquet files and ClickHouse, not TSM shards. This is why ingest and query performance differ from InfluxDB OSS—it's not a bug, it's a different trade-off. TSM compaction and retention policies map to Parquet partitioning and ClickHouse TTLs.
  • fill(previous) and fill(linear): HyperbyteDB implements these via ClickHouse's INTERPOLATE clause. Boundary behavior at series start/end may differ from InfluxDB's exact semantics. For strict compatibility, prefer fill(null) or fill(none).
  • SELECT INTO with regex: Source measurement matching with regex (e.g., SELECT * INTO :MEASUREMENT FROM /^cpu/) is not supported. List explicit measurement names instead.
  • Permissions model: HyperbyteDB has admin and non-admin roles, but not per-database GRANT/REVOKE. If your use case requires fine-grained database-level access control, plan accordingly—single-user or admin-only is the current model.
  • Floating-point edge cases: ClickHouse's floating-point handling at boundaries may produce marginally different results from InfluxDB's for edge-case queries. For critical alerting or billing, validate against your previous results during migration; a comparison sketch follows this list.
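
One way to run that validation is to issue the same InfluxQL against both systems and diff the JSON results. A sketch, assuming hypothetical hosts for the old and new deployments and a placeholder database name:

import json
import urllib.parse
import urllib.request

# Hypothetical hosts: the InfluxDB you are migrating from and the HyperbyteDB
# candidate. The database name "telemetry" is a placeholder too.
OLD = "http://influxdb-old:8086"
NEW = "http://your-hyperbytedb:8086"

def run(base, q):
    params = urllib.parse.urlencode({"db": "telemetry", "q": q, "epoch": "ms"})
    with urllib.request.urlopen(base + "/query?" + params) as resp:
        return json.loads(resp.read())["results"]

# Exercise a fill mode whose boundary semantics may diverge, per the list above.
q = "SELECT mean(value) FROM cpu WHERE time > now() - 1h GROUP BY time(5m) fill(previous)"
if run(OLD, q) != run(NEW, q):
    print("divergence: inspect the first and last buckets of each series")

Start with the fill(previous) and fill(linear) queries from your dashboards, since those are the modes called out above.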

What the Benchmarks Actually Measure

The parse-only ingest benchmark (~1.84M points/s) measures how fast HyperbyteDB can consume line protocol before touching storage. The WAL path (~1.05M points/s) adds the write-ahead log. The columnar fast WAL path (~2.6M points/s) represents a different write path optimized for throughput.

What you'll see in production depends on query mix, data shape, replication settings, and hardware. A synthetic benchmark isolates one dimension; a running system with Grafana dashboards, continuous queries, and cluster replication exercises all of them at once. The numbers are real, but treat them as a ceiling, not a guarantee.
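
If you want a number you can trust, measure your own data shape. A rough throughput probe, with placeholder host and database names and a batch size you should tune to match your collector:

import time
import urllib.parse
import urllib.request

BASE = "http://your-hyperbytedb:8086"   # placeholder host
BATCH = 5_000                           # points per request; tune to your collector
N_BATCHES = 20

# Pre-build the payloads so the timer covers only the HTTP write path.
query = urllib.parse.urlencode({"db": "telemetry", "precision": "s"})  # placeholder db
batches = [
    "\n".join(
        f"bench,host=server{i % 10} value={i} {1700000000 + b * BATCH + i}"
        for i in range(BATCH)
    ).encode()
    for b in range(N_BATCHES)
]

start = time.monotonic()
for payload in batches:
    req = urllib.request.Request(BASE + "/write?" + query, data=payload, method="POST")
    urllib.request.urlopen(req)
elapsed = time.monotonic() - start
print(f"{N_BATCHES * BATCH / elapsed:,.0f} points/s for this shape and client")

This measures a single client's HTTP write path only; add your real query load and replication settings before drawing conclusions.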

The Short Version

HyperbyteDB is not a rewrite of InfluxDB—it's a different engine that speaks the same protocol. If your tooling speaks InfluxDB v1, you can point it at HyperbyteDB and write data within minutes. The query layer covers the majority of InfluxQL used in production telemetry. Where behavior diverges, the differences are documented and mostly affect edge cases.

If you're running a migration, test your specific query patterns against the known differences above. Most teams find the compatibility surface covers their workloads; the ones that hit edge cases usually find workarounds within an afternoon.