Your Old Data Isn't a Migration Blocker: Exporting Historical Time-Series from InfluxDB v1 to HyperbyteDB

Most migration guides cover updating Telegraf, Grafana, and your scripts. But historical data—months or years of it—gets left behind with no clear path forward. Here's how to export that data from InfluxDB v1 and load it into HyperbyteDB.

You've updated your Telegraf configs. Changed the URL in Grafana. Pointed your scripts at the new endpoint. The "change one URL" story works for live data—but your historical data is still sitting in InfluxDB OSS with no obvious exit path.

This post closes that gap. Here's how to export historical time-series from InfluxDB v1 and load it into HyperbyteDB using the same line protocol that powers live ingestion.

Why historical data feels like a migration blocker

When teams evaluate switching from InfluxDB v1 to HyperbyteDB, the client-side migration is straightforward. The /write and /query endpoints accept the same line protocol and InfluxQL that existing tools generate. Telegraf, Grafana, and most scripts need nothing more than a URL update.
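As an illustration, a Telegraf output stanza only needs its URL swapped; the hostnames below are hypothetical:

```toml
# Hypothetical Telegraf InfluxDB v1 output plugin config; only urls changes
[[outputs.influxdb]]
  # urls = ["http://old-influxdb-host:8086"]
  urls = ["http://hyperbyte-host:8086"]
  database = "mydb"
```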

But historical data carries operational context—dashboards tuned to specific time ranges, anomaly detection baselines, capacity planning records. Leaving it behind isn't always acceptable, even if new data flows to the new system.

The good news: exporting from InfluxDB v1 produces line protocol, and HyperbyteDB's /write endpoint consumes line protocol. The two systems meet in the middle with no transformation step.

Step 1: Export from InfluxDB v1 with influx_inspect

InfluxDB ships with influx_inspect, a command-line tool for reading TSM shards. The export command dumps data as line protocol—exactly what HyperbyteDB needs.

# Export all data to a single line protocol file
influx_inspect export -datadir /var/lib/influxdb/data \
  -waldir /var/lib/influxdb/wal \
  -out /tmp/influxdb-export.txt

# For large datasets, compress the output with the built-in flag
influx_inspect export -datadir /var/lib/influxdb/data \
  -waldir /var/lib/influxdb/wal \
  -compress \
  -out /tmp/influxdb-export.txt.gz

Key flags:

  • -datadir: Path to your InfluxDB data directory (default: /var/lib/influxdb/data)
  • -waldir: Path to your WAL directory (default: /var/lib/influxdb/wal)
  • -out: Output file path
  • -compress: Optional flag to gzip output directly

The export reads all TSM shards across your databases and retention policies, writing one line per data point. Timestamps are preserved exactly, which matters for dashboards that reference specific time ranges.
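To make that concrete, here is what one exported point looks like. The measurement, tags, and values below are made up, but the shape is standard line protocol:

```shell
# A hypothetical exported point: measurement+tags, fields, nanosecond timestamp
point='cpu,host=web01,region=us-east usage_idle=92.5 1704067200000000000'

# The three space-separated sections of line protocol
echo "$point" | awk '{print "series: " $1; print "fields: " $2; print "timestamp: " $3}'
```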

Step 2: Load into HyperbyteDB via /write

HyperbyteDB's POST /write endpoint accepts InfluxDB line protocol with no modifications. Point it at the file you exported:

# Single-node import
curl -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary @/tmp/influxdb-export.txt

For exports compressed on disk, stream the decompressed data straight into the request without writing an intermediate file:

# Compressed import
gunzip -c /tmp/influxdb-export.txt.gz | \
curl -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary @-

Query parameters available on /write:

  • db (required): Target database name in HyperbyteDB
  • rp: Retention policy (defaults to the default RP)
  • precision: Timestamp precision—ns, us, ms, s—defaults to nanoseconds
  • u / p: Credentials if authentication is enabled

influx_inspect export emits nanosecond timestamps, which matches the default, so exported files usually need no precision parameter. If your data comes from another path with second-precision timestamps, say so explicitly, or the values will be misread as nanoseconds:

curl -XPOST 'http://localhost:8086/write?db=mydb&precision=s' \
  --data-binary @/tmp/influxdb-export.txt
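If you're unsure what a file uses, a quick heuristic is to count the digits of a timestamp. This sketch assumes present-day epochs (19 digits for nanoseconds, 10 for seconds) and skips the # comment lines that influx_inspect writes:

```shell
# Guess timestamp precision from the first data line of an export file.
# Assumes present-day epochs: 19 digits = ns, 10 digits = s.
guess_precision() {
  ts=$(grep -v '^#' "$1" | awk 'NF { print $NF; exit }')
  case ${#ts} in
    19) echo ns ;;
    10) echo s ;;
    *)  echo unknown ;;
  esac
}

# Illustration against a tiny sample file
printf '# DML\ncpu,host=web01 usage_idle=92.5 1704067200\n' > /tmp/sample.txt
guess_precision /tmp/sample.txt   # prints "s"
```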

Handling large exports

HyperbyteDB's default maximum request body size is 25 MB. For exports exceeding this, split the file and send in batches:

# Split into 100,000-line chunks (use `split -C 20m` instead to cap
# chunk size in bytes while keeping lines intact)
split -l 100000 /tmp/influxdb-export.txt chunk_

# Import each chunk
for file in chunk_*; do
  curl -XPOST 'http://localhost:8086/write?db=mydb' \
    --data-binary @"$file"
done
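The loop above fires and forgets. A variant that checks the response catches failures early; this sketch assumes HyperbyteDB returns HTTP 204 on a successful write, as InfluxDB v1's /write does:

```shell
# Import chunks, stopping at the first failure (assumes a 204 success
# status, matching InfluxDB v1's /write behavior)
import_chunk() {
  status=$(curl -s -o /dev/null -w '%{http_code}' -XPOST "$1" --data-binary @"$2")
  [ "$status" = "204" ]
}

for file in chunk_*; do
  [ -e "$file" ] || continue   # no chunks present
  if ! import_chunk 'http://localhost:8086/write?db=mydb' "$file"; then
    echo "import failed for $file" >&2
    exit 1
  fi
done
```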

Alternatively, stream the export over SSH and batch it on the fly, with no intermediate files. Run this from a machine that can reach HyperbyteDB; ssh -C compresses the data in transit, and split --filter (GNU coreutils) keeps each request under the body-size limit:

ssh -C user@influxdb-host "influx_inspect export -datadir /var/lib/influxdb/data \
  -waldir /var/lib/influxdb/wal -out /dev/stdout" | \
split -l 100000 --filter="curl -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary @-" -

Verify the import

After loading, confirm the data arrived correctly by querying from HyperbyteDB using the same InfluxQL you use in Grafana:

# Check point counts match
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'q=SELECT count(*) FROM cpu GROUP BY host'

# Spot-check a specific time range
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode "q=SELECT * FROM cpu WHERE time > '2024-01-01T00:00:00Z' AND time < '2024-01-02T00:00:00Z'"

If your Grafana dashboards reference specific databases or retention policies, ensure those exist in HyperbyteDB before importing:

curl -XPOST 'http://localhost:8086/query' \
  --data-urlencode 'q=CREATE DATABASE mydb'

The complete migration picture

With this step, the migration story is complete:

  1. Export historical data from InfluxDB v1 with influx_inspect export
  2. Load it into HyperbyteDB via POST /write
  3. Update Telegraf, Grafana, and scripts to point to the new URL
  4. New data flows live to HyperbyteDB; historical context stays intact

HyperbyteDB's InfluxDB v1 compatibility isn't just a marketing claim—it's the same line protocol, the same InfluxQL, and the same /write endpoint that existing tooling already speaks. The export step from InfluxDB v1 produces exactly what HyperbyteDB consumes.

If you're evaluating the switch, run the export against your existing InfluxDB instance and load a sample into HyperbyteDB to validate your dashboards and queries before cutting over live traffic.