Quickstart
Stream your first database changes to Google Cloud in under 5 minutes. One binary, no Kafka, no external dependencies.
Install
# Homebrew
brew install nanosync/tap/nanosync
# Install script
curl -fsSL https://nanosync.dev/install.sh | sh
# Docker
docker pull ghcr.io/nanosyncorg/nanosync:latest
See the Downloads page for all platforms and architectures.
| OS | Architecture | Download |
|---|---|---|
| macOS | Apple Silicon (arm64) | nanosync_v0.0.1_darwin_arm64.tar.gz |
| macOS | Intel (amd64) | nanosync_v0.0.1_darwin_amd64.tar.gz |
| Linux | amd64 | nanosync_v0.0.1_linux_amd64.tar.gz |
| Linux | arm64 | nanosync_v0.0.1_linux_arm64.tar.gz |
| Windows | amd64 | nanosync_v0.0.1_windows_amd64.zip |
# macOS Apple Silicon
curl -Lo nanosync.tar.gz https://github.com/nanosyncorg/nanosync-public/releases/download/v0.0.1/nanosync_v0.0.1_darwin_arm64.tar.gz
tar -xzf nanosync.tar.gz && sudo mv nanosync /usr/local/bin/
nanosync --version
# nanosync v0.0.1 (go1.24, darwin/arm64)
Prerequisites — Prepare your source database
Complete these steps before running anything. CDC will not work without them.
PostgreSQL — add to postgresql.conf and restart PostgreSQL:
wal_level = logical
max_replication_slots = 10
max_wal_senders = 10
Then create the replication user:
CREATE ROLE nanosync LOGIN REPLICATION PASSWORD 'secret';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO nanosync;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO nanosync;
Cloud SQL or AlloyDB? Set wal_level = logical via instance flags in the GCP console — postgresql.conf is managed by Google. Everything else is identical.
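Before moving on, you can sanity-check the PostgreSQL side from psql. These are standard PostgreSQL catalog queries, not Nanosync commands:

```sql
-- Should return "logical" after the restart
SHOW wal_level;

-- The replication role should exist with the REPLICATION attribute set
SELECT rolname, rolreplication FROM pg_roles WHERE rolname = 'nanosync';
```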
SQL Server — enable CDC on the database and each table you want to replicate:
USE mydb;
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name = N'orders',
@role_name = NULL;
-- Grant permissions
CREATE LOGIN nanosync WITH PASSWORD = 'secret';
CREATE USER nanosync FOR LOGIN nanosync;
EXEC sp_addrolemember 'db_datareader', 'nanosync';
GRANT SELECT ON SCHEMA::cdc TO nanosync;
Azure SQL or no CDC access? Use cdc_mode: tlog in your pipeline config — reads directly from the transaction log with only VIEW DATABASE STATE permission. No CDC setup required.
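Likewise, you can confirm CDC is active on the SQL Server side with standard catalog views before continuing:

```sql
-- 1 means CDC is enabled for the database
SELECT name, is_cdc_enabled FROM sys.databases WHERE name = 'mydb';

-- 1 means the table is tracked by CDC
SELECT name, is_tracked_by_cdc FROM sys.tables WHERE name = 'orders';
```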
Step 1 — See data flowing right now
No config file needed. Stream any table directly to your terminal:
# PostgreSQL
nanosync stream \
  --source "postgres://user:pass@localhost:5432/mydb" \
  --tables "public.orders"

# SQL Server
nanosync stream \
  --source "sqlserver://user:pass@host:1433?database=mydb" \
  --tables "dbo.orders" \
  --cdc-only
You’ll see the snapshot backfill, the switchover to live CDC, and then a stream of JSON events — all in one command:
streaming postgres://user:***@localhost:5432/mydb → stdout (tables: [public.orders])
pipeline "stream" started — press Ctrl-C to stop
INF snapshot started tables=1 workers=4 partitions=48
INF snapshot progress done=12/48 rows=1,200,000
INF snapshot progress done=48/48 rows=4,800,000 elapsed=2.1s
INF cdc streaming lsn=0/3A1B2C4D
{"_ns_op":"INSERT","_ns_table":"public.orders","id":9022,"status":"pending","amount":149.99}
{"_ns_op":"UPDATE","_ns_table":"public.orders","id":9021,"status":"shipped","amount":149.99}
{"_ns_op":"DELETE","_ns_table":"public.orders","id":8901}
Press Ctrl-C when you’re done. The rest of this guide sets up a persistent pipeline that streams into BigQuery.
Step 2 — Set up BigQuery
# Enable the API
gcloud services enable bigquery.googleapis.com --project=my-project
# Create the destination dataset
bq mk --dataset --location=US my-project:replication
# Create a service account and grant access
gcloud iam service-accounts create nanosync --project=my-project
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:nanosync@my-project.iam.gserviceaccount.com" \
--role="roles/bigquery.dataEditor"
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:nanosync@my-project.iam.gserviceaccount.com" \
--role="roles/bigquery.jobUser"
# Download the key
gcloud iam service-accounts keys create nanosync-bq-key.json \
--iam-account=nanosync@my-project.iam.gserviceaccount.com
On GKE, skip the key — Workload Identity works automatically. Omit credentials_file from the config below.
Step 3 — Create the config file
Create nanosync.yaml:
# PostgreSQL source
connections:
  - name: my-postgres
    type: postgres
    dsn: "postgres://nanosync:${env:PG_PASSWORD}@localhost:5432/mydb?sslmode=require"
  - name: my-bigquery
    type: bigquery
    properties:
      project_id: my-project
      dataset_id: replication
      credentials_file: /path/to/nanosync-bq-key.json

pipelines:
  - name: orders-to-bigquery
    source:
      connection: my-postgres
      tables:
        - public.orders
        - public.order_items
    sink:
      connection: my-bigquery
      properties:
        table_id: orders
        primary_keys: id

# SQL Server source
connections:
  - name: my-sqlserver
    type: sqlserver
    dsn: "sqlserver://nanosync:${env:SQL_PASSWORD}@host:1433?database=mydb"
  - name: my-bigquery
    type: bigquery
    properties:
      project_id: my-project
      dataset_id: replication
      credentials_file: /path/to/nanosync-bq-key.json

pipelines:
  - name: orders-to-bigquery
    source:
      connection: my-sqlserver
      tables:
        - dbo.orders
        - dbo.order_items
      properties:
        cdc_mode: cdc  # or "tlog" if CDC is not enabled on the source
    sink:
      connection: my-bigquery
      properties:
        table_id: orders
        primary_keys: id

Secrets never go in the config file. Use ${env:VAR_NAME} — values are injected from environment variables at startup.
Step 4 — Test your connections
Before starting the pipeline, verify both connections are reachable:
export PG_PASSWORD=my-secret # or SQL_PASSWORD for SQL Server
nanosync test connection my-postgres
# ✓ my-postgres connected in 12 ms
nanosync test connection my-bigquery
# ✓ my-bigquery connected in 34 ms
If either fails, the error tells you exactly what’s wrong:
✗ my-postgres: connection refused — dial tcp 127.0.0.1:5432: connect: connection refused
Fix the issue before continuing — don’t waste time debugging a running pipeline.
Step 5 — Start the server
nanosync start dev --config nanosync.yaml
nanosync serving — Ctrl-C to stop
REST API: http://localhost:7600/v1
Web UI: http://localhost:7600/app
The pipeline starts automatically. The initial snapshot begins immediately — Nanosync reads all rows from the configured tables in parallel, then switches to streaming live changes.
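For a PostgreSQL source, you can also watch the checkpoint machinery from the database side: the pipeline streams from a logical replication slot, visible in a standard catalog view. (The slot name is chosen by Nanosync, so don't filter on a specific name.)

```sql
-- Run on the source database: one row per logical replication slot,
-- with the last position the consumer has confirmed
SELECT slot_name, plugin, active, confirmed_flush_lsn
FROM pg_replication_slots;
```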
Step 6 — Watch it work
nanosync monitor
A full-screen live dashboard opens in your terminal:
NAME SOURCE TARGET STATUS LAG EV/S
orders-to-bigquery my-postgres bigquery ● snapshotting — —
[████████░░░░] 67% 2,400,000 rows
orders-to-bigquery my-postgres bigquery ● live CDC 12ms 4,231
Press Enter to drill into table-level breakdown. Press w for worker fleet view. Press q to quit.
You can also check metrics directly:
nanosync metrics pipeline orders-to-bigquery
NAME EVENTS/S LAG LAST CHECKPOINT
orders-to-bigquery 4,231 12ms 2026-03-21 14:23:01
Open the web UI at http://localhost:7600/app for a full dashboard with run history and throughput graphs.
Step 7 — Kill it and restart
This is the part worth seeing. Press Ctrl-C to stop the server, then restart it:
nanosync start dev --config nanosync.yaml
INF pipeline resuming name=orders-to-bigquery lsn=0/3A1B2C4D
INF cdc streaming lsn=0/3A1B2C4D
No re-snapshot. No manual recovery. The pipeline resumed from the last committed WAL offset — exactly where it left off. That’s the checkpoint.
Next steps
- PostgreSQL source setup — replication slot details, Cloud SQL/AlloyDB specifics
- SQL Server source setup — CDC vs tlog mode, Azure SQL
- BigQuery sink setup — IAM, partitioning, DELETE handling
- Configuration reference — all YAML options
- CLI reference — full command listing