## Step 3: Test Real-Time Sync

With the connector running, let's verify that data flows from PostgreSQL to ClickHouse in real-time.

### Create Test Data in PostgreSQL

**PostgreSQL**

```sql
CREATE TABLE test_sync (
    id SERIAL PRIMARY KEY,
    name TEXT,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO test_sync (name) VALUES ('Hello from PostgreSQL!');
```
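If the row never shows up downstream, a common culprit is that logical decoding isn't enabled on the PostgreSQL side. Assuming the connector uses logical replication (typical for CDC pipelines), you can check the prerequisite from PostgreSQL:

```sql
-- In PostgreSQL: CDC via logical replication requires wal_level = 'logical'
SHOW wal_level;
```

If this returns something other than `logical`, set `wal_level = logical` in `postgresql.conf` (or the equivalent Docker environment setting) and restart PostgreSQL before retrying.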

### Verify in ClickHouse

Wait ~5 seconds for the change to propagate, then query the target:

**ClickHouse**

```sql
SELECT * FROM test_sync;
```
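Instead of sleeping a fixed interval, you can poll until the row lands. Here is a minimal Python sketch, assuming ClickHouse's default HTTP interface on `localhost:8123` (an assumption about your setup); the `fetch` hook is injectable so the helper can be exercised without a live server:

```python
import time
import urllib.request

def wait_for_rows(query, min_rows=1, timeout=30.0,
                  url="http://localhost:8123/", fetch=None):
    """Poll ClickHouse until `query` returns at least `min_rows` result
    lines, or give up after `timeout` seconds. Returns True on success."""
    if fetch is None:
        def fetch(q):
            # ClickHouse's HTTP interface accepts the query as the POST body
            req = urllib.request.Request(url, data=q.encode())
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            body = fetch(query)
        except OSError:
            body = ""  # server not reachable yet; retry
        if len([ln for ln in body.splitlines() if ln.strip()]) >= min_rows:
            return True
        time.sleep(0.5)
    return False
```

For this tutorial you would call `wait_for_rows("SELECT * FROM test_sync")` after the INSERT and proceed once it returns `True`.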

Expected output:

| id | name | created_at |
|----|------|------------|
| 1 | Hello from PostgreSQL! | 2024-06-15 10:30:00 |

The row you inserted in PostgreSQL should now appear in ClickHouse. The table was auto-created by the connector (`AUTO_CREATE_TABLES=true`).
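To see how the connector mapped the PostgreSQL types (`SERIAL`, `TEXT`, `TIMESTAMP`) onto ClickHouse, you can inspect the generated table definition; the exact column types and table engine it chose depend on the connector's configuration:

```sql
-- In ClickHouse: show the definition the connector generated,
-- including column types and the table engine
SHOW CREATE TABLE test_sync;
```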

### Test an UPDATE

**PostgreSQL**

```sql
UPDATE test_sync SET name = 'Updated!' WHERE id = 1;
```

Wait ~5 seconds, then query again:

**ClickHouse**

```sql
SELECT * FROM test_sync;
```

Expected output:

| id | name | created_at |
|----|------|------------|
| 1 | Updated! | 2024-06-15 10:30:00 |

🎉 Real-time sync is working! Your data flows from PostgreSQL to ClickHouse in seconds.
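One thing to be aware of: CDC sinks commonly materialize updates as new row versions in a `ReplacingMergeTree` table that is deduplicated by background merges, so a query can briefly return both the old and the new version of a row. If the connector chose such an engine (an assumption here — verify with `SHOW CREATE TABLE test_sync`), `FINAL` forces deduplication at query time:

```sql
-- Collapses older row versions at read time (ReplacingMergeTree engines only)
SELECT * FROM test_sync FINAL;
```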

## Next Steps

- **Configure your actual tables** — update `TABLE_INCLUDE_LIST` in `docker-compose.yml` to include the tables you want to sync (e.g., `public.orders,public.customers,public.products`)
- **Build pivot tables over ClickHouse data** — use the ReportBurster pivot table feature to create interactive analytics dashboards directly from your synced ClickHouse data
- **Transform into a proper OLAP star schema** — use dbt Core to reshape the mirrored 1:1 tables into denormalized fact and dimension tables for maximum analytical query performance
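For the first item, the relevant settings might look like this in `docker-compose.yml` (a hypothetical excerpt — the service name and surrounding keys will differ in your file):

```yaml
services:
  connector:            # hypothetical service name
    environment:
      # Comma-separated, schema-qualified tables to sync
      TABLE_INCLUDE_LIST: "public.orders,public.customers,public.products"
      # Let the connector create matching tables in ClickHouse
      AUTO_CREATE_TABLES: "true"
```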