Performance & Real-Time
Optimize dashboard performance with efficient data loading, Redis caching, and real-time data patterns.
Table of Contents
- Think Architecture, Not Just Speed
- Multi-Component Reports
- Efficient Data Loading with componentId
- Pulling Data from Multiple Databases
- Dashboards: Embedding Multiple Components
- Cache Expensive Queries with Redis
- Real-Time Data with Redis Pub/Sub
Think Architecture, Not Just Speed
DuckDB handles millions of rows in-process. ClickHouse scales to billions. The analytical engines are fast — but fast engines don't excuse lazy architecture. When you're building dashboards over huge datasets that need to stay current, the bottleneck isn't the query engine. It's how data flows from database to cache to browser. Being intentional about that pipeline — what gets queried, when, and how often — is what separates a sluggish dashboard from a great one.
Multi-Component Reports
A standard report produces one visualization — one data table, one chart, or one pivot table. With multi-component reports, a single report can contain multiple visualizations, each with its own data. One report, many components — ideal for dashboards.
Adding Multiple Data Tables
A single data table configuration looks like this:
tabulator {
    layout "fitColumns"
    columns {
        column field: "region", title: "Region"
        column field: "revenue", title: "Revenue", formatter: "money"
    }
}

To add more data tables to the same report, give each one a unique name:
tabulator('salesGrid') {
    layout "fitColumns"
    columns {
        column field: "region", title: "Region"
        column field: "revenue", title: "Revenue", formatter: "money"
    }
}

tabulator('customersGrid') {
    height 400
    columns {
        column field: "name", title: "Customer"
        column field: "orders", title: "Orders", hozAlign: "right"
    }
}

The only difference: tabulator { becomes tabulator('salesGrid') {. The name inside the parentheses — 'salesGrid', 'customersGrid' — is how you'll reference each component later.
Adding Multiple Charts
Same idea. A single chart:
chart {
    type "bar"
    labelField "month"
    datasets {
        dataset field: "revenue", label: "Revenue", color: "#3b82f6"
    }
}

Multiple charts — give each a name:
chart('revenueChart') {
    type "bar"
    labelField "month"
    datasets {
        dataset field: "revenue", label: "Revenue", color: "#3b82f6"
    }
}

chart('trendChart') {
    type "line"
    labelField "quarter"
    datasets {
        dataset field: "growth", label: "Growth %", color: "#10b981"
    }
}

Adding Multiple Pivot Tables
pivotTable('regionPivot') {
    rows "region"
    cols "quarter"
    vals "revenue"
    aggregatorName "Sum"
}

pivotTable('productPivot') {
    rows "product"
    cols "month"
    vals "quantity"
    aggregatorName "Sum"
}

You can mix and match freely — a single report can have two data tables, three charts, and a pivot table.
Providing Data for Each Component
In the data script, use ctx.reportData('name', rows) to send data to a specific named component. The name must match the one you used above.
def sales = ctx.dbSql.rows("SELECT region, month, revenue FROM monthly_sales")
ctx.reportData('salesGrid', sales)
ctx.reportData('revenueChart', sales) // same data, different visualization
def customers = ctx.dbSql.rows("SELECT name, orders FROM top_customers")
ctx.reportData('customersGrid', customers)

Efficient Data Loading with componentId
When a dashboard renders, each web component makes its own HTTP request to fetch data — passing ?componentId=<id> in the URL. Without any guard in your script, every request executes every data-fetching block — meaning N components × M queries hit your database on every page load.
The fix is a one-liner at the top of your data script:
def componentId = ctx.variables?.get('componentId')

Then wrap each data block so it only runs when requested:
def componentId = ctx.variables?.get('componentId')

if (!componentId || componentId == 'salesGrid') {
    def sales = ctx.dbSql.rows("SELECT region, month, revenue FROM monthly_sales")
    ctx.reportData('salesGrid', sales)
}
if (!componentId || componentId == 'revenueChart') {
    def revenue = ctx.dbSql.rows("SELECT quarter, total FROM quarterly_revenue")
    ctx.reportData('revenueChart', revenue)
}
if (!componentId || componentId == 'customersGrid') {
    def customers = ctx.dbSql.rows("SELECT name, orders FROM top_customers")
    ctx.reportData('customersGrid', customers)
}

When a specific component requests its data, only its block executes.
Result: A dashboard with 6 components makes 6 small, fast requests instead of 6 requests that each run all 6 queries.
Pulling Data from Multiple Databases
A dashboard often needs data from more than one database. Use ctx.getConnection('code') to query any configured database connection:
// Default connection (configured in the report settings)
def sales = ctx.dbSql.rows("SELECT * FROM monthly_sales")
ctx.reportData('salesGrid', sales)
// Additional connection by its code
def duckDb = ctx.getConnection('db-analytics-duckdb')
def cubeData = duckDb.rows("SELECT region, quarter, revenue FROM revenue_cube")
ctx.reportData('regionPivot', cubeData)

Connection codes are shown in the Code column of the Connections list, with a copy-to-clipboard button.
Dashboards: Embedding Multiple Components
When embedding, use the component-id attribute to target a specific named component. All components share the same report-code:
<rb-tabulator report-code="cfo-dashboard" component-id="salesGrid"
              api-base-url="http://localhost:9090/api/jobman/reporting">
</rb-tabulator>

<rb-chart report-code="cfo-dashboard" component-id="revenueChart"
          api-base-url="http://localhost:9090/api/jobman/reporting">
</rb-chart>

<rb-tabulator report-code="cfo-dashboard" component-id="customersGrid"
              api-base-url="http://localhost:9090/api/jobman/reporting">
</rb-tabulator>

<rb-pivot-table report-code="cfo-dashboard" component-id="regionPivot"
                api-base-url="http://localhost:9090/api/jobman/reporting">
</rb-pivot-table>

Each component fetches only its own configuration and data from the backend.
Cache Expensive Queries with Redis
The componentId guard stops each request from running every query. But every user still hits the database on every page load. When your dashboard aggregates millions of rows — revenue rollups, customer counts, geographic breakdowns — those queries take real time, and the results rarely change between page loads.
Redis eliminates this. Cache the query result once, serve it from memory for every subsequent request, expire it after a TTL you control.
Before — every request hits the database:
def componentId = ctx.variables?.get('componentId')

if (!componentId || componentId == 'revenueChart') {
    def revenue = ctx.dbSql.rows("SELECT quarter, total FROM quarterly_revenue")
    ctx.reportData('revenueChart', revenue)
}

After — first request hits the database, subsequent requests read from Redis:
import io.lettuce.core.RedisClient
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

def componentId = ctx.variables?.get('componentId')

// Connect to Redis (started via Starter Packs)
def redisClient = RedisClient.create("redis://:redis@localhost:6379")
def connection = redisClient.connect()
def redis = connection.sync()

try {
    if (!componentId || componentId == 'revenueChart') {
        def cacheKey = "dashboard:cfo:revenueChart"
        def cached = redis.get(cacheKey)
        def revenue
        if (cached) {
            // Cache hit — parse JSON back to list of maps
            revenue = new JsonSlurper().parseText(cached)
        } else {
            // Cache miss — query the database, store result for 5 minutes
            revenue = ctx.dbSql.rows("SELECT quarter, total FROM quarterly_revenue")
            redis.setex(cacheKey, 300, JsonOutput.toJson(revenue))
        }
        ctx.reportData('revenueChart', revenue)
    }

    if (!componentId || componentId == 'customersGrid') {
        def cacheKey = "dashboard:cfo:customersGrid"
        def cached = redis.get(cacheKey)
        def customers
        if (cached) {
            customers = new JsonSlurper().parseText(cached)
        } else {
            customers = ctx.dbSql.rows("SELECT name, orders FROM top_customers")
            redis.setex(cacheKey, 300, JsonOutput.toJson(customers))
        }
        ctx.reportData('customersGrid', customers)
    }
} finally {
    connection.close()
    redisClient.shutdown()
}

What happens:
- redis.get(cacheKey) — returns the cached JSON string, or null on miss
- redis.setex(cacheKey, 300, json) — stores the result with a 300-second (5-minute) TTL
- After 5 minutes, Redis expires the key automatically and the next request re-queries the database
Choose your TTL based on how fresh the data needs to be. A CFO dashboard refreshed hourly? 3600. A live sales board? 30. Static reference data? 86400 (one day).
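The get/parse/query/setex dance repeats for every cached component. If your dashboard caches more than a couple of queries, a small helper closure keeps the script readable. This is a sketch, not a built-in API — it assumes the same `redis`, `ctx`, and JSON imports from the example above, and the `cachedRows` name is purely illustrative:

```groovy
import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// Hypothetical helper: return cached rows for key, or run the query
// closure, cache its result for ttlSeconds, and return the fresh rows.
def cachedRows = { String key, int ttlSeconds, Closure query ->
    def cached = redis.get(key)
    if (cached) {
        return new JsonSlurper().parseText(cached)   // cache hit
    }
    def rows = query()                               // cache miss — run the SQL
    redis.setex(key, ttlSeconds, JsonOutput.toJson(rows))
    return rows
}

// Each component block shrinks to one call, with its own TTL:
if (!componentId || componentId == 'revenueChart') {
    def revenue = cachedRows('dashboard:cfo:revenueChart', 300) {
        ctx.dbSql.rows("SELECT quarter, total FROM quarterly_revenue")
    }
    ctx.reportData('revenueChart', revenue)
}
```

Per-component TTLs fall out naturally: pass 30 for the live sales board, 86400 for reference data.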
To force-refresh a specific cache entry (e.g., after an ETL run), delete the key:
redis.del("dashboard:cfo:revenueChart")

Start Redis from the Starter Packs page — it runs on localhost:6379 with password redis by default. No other setup required.
Connection overhead? Each script creates a Redis client, uses it, and shuts it down — about 15–25ms of overhead. When you're caching SQL queries that take 500ms–5s, that's negligible. You're still saving 95%+ of the execution time.
Real-Time Data with Redis Pub/Sub
Redis isn't just a cache — it also supports Pub/Sub messaging. Publish a message when data changes, subscribe from your application, and push updates to dashboards without polling.
Example use case: An ETL job finishes loading new sales data into the warehouse. It publishes a message to a Redis channel. A dashboard listener picks it up, invalidates the relevant cache keys, and triggers a refresh. Users see live data without hitting reload — no polling, no stale numbers.
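Both halves of that flow can be sketched with Lettuce, the same client used for caching above. The channel name, message payload, and cache keys here are hypothetical — adapt them to your pipeline — and a real listener would run inside a long-lived service, not a per-request script:

```groovy
import io.lettuce.core.RedisClient
import io.lettuce.core.pubsub.RedisPubSubAdapter

def client = RedisClient.create("redis://:redis@localhost:6379")

// --- Publisher side: the last step of the ETL job ---
def pub = client.connect()
pub.sync().publish("etl:sales-loaded", "monthly_sales")   // channel, payload
pub.close()

// --- Subscriber side: a long-running listener in your application ---
def cache = client.connect().sync()
def sub = client.connectPubSub()
sub.addListener(new RedisPubSubAdapter<String, String>() {
    @Override
    void message(String channel, String message) {
        // New data landed — drop the affected cache keys so the next
        // dashboard request re-queries, then notify connected clients
        // (e.g. over a WebSocket) to re-fetch.
        cache.del("dashboard:cfo:revenueChart", "dashboard:cfo:salesGrid")
    }
})
sub.sync().subscribe("etl:sales-loaded")
```

Note that Pub/Sub is fire-and-forget: a subscriber that is down when the message is published never sees it, which is fine here because the cache TTL still acts as a fallback refresh.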
Before adding real-time infrastructure, ask yourself honestly: do your users genuinely need sub-second updates, or is a 5-minute cache TTL good enough? Most dashboards are refreshed a few times a day — not every second.
If you're sure you need it, Hephaestus can help you design and build a Redis Pub/Sub pipeline tailored to your architecture.