TimescaleDB
Store and query time-series data with hypertables, compression, and continuous aggregates.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 2 · 697 · 0 current installs · 0 all-time installs
by Iván (@ivangdavila)
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description match the SKILL.md content. Declares psql as an available binary which is appropriate for running the provided SQL. Minor note: the doc mentions the third-party tool 'timescaledb-parallel-copy' but the skill does not declare that binary as required (informational only).
Instruction Scope
SKILL.md contains SQL commands and operational guidance for hypertables, compression, retention, indexes, etc. It does not instruct reading unrelated files, environment variables, or sending data to external endpoints.
Install Mechanism
No install spec and no code files — instruction-only skill (low risk). Nothing is downloaded or written to disk by the skill package itself.
Credentials
The skill declares no environment variables or credentials. That is proportional: the instructions assume an existing PostgreSQL/TimescaleDB instance and psql client, but do not request secrets themselves.
Persistence & Privilege
The `always` flag is false, and the skill does not request persistent system presence or modify other skills/config. Normal autonomy settings remain (no extra privilege requested).
Assessment
This skill is documentation-only and appears coherent. Before using it: ensure you have a running PostgreSQL instance with the TimescaleDB extension and that the psql client is available on PATH; the skill will not install TimescaleDB for you. Review any SQL before executing, since some commands (converting large tables, compression, retention) are irreversible or expensive. If you plan to use the mentioned 'timescaledb-parallel-copy' tool, obtain it from a trusted source; the skill does not declare or install it. Because the skill does not request credentials, you (or the agent) will need to provide DB connection details at runtime; only supply credentials to trusted agents/environments.
Like a lobster shell, security has layers: review code before you run it.
Current version: v1.0.0
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
Runtime: Clawdis
OS: Linux · macOS · Windows
Binaries: psql
SKILL.md
Hypertables
- Convert a table to a hypertable: `SELECT create_hypertable('metrics', 'time')`
- Must have a time column (TIMESTAMPTZ recommended)—the partition key for chunks
- Call BEFORE inserting data—converting large tables is expensive
- Can't undo easily—plan the schema before converting
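Putting the steps above together, a minimal sketch (the `value` column and types are illustrative; the doc only names `metrics`, `time`, and `device_id`):

```sql
-- Create the regular table first; the time column drives chunk partitioning.
CREATE TABLE metrics (
    time      TIMESTAMPTZ NOT NULL,
    device_id INTEGER     NOT NULL,
    value     DOUBLE PRECISION
);

-- Convert it to a hypertable BEFORE loading data.
SELECT create_hypertable('metrics', 'time');
```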
Chunk Interval
- Default 7 days per chunk—tune based on data volume
- `SELECT set_chunk_time_interval('metrics', INTERVAL '1 day')` for high-volume data
- Chunks should fit in ~25% of memory—too small = overhead, too large = slow queries
- Check chunk sizes: `SELECT * FROM chunks_detailed_size('metrics')`
time_bucket
- `time_bucket('1 hour', time)` groups timestamps—like date_trunc but with arbitrary intervals
- Use in GROUP BY for aggregation: `GROUP BY time_bucket('5 minutes', time)`
- Origin parameter for an offset: `time_bucket('1 day', time, '2024-01-01'::timestamptz)`
- Beats date_trunc for non-standard intervals—15 min, 4 h, etc.
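A typical aggregation over the assumed `metrics(time, device_id, value)` schema, as a sketch:

```sql
-- Per-device averages in 15-minute buckets over the last day.
SELECT time_bucket('15 minutes', time) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM metrics
WHERE time > now() - INTERVAL '1 day'
GROUP BY bucket, device_id
ORDER BY bucket;
```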
Continuous Aggregates
- Materialized views that auto-refresh—pre-compute expensive aggregations
- `CREATE MATERIALIZED VIEW hourly_stats WITH (timescaledb.continuous) AS SELECT ...`
- Add a refresh policy: `SELECT add_continuous_aggregate_policy('hourly_stats', ...)`
- Query the aggregate view instead of the raw hypertable—orders of magnitude faster
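Filling in the elided arguments, a sketch of a full continuous aggregate with a refresh policy (aggregated columns and intervals are illustrative choices, not prescribed by the skill):

```sql
CREATE MATERIALIZED VIEW hourly_stats
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(value) AS avg_value,
       max(value) AS max_value
FROM metrics
GROUP BY bucket, device_id
WITH NO DATA;

-- Refresh the window from 3 hours ago up to 1 hour ago, every hour.
SELECT add_continuous_aggregate_policy('hourly_stats',
    start_offset      => INTERVAL '3 hours',
    end_offset        => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
```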
Real-Time Aggregates
- Continuous aggregates can include recent data automatically—no stale reads
- `WITH (timescaledb.continuous, timescaledb.materialized_only = false)` for real-time
- Combines materialized historical data with live recent data—transparent to queries
- Small performance cost for real-time—disable it if batch-only is acceptable
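The real-time variant only changes the `WITH` options; a minimal sketch (view name and aggregate are illustrative):

```sql
-- Queries against this view merge materialized history with live recent rows.
CREATE MATERIALIZED VIEW hourly_stats_rt
WITH (timescaledb.continuous, timescaledb.materialized_only = false) AS
SELECT time_bucket('1 hour', time) AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket;
```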
Compression
- Compress old chunks to save 90%+ storage: `ALTER TABLE metrics SET (timescaledb.compress)`
- Add a compression policy: `SELECT add_compression_policy('metrics', INTERVAL '7 days')`
- Compressed chunks are read-only—you can't update/delete individual rows
- Decompress for modifications: `SELECT decompress_chunk('chunk_name')`
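A common setup combines the two commands above; the `segmentby`/`orderby` settings are typical choices for a per-device schema, not mandated by the skill:

```sql
-- Enable compression, grouping compressed data by device and ordering by time.
ALTER TABLE metrics SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'time DESC'
);

-- Compress chunks once they are older than 7 days.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
```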
Retention
- Auto-delete old data: `SELECT add_retention_policy('metrics', INTERVAL '90 days')`
- Drops entire chunks—efficient, no row-by-row deletes
- Retention runs on a scheduler—data persists slightly past the interval
- Combine with compression: compress at 7 days, drop at 90 days
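The compress-at-7d, drop-at-90d lifecycle suggested above is just two policies on the same hypertable:

```sql
-- Tiered lifecycle: recent data raw, older data compressed, oldest dropped.
SELECT add_compression_policy('metrics', INTERVAL '7 days');
SELECT add_retention_policy('metrics', INTERVAL '90 days');
```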
Indexing
- The time column is auto-indexed in a hypertable—don't add a redundant index
- Add indexes on filter columns: `CREATE INDEX ON metrics (device_id, time DESC)`
- Composite indexes with time last—enables chunk exclusion
- Skip indexes on rarely-filtered columns—each index slows writes
Insert Performance
- Batch inserts are critical—single-row inserts are slow
- Use COPY or multi-value INSERT: `INSERT INTO metrics VALUES (...), (...), ...`
- Parallel COPY with the `timescaledb-parallel-copy` tool—saturates I/O
- Out-of-order inserts work but are slower—prefer time-ordered writes
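A sketch of both batching approaches, assuming the illustrative `metrics(time, device_id, value)` schema (the CSV filename is hypothetical):

```sql
-- Multi-value INSERT: many rows, one statement, one round trip.
INSERT INTO metrics (time, device_id, value) VALUES
    ('2024-01-01 00:00:00+00', 1, 21.5),
    ('2024-01-01 00:00:10+00', 1, 21.7),
    ('2024-01-01 00:00:00+00', 2, 19.8);

-- From psql, \copy streams a file through the client connection:
-- \copy metrics FROM 'metrics.csv' WITH (FORMAT csv, HEADER true)
```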
Query Patterns
- Always include a time range in WHERE—enables chunk exclusion
- `WHERE time > now() - INTERVAL '1 day'` skips old chunks entirely
- ORDER BY time DESC with LIMIT for "latest N"—index scan, fast
- Avoid SELECT * on wide tables—fetch only needed columns
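The patterns above combine into the canonical "latest N for one device" query (column names assume the illustrative schema):

```sql
-- Time range enables chunk exclusion; ORDER BY time DESC + LIMIT
-- becomes a fast index scan; explicit columns avoid wide-row fetches.
SELECT time, value
FROM metrics
WHERE device_id = 42
  AND time > now() - INTERVAL '1 day'
ORDER BY time DESC
LIMIT 100;
```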
Distributed Hypertables
- Multi-node for horizontal scale—data sharded across nodes
- Create access node + data nodes—access node coordinates queries
- More operational complexity—start single-node, distribute when needed
- Not needed for most workloads—single node handles millions of rows/sec
