Skill flagged: suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

timeplus-sql-guide

v1.0.4

Write and execute Timeplus streaming SQL for real-time analytics. Use this skill when the user wants to create streams, run streaming queries, build material...

by Gang Tao (@gangtao)
Security Scan

- VirusTotal: Suspicious
- OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description: write and execute Timeplus streaming SQL. Declared requirements: curl and TIMEPLUS_HOST/TIMEPLUS_USER/TIMEPLUS_PASSWORD. The instructions use curl against port 8123/3218 and these env vars — proportionate and expected for the described functionality.
Instruction Scope
SKILL.md and reference docs instruct the agent to run SQL via curl using the declared env vars and to create streams, UDFs, materialized views, ingest, sinks, etc. That stays within the Timeplus domain. However, several examples in the references (a) use an undeclared TIMEPLUS_API_KEY header for the management API, and (b) include hard-coded credentials (e.g., example ClickHouse/MySQL/S3 credentials and a Slack webhook URL), which contradicts the guide's 'never hardcode credentials' guidance and could encourage copy-paste of secrets. The skill's runtime instructions themselves do not tell the agent to read unrelated system files or extra env vars.
Install Mechanism
Instruction-only skill; no install spec and no code executed locally by the skill bundle. All remote actions are performed by the agent via curl calls to the Timeplus server — lowest install risk from the skill package itself.
Credentials
Declared env vars (TIMEPLUS_HOST, TIMEPLUS_USER, TIMEPLUS_PASSWORD) are appropriate and minimal for connecting to a Timeplus instance. The references include other credential-like variables (TIMEPLUS_API_KEY) and example AWS/CQ/DB passwords inside SQL snippets; those are not declared as required by the skill but are present in examples — this is a potential source of confusion and an insecure pattern if users copy-paste.
Persistence & Privilege
always: false, no install, and no modifications to other skills or global agent config. Autonomous invocation is allowed by default (normal), but the skill does not request elevated or persistent platform privileges.
Assessment
This is largely a coherent, instruction-only guide for interacting with a Timeplus server over HTTP; it requires only curl plus the Timeplus host/user/password, which is appropriate. Before installing or using it:

- Only provide credentials for a Timeplus instance you trust; the skill will send whatever SQL you run to that host using the provided credentials.
- Watch out for copy-paste pitfalls: several example snippets contain hard-coded credentials (DB passwords, AWS keys, Slack webhook URLs) and an example referencing TIMEPLUS_API_KEY, which the skill does not declare. Do not copy those secrets into your environment or production SQL.
- Prefer least-privileged credentials (a user with only the permissions needed) rather than a highly privileged account.
- Be careful when creating Python Table Functions, remote UDFs, or alerts that post to webhooks: those can make outbound network requests and, if misused, could leak data to external endpoints. Review any generated UDF code and webhook URLs before running.
- If you need the management API example that uses TIMEPLUS_API_KEY, add that env var intentionally and treat it as a separate credential.

If you want, I can: (a) extract or rewrite the examples to remove hard-coded secrets and clearly mark where user-supplied secrets belong, or (b) generate safe templates that use placeholder env vars only.


Runtime requirements

- Bins: curl
- Env: TIMEPLUS_HOST, TIMEPLUS_USER, TIMEPLUS_PASSWORD
- Primary env: TIMEPLUS_PASSWORD
- Latest version: vk970jkwew6e51f96tebw099rz983q2kq
- 377 downloads · 0 stars · 5 versions
- Updated 8h ago · v1.0.4 · MIT-0

Timeplus Streaming SQL Guide

You are an expert in Timeplus — a high-performance real-time streaming analytics platform built on a streaming SQL engine (Proton). You write correct, efficient Timeplus SQL and execute it via the ClickHouse-compatible HTTP API.

Quick Reference

| Task | Reference |
|------|-----------|
| Get data in | references/INGESTION.md |
| Transform data | references/TRANSFORMATIONS.md |
| Send data out | references/SINKS.md |
| Full SQL syntax, types, functions | references/SQL_REFERENCE.md |
| Random streams (simulated data) | references/RANDOM_STREAMS.md |
| Python & JavaScript UDFs | references/UDFS.md |
| Python Table Functions | references/Python_TABLE_FUNCTION.md |

Executing SQL

Environment Setup

Always use these environment variables — never hardcode credentials:

- TIMEPLUS_HOST       # hostname or IP
- TIMEPLUS_USER       # username
- TIMEPLUS_PASSWORD   # password (can be empty)

Running SQL via curl (port 8123)

Port 8123 is the ClickHouse-compatible HTTP interface. Use it for all DDL and historical queries (CREATE, DROP, INSERT, SELECT from table(...)). Always pass the username and password with curl's -u option.

Note: if curl returns nothing, that is not an error; it means the query returned no records. Check the HTTP status code to confirm success (200 OK) or failure (4xx/5xx).
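The status check can be scripted. A minimal sketch follows; the helper name check_status and the /tmp/out path are illustrative choices, not part of the skill:

```shell
# check_status: succeed only for 2xx HTTP status codes (pure shell, no network).
check_status() {
  case "$1" in
    2??) return 0 ;;
    *)   return 1 ;;
  esac
}

# Against a live server (TIMEPLUS_HOST etc. must be set), capture the status
# with curl's -w while writing the body elsewhere:
# status=$(echo "SELECT 1" | curl -s -o /tmp/out -w "%{http_code}" \
#   "http://${TIMEPLUS_HOST}:8123/" \
#   -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
#   --data-binary @-)
# check_status "$status" || { echo "Query failed (HTTP $status)"; cat /tmp/out; }
```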

# Standard pattern — pipe SQL into curl
echo "YOUR SQL HERE" | curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Health check:

curl "http://${TIMEPLUS_HOST}:8123/"
# Returns: Ok.

DDL example — create a stream:

echo "CREATE STREAM IF NOT EXISTS sensor_data (
  device_id string,
  temperature float32,
  ts datetime64(3, 'UTC') DEFAULT now64(3, 'UTC')
) SETTINGS logstore_retention_ms=86400000" | \
curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Historical query with JSON output:

echo "SELECT * FROM table(sensor_data) LIMIT 10" | \
curl "http://${TIMEPLUS_HOST}:8123/?default_format=JSONEachRow" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Insert data:

echo "INSERT INTO sensor_data (device_id, temperature) VALUES ('dev-1', 23.5), ('dev-2', 18.2)" | \
curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Streaming Ingest via REST API (port 3218)

For pushing event batches into a stream:

curl -s -X POST "http://${TIMEPLUS_HOST}:3218/proton/v1/ingest/streams/sensor_data" \
  -H "Content-Type: application/json" \
  -d '{
    "columns": ["device_id", "temperature"],
    "data": [
      ["dev-1", 23.5],
      ["dev-2", 18.2],
      ["dev-3", 31.0]
    ]
  }'

Output Formats

Append ?default_format=<format> to the URL:

| Format | Use Case |
|--------|----------|
| TabSeparated | Default, human-readable |
| JSONEachRow | One JSON object per line |
| JSONCompact | Compact JSON array |
| CSV | Comma-separated |
| Vertical | Column-per-line, for inspection |
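As a sketch, the format parameter is just appended to the query URL. The URL construction below is pure string handling; the localhost fallback is an assumption added only to keep the snippet self-contained:

```shell
# Build a query URL with an explicit output format.
FORMAT="CSV"
URL="http://${TIMEPLUS_HOST:-localhost}:8123/?default_format=${FORMAT}"

# Against a live server:
# echo "SELECT * FROM table(sensor_data) LIMIT 5" | \
#   curl "$URL" -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" --data-binary @-
```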

Core Concepts

Streaming vs Historical Queries

-- STREAMING: Continuous, never ends. Default behavior.
SELECT device_id, temperature FROM sensor_data;

-- HISTORICAL: Bounded, returns immediately. Use table().
SELECT device_id, temperature FROM table(sensor_data) LIMIT 100;

-- HISTORICAL + FUTURE: All past events + all future events
SELECT * FROM sensor_data WHERE _tp_time >= earliest_timestamp();

The _tp_time Column

Every stream has a built-in _tp_time datetime64(3, 'UTC') event-time column. It defaults to ingestion time. You can set a custom event-time column via SETTINGS event_time_column='your_column' when creating the stream.
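A sketch of setting a custom event-time column at creation; the clicks stream and its clicked_at column are hypothetical, and the curl step is commented out because it needs a live server:

```shell
# Hypothetical "clicks" stream whose event time comes from its own
# clicked_at column instead of ingestion time.
SQL="CREATE STREAM IF NOT EXISTS clicks (
  user_id string,
  clicked_at datetime64(3, 'UTC')
) SETTINGS event_time_column='clicked_at'"

# Run with the standard pattern:
# echo "$SQL" | curl "http://${TIMEPLUS_HOST}:8123/" \
#   -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
#   --data-binary @-
```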

Stream Modes

| Mode | Created With | Behavior |
|------|--------------|----------|
| append | CREATE STREAM (default) | Immutable log, new rows only |
| versioned_kv | + SETTINGS mode='versioned_kv' | Latest value per primary key |
| changelog_kv | + SETTINGS mode='changelog_kv' | Insert/Update/Delete tracking |
| mutable | CREATE MUTABLE STREAM | Row-level UPDATE/DELETE (Enterprise) |
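For instance, a versioned_kv stream keyed by device_id could be declared as below. The device_state stream and its columns are hypothetical, and the PRIMARY KEY placement is sketched from Proton's syntax; verify against references/SQL_REFERENCE.md before relying on it:

```shell
# Hypothetical "device_state" stream in versioned_kv mode: queries see
# the latest row per device_id.
SQL="CREATE STREAM IF NOT EXISTS device_state (
  device_id string,
  status string,
  firmware string
) PRIMARY KEY (device_id)
SETTINGS mode='versioned_kv'"

# echo "$SQL" | curl "http://${TIMEPLUS_HOST}:8123/" \
#   -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
#   --data-binary @-
```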

Common Patterns

Pattern 1: Create stream → insert → query

# 1. Create stream
echo "CREATE STREAM IF NOT EXISTS orders (
  order_id string,
  product string,
  amount float32,
  region string
)" | curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

# 2. Insert data
echo "INSERT INTO orders VALUES ('o-1','Widget',19.99,'US'), ('o-2','Gadget',49.99,'EU')" | \
  curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

# 3. Query historical data
echo "SELECT region, sum(amount) FROM table(orders) GROUP BY region" | \
  curl "http://${TIMEPLUS_HOST}:8123/?default_format=JSONEachRow" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Pattern 2: Window aggregation (streaming)

echo "SELECT window_start, region, sum(amount) AS revenue
FROM tumble(orders, 1m)
GROUP BY window_start, region
EMIT AFTER WATERMARK AND DELAY 5s" | \
  curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Pattern 3: Materialized view pipeline

echo "CREATE MATERIALIZED VIEW IF NOT EXISTS mv_revenue_by_region
INTO revenue_by_region AS
SELECT window_start, region, sum(amount) AS total
FROM tumble(orders, 5m)
GROUP BY window_start, region" | \
  curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Pattern 4: Random stream for testing

echo "CREATE RANDOM STREAM IF NOT EXISTS mock_sensors (
  device_id string DEFAULT 'device-' || to_string(rand() % 10),
  temperature float32 DEFAULT 20 + (rand() % 30),
  status string DEFAULT ['ok','warn','error'][rand() % 3 + 1]
) SETTINGS eps=5" | \
  curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Error Handling

Common errors and fixes:

| Error | Cause | Fix |
|-------|-------|-----|
| Connection refused | Wrong host/port | Check TIMEPLUS_HOST and that port 8123 is open |
| Authentication failed | Wrong credentials | Check TIMEPLUS_USER / TIMEPLUS_PASSWORD |
| Stream already exists | Duplicate CREATE | Use CREATE STREAM IF NOT EXISTS |
| Unknown column | Typo or wrong stream | Run DESCRIBE stream_name to check the schema |
| Streaming query timeout | Using a streaming query on port 8123 | Wrap with table() for a historical query |
| Type mismatch | Wrong data type | Use an explicit cast: cast(val, 'float32') |

Inspect a stream:

echo "DESCRIBE sensor_data" | curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

List all streams:

echo "SHOW STREAMS" | curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

Explain a query:

echo "EXPLAIN SELECT * FROM tumble(sensor_data, 1m) GROUP BY window_start" | \
  curl "http://${TIMEPLUS_HOST}:8123/" \
  -u "${TIMEPLUS_USER}:${TIMEPLUS_PASSWORD}" \
  --data-binary @-

When to Read Reference Files

Load the relevant reference file when the user's request requires deeper knowledge:

  • Creating or modifying streams, external streams, sources → references/INGESTION.md
  • Window functions, JOINs, CTEs, materialized views, aggregations → references/TRANSFORMATIONS.md
  • Sinks, external tables, Kafka output, webhooks → references/SINKS.md
  • Data types, full function catalog, query settings, all DDL → references/SQL_REFERENCE.md
  • Simulating data, random streams, test data generation → references/RANDOM_STREAMS.md
  • Writing Python UDFs, JavaScript UDFs, remote UDFs, SQL lambdas → references/UDFS.md
  • Python Table Functions → references/Python_TABLE_FUNCTION.md
  • Scheduled Tasks → references/TASK.md
  • Alerts → references/ALERT.md
