Dune Analytics API

v2.0.0

Dune Analytics API skill for querying, analyzing, and uploading blockchain data. Use this skill whenever the user mentions Dune, on-chain data, or blockchain analytics.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lz-web3/dune-analytics-api.

Prompt Preview: Install & Setup
Install the skill "Dune Analytics API" (lz-web3/dune-analytics-api) from ClawHub.
Skill page: https://clawhub.ai/lz-web3/dune-analytics-api
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: DUNE_API_KEY
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install lz-web3/dune-analytics-api

ClawHub CLI


npx clawhub@latest install dune-analytics-api
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the requested resources: python3 and DUNE_API_KEY are appropriate for a Python-based Dune API client. The scripts and references all relate to discovering tables, executing/updating queries, and uploading data to Dune.
Instruction Scope
SKILL.md and the scripts restrict themselves to Dune API operations and documentation fetches. They read DUNE_API_KEY (declared) and repository reference files. The skill includes operations that modify the user's Dune account (create/update/delete queries/tables, upload/overwrite data); SKILL.md notes best practices (confirm before updating, prefer private queries), but these destructive actions are functionally part of the stated purpose and require user Dune API credentials.
Install Mechanism
There is no automated install spec; SKILL.md instructs the user to pip install dune-client. Pulling a PyPI package is expected for a Python client, but users should be aware that an external package is required and will be installed into the environment.
Credentials
Only one credential is required: DUNE_API_KEY (declared as primary). No other secrets, config paths, or unrelated credentials are requested.
Persistence & Privilege
always:false and no special system-wide privileges are requested. However, with the supplied DUNE_API_KEY the skill can perform destructive operations (upload/overwrite/delete tables, update queries) in the user's Dune account. Autonomous invocation is the platform default; consider requiring explicit user confirmation for write/delete actions.
Assessment
This skill appears to do what it claims: it uses your DUNE_API_KEY to query, create/update queries, and upload tables to your Dune account. Before installing or granting the API key:

  1. Treat DUNE_API_KEY as sensitive — use a restricted/test Dune account or a key with least privilege if possible.
  2. Be aware the skill can overwrite or delete your Dune tables/queries — only allow write operations after reviewing the SQL and confirming actions.
  3. The skill requires pip-installing the dune-client package; review that package if you have supply-chain concerns.
  4. If you plan to allow autonomous agent actions, restrict or require confirmations for destructive commands (update_sql, upload_csv, delete_table, clear_table).

If you want a safer setup, use a read-only or limited API key, or a dedicated sandbox Dune account for automation.


Runtime requirements

Binaries: python3
Env (primary): DUNE_API_KEY
Latest version hash: vk97dgz88qr01t7gr1vwdsvhzd182pj1k
855 downloads · 2 stars · 7 versions · updated 1 month ago
v2.0.0 · MIT-0

Dune Analytics API

A skill for querying and analyzing blockchain data via the Dune Analytics API.

Setup

pip install dune-client

Set DUNE_API_KEY via environment variable, .env file, or agent config.
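That lookup order (environment first, then a `.env` file) can be sketched with the standard library alone. The helper name `get_dune_api_key` and the `.env` parsing are illustrative assumptions, not part of the skill's scripts:

```python
import os
from pathlib import Path

def get_dune_api_key(env_file: str = ".env") -> str:
    """Return DUNE_API_KEY from the environment, falling back to a .env file."""
    key = os.environ.get("DUNE_API_KEY")
    if key:
        return key
    env_path = Path(env_file)
    if env_path.exists():
        for line in env_path.read_text().splitlines():
            line = line.strip()
            if line.startswith("DUNE_API_KEY="):
                # Accept both DUNE_API_KEY=abc and DUNE_API_KEY="abc"
                return line.split("=", 1)[1].strip().strip('"')
    raise RuntimeError("DUNE_API_KEY not set; export it or add it to .env")
```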

Best Practices

  1. Read references first — The reference files contain critical table names, anti-patterns, and chain-specific gotchas that aren't obvious from table names alone. Reading the right reference before writing SQL prevents common mistakes like using dex.trades for wallet analysis (which inflates volume ~30%) or missing Solana's dedup requirement.

  2. Prefer private queries — Creating queries with is_private=True keeps the user's workspace clean and avoids polluting the public Dune namespace. Fall back to public if it fails (free plan limitation), and let the user know.

  3. Reuse before creating — Dune charges credits per execution. Reusing or updating an existing query avoids unnecessary duplicates and makes credit tracking easier. Only create new queries when the user explicitly asks.

  4. Confirm before updating — Modifying an existing query's SQL is destructive (previous version isn't saved by default). A quick confirmation avoids overwriting work the user might want to keep.

  5. Track credits — Each execution costs credits depending on the performance tier and data scanned. Reporting credits consumed helps the user manage their budget. See query-execution.md.
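Practices 3 and 4 suggest gating destructive calls behind an explicit confirmation. A minimal sketch of such a guard — the function and the callback shape are hypothetical, not part of the skill:

```python
from typing import Callable

# Operations that modify or overwrite state in the user's Dune account.
DESTRUCTIVE_OPS = {"update_sql", "upload_csv", "delete_table", "clear_table"}

def guarded_run(op: str, action: Callable[[], object],
                confirm: Callable[[str], bool]) -> object:
    """Run `action`, but require confirmation first if `op` is destructive."""
    if op in DESTRUCTIVE_OPS and not confirm(f"Run destructive operation '{op}'?"):
        raise PermissionError(f"{op} cancelled by user")
    return action()
```

For interactive use, `confirm` could be `lambda msg: input(f"{msg} [y/N] ").lower() == "y"`; in an agent loop it would route through the user-approval mechanism.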

Scripts — Common Operations

For common operations, use the scripts in scripts/ to avoid writing boilerplate code every time. All scripts read DUNE_API_KEY from the environment automatically.

| Script | Command | What it does |
| --- | --- | --- |
| dune_query.py | `execute --query-id ID` | Execute a saved query (supports `--params`, `--performance`, `--format`) |
| dune_query.py | `get_latest --query-id ID` | Get cached result without re-execution |
| dune_query.py | `get_sql --query-id ID` | Print query SQL |
| dune_query.py | `update_sql --query-id ID --sql "..."` | Update query SQL |
| dune_discover.py | `search --keyword "uniswap"` | Search tables by keyword |
| dune_discover.py | `schema --table "dex.trades"` | Show table columns and types |
| dune_discover.py | `list_schemas --namespace "uniswap_v3"` | List tables in a namespace |
| dune_discover.py | `contract --address "0x..."` | Find decoded tables by contract address |
| dune_discover.py | `docs --keyword "dex"` | Search Dune documentation |
| dune_upload.py | `upload_csv --file data.csv --table-name tbl` | Quick CSV upload (overwrites) |
| dune_upload.py | `create_table --table-name tbl --namespace ns --schema '[...]'` | Create table with explicit schema |
| dune_upload.py | `insert --file data.csv --table-name tbl --namespace ns` | Append data to existing table |

Example:

# Execute query with parameters
python scripts/dune_query.py execute --query-id 123456 --params '{"token":"ETH"}' --format table

# Upload a CSV privately
python scripts/dune_upload.py upload_csv --file wallets.csv --table-name my_wallets --private

Reference Selection

Before writing any SQL, route to the correct reference file(s) based on your task:

| Task involves... | Read this reference |
| --- | --- |
| Finding tables / inspecting schema / discovering protocols | table-discovery.md |
| Finding decoded tables by contract address | table-discovery.md |
| Searching Dune documentation / guides / examples | table-discovery.md |
| Wallet / address tracking / router identification | wallet-analysis.md |
| Table selection / common table names | common-tables.md |
| SQL performance / complex joins / array ops | sql-optimization.md |
| API calls / execution / caching / parameters | query-execution.md |
| Uploading CSV/NDJSON data to Dune | data-upload.md |

If your task spans multiple categories, read all relevant files. The references contain critical details (e.g., specialized tables, anti-patterns) that aren't covered in this overview — guessing table names or query patterns leads to subtle bugs.

Quick Start

from dune_client.client import DuneClient
from dune_client.query import QueryBase
import os

client = DuneClient(api_key=os.environ['DUNE_API_KEY'])

# Execute a query
result = client.run_query(query=QueryBase(query_id=123456), performance='medium', ping_frequency=5)
print(f"Rows: {len(result.result.rows)}")

# Get cached result (no re-execution)
result = client.get_latest_result(query_id=123456)

# Get/update SQL
sql = client.get_query(123456).sql
client.update_query(query_id=123456, query_sql="SELECT ...")

# Upload CSV data (quick, overwrites existing)
client.upload_csv(
    data="col1,col2\nval1,val2",
    description="My data",
    table_name="my_table",
    is_private=True
)

# Create table + insert (supports append)
client.create_table(
    namespace="my_user",
    table_name="my_table",
    schema=[{"name": "col1", "type": "varchar"}, {"name": "col2", "type": "double"}],
    is_private=True
)
import io
client.insert_data(
    namespace="my_user",
    table_name="my_table",
    data=io.BytesIO(b"col1,col2\nabc,1.5"),
    content_type="text/csv"
)

Subscription Tiers

| Method | Description | Plan |
| --- | --- | --- |
| run_query | Execute saved query (supports `{{param}}`) | Free |
| run_sql | Execute SQL directly (no params) | Plus |
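On the free plan, parameterized saved queries stand in for ad-hoc SQL: `{{param}}` placeholders in the saved query are filled at execution time. The substitution happens server-side; the stdlib sketch below (hypothetical `fill_params` helper) only illustrates the mechanics:

```python
import re

def fill_params(sql_template: str, params: dict) -> str:
    """Substitute {{name}} placeholders, mimicking saved-query parameter expansion."""
    def replace(match: re.Match) -> str:
        name = match.group(1).strip()
        if name not in params:
            raise KeyError(f"missing parameter: {name}")
        return str(params[name])
    return re.sub(r"\{\{(.+?)\}\}", replace, sql_template)

sql = fill_params(
    "SELECT * FROM dex.trades WHERE token_sold_symbol = '{{token}}' LIMIT {{n}}",
    {"token": "ETH", "n": 10},
)
```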

Key Concepts

dex.trades vs dex_aggregator.trades

| Table | Use Case | Volume |
| --- | --- | --- |
| dex.trades | Per-pool analysis | ⚠️ Inflated ~30% (multi-hop counted multiple times) |
| dex_aggregator.trades | User/wallet analysis | Accurate |

Why this matters: If you're analyzing a specific wallet's trading activity and use dex.trades, you'll see inflated volume because a single swap through an aggregator gets split into multiple pool-level trades. dex_aggregator.trades captures the user-level intent — one row per user swap. See wallet-analysis.md for full patterns.

Solana has no dex_aggregator_solana.trades. Dedupe by tx_id:

SELECT tx_id, MAX(amount_usd) as amount_usd
FROM dex_solana.trades
GROUP BY tx_id
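If the rows have already been fetched through the API, the same dedup can be applied client-side. A sketch over the list-of-dicts shape that results typically come back in (field names match the SQL above; the helper name is illustrative):

```python
def dedupe_solana_trades(rows: list[dict]) -> list[dict]:
    """Keep one row per tx_id with the max amount_usd, mirroring the GROUP BY above."""
    best: dict[str, dict] = {}
    for row in rows:
        tx = row["tx_id"]
        if tx not in best or row["amount_usd"] > best[tx]["amount_usd"]:
            best[tx] = row
    return list(best.values())
```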

Data Freshness

| Layer | Delay | Example |
| --- | --- | --- |
| Raw | < 1 min | ethereum.transactions, solana.transactions |
| Decoded | 15-60 sec | uniswap_v3_ethereum.evt_Swap |
| Curated | ~1 hour+ | dex.trades, dex_solana.trades |

For complete daily data, query the previous UTC day only after 12:00 UTC, once curated tables have caught up.
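When building parameterized queries, the "most recent complete UTC day" window can be computed explicitly. A stdlib sketch (helper name is illustrative):

```python
from datetime import datetime, timedelta, timezone

def previous_utc_day() -> tuple[datetime, datetime]:
    """Return [start, end) bounds of the most recent complete UTC day."""
    today = datetime.now(timezone.utc).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return today - timedelta(days=1), today
```

The bounds can then be passed as query parameters, e.g. `--params '{"start": "...", "end": "..."}'`.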

References

Detailed documentation is organized in the references/ directory:

| File | Description |
| --- | --- |
| table-discovery.md | Table discovery: search tables by name, inspect schema/columns, list schemas and uploads |
| query-execution.md | API patterns: execute, update, cache, multi-day fetch, credits tracking, subqueries |
| common-tables.md | Quick reference of commonly used tables: raw, decoded, curated, community data |
| sql-optimization.md | SQL optimization: CTE, JOIN strategies, array ops, partition pruning |
| wallet-analysis.md | Wallet tracking: Solana/EVM queries, multi-chain aggregation, fee analysis |
| data-upload.md | Data upload: CSV/NDJSON upload, create table, insert data, manage tables, credits |
