Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Liquidity Monitor

Monitor DEX pools in real time with impermanent loss and LP yield estimates. Use when tracking pool depth, estimating IL, comparing yields across DEXes.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 262 · 1 current install · 1 all-time install
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The skill name and description (monitor DEX pools) are consistent with the scripts' behavior: they query DeFiLlama for pool/TVL/yield data. However, SKILL.md explicitly claims 'All operations are local — no external APIs or network connections required', which the code contradicts. SKILL.md also documents a 'liquidity-monitor' CLI and a data directory at ~/.local/share/liquidity-monitor, while the repo actually ships scripts named script.sh and liquidity.sh that use different data directories (~/.liquidity-monitor and ~/.local/share/liquidity-monitor). This mismatch between claimed behavior and actual capabilities is a red flag.
Instruction Scope
SKILL.md describes only local data operations and lists only Bash plus basic Unix utilities, but the scripts call remote endpoints (https://yields.llama.fi, https://api.llama.fi) via Python urllib and curl, and depend on python3. SKILL.md references an env override (LIQUIDITY_MONITOR_DIR), yet the scripts use different variables and paths. The instructions are therefore misleading and implicitly grant the agent permission to perform network I/O and to create files in the user's home directory that SKILL.md said were not used.
Install Mechanism
There is no install spec (the skill is instruction-only), which normally lowers risk. However, the package includes code files (scripts/liquidity.sh, scripts/script.sh) that would be present on disk after installation and would be executed by the agent if invoked. No external downloads or obscure install URLs are present, but runnable scripts without an install manifest still carry more runtime risk than pure-documentation skills.
Credentials
The skill declares no required env vars or credentials, but the code performs network requests to third-party APIs and writes logs, alerts, and history under the user's home directory. No secret exfiltration is requested, but SKILL.md's promise of 'no external APIs' is contradicted by the code. The scripts also rely on python3 and curl, which are not listed as required binaries in the manifest; the declared requirements understate what the code actually needs.
Persistence & Privilege
always:false is set and no system-wide changes are requested. The scripts create and write per-user data files (~/.local/share/liquidity-monitor, ~/.liquidity-monitor) and persist history and alerts, which is expected for a monitoring tool. The skill does not request elevated privileges or modify other skills, but the combination of permitted autonomous invocation and network access increases the blast radius if the skill behaved maliciously.
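Because the scripts write under the home directory, a sandboxed write-audit can confirm exactly what a run creates. A minimal sketch, assuming you run the skill with HOME pointed at a throwaway directory; the helper name and marker-file approach are illustrative, not part of the skill:

```shell
#!/usr/bin/env bash
# Record a timestamp marker before a sandboxed run, then list every file
# created or modified since the marker.
list_new_files() {
  local root="$1" marker="$2"
  # -newer compares mtimes against the marker file created before the run
  find "$root" -type f -newer "$marker"
}

# Usage sketch:
#   marker=$(mktemp)
#   HOME=/tmp/sandbox-home bash scripts/liquidity.sh dashboard
#   list_new_files /tmp/sandbox-home "$marker"
```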
What to consider before installing
Do not assume the skill is offline or purely local: the included scripts call public DeFiLlama endpoints (yields.llama.fi, api.llama.fi) and use curl/python3. Before installing or running:

  1. Inspect or run the scripts in a sandboxed environment.
  2. Be aware the skill will create files in your home directory (~/.local/share/liquidity-monitor and ~/.liquidity-monitor) and log queries and alerts.
  3. Verify the network endpoints are acceptable for your threat model.
  4. Note that SKILL.md is inconsistent with the code (misstated offline behavior, different command names and paths) and the code shows truncation and bugs; consider asking the author for clarification or a fixed release before use.
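The inspection step above can start with a static pass: grep the bundled scripts for network clients and hard-coded URLs before anything executes. A minimal sketch, assuming the zip unpacks with a scripts/ directory; the function name is illustrative:

```shell
#!/usr/bin/env bash
# List every line in a skill's scripts that mentions a network client or
# a hard-coded URL; prints a note instead if nothing matches.
audit_network_calls() {
  local dir="$1"
  grep -rnE 'curl|wget|urllib|requests|https?://' "$dir" \
    || echo "no network indicators found in $dir"
}
```

For this skill, the scan results suggest such a pass would surface the yields.llama.fi and api.llama.fi calls immediately.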

Like a lobster shell, security has layers — review code before you run it.

Current version: v3.0.0
Download zip
Tags: chinese · latest · productivity

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Liquidity Monitor

Liquidity Monitor is a data processing and analysis toolkit for querying, importing, exporting, transforming, validating, and visualizing datasets from the terminal. It provides 10 core commands for working with structured data, plus built-in history logging for full traceability. All operations are local — no external APIs or network connections required.

Commands

| Command | Description |
| --- | --- |
| liquidity-monitor query &lt;args&gt; | Query data from the local data store. Logs the query to history for auditing. |
| liquidity-monitor import &lt;file&gt; | Import a data file into the local store. Accepts any file path as input. |
| liquidity-monitor export &lt;dest&gt; | Export processed results to a specified destination (defaults to stdout). |
| liquidity-monitor transform &lt;src&gt; &lt;dst&gt; | Transform data from one format/structure to another. |
| liquidity-monitor validate &lt;args&gt; | Validate data against the built-in schema. Reports schema compliance status. |
| liquidity-monitor stats &lt;args&gt; | Display basic statistics — total record count from the data log. |
| liquidity-monitor schema &lt;args&gt; | Show the current data schema. Default fields: id, name, value, timestamp. |
| liquidity-monitor sample &lt;args&gt; | Preview the first 5 records from the data store, or "No data" if empty. |
| liquidity-monitor clean &lt;args&gt; | Clean and deduplicate the data store. |
| liquidity-monitor dashboard &lt;args&gt; | Quick dashboard showing total record count and summary metrics. |
| liquidity-monitor help | Show help with all available commands. |
| liquidity-monitor version | Print version string (liquidity-monitor v2.0.0). |

Data Storage

All data is stored locally in ~/.local/share/liquidity-monitor/ (override with LIQUIDITY_MONITOR_DIR or XDG_DATA_HOME environment variables).
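The override order described above can be expressed as a small resolver. This sketches the documented behavior only; the security scan found the shipped scripts use different variables and paths, so treat this as a sketch of the spec, not of the actual code:

```shell
#!/usr/bin/env bash
# Resolve the data directory: explicit override first, then the XDG data
# base, then the documented default under ~/.local/share.
data_dir() {
  echo "${LIQUIDITY_MONITOR_DIR:-${XDG_DATA_HOME:-$HOME/.local/share}/liquidity-monitor}"
}
```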

Directory structure:

~/.local/share/liquidity-monitor/
├── data.log         # Main data store (line-based records)
└── history.log      # Unified activity log with timestamps

Every command logs its action to history.log with a timestamp (MM-DD HH:MM) for full traceability. The main data file data.log holds all imported and queried records.
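The logging described above could be implemented in Bash roughly as follows. A minimal sketch assuming the documented MM-DD HH:MM timestamp format; the function and parameter names are illustrative:

```shell
#!/usr/bin/env bash
# Append one timestamped line per command to a history log, matching the
# documented "MM-DD HH:MM action" record shape.
log_history() {
  local history_log="${1:?usage: log_history <logfile> <message...>}"
  shift
  printf '%s %s\n' "$(date '+%m-%d %H:%M')" "$*" >> "$history_log"
}
```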

Requirements

  • Bash (with set -euo pipefail)
  • Standard Unix utilities: date, wc, head, du, echo
  • No external dependencies, databases, or API keys required
  • Optional: Set LIQUIDITY_MONITOR_DIR to customize the data directory location

When to Use

  1. Importing and querying datasets — Pull in CSV, log, or structured data files and run quick queries against them from the terminal without spinning up a database.
  2. Data validation workflows — Validate incoming data against the built-in schema before processing to catch format issues early.
  3. Data transformation pipelines — Transform data between formats or structures as part of an ETL-like workflow, all within bash.
  4. Quick dashboard views — Get instant record counts and summary metrics via dashboard or stats without writing custom scripts.
  5. Data cleanup and deduplication — Use clean to remove duplicate records and normalize the data store before exporting or further analysis.

Examples

# Import a data file
liquidity-monitor import sales_data.csv

# Query the data store
liquidity-monitor query "region=APAC"

# View schema
liquidity-monitor schema

# Preview first 5 records
liquidity-monitor sample

# Get basic statistics
liquidity-monitor stats

# Transform data
liquidity-monitor transform raw.csv cleaned.csv

# Validate data integrity
liquidity-monitor validate

# Quick dashboard
liquidity-monitor dashboard

# Export results
liquidity-monitor export results.json

# Clean and deduplicate
liquidity-monitor clean

Powered by BytesAgain | bytesagain.com | hello@bytesagain.com

Files

4 total
