Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Trade Audit

v2.0.0

Mandatory audit gate for all trading and transfer decisions. Sends agent-prepared decision material to Apus deterministic inference on an NVIDIA H100 TEE and...

0 stars · 105 downloads · 1 current · 1 all-time
by Alex (@alex-wuhu)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for alex-wuhu/trade-audit.

Prompt preview — Install & Setup
Install the skill "Trade Audit" (alex-wuhu/trade-audit) from ClawHub.
Skill page: https://clawhub.ai/alex-wuhu/trade-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install trade-audit

ClawHub CLI


npx clawhub@latest install trade-audit
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name/description say it will accept agent-prepared decision material, send it to Apus deterministic inference, and return an attested verdict. The included analyze.py implements those behaviors (builds a normalized bundle, posts to an APUS endpoint, parses a JSON packet, and writes a local audit log). No unrelated credentials, binaries, or install steps are requested.
Instruction Scope
SKILL.md confines work to agent-prepared inputs and instructs the agent to fetch public data and distill it before calling analyze.py. The script itself posts the bundle to a remote Apus endpoint and appends a record to ~/.trade-audit/audit.jsonl. Two issues: (1) analyze.py contains at least one clear coding bug (in normalize_packet it sets norm['missing_information'] = normal, where 'normal' is undefined) which will likely cause crashes or exceptions at runtime; (2) the skill will transmit whatever the agent includes in the prepared bundle (addresses, amounts, possibly other sensitive details) to an external service by default — the SKILL.md warns to strip extraneous material but there is no technical safeguard to prevent leaking sensitive fields.
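A minimal illustration of the kind of one-line patch the bug likely needs. Only the undefined name `normal` comes from the scan report; the surrounding function body, the field default, and the intended assignment are assumptions for illustration, not the actual analyze.py source:

```python
# Hypothetical excerpt from normalize_packet; everything except the
# 'missing_information' field name and the undefined name 'normal'
# is an assumption, not the real analyze.py code.
def normalize_packet(packet: dict) -> dict:
    norm = {}
    # Buggy line reported by the scan (NameError at runtime):
    #   norm['missing_information'] = normal
    # Likely intent: copy the field from the packet with a safe default.
    norm['missing_information'] = packet.get('missing_information', [])
    return norm
```

Until the author confirms the intended value, any such patch is a guess at intent and should be verified against the rest of the script.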
Install Mechanism
No install spec; the skill is instruction-plus-a-single-python script using only stdlib. Nothing is downloaded or written beyond the script and template files included in the bundle.
Credentials
The skill declares no required environment variables or credentials and the code uses hard-coded APUS_BASE_URL and MODEL_NAME. There is no request for unrelated credentials. However, because it posts bundle contents to an external endpoint, users must ensure they don't include secrets in prepared bundles.
Persistence & Privilege
The skill is not always-enabled and does not request elevated privileges. It does create and append an audit log at ~/.trade-audit/audit.jsonl on each run; that persistent local storage could accumulate sensitive decision material and should be considered when deploying (encryption, rotation, or opt-out may be desirable).
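If the growing local log is a concern, a simple size-capped rotation can run alongside the skill. This helper is not part of the skill; the path and size threshold are illustrative:

```python
from pathlib import Path

def rotate_log(log_path: str = "~/.trade-audit/audit.jsonl",
               max_bytes: int = 1_000_000) -> None:
    """Rename the audit log to a .1 backup once it exceeds max_bytes.

    Illustrative only: the skill itself has no rotation; this just caps
    how much decision material accumulates in one file.
    """
    path = Path(log_path).expanduser()
    if path.exists() and path.stat().st_size > max_bytes:
        path.replace(path.with_name(path.name + ".1"))
```

Encrypting or deleting the backup afterwards is a separate policy decision.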
What to consider before installing
This skill is broadly consistent with its description, but take these precautions before installing or using it:

- Review and fix the code bug: analyze.py contains an undefined name ('normal') in normalize_packet that will raise an exception at runtime. Ask the author for a corrected release or patch it before relying on the skill.
- Audit data sent to Apus: the script posts whatever is in the prepared bundle to https://hb.apus.network. Never include private keys, wallet seeds, or confidential PII in prepared bundles. The SKILL.md recommends stripping extraneous text, but that is a manual step; consider adding explicit sanitization or local vetting.
- Verify the Apus endpoint and attestation claims: independently confirm the endpoint, attestation format, and expected guarantees (hardware TEE attestation, integrity proofs). Hard-coded endpoints are harder to rotate; an environment-variable override would let you point to a test or internal endpoint.
- Be aware of local logs: runs append records to ~/.trade-audit/audit.jsonl, which may contain sensitive decision material. Decide whether to encrypt, rotate, or disable logging.
- Test in an isolated environment: run the script with non-sensitive sample bundles to confirm behavior, output formatting, and exit codes (gate mode) before integrating it into any automated trading workflow.


latest: vk970v2tcqthce1x5qvy4ypttnx841767
105 downloads
0 stars
5 versions
Updated 3w ago
v2.0.0
MIT-0

Trade-Audit — Mandatory Audit Gate for Trading & Transfer Decisions

When to use

This skill is designed for auditing financial decisions — buy, sell, swap, transfer, liquidity pool entry/exit, or any on-chain value movement. The user may ask you to run it before executing a trade or transfer, or to always use it as a pre-check for financial actions.

What this skill does

Takes agent-prepared decision material and sends it to the Apus deterministic inference API running on an NVIDIA H100 TEE. Returns a structured, hardware-attested decision packet with:

  • Bundle Hash — SHA-256 of the normalized decision material
  • Output Hash — SHA-256 of the model's structured decision packet
  • TEE Nonce — hardware attestation for that specific run
  • Verdict — APPROVE / REJECT / WAIT
  • Confidence — 1-100 integer, gated by --min-confidence (default 60)

Every run is logged to ~/.trade-audit/audit.jsonl.

No wallet or API key required. This skill only reads public data and calls the Apus inference API. It does not execute any transactions.

Important boundary:

The script is at {baseDir}/analyze.py.

  • The agent collects the page contents, address information, pool details, rules, and relevant facts.
  • The agent organizes that material into either a text/markdown file or a JSON decision bundle.
  • This script does not fetch pages or explorer data itself.
  • Reuse the bundled templates when preparing inputs:
    • Markdown template: {baseDir}/templates/prepared-decision-template.md
    • JSON template: {baseDir}/templates/prepared-bundle-template.json

Step 1 — Prepare the decision material

The audit model (gemma-3-27b-it) performs best with concise, focused inputs. The agent MUST distill raw data into core decision points before submitting.

Data preparation rules:

  • Extract only: prices, thresholds, numeric values, rules/conditions, addresses, risk factors
  • Strip out: page chrome, disclaimers, marketing text, navigation, repeated boilerplate
  • Keep material under 4,000 characters when possible (warning at 4k, hard truncation at 12k)
  • Each fact should be one short bullet — no paragraphs
  • If a page has 50 data points, pick the 5-10 that directly affect the decision
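A pre-flight check for the size limits above can catch oversized material before submission. This sketch is not part of analyze.py; the constants simply mirror the 4,000-character guideline and 12,000-character hard cut stated in the rules:

```python
WARN_CHARS = 4_000   # soft guideline from the prep rules
HARD_CHARS = 12_000  # hard truncation point

def preflight(material: str) -> str:
    """Warn on oversized decision material and truncate at the hard limit."""
    if len(material) > HARD_CHARS:
        print(f"warning: truncating from {len(material)} to {HARD_CHARS} chars")
        return material[:HARD_CHARS]
    if len(material) > WARN_CHARS:
        print(f"warning: {len(material)} chars exceeds the {WARN_CHARS}-char guideline")
    return material
```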

Create one of these:

  1. A text or markdown file containing the organized facts.
  2. A JSON bundle containing the organized facts plus decision_goal.

For example, a prepared text file might contain:

Page: https://polymarket.com/event/what-price-will-bitcoin-hit-before-2027
Decision goal: Decide whether there is a justified BTC buy level from this market page.

Collected facts:
- Market title: What price will Bitcoin hit before 2027
- Threshold ladder excerpt:
  - Below 55,000: Yes 74c / No 27c
  - Below 50,000: Yes 61c / No 40c
- Rules:
  - Market resolves yes if Binance BTC/USDT trades at or below the threshold in the specified window.
- Observation:
  - 55,000 is the strongest downside threshold shown in the collected page notes.

Common data sources (no auth required)

When preparing decision material, prefer public APIs over scraping JS-rendered pages.

Polymarket

Use the CLOB API to get market data — no wallet or login needed:

# Get market info by condition ID or slug
curl -s "https://clob.polymarket.com/markets" | python3 -c "
import sys, json
for m in json.load(sys.stdin):
    if 'KEYWORD' in m.get('question','').lower():
        print(json.dumps({'question': m['question'], 'tokens': m['tokens'], 'end_date': m.get('end_date_iso')}, indent=2))
"

# Get a specific market by condition_id
curl -s "https://clob.polymarket.com/markets/<condition_id>"

Key fields to extract: question, tokens[].outcome (YES/NO), tokens[].price, end_date_iso, description (resolution rules).
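Distilling a raw CLOB market object down to just those fields might look like the sketch below. The field names follow the list above; the sample market dict is fabricated for illustration and does not come from the live API:

```python
import json

def distill_market(m: dict) -> dict:
    """Keep only the decision-relevant fields from a CLOB market object."""
    return {
        "question": m.get("question"),
        "outcomes": [
            {"outcome": t.get("outcome"), "price": t.get("price")}
            for t in m.get("tokens", [])
        ],
        "end_date": m.get("end_date_iso"),
        "rules": m.get("description"),
    }

# Fabricated sample shaped like a CLOB market response
sample = {
    "question": "What price will Bitcoin hit before 2027",
    "tokens": [{"outcome": "YES", "price": 0.74},
               {"outcome": "NO", "price": 0.27}],
    "end_date_iso": "2026-12-31T00:00:00Z",
    "description": "Resolves YES if ...",
}
print(json.dumps(distill_market(sample), indent=2))
```

The distilled dict can then be dropped into the prepared-decision template as bullet facts.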

Crypto prices

# CoinGecko — free, no API key
curl -s "https://api.coingecko.com/api/v3/simple/price?ids=bitcoin,ethereum&vs_currencies=usd"

# Binance public ticker
curl -s "https://api.binance.com/api/v3/ticker/price?symbol=BTCUSDT"

On-chain data

# Arweave transaction
curl -s "https://arweave.net/tx/<txid>"

# AO process state (via aoconnect skill if installed, or direct)
curl -s "https://cu.ao-testnet.xyz/dry-run?process-id=<pid>" -d '{"Tags":[{"name":"Action","value":"Info"}]}'

The agent should fetch data from these APIs, extract the core numbers, and organize them into the decision material template. Do not pass raw API responses directly — distill to key facts first.

Step 2 — Run the audit

No external dependencies required — the script uses only Python stdlib. Just run with python3:

Standard mode (always returns exit 0 on success)

python3 {baseDir}/analyze.py \
  --input-file /tmp/prepared-decision.md \
  --decision-goal "Decide whether there is a justified BTC buy level from this market page" \
  --bundle-out /tmp/audit-bundle.json \
  --packet-out /tmp/audit-packet.json

Gate mode (exit code reflects verdict)

python3 {baseDir}/analyze.py \
  --input-file /tmp/prepared-decision.md \
  --decision-goal "Decide whether there is a justified BTC buy level" \
  --gate \
  --min-confidence 60 \
  --bundle-out /tmp/audit-bundle.json \
  --packet-out /tmp/audit-packet.json

Exit codes in gate mode:

  • 0 = APPROVE — proceed with the action
  • 1 = REJECT — do NOT proceed
  • 2 = WAIT — insufficient information, gather more data first
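From Python, the same gate pattern can be driven with subprocess; the verdict mapping mirrors the exit codes above, and the flags are the ones documented for analyze.py. The wrapper itself is a sketch, not part of the skill:

```python
import subprocess
import sys

# Exit-code-to-verdict mapping from the gate-mode documentation
VERDICTS = {0: "APPROVE", 1: "REJECT", 2: "WAIT"}

def run_gate(script: str, input_file: str, goal: str, min_conf: int = 60) -> str:
    """Run analyze.py in gate mode and map its exit code to a verdict."""
    proc = subprocess.run([
        sys.executable, script,
        "--input-file", input_file,
        "--decision-goal", goal,
        "--gate",
        "--min-confidence", str(min_conf),
    ])
    return VERDICTS.get(proc.returncode, f"UNKNOWN ({proc.returncode})")
```

Anything other than APPROVE should abort the downstream trade, matching the shell pattern in Step 3.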

If the agent already prepared a JSON bundle:

python3 {baseDir}/analyze.py \
  --bundle-file /tmp/prepared-bundle.json \
  --gate \
  --bundle-out /tmp/audit-bundle.json \
  --packet-out /tmp/audit-packet.json

Print the full terminal output to the user exactly as it appears.

After the audit completes, the agent MUST include these TEE attestation fields in the response to the user:

Bundle Hash : <hash of the input material>
Output Hash : <hash of the decision packet>
TEE Nonce   : <hardware attestation nonce>
TEE Verified: <true/false>
GPU Model   : <e.g. NVIDIA H100>

These fields prove the decision was made inside a Trusted Execution Environment. Omitting them defeats the purpose of using this skill. Always show them alongside the verdict summary.

Step 3 — Integration pattern for other skills

Any skill or agent workflow that involves a trade or transfer must use this pattern:

# 1. Agent prepares the decision material into /tmp/audit-input.md
# 2. Run trade-audit in gate mode
python3 {baseDir}/analyze.py \
  --input-file /tmp/audit-input.md \
  --decision-goal "Decide whether to transfer 10 AR from wallet A to wallet B" \
  --gate \
  --min-confidence 60

# 3. Check the exit code
if [ $? -ne 0 ]; then
  echo "Trade-audit did not approve. Aborting."
  exit 1
fi

# 4. Only now execute the actual trade / transfer

The --min-confidence flag (default 60) auto-rejects any APPROVE verdict below the threshold. Set higher for high-value transactions.

Step 4 — Agent preparation templates

For --input-file, use this structure:

Source URL: <original page or explorer URL>
Decision goal: <exact decision request>
Context label: <short label>

Collected facts:
- Fact 1
- Fact 2

Numeric observations:
- <value> — <context>

Rules / conditions:
- Rule 1
- Rule 2

Risks already observed by the agent:
- Risk 1

Unknowns:
- Missing item 1

Use the bundled file for a copyable version:

{baseDir}/templates/prepared-decision-template.md

For --bundle-file, use:

{baseDir}/templates/prepared-bundle-template.json

Step 5 — Audit log

Every run automatically appends a record to ~/.trade-audit/audit.jsonl. Each line is a JSON object:

{
  "timestamp": "2026-04-01T12:00:00+00:00",
  "bundle_hash": "abc123...",
  "output_hash": "def456...",
  "tee_nonce": "...",
  "tee_verified": true,
  "verdict": "APPROVE",
  "confidence": 82,
  "decision_type": "BUY",
  "target": "BTC",
  "decision_goal": "Decide whether to buy BTC",
  "min_confidence_threshold": 60,
  "gate_mode": true
}
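Because the log is line-delimited JSON, it can be filtered with a few lines of stdlib Python. This reader sketch assumes only the record shape shown above and is not shipped with the skill:

```python
import json
from pathlib import Path

def load_rejections(log_path: str = "~/.trade-audit/audit.jsonl") -> list:
    """Return all audit records whose verdict was REJECT."""
    path = Path(log_path).expanduser()
    if not path.exists():
        return []
    records = [json.loads(line)
               for line in path.read_text().splitlines() if line.strip()]
    return [r for r in records if r.get("verdict") == "REJECT"]
```

The same pattern works for auditing WAIT verdicts or low-confidence approvals.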

Step 6 — Explain the attestation

After the report, add this note:


Reading the hashes in the report

Field        Meaning
Bundle Hash  Hash of the normalized source bundle used as model input
Output Hash  Hash of the structured decision packet JSON
TEE Nonce    Hardware attestation proving the run came from an NVIDIA H100 TEE

To reproduce the decision exactly, rerun the skill on the same saved bundle with the same decision goal. If the bundle is identical, the Output Hash should match. The TEE Nonce changes on each run because it is bound to that specific execution.
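One way to check reproducibility locally is to hash the saved bundle yourself. This assumes the file written by --bundle-out contains the exact normalized bytes that were hashed; the skill does not document its normalization, so treat that as an assumption to verify against analyze.py:

```python
import hashlib

def bundle_hash_of(bundle_path: str) -> str:
    """SHA-256 hex digest of a saved bundle file's raw bytes."""
    with open(bundle_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Compare against the bundle_hash recorded in the packet / audit log, e.g.:
#   assert bundle_hash_of("/tmp/audit-bundle.json") == packet["bundle_hash"]
```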
