Verified Trading Bot Reputation: Building Cryptographic PnL Proof

v1.3.1



Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mirni/greenhelix-verified-bot-reputation.

Prompt preview: Install & Setup
Install the skill "Verified Trading Bot Reputation: Building Cryptographic PnL Proof" (mirni/greenhelix-verified-bot-reputation) from ClawHub.
Skill page: https://clawhub.ai/mirni/greenhelix-verified-bot-reputation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: AGENT_SIGNING_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install greenhelix-verified-bot-reputation

ClawHub CLI


npx clawhub@latest install greenhelix-verified-bot-reputation

Security Scan
Capability signals
Crypto · Requires wallet · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description are about producing cryptographic PnL proofs, and the skill declares a single credential, AGENT_SIGNING_KEY, which is exactly the kind of secret needed to sign proofs. There are no unrelated environment variables, binaries, or install steps declared.
Instruction Scope
SKILL.md is an extensive guide with code examples and API integration against GreenHelix sandbox. The provided excerpts show instructions to create identities, sign claims, build Merkle proofs, and submit them to GreenHelix — all consistent with the stated purpose. Because this is instruction-only, the agent following it would need access to the signing key; ensure the guide does not instruct reading unrelated secrets or arbitrary local files (no evidence of that in the excerpt, but the file was truncated).
Install Mechanism
No install spec and no code files — the skill is instruction-only, so nothing is downloaded or written by the installer. This is low-risk from an install perspective.
Credentials
Only AGENT_SIGNING_KEY is required, which is proportional to a signing/identity-based reputation system. This key is highly sensitive: if provided it can be used to sign claims on behalf of the operator. The skill does not request exchange API keys or other unrelated credentials (which would be concerning).
Persistence & Privilege
The skill is not always-enabled and does not request elevated persistence. It does allow normal autonomous invocation (platform default), which combined with a signing key could allow the agent to sign data autonomously — a normal capability but something to be aware of.
Scan Findings in Context
[no_regex_findings] expected: The static scanner had nothing to analyze because this is an instruction-only skill. That absence is expected but does not guarantee safety; the SKILL.md content is the primary surface for review.
Assessment
This guide appears internally consistent, but AGENT_SIGNING_KEY is a powerful secret. Only supply a signing key you control and that is scoped/isolated for reputation proofs (do NOT use a private key that also controls exchange withdrawals or funds). Prefer an ephemeral or dedicated identity key, hardware-backed keys, or keys with limited scope. Before supplying a live key: review the full SKILL.md examples to confirm they only sign and submit proofs to the GreenHelix endpoints you expect, test in the GreenHelix sandbox first, and rotate/revoke the key if you suspect misuse. If you need stronger assurance, copy the code examples and run them locally (offline) rather than granting the agent direct access to your signing key.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Env: AGENT_SIGNING_KEY
Primary env: AGENT_SIGNING_KEY
Tags: ai-agent, greenhelix, guide, latest, openclaw, pnl-verification, reputation, trading-bot
171 downloads · 0 stars · 5 versions · Updated 1w ago
v1.3.1
MIT-0

Verified Trading Bot Reputation: Building Cryptographic PnL Proof

Notice: This is an educational guide with illustrative code examples. It does not execute code or install dependencies. All examples use the GreenHelix sandbox (https://sandbox.greenhelix.net) which provides 500 free credits — no API key required to get started.

Referenced credentials (you supply these in your own environment):

  • AGENT_SIGNING_KEY: Cryptographic signing key for agent identity (Ed25519 key pair for request signing)

Fake PnL screenshots destroyed $45M in the Step Finance breach in January 2026. Shared API keys with no identity isolation let an attacker drain funds while operators pointed fingers at fabricated track records. Signal providers on Telegram charge $400/mo with self-reported 95% win rates that nobody can verify. Copy trading platforms have no mechanism to confirm a leader's actual performance history. The result is a trust crisis that costs real money every single day. This guide shows you how to build an unforgeable, cryptographically verified reputation for your trading bot using Ed25519 signatures and Merkle claim chains on the GreenHelix A2A Commerce Gateway.


What You'll Learn

  • Chapter 1: Why Screenshots Are Worthless
  • Chapter 2: Creating Your Bot's Cryptographic Identity
  • Chapter 3: Submitting Verified Trade Data
  • Chapter 4: Building Your Claim Chain (Merkle Proof)
  • Chapter 5: Understanding Your Trust Score
  • Chapter 6: Making Your Reputation Discoverable
  • Chapter 7: Integration Patterns
  • Chapter 8: Merkle Tree Deep-Dive
  • Chapter 9: Cross-Exchange Aggregation
  • Chapter 10: Leaderboard Gaming Prevention

Full Guide

Verified Trading Bot Reputation: Building Cryptographic PnL Proof on GreenHelix

Fake PnL screenshots destroyed $45M in the Step Finance breach in January 2026. Shared API keys with no identity isolation let an attacker drain funds while operators pointed fingers at fabricated track records. Signal providers on Telegram charge $400/mo with self-reported 95% win rates that nobody can verify. Copy trading platforms have no mechanism to confirm a leader's actual performance history. The result is a trust crisis that costs real money every single day. This guide shows you how to build an unforgeable, cryptographically verified reputation for your trading bot using Ed25519 signatures and Merkle claim chains on the GreenHelix A2A Commerce Gateway.


Table of Contents

  1. Why Screenshots Are Worthless
  2. Creating Your Bot's Cryptographic Identity
  3. Submitting Verified Trade Data
  4. Building Your Claim Chain (Merkle Proof)
  5. Understanding Your Trust Score
  6. Making Your Reputation Discoverable
  7. Integration Patterns
  8. Merkle Tree Deep-Dive
  9. Cross-Exchange Aggregation
  10. Leaderboard Gaming Prevention
  11. EU AI Act Compliance Angle
  12. What's Next

Chapter 1: Why Screenshots Are Worthless

The Step Finance Breach

In January 2026, an attacker exploited Step Finance's shared API key architecture to drain $45M from copy trading vaults. The root cause was not a smart contract bug. It was an identity problem. Multiple bots shared the same execution keys with no isolation between them. When one operator fabricated a track record to attract depositors, there was no cryptographic proof tying performance claims to actual executions. The platform relied on self-reported metrics displayed in a dashboard. The attacker posted doctored screenshots showing 340% annual returns, accumulated $45M in follower deposits, then executed a series of intentionally losing trades against a colluding counterparty.

The Fake PnL Epidemic

Step Finance was the largest incident, but the pattern repeats daily at smaller scale. Telegram signal groups routinely doctor screenshots by editing DOM elements in browser dev tools before screenshotting. Discord bots generate synthetic equity curves. Twitter accounts post TradingView charts from paper trading accounts as if they were live. A 2025 study by Solidus Labs found that 73% of "top trader" profiles on social copy trading platforms had at least one material discrepancy between claimed and actual returns.

What Cryptographic PnL Proof Actually Means

A cryptographic PnL proof is a trade performance record that satisfies three properties:

  1. Attribution -- the record is signed by a private key that only the bot operator controls, binding the claim to a specific identity.
  2. Integrity -- the record cannot be altered after submission without invalidating the signature.
  3. Ordering -- records are chained into a Merkle tree, creating an append-only log where inserting, deleting, or modifying past entries changes the root hash and is detectable by any verifier.

How Ed25519 + Merkle Trees Create Unforgeable Records

Ed25519 is a digital signature scheme built on Curve25519. Each bot gets a keypair: a 32-byte private key (held by the operator, never shared) and a 32-byte public key (registered on GreenHelix). When the bot submits a metric -- say, today's PnL of +2.3% -- it signs the data with its private key. Anyone with the public key can verify the signature, confirming the data came from that specific bot and has not been tampered with.
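As a minimal local illustration (independent of any platform), here is the sign/verify round trip using the same cryptography library the rest of this guide uses; the record contents are made up for the example:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Operator side: sign a (hypothetical) daily PnL record
private_key = Ed25519PrivateKey.generate()
record = b'{"agent_id":"trading-bot-alpha-7x","pnl_percent":2.3}'
signature = private_key.sign(record)

# Verifier side: only the 32-byte public key is needed
public_key = private_key.public_key()
public_key.verify(signature, record)  # raises InvalidSignature on failure
print("Signature valid: record is authentic and untampered")

# Any modification to the record breaks verification
try:
    public_key.verify(signature, record.replace(b"2.3", b"9.9"))
except InvalidSignature:
    print("Tampered record rejected")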

Merkle trees extend this to historical records. Each batch of signed metrics is hashed into a leaf node. Pairs of leaves are hashed together to form parent nodes, recursively, until a single root hash remains. This root hash is a fingerprint of the entire history. Change one trade record from six months ago and the root hash changes. A third party can verify any individual record by checking its Merkle proof path against the published root -- without downloading the entire dataset and without trusting GreenHelix as an intermediary.


Chapter 2: Creating Your Bot's Cryptographic Identity

Generate an Ed25519 Keypair

Install the required libraries:

pip install cryptography requests

Generate your keypair:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization
import base64

# Generate keypair
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Serialize private key (store securely -- this is your bot's identity)
private_bytes = private_key.private_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PrivateFormat.Raw,
    encryption_algorithm=serialization.NoEncryption()
)

# Serialize public key (this gets registered on GreenHelix)
public_bytes = public_key.public_bytes(
    encoding=serialization.Encoding.Raw,
    format=serialization.PublicFormat.Raw
)

private_key_b64 = base64.b64encode(private_bytes).decode()
public_key_b64 = base64.b64encode(public_bytes).decode()

print(f"Private key (keep secret): {private_key_b64}")
print(f"Public key (register this): {public_key_b64}")

Why Ed25519

Ed25519 is the right choice for bot identity for three reasons. First, speed: signing takes ~50 microseconds, which adds negligible latency to a trade pipeline. RSA-2048 signing is roughly 50x slower. Second, key size: Ed25519 public keys are 32 bytes versus 256 bytes for RSA-2048, which matters when keys are stored on-chain or in compact attestation records. Third, security: Ed25519 uses deterministic nonces, eliminating the class of side-channel attacks that have compromised ECDSA implementations (the Sony PS3 breach, the Android Bitcoin wallet vulnerability). There is no random number generator in the signing path to get wrong.

Register on GreenHelix

With curl:

API_KEY="your-api-key-here"
PUBLIC_KEY_B64="your-base64-public-key"

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "register_agent",
    "input": {
      "agent_id": "trading-bot-alpha-7x",
      "public_key": "'"$PUBLIC_KEY_B64"'",
      "name": "Alpha-7x Momentum Strategy"
    }
  }'

With Python:

import requests

BASE_URL = "https://api.greenhelix.net/v1"
API_KEY = "your-api-key-here"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

response = requests.post(BASE_URL, headers=headers, json={
    "tool": "register_agent",
    "input": {
        "agent_id": "trading-bot-alpha-7x",
        "public_key": public_key_b64,
        "name": "Alpha-7x Momentum Strategy"
    }
})

print(response.json())

Sign and Verify a Test Message

Prove your identity by signing a challenge message:

import json

# Sign a message
message = "identity-verification-2026-04-06"
message_bytes = message.encode("utf-8")
signature = private_key.sign(message_bytes)
signature_b64 = base64.b64encode(signature).decode()

# Verify via GreenHelix
response = requests.post(BASE_URL, headers=headers, json={
    "tool": "verify_agent",
    "input": {
        "agent_id": "trading-bot-alpha-7x",
        "message": message,
        "signature": signature_b64
    }
})

result = response.json()
print(f"Verification result: {result}")
# Expected: {"verified": true, "agent_id": "trading-bot-alpha-7x"}

With curl:

SIGNATURE_B64="base64-encoded-signature"

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "verify_agent",
    "input": {
      "agent_id": "trading-bot-alpha-7x",
      "message": "identity-verification-2026-04-06",
      "signature": "'"$SIGNATURE_B64"'"
    }
  }'

Chapter 3: Submitting Verified Trade Data

What Metrics to Track

For a trading bot, the following metrics form a complete reputation profile:

Metric                 Description                       Example Value
pnl_percent            Cumulative PnL as percentage      14.7
win_rate               Percentage of profitable trades   62.3
max_drawdown           Largest peak-to-trough decline    8.1
sharpe_ratio           Risk-adjusted return              2.4
trade_count            Total number of closed trades     1847
avg_hold_time_hours    Average position duration         4.2

Signing Metrics Before Submission

Every metric submission should be signed. This prevents anyone -- including a compromised platform -- from injecting fake performance data under your identity.

import json
import time

def sign_metrics(private_key, agent_id, metrics):
    """Sign a metrics payload with Ed25519."""
    # Create a canonical JSON representation for signing
    payload = json.dumps({
        "agent_id": agent_id,
        "metrics": metrics,
        "timestamp": int(time.time())
    }, sort_keys=True, separators=(",", ":"))

    signature = private_key.sign(payload.encode("utf-8"))
    return base64.b64encode(signature).decode(), payload
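For example, with illustrative values (retain the exact payload string that was signed -- a verifier must hash byte-identical data for the signature to check out):

sig_b64, signed_payload = sign_metrics(
    private_key, "trading-bot-alpha-7x",
    {"pnl_percent": 14.7, "win_rate": 62.3}
)
print(f"Signature: {sig_b64[:16]}...")
print(f"Signed payload: {signed_payload}")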

Submitting Aggregate Metrics

Use submit_metrics for aggregate performance snapshots:

metrics = {
    "pnl_percent": 14.7,
    "win_rate": 62.3,
    "max_drawdown": 8.1,
    "sharpe_ratio": 2.4,
    "trade_count": 1847,
    "avg_hold_time_hours": 4.2
}

response = requests.post(BASE_URL, headers=headers, json={
    "tool": "submit_metrics",
    "input": {
        "agent_id": "trading-bot-alpha-7x",
        "metrics": metrics
    }
})

print(response.json())

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "submit_metrics",
    "input": {
      "agent_id": "trading-bot-alpha-7x",
      "metrics": {
        "pnl_percent": 14.7,
        "win_rate": 62.3,
        "max_drawdown": 8.1,
        "sharpe_ratio": 2.4,
        "trade_count": 1847,
        "avg_hold_time_hours": 4.2
      }
    }
  }'

Ingesting Time-Series Data Points

For granular, trade-by-trade records, use ingest_metrics with signed time-series data:

import time

data_points = [
    {"metric": "pnl_percent", "value": 0.34, "timestamp": int(time.time()) - 3600},
    {"metric": "pnl_percent", "value": -0.12, "timestamp": int(time.time()) - 1800},
    {"metric": "pnl_percent", "value": 0.51, "timestamp": int(time.time())},
]

# Sign the data points
payload_str = json.dumps({
    "agent_id": "trading-bot-alpha-7x",
    "data_points": data_points
}, sort_keys=True, separators=(",", ":"))
signature = private_key.sign(payload_str.encode("utf-8"))
sig_b64 = base64.b64encode(signature).decode()

response = requests.post(BASE_URL, headers=headers, json={
    "tool": "ingest_metrics",
    "input": {
        "agent_id": "trading-bot-alpha-7x",
        "data_points": data_points,
        "signature": sig_b64
    }
})

print(response.json())

Querying Your Own Metrics

Retrieve your submitted data to verify it was recorded correctly:

response = requests.post(BASE_URL, headers=headers, json={
    "tool": "query_metrics",
    "input": {
        "agent_id": "trading-bot-alpha-7x",
        "metric": "pnl_percent",
        "start": int(time.time()) - 86400,
        "end": int(time.time())
    }
})

data = response.json()
for point in data.get("data_points", []):
    print(f"  {point['timestamp']}: {point['value']}%")

Complete Trade Logging Pipeline

Here is a reusable class that wraps the entire workflow:

import base64
import json
import time
import requests
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization


class BotReputation:
    """Manages cryptographic reputation for a trading bot on GreenHelix."""

    def __init__(self, agent_id, api_key, private_key_b64):
        self.agent_id = agent_id
        self.api_key = api_key
        self.base_url = "https://api.greenhelix.net/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

        # Load Ed25519 private key from base64
        key_bytes = base64.b64decode(private_key_b64)
        self.private_key = Ed25519PrivateKey.from_private_bytes(key_bytes)
        self.public_key = self.private_key.public_key()

    def _execute(self, tool, input_data):
        """Execute a tool on the GreenHelix gateway."""
        response = requests.post(
            f"{self.base_url}/v1",
            headers=self.headers,
            json={"tool": tool, "input": input_data}
        )
        response.raise_for_status()
        return response.json()

    def _sign(self, data):
        """Sign arbitrary data with Ed25519, return base64 signature."""
        if isinstance(data, dict):
            data = json.dumps(data, sort_keys=True, separators=(",", ":"))
        signature = self.private_key.sign(data.encode("utf-8"))
        return base64.b64encode(signature).decode()

    def _public_key_b64(self):
        """Return base64-encoded public key."""
        pub_bytes = self.public_key.public_bytes(
            encoding=serialization.Encoding.Raw,
            format=serialization.PublicFormat.Raw
        )
        return base64.b64encode(pub_bytes).decode()

    def register(self, name):
        """Register this bot's identity on GreenHelix."""
        return self._execute("register_agent", {
            "agent_id": self.agent_id,
            "public_key": self._public_key_b64(),
            "name": name
        })

    def verify_identity(self, message=None):
        """Prove control of the private key."""
        if message is None:
            message = f"verify-{self.agent_id}-{int(time.time())}"
        signature = self._sign(message)
        return self._execute("verify_agent", {
            "agent_id": self.agent_id,
            "message": message,
            "signature": signature
        })

    def submit_snapshot(self, metrics):
        """Submit aggregate metric snapshot."""
        return self._execute("submit_metrics", {
            "agent_id": self.agent_id,
            "metrics": metrics
        })

    def log_trades(self, data_points):
        """Ingest signed time-series trade data."""
        payload = {
            "agent_id": self.agent_id,
            "data_points": data_points
        }
        signature = self._sign(payload)
        return self._execute("ingest_metrics", {
            "agent_id": self.agent_id,
            "data_points": data_points,
            "signature": signature
        })

    def build_chain(self):
        """Build Merkle claim chain from attestation history."""
        return self._execute("build_claim_chain", {
            "agent_id": self.agent_id
        })

    def get_chains(self):
        """Retrieve stored claim chains."""
        return self._execute("get_claim_chains", {
            "agent_id": self.agent_id
        })

    def get_reputation(self):
        """Get current reputation score."""
        return self._execute("get_agent_reputation", {
            "agent_id": self.agent_id
        })

    def get_verified_claims(self):
        """Get all verified metric claims."""
        return self._execute("get_verified_claims", {
            "agent_id": self.agent_id
        })

    def get_deltas(self):
        """Get metric deltas (current vs previous period)."""
        return self._execute("get_metric_deltas", {
            "agent_id": self.agent_id
        })

    def get_averages(self):
        """Get rolling averages (7d, 30d, 90d)."""
        return self._execute("get_metric_averages", {
            "agent_id": self.agent_id
        })

    def query(self, metric, start, end):
        """Query time-series data for a specific metric."""
        return self._execute("query_metrics", {
            "agent_id": self.agent_id,
            "metric": metric,
            "start": start,
            "end": end
        })

Usage:

bot = BotReputation(
    agent_id="trading-bot-alpha-7x",
    api_key="your-api-key",
    private_key_b64="your-base64-private-key"
)

# One-time setup
bot.register("Alpha-7x Momentum Strategy")
bot.verify_identity()

# After each trade closes
bot.log_trades([
    {"metric": "pnl_percent", "value": 0.45, "timestamp": int(time.time())},
    {"metric": "trade_count", "value": 1, "timestamp": int(time.time())},
])

# Daily snapshot
bot.submit_snapshot({
    "pnl_percent": 14.7,
    "win_rate": 62.3,
    "max_drawdown": 8.1,
    "sharpe_ratio": 2.4,
    "trade_count": 1847,
    "avg_hold_time_hours": 4.2
})

# Weekly chain build
chain_result = bot.build_chain()
print(f"Merkle root: {chain_result}")

Chapter 4: Building Your Claim Chain (Merkle Proof)

What a Merkle Claim Chain Is

A Merkle claim chain is an append-only data structure that makes your entire performance history tamper-evident. Each time you submit metrics, those submissions become leaf nodes in a binary hash tree. The tree is computed bottom-up: pairs of leaves are hashed together, then pairs of those hashes are hashed together, until a single root hash remains.

                    [Root Hash]
                   /            \
            [Hash AB]          [Hash CD]
           /        \         /        \
      [Hash A]  [Hash B]  [Hash C]  [Hash D]
         |         |         |         |
      Week 1    Week 2    Week 3    Week 4
      Metrics   Metrics   Metrics   Metrics
      (signed)  (signed)  (signed)  (signed)

If you modify Week 2's metrics after the fact -- say, changing a -3.1% loss to a +1.2% gain -- Hash B changes. That causes Hash AB to change. That causes the Root Hash to change. Anyone who previously recorded the root hash can detect the tampering instantly.
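A few lines of Python make this concrete -- recomputing the four-leaf tree from the diagram with one altered leaf produces a different root (the weekly strings are simplified stand-ins for signed metric batches):

import hashlib

def h(data):
    """SHA-256 returning raw bytes."""
    return hashlib.sha256(data).digest()

weeks = [b"week1:+2.1%", b"week2:-3.1%", b"week3:+0.9%", b"week4:+1.8%"]
leaves = [h(w) for w in weeks]
root = h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))

# Retroactively "improve" Week 2, then rebuild
leaves[1] = h(b"week2:+1.2%")
tampered = h(h(leaves[0] + leaves[1]) + h(leaves[2] + leaves[3]))

print(root.hex() != tampered.hex())  # True -- the tampering is detectable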

How It Creates an Append-Only, Tamper-Evident Record

The key insight is that the root hash is a commitment to the entire history. Once you publish a root hash (or a third party records it), you cannot rewrite any part of the past without producing a different root. This is the same principle that secures Bitcoin's block chain and Git's commit history. The difference is that here, each leaf is a signed metric attestation rather than a transaction or file diff.

A Merkle proof for any single leaf consists of the sibling hashes along the path from that leaf to the root. For a tree with N leaves, the proof is ceil(log2(N)) hashes -- extremely compact. A verifier with just the root hash and a Merkle proof can confirm that a specific metric submission exists in the history without downloading the full dataset.

When to Build Chains

Build a new claim chain after a meaningful batch of metric submissions. A reasonable cadence:

  • Weekly for active bots (more than 10 trades per day)
  • After every 50-100 metric submissions as an alternative trigger
  • Before any public claim about your performance (posting returns on social media, listing on a marketplace)

# Build after a week of trading
chain = bot.build_chain()
print(f"Chain built. Root: {chain}")

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "build_claim_chain",
    "input": {
      "agent_id": "trading-bot-alpha-7x"
    }
  }'

Retrieving and Sharing Your Claim Chain

chains = bot.get_chains()
for chain in chains.get("chains", []):
    print(f"Root: {chain['root_hash']}")
    print(f"Leaves: {chain['leaf_count']}")
    print(f"Built: {chain['created_at']}")

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_claim_chains",
    "input": {
      "agent_id": "trading-bot-alpha-7x"
    }
  }'

How Third Parties Verify Without Trusting the Platform

A potential subscriber or marketplace does not need to trust GreenHelix. They need three things:

  1. Your public key (registered and publicly queryable)
  2. A Merkle root hash (published by you or recorded by the verifier at a previous point in time)
  3. A Merkle proof for the specific claim they want to verify

The verification logic is: re-hash the claimed data, walk the proof path up to the root, and check that the computed root matches the known root. If it matches, the data is authentic and has not been modified since the chain was built. GreenHelix cannot forge this because it does not hold your private key. You cannot retroactively modify it because the root hash would change.
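As a sketch, that verification loop is only a few lines (Chapter 8 walks through the same logic with concrete hash values):

import hashlib

def verify_merkle_proof(leaf_data, proof, known_root):
    """Recompute the root from a leaf and its sibling path."""
    current = hashlib.sha256(leaf_data).digest()
    for step in proof:  # each step: {"hash": bytes, "position": "left"/"right"}
        if step["position"] == "right":
            current = hashlib.sha256(current + step["hash"]).digest()
        else:
            current = hashlib.sha256(step["hash"] + current).digest()
    return current == known_root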


Chapter 5: Understanding Your Trust Score

How GreenHelix Computes Reputation

GreenHelix computes a composite reputation score from multiple dimensions of verified behavior. The score is not a simple average -- it weights consistency and longevity heavily because those are the hardest properties to fake. A toy weighting sketch follows the factor list below.

Core factors:

  • Verified trade count -- more trades with valid signatures means more data points and harder to fabricate
  • Win rate consistency -- a stable 58% win rate scores higher than a volatile rate that averages 65% but swings between 40% and 90%
  • Drawdown limits -- bots that have never exceeded 15% max drawdown score higher than those with 40% drawdowns, even if ultimate PnL is higher
  • Time active -- a bot with 12 months of continuous verified data scores higher than one with 2 months, even with identical metrics
  • Claim chain depth -- bots with deep, regularly-built Merkle chains demonstrate ongoing commitment to transparency
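
GreenHelix does not publish its exact formula, so the sketch below is a toy model of how factors like these could be weighted -- illustrative only, not the platform's actual computation:

def toy_trust_score(trade_count, win_rate_stddev, max_drawdown,
                    months_active, chain_count):
    """Toy weighting of the factors above -- NOT GreenHelix's real formula."""
    volume = min(trade_count / 1000, 1.0) * 20             # verified sample size
    consistency = max(0.0, 1 - win_rate_stddev / 20) * 25  # stable win rate
    risk = max(0.0, 1 - max_drawdown / 40) * 20            # bounded drawdowns
    longevity = min(months_active / 12, 1.0) * 25          # time active
    transparency = min(chain_count / 12, 1.0) * 10         # chain depth
    return volume + consistency + risk + longevity + transparency

print(toy_trust_score(1847, 4.2, 8.1, 12, 14))  # ~90 on this toy scale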

Checking Your Score

reputation = bot.get_reputation()
print(f"Trust score: {reputation.get('score')}")
print(f"Factors: {json.dumps(reputation.get('factors', {}), indent=2)}")

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_agent_reputation",
    "input": {
      "agent_id": "trading-bot-alpha-7x"
    }
  }'

Metric Deltas and Rolling Averages

Track how your performance is trending:

# Current period vs previous period
deltas = bot.get_deltas()
print(f"PnL delta: {deltas}")

# Rolling averages across 7d, 30d, 90d windows
averages = bot.get_averages()
print(f"Rolling averages: {json.dumps(averages, indent=2)}")

With curl:

# Metric deltas
curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_metric_deltas",
    "input": {"agent_id": "trading-bot-alpha-7x"}
  }'

# Rolling averages
curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_metric_averages",
    "input": {"agent_id": "trading-bot-alpha-7x"}
  }'

What a "Good" Trust Score Looks Like

Based on the GreenHelix leaderboard distribution:

  • 90+ -- Top-tier. Consistent profitability, deep claim chains, 6+ months of verified data. These bots attract institutional followers.
  • 70-89 -- Solid. Good track record with some volatility. Suitable for retail copy trading.
  • 50-69 -- Developing. Either new (insufficient history) or inconsistent. Needs more data or steadier performance.
  • Below 50 -- Unreliable. Significant drawdowns, gaps in data submission, or short history.

How to Recover from a Bad Period

Every bot has drawdowns. The reputation system accounts for this. A 30-day losing streak does not destroy a 12-month track record. The key actions during a bad period:

  1. Keep submitting data. Gaps in submission history hurt your score more than losses do. The system rewards transparency.
  2. Do not create a new identity. A fresh agent with zero history scores lower than an established agent with a visible drawdown and recovery.
  3. Build claim chains through the drawdown. This proves you did not selectively omit bad periods. When the recovery comes, the contrast between the drawdown and recovery -- both verifiable -- strengthens your credibility.
  4. Monitor your deltas. Use get_metric_deltas to track when your current-period performance starts outpacing the previous period. That inflection point is when your score begins recovering.

Chapter 6: Making Your Reputation Discoverable

How Other Agents Search by Metrics

Potential subscribers and marketplaces use search_agents_by_metrics to find bots that meet their criteria:

# Find bots with Sharpe ratio between 1.5 and 5.0
response = requests.post(BASE_URL, headers=headers, json={
    "tool": "search_agents_by_metrics",
    "input": {
        "metric_name": "sharpe_ratio",
        "min_value": 1.5,
        "max_value": 5.0
    }
})

results = response.json()
for agent in results.get("agents", []):
    print(f"{agent['agent_id']}: Sharpe {agent['value']}")

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "search_agents_by_metrics",
    "input": {
      "metric_name": "sharpe_ratio",
      "min_value": 1.5,
      "max_value": 5.0
    }
  }'

Optimizing Your Profile for Discovery

To appear in relevant searches, submit metrics using standard naming conventions. The most commonly searched metrics are:

  • sharpe_ratio -- the single most queried metric by institutional allocators
  • max_drawdown -- searched with low max_value (allocators looking for drawdown < 10%)
  • win_rate -- retail copy traders search for win rates above 55%
  • trade_count -- used as a proxy for sample size; higher counts inspire more confidence

Submit all six core metrics (listed in Chapter 3) at minimum. Bots that submit only PnL without supporting metrics like Sharpe ratio and drawdown are invisible to most discovery queries.

Leaderboard Mechanics

The GreenHelix leaderboard ranks agents by composite reputation score. It automatically filters out agent IDs prefixed with test-, perf-, audit-, or stress- to keep rankings clean.

response = requests.post(BASE_URL, headers=headers, json={
    "tool": "get_agent_leaderboard",
    "input": {}
})

leaderboard = response.json()
for rank, entry in enumerate(leaderboard.get("agents", []), 1):
    print(f"#{rank}: {entry['agent_id']} (score: {entry['score']})")

With curl:

curl -s -X POST https://sandbox.greenhelix.net/v1 \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_agent_leaderboard",
    "input": {}
  }'

Embedding Trust Scores in Your Listings

When listing your bot on a marketplace or website, fetch your verified claims to display alongside your offering:

claims = bot.get_verified_claims()
reputation = bot.get_reputation()

# Use these in your listing
listing_data = {
    "bot_name": "Alpha-7x Momentum Strategy",
    "verified_pnl": claims.get("pnl_percent"),
    "verified_sharpe": claims.get("sharpe_ratio"),
    "trust_score": reputation.get("score"),
    "claim_chain_root": claims.get("latest_chain_root"),
    "verification_url": f"https://sandbox.greenhelix.net/v1"
}

The claim chain root hash serves as a compact proof. Any skeptic can take that root hash, request the Merkle proof for a specific claim, and verify it independently.


Chapter 7: Integration Patterns

Automated Metrics Pipeline

The most effective pattern is to submit metrics automatically after every trade close. Here is a skeleton integration with a hypothetical trade execution callback:

import time

bot = BotReputation(
    agent_id="trading-bot-alpha-7x",
    api_key="your-api-key",
    private_key_b64="your-base64-private-key"
)

def on_trade_close(trade):
    """Called by your trading engine after each trade closes."""
    now = int(time.time())

    # Log individual trade metrics
    bot.log_trades([
        {"metric": "pnl_percent", "value": trade["pnl_pct"], "timestamp": now},
        {"metric": "trade_count", "value": 1, "timestamp": now},
        {"metric": "avg_hold_time_hours", "value": trade["hold_hours"], "timestamp": now},
    ])

def on_daily_close(portfolio):
    """Called at end of trading day."""
    # Submit aggregate snapshot
    bot.submit_snapshot({
        "pnl_percent": portfolio["cumulative_pnl_pct"],
        "win_rate": portfolio["win_rate"],
        "max_drawdown": portfolio["max_drawdown"],
        "sharpe_ratio": portfolio["sharpe_ratio"],
        "trade_count": portfolio["total_trades"],
        "avg_hold_time_hours": portfolio["avg_hold_hours"],
    })

def on_weekly_close():
    """Called at end of trading week."""
    # Build Merkle chain
    result = bot.build_chain()
    print(f"Weekly chain built: {result}")

Webhook Alerts on Reputation Changes

Poll your reputation periodically and alert when it changes significantly:

import time

last_score = None

def check_reputation_change(bot, threshold=2.0):
    """Alert if reputation score changes by more than threshold."""
    global last_score
    rep = bot.get_reputation()
    current_score = rep.get("score", 0)

    if last_score is not None:
        delta = current_score - last_score
        if abs(delta) >= threshold:
            send_alert(
                f"Reputation changed by {delta:+.1f}: "
                f"{last_score:.1f} -> {current_score:.1f}"
            )

    last_score = current_score
    return current_score

def send_alert(message):
    """Send alert via your preferred channel (Slack, Telegram, email)."""
    print(f"ALERT: {message}")
    # Integrate with your notification system here

Displaying Verified PnL in External UIs

When building a dashboard or landing page for your bot, pull verified data directly from GreenHelix rather than from your local database. This way, visitors can independently verify the data source:

def get_display_data(bot):
    """Fetch all data needed for a public-facing dashboard."""
    reputation = bot.get_reputation()
    claims = bot.get_verified_claims()
    averages = bot.get_averages()
    deltas = bot.get_deltas()

    return {
        "trust_score": reputation.get("score"),
        "verified_claims": claims,
        "rolling_7d": averages.get("7d", {}),
        "rolling_30d": averages.get("30d", {}),
        "rolling_90d": averages.get("90d", {}),
        "period_deltas": deltas,
    }

Connecting Reputation to Strategy Marketplace Listings

If you sell access to your strategy on a marketplace, your GreenHelix reputation becomes your most valuable marketing asset. The workflow:

  1. Register your bot and build a 30-day track record with daily metric submissions.
  2. Build claim chains weekly throughout this period.
  3. When listing your strategy, include your agent_id and latest chain root hash.
  4. Prospective buyers query your reputation and verified claims before purchasing.
  5. After purchase, buyers can continue monitoring your live metrics to verify ongoing performance.

This creates a flywheel: better verified performance attracts more subscribers, more subscribers increase revenue, which funds better infrastructure, which improves performance.


Chapter 8: Merkle Tree Deep-Dive

Chapter 4 introduced Merkle claim chains at a conceptual level. This chapter goes deeper into the internal mechanics -- how GreenHelix constructs trees, how proofs are generated and verified, and how you can independently recompute everything from raw data without trusting the platform.

Binary Tree Construction from Metric Submissions

When you call build_claim_chain, GreenHelix collects all metric submissions since the last chain build and arranges them as leaf nodes in a binary tree. Each leaf is the SHA-256 hash of the canonical JSON representation of a single metric submission (the same canonical form used for Ed25519 signing -- keys sorted alphabetically, no whitespace).

Consider four weekly submissions. The tree is built bottom-up:

Step 1: Hash each leaf
  L0 = SHA256('{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":2.1,...},"timestamp":1711900800}')
  L1 = SHA256('{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":3.4,...},"timestamp":1712505600}')
  L2 = SHA256('{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":-1.2,...},"timestamp":1713110400}')
  L3 = SHA256('{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":1.8,...},"timestamp":1713715200}')

Step 2: Hash pairs to form internal nodes
  N0 = SHA256(L0 + L1)     # concatenate the two 32-byte hashes, then hash
  N1 = SHA256(L2 + L3)

Step 3: Hash the internal nodes to form the root
  Root = SHA256(N0 + N1)

The concatenation order matters. GreenHelix follows the ordering convention used by Certificate Transparency (RFC 6962): the left child is always concatenated before the right child. When the leaf count is not a power of two, the leaf list is padded by duplicating the final leaf until the count reaches the next power of two -- the same convention used by the verification script later in this chapter (note that RFC 6962 itself handles odd counts differently, by splitting the tree at the largest power of two). For example, with five leaves, three copies of the fifth leaf are appended to bring the count to eight.

Proof Generation and Verification

A Merkle proof for leaf L2 in the four-leaf tree above consists of exactly two hashes: L3 (the sibling at the leaf level) and N0 (the sibling at the internal node level). The verifier reconstructs the root:

1. Compute N1 = SHA256(L2 + L3)    # L2 is left child, L3 is right child
2. Compute Root = SHA256(N0 + N1)   # N0 is left child, N1 is right child
3. Compare computed Root with the published root hash

If they match, L2 is proven to exist in the tree. Here is a concrete example with real SHA-256 values:

import hashlib
import json

def sha256(data):
    """SHA-256 hash of bytes, returned as hex string."""
    return hashlib.sha256(data).hexdigest()

def sha256_bytes(data):
    """SHA-256 hash of bytes, returned as bytes."""
    return hashlib.sha256(data).digest()

# Four canonical metric submissions
submissions = [
    '{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":2.1},"timestamp":1711900800}',
    '{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":3.4},"timestamp":1712505600}',
    '{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":-1.2},"timestamp":1713110400}',
    '{"agent_id":"trading-bot-alpha-7x","metrics":{"pnl_percent":1.8},"timestamp":1713715200}',
]

# Step 1: Compute leaf hashes
leaves = [sha256_bytes(s.encode("utf-8")) for s in submissions]
print("Leaf hashes:")
for i, leaf in enumerate(leaves):
    print(f"  L{i} = {leaf.hex()}")

# Step 2: Compute internal nodes
n0 = sha256_bytes(leaves[0] + leaves[1])
n1 = sha256_bytes(leaves[2] + leaves[3])
print(f"\nInternal nodes:")
print(f"  N0 = {n0.hex()}")
print(f"  N1 = {n1.hex()}")

# Step 3: Compute root
root = sha256_bytes(n0 + n1)
print(f"\nRoot = {root.hex()}")

# Verify proof for L2
# Proof: [L3 (right sibling), N0 (left sibling)]
proof = [
    {"hash": leaves[3], "position": "right"},
    {"hash": n0, "position": "left"},
]

# Reconstruct root from L2 and proof
current = leaves[2]
for step in proof:
    if step["position"] == "right":
        current = sha256_bytes(current + step["hash"])
    else:
        current = sha256_bytes(step["hash"] + current)

assert current == root, "Proof verification failed"
print(f"\nProof for L2 verified successfully. Computed root matches.")

Compact Proof Sizes

The proof for any leaf in a tree with N leaves contains exactly ceil(log2(N)) hashes. Each hash is 32 bytes (SHA-256). For practical trading bot scenarios:

Leaves (submissions)    Proof size (hashes)    Proof size (bytes)
52 (1 year, weekly)     6                      192
365 (1 year, daily)     9                      288
1,000                   10                     320
10,000                  14                     448
1,000,000               20                     640

Even a bot with a million metric submissions produces proofs under 1 KB. This is why Merkle proofs are practical for on-chain verification and compact attestation records -- the proof size grows logarithmically while the dataset grows linearly.

Tree Rebalancing When New Leaves Are Added

GreenHelix does not modify existing trees when new metrics arrive. Each call to build_claim_chain creates a new tree from all unprocessed submissions since the last build. The previous tree's root hash is stored as an immutable historical record. This means the chain of root hashes itself forms a chronological sequence:

Chain 1 (Week 1-4):   Root_A  ->  52 leaves
Chain 2 (Week 5-8):   Root_B  ->  48 leaves
Chain 3 (Week 9-12):  Root_C  ->  61 leaves

A verifier who recorded Root_A at week 4 can later verify that Root_B and Root_C are subsequent, non-overlapping additions. If a bot operator tried to retroactively insert a good week into the Chain 1 period, Root_A would change -- but the verifier already has the original Root_A on record.

For bots that need a single root covering their entire history, GreenHelix supports building a meta-tree where each leaf is a previous chain root:

# Build the current period's chain
current_chain = bot.build_chain()

# Retrieve all historical chains
all_chains = bot.get_chains()
roots = [c["root_hash"] for c in all_chains.get("chains", [])]
print(f"Historical roots: {roots}")
print(f"A meta-tree over {len(roots)} roots would require "
      f"{len(roots).bit_length()} proof hashes")

Comparison with Certificate Transparency (RFC 6962)

GreenHelix's Merkle tree implementation follows the same construction defined in RFC 6962, the standard behind Certificate Transparency (CT). The key parallels:

  • Leaf hash prefix: CT prepends a 0x00 byte before hashing leaf data to distinguish leaf hashes from internal node hashes (which are prefixed with 0x01). GreenHelix uses the same convention to prevent second-preimage attacks where an attacker constructs a leaf whose hash collides with an internal node hash. (The simplified examples in this chapter omit the prefixes for readability.)
  • Append-only log: CT logs are append-only and auditable by monitors. GreenHelix claim chains are append-only and auditable by any party with a root hash.
  • Signed Tree Head (STH): In CT, the log server signs the root hash along with a timestamp and tree size. In GreenHelix, the agent signs metric submissions, and the platform signs the chain metadata (root hash, leaf count, creation timestamp) upon build.
  • Consistency proofs: CT supports proofs that one tree is a prefix of another (the log only grew, nothing was removed). GreenHelix provides this through the chain sequence -- each chain covers a non-overlapping time range.

The primary difference is scope. CT logs cover TLS certificates issued by certificate authorities. GreenHelix logs cover trading bot performance attestations signed by bot operators. The cryptographic machinery is identical.

Independent Verification Script

The following standalone script recomputes a Merkle tree root from raw metric data fetched via the GreenHelix API. It requires no trust in the platform -- if the recomputed root matches the published root, the data is authentic:

import hashlib
import json
import math
import requests

def fetch_raw_submissions(base_url, headers, agent_id, start, end):
    """Fetch raw metric submissions for a time range."""
    response = requests.post(base_url, headers=headers, json={
        "tool": "query_metrics",
        "input": {
            "agent_id": agent_id,
            "metric": "pnl_percent",
            "start": start,
            "end": end
        }
    })
    response.raise_for_status()
    return response.json().get("data_points", [])

def canonical_json(submission):
    """Produce canonical JSON for hashing (sorted keys, no whitespace)."""
    return json.dumps(submission, sort_keys=True, separators=(",", ":"))

def build_merkle_root(leaves):
    """Build a Merkle root from a list of leaf hashes (bytes)."""
    if not leaves:
        return hashlib.sha256(b"").digest()

    # Work on a copy so the caller's list is not mutated,
    # then pad to a power of 2 by duplicating the last leaf
    leaves = list(leaves)
    n = len(leaves)
    next_pow2 = (1 << math.ceil(math.log2(n))) if n > 1 else 1
    while len(leaves) < next_pow2:
        leaves.append(leaves[-1])

    # Build tree bottom-up
    level = leaves
    while len(level) > 1:
        next_level = []
        for i in range(0, len(level), 2):
            combined = level[i] + level[i + 1]
            next_level.append(hashlib.sha256(combined).digest())
        level = next_level

    return level[0]

def verify_chain(base_url, headers, agent_id, chain):
    """Independently verify a claim chain by recomputing from raw data."""
    start = chain["period_start"]
    end = chain["period_end"]
    published_root = chain["root_hash"]

    # Fetch raw submissions
    submissions = fetch_raw_submissions(base_url, headers, agent_id, start, end)

    # Hash each submission into a leaf
    leaves = []
    for sub in submissions:
        canonical = canonical_json(sub)
        leaf_hash = hashlib.sha256(canonical.encode("utf-8")).digest()
        leaves.append(leaf_hash)

    # Build Merkle root
    computed_root = build_merkle_root(leaves).hex()

    if computed_root == published_root:
        print(f"VERIFIED: Chain root {published_root[:16]}... matches "
              f"({len(submissions)} submissions)")
        return True
    else:
        print(f"MISMATCH: Published {published_root[:16]}... vs "
              f"Computed {computed_root[:16]}...")
        return False

# Usage
BASE_URL = "https://api.greenhelix.net/v1"
headers = {
    "Authorization": "Bearer your-api-key",
    "Content-Type": "application/json"
}

# Fetch chains and verify each one
response = requests.post(BASE_URL, headers=headers, json={
    "tool": "get_claim_chains",
    "input": {"agent_id": "trading-bot-alpha-7x"}
})

for chain in response.json().get("chains", []):
    verify_chain(BASE_URL, headers, "trading-bot-alpha-7x", chain)

This script is the ultimate trust-minimization tool. A prospective subscriber runs it against your published chains and either the roots match or they do not. No API key sharing, no access to private keys, no reliance on GreenHelix's integrity. The math either checks out or it does not.


Chapter 9: Cross-Exchange Aggregation

A trading bot that operates on a single exchange has a simple reputation story. But most serious operations run across multiple venues -- Binance for spot, Coinbase for fiat on-ramps, Kraken for margin, Bybit for perpetuals. Each exchange has different APIs, different timestamp formats, different fee structures, and different performance reporting. This chapter shows how to aggregate fragmented exchange data into a unified, verifiable reputation on GreenHelix.

The Multi-Exchange Problem

A bot running the same momentum strategy across three exchanges might show +4.2% on Binance, -1.1% on Kraken, and +2.8% on Coinbase for the same period. A subscriber looking at any single exchange gets a misleading picture. The aggregate PnL is +5.9%, but that number exists nowhere in any exchange's reporting. Worse, a dishonest operator can cherry-pick the best-performing exchange and present it as their complete track record.

GreenHelix solves this by accepting metric submissions tagged with source metadata. You submit per-exchange data points via ingest_metrics and aggregate snapshots via submit_metrics. The Merkle chain covers everything, so selective omission is detectable -- a verifier who knows you trade on three exchanges can check that the chain contains data from all three.
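A simple coverage check follows from those source tags. Assuming the data points you fetch back retain the source metadata attached by the normalizers below, a verifier (or the operator) can flag missing venues:

def check_exchange_coverage(data_points, expected_sources):
    """Flag selective omission: every declared venue should appear."""
    seen = {p["source"] for p in data_points if "source" in p}
    missing = set(expected_sources) - seen
    if missing:
        print(f"WARNING: no data from: {', '.join(sorted(missing))}")
    return not missing

# all_points is a list of normalized points as produced later in this chapter
check_exchange_coverage(all_points, {"binance", "coinbase", "kraken"})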

Normalizing Metrics Across Exchanges

Each exchange reports data differently. Here are the common discrepancies and how to normalize them:

import time
from datetime import datetime, timezone

class ExchangeNormalizer:
    """Normalize trade data from different exchanges into GreenHelix format."""

    @staticmethod
    def normalize_binance(trade):
        """Binance uses millisecond timestamps and string quantities."""
        return {
            "metric": "pnl_percent",
            "value": float(trade["realizedPnl"]) / float(trade["cost"]) * 100,
            "timestamp": int(trade["time"]) // 1000,  # ms -> seconds
            "source": "binance",
        }

    @staticmethod
    def normalize_coinbase(trade):
        """Coinbase uses ISO 8601 timestamps and nested fee structures."""
        dt = datetime.fromisoformat(trade["created_at"].replace("Z", "+00:00"))
        ts = int(dt.timestamp())
        proceeds = float(trade["total_value_after_fees"])
        cost = float(trade["total_value_before_fees"])
        pnl_pct = ((proceeds - cost) / cost) * 100 if cost > 0 else 0.0
        return {
            "metric": "pnl_percent",
            "value": round(pnl_pct, 4),
            "timestamp": ts,
            "source": "coinbase",
        }

    @staticmethod
    def normalize_kraken(trade):
        """Kraken uses float timestamps and separate fee fields."""
        pnl = float(trade["net"]) - float(trade["cost"]) - float(trade["fee"])
        cost = float(trade["cost"])
        pnl_pct = (pnl / cost) * 100 if cost > 0 else 0.0
        return {
            "metric": "pnl_percent",
            "value": round(pnl_pct, 4),
            "timestamp": int(float(trade["time"])),
            "source": "kraken",
        }

    @staticmethod
    def normalize_bybit(trade):
        """Bybit uses millisecond string timestamps and basis point fees."""
        ts = int(trade["createdTime"]) // 1000
        pnl_pct = float(trade["closedPnl"]) / float(trade["cumEntryValue"]) * 100
        return {
            "metric": "pnl_percent",
            "value": round(pnl_pct, 4),
            "timestamp": ts,
            "source": "bybit",
        }

Handling Timestamp Formats and Precisions

Timestamp inconsistency is the most common source of aggregation bugs. The four major formats you will encounter:

Exchange    Format                       Example                   Precision
Binance     Unix milliseconds (int)      1712505600000             ms
Coinbase    ISO 8601 string              "2026-04-07T12:00:00Z"    seconds
Kraken      Unix seconds (float)         1712505600.1234           sub-second
Bybit       Unix milliseconds (string)   "1712505600000"           ms

GreenHelix expects Unix seconds as integers. The normalizer functions above handle each conversion, but here is a defensive wrapper that handles any format:

from datetime import datetime, timezone

def normalize_timestamp(ts):
    """Convert any common timestamp format to Unix seconds (int)."""
    if isinstance(ts, str):
        # Try ISO 8601 first
        try:
            dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
            return int(dt.timestamp())
        except ValueError:
            # Fall back to numeric string
            ts = float(ts)

    if isinstance(ts, float):
        # If > 1e12, it is milliseconds
        if ts > 1e12:
            return int(ts / 1000)
        return int(ts)

    if isinstance(ts, int):
        if ts > 1e12:
            return ts // 1000
        return ts

    raise ValueError(f"Unrecognized timestamp format: {ts}")
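Quick sanity checks against the formats in the table above (1712505600 is 2024-04-07T16:00:00Z):

assert normalize_timestamp(1712505600000) == 1712505600           # Binance ms int
assert normalize_timestamp("2024-04-07T16:00:00Z") == 1712505600  # ISO 8601 string
assert normalize_timestamp(1712505600.1234) == 1712505600         # Kraken float
assert normalize_timestamp("1712505600000") == 1712505600         # Bybit ms string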

Submitting Multi-Exchange Data to GreenHelix

With normalized data, submit per-exchange trade records and a combined aggregate snapshot:

import time

bot = BotReputation(
    agent_id="trading-bot-alpha-7x",
    api_key="your-api-key",
    private_key_b64="your-base64-private-key"
)

normalizer = ExchangeNormalizer()

def ingest_from_all_exchanges(binance_trades, coinbase_trades, kraken_trades):
    """Normalize and submit trades from all exchanges."""
    all_points = []

    for trade in binance_trades:
        point = normalizer.normalize_binance(trade)
        all_points.append(point)

    for trade in coinbase_trades:
        point = normalizer.normalize_coinbase(trade)
        all_points.append(point)

    for trade in kraken_trades:
        point = normalizer.normalize_kraken(trade)
        all_points.append(point)

    # Sort by timestamp for consistent ordering
    all_points.sort(key=lambda p: p["timestamp"])

    # Submit all normalized data points
    bot.log_trades(all_points)
    print(f"Ingested {len(all_points)} trades from 3 exchanges")

    return all_points

Unified Reputation from Fragmented Sources

After ingesting per-exchange data, compute aggregate metrics across all venues and submit a combined snapshot:

def compute_aggregate_snapshot(all_points, portfolio_value):
    """Compute aggregate metrics from multi-exchange normalized data."""
    pnl_values = [p["value"] for p in all_points if p["metric"] == "pnl_percent"]

    if not pnl_values:
        return None

    winning = [v for v in pnl_values if v > 0]
    win_rate = (len(winning) / len(pnl_values)) * 100

    cumulative_pnl = sum(pnl_values)

    # Compute max drawdown from cumulative series
    cumulative = []
    running = 0
    peak = 0
    max_dd = 0
    for v in pnl_values:
        running += v
        cumulative.append(running)
        if running > peak:
            peak = running
        dd = peak - running
        if dd > max_dd:
            max_dd = dd

    # Simple Sharpe approximation (annualized)
    import statistics
    if len(pnl_values) > 1:
        mean_ret = statistics.mean(pnl_values)
        std_ret = statistics.stdev(pnl_values)
        sharpe = (mean_ret / std_ret) * (252 ** 0.5) if std_ret > 0 else 0
    else:
        sharpe = 0

    snapshot = {
        "pnl_percent": round(cumulative_pnl, 4),
        "win_rate": round(win_rate, 2),
        "max_drawdown": round(max_dd, 4),
        "sharpe_ratio": round(sharpe, 2),
        "trade_count": len(pnl_values),
        "avg_hold_time_hours": 4.2,  # Compute from your trade records
    }

    bot.submit_snapshot(snapshot)
    print(f"Aggregate snapshot submitted: {snapshot}")
    return snapshot

Exchange-Specific Adapter Pattern

For production systems, wrap each exchange's API in an adapter that produces a consistent interface. This keeps exchange-specific logic isolated and makes adding new venues straightforward:

class ExchangeAdapter:
    """Base class for exchange-specific trade fetching."""

    def __init__(self, api_key, api_secret):
        self.api_key = api_key
        self.api_secret = api_secret

    def fetch_closed_trades(self, since_timestamp):
        """Fetch closed trades since a given timestamp. Override in subclass."""
        raise NotImplementedError

    def normalize(self, raw_trade):
        """Normalize a raw trade to GreenHelix format. Override in subclass."""
        raise NotImplementedError

    def get_normalized_trades(self, since_timestamp):
        """Fetch and normalize all trades since timestamp."""
        raw_trades = self.fetch_closed_trades(since_timestamp)
        return [self.normalize(t) for t in raw_trades]


class BinanceAdapter(ExchangeAdapter):
    """Adapter for Binance spot and futures trades."""

    def fetch_closed_trades(self, since_timestamp):
        # Binance API call here
        # GET /fapi/v1/userTrades?startTime={since_timestamp * 1000}
        pass

    def normalize(self, raw_trade):
        return ExchangeNormalizer.normalize_binance(raw_trade)


class CoinbaseAdapter(ExchangeAdapter):
    """Adapter for Coinbase Advanced Trade API."""

    def fetch_closed_trades(self, since_timestamp):
        # Coinbase API call here
        # GET /api/v3/brokerage/orders/historical/fills
        pass

    def normalize(self, raw_trade):
        return ExchangeNormalizer.normalize_coinbase(raw_trade)


class KrakenAdapter(ExchangeAdapter):
    """Adapter for Kraken REST API."""

    def fetch_closed_trades(self, since_timestamp):
        # Kraken API call here
        # POST /0/private/TradesHistory
        pass

    def normalize(self, raw_trade):
        return ExchangeNormalizer.normalize_kraken(raw_trade)


# Usage: aggregate from all exchanges
def aggregate_all_exchanges(adapters, since_timestamp, bot):
    """Fetch, normalize, and submit trades from all exchange adapters."""
    all_points = []
    for adapter in adapters:
        trades = adapter.get_normalized_trades(since_timestamp)
        all_points.extend(trades)

    all_points.sort(key=lambda p: p["timestamp"])
    bot.log_trades(all_points)
    return all_points

import time

adapters = [
    BinanceAdapter("binance-key", "binance-secret"),
    CoinbaseAdapter("coinbase-key", "coinbase-secret"),
    KrakenAdapter("kraken-key", "kraken-secret"),
]

since = int(time.time()) - 86400  # Last 24 hours
points = aggregate_all_exchanges(adapters, since, bot)
print(f"Aggregated {len(points)} trades across {len(adapters)} exchanges")

The adapter pattern also makes testing straightforward -- mock each adapter's fetch_closed_trades method and verify that normalization and submission work correctly without hitting live exchange APIs.
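
A minimal sketch of that testing approach using unittest.mock from the standard library. The fake data points follow the normalized shape used throughout this guide (metric, value, timestamp); nothing here touches a live exchange:

from unittest.mock import MagicMock

def test_aggregation_pipeline():
    # Fake adapter returning pre-normalized points -- no live API calls
    adapter = MagicMock()
    adapter.get_normalized_trades.return_value = [
        {"metric": "pnl_percent", "value": 0.8, "timestamp": 1700000100},
        {"metric": "pnl_percent", "value": -0.3, "timestamp": 1700000000},
    ]

    fake_bot = MagicMock()
    points = aggregate_all_exchanges([adapter], 1699999999, fake_bot)

    # Aggregation sorts by timestamp and submits exactly once
    assert [p["timestamp"] for p in points] == [1700000000, 1700000100]
    fake_bot.log_trades.assert_called_once_with(points)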


Chapter 10: Leaderboard Gaming Prevention

A reputation system is only valuable if it resists manipulation. The GreenHelix leaderboard has specific defenses against gaming, but understanding the attack vectors helps you build a more resilient bot profile and recognize suspicious competitors.

Common Gaming Vectors

There are four primary ways actors attempt to game trading bot leaderboards:

Wash trading. A bot trades against itself (or a colluding counterparty) to inflate trade count and fabricate a perfect win rate. The trades are real in the sense that they execute on an exchange, but they carry no economic risk -- the operator controls both sides. The resulting metrics show high volume and high win rates with negligible PnL contribution.

Selective reporting. An operator runs five bots with different strategies and only submits metrics from the one that happened to perform best in a given period. The other four are never registered or are registered but never submit data. The survivor's track record looks exceptional because the failures are invisible.

Sybil attacks. An operator creates dozens of agent identities, each running slightly different parameter sets of the same strategy. By chance alone, some will post strong short-term performance. Those identities are promoted while the rest are quietly abandoned. The only cost is the registration and API fees for the failed identities.

Front-loading. An operator submits a burst of exceptional metrics in the first few days after registration, exploiting leaderboard algorithms that do not adequately penalize short histories. The bot climbs the rankings before there is enough data to assess reliability, attracting followers who then suffer when performance reverts to the mean.

How GreenHelix Filters Test and Synthetic Agents

The first line of defense is namespace filtering. The GreenHelix leaderboard automatically excludes any agent whose ID matches the pattern test-*, perf-*, audit-*, or stress-*. This prevents test harnesses, performance benchmarks, and automated security audits from polluting the rankings.

import re

# This is the pattern GreenHelix applies internally
_LEADERBOARD_EXCLUDE_PATTERN = re.compile(
    r"^(test|perf|audit|stress)-", re.IGNORECASE
)

def is_excluded(agent_id):
    """Check if an agent ID would be filtered from the leaderboard."""
    return bool(_LEADERBOARD_EXCLUDE_PATTERN.match(agent_id))

# Examples
print(is_excluded("test-my-strategy"))    # True -- filtered out
print(is_excluded("perf-benchmark-01"))   # True -- filtered out
print(is_excluded("audit-security-run"))  # True -- filtered out
print(is_excluded("stress-load-test"))    # True -- filtered out
print(is_excluded("trading-bot-alpha-7x"))  # False -- appears on leaderboard

Choose your agent ID carefully. Do not prefix it with any of these reserved prefixes unless you intentionally want to keep it off the leaderboard (which is useful for staging and testing environments).

Statistical Anomaly Detection

Beyond namespace filtering, GreenHelix monitors for statistically improbable metric patterns. The following signals raise flags:

Implausibly high win rates with low trade counts. A 95% win rate over 20 trades is not meaningful -- it is within the range of random chance for many strategies. The system weights win rate by trade count, requiring at least 100 closed trades before win rate significantly impacts the reputation score.

Zero-variance PnL. If every trade shows exactly the same profit percentage (e.g., exactly +0.50% on every trade for 30 days), it suggests fabricated data or wash trading. Real markets produce variable returns.

Submission timing anomalies. All metric submissions arriving within a few seconds for "trades" that supposedly occurred over weeks or months. The timestamp in the data says one thing, but the submission time says another.

You can monitor your own metrics for anomalies before they trigger platform flags:

import statistics
import time

def check_for_anomalies(bot, lookback_days=30):
    """Check your own metrics for patterns that might trigger anomaly flags."""
    now = int(time.time())
    start = now - (lookback_days * 86400)

    data = bot.query("pnl_percent", start, now)
    points = data.get("data_points", [])

    if len(points) < 10:
        print(f"Only {len(points)} data points -- insufficient for analysis")
        return

    values = [p["value"] for p in points]
    timestamps = [p["timestamp"] for p in points]

    # Check 1: Variance too low (possible wash trading)
    stdev = statistics.stdev(values)
    if stdev < 0.01:
        print(f"WARNING: PnL standard deviation is {stdev:.4f} -- "
              f"suspiciously low variance")

    # Check 2: Win rate too high with low sample
    wins = sum(1 for v in values if v > 0)
    win_rate = wins / len(values) * 100
    if win_rate > 90 and len(values) < 100:
        print(f"WARNING: {win_rate:.1f}% win rate over only {len(values)} "
              f"trades -- may not be credible")

    # Check 3: Timestamp clustering (all submitted at once)
    if len(timestamps) > 1:
        gaps = [timestamps[i+1] - timestamps[i] for i in range(len(timestamps)-1)]
        min_gap = min(gaps)
        if min_gap < 1 and len(gaps) > 10:
            print(f"WARNING: Minimum timestamp gap is {min_gap}s -- "
                  f"possible batch fabrication")

    # Check 4: Suspiciously round numbers
    round_count = sum(1 for v in values if v == round(v, 1))
    round_pct = round_count / len(values) * 100
    if round_pct > 80:
        print(f"WARNING: {round_pct:.0f}% of values are round numbers -- "
              f"possible manual fabrication")

    print(f"Analysis complete: {len(values)} points, stdev={stdev:.4f}, "
          f"win_rate={win_rate:.1f}%")

check_for_anomalies(bot)

Minimum Trade Count Thresholds

GreenHelix applies a minimum trade count before certain metrics impact the reputation score. A bot with 5 trades and a 100% win rate does not outrank a bot with 2,000 trades and a 58% win rate. The thresholds are:

  • Win rate: requires at least 100 trades to carry full weight in scoring
  • Sharpe ratio: requires at least 30 daily return observations (roughly six weeks of trading-day submissions)
  • Max drawdown: requires at least 60 days of continuous data to be meaningful

Below these thresholds, the metric is still recorded and included in the claim chain, but its contribution to the composite reputation score is attenuated proportionally. A bot with 50 trades gets 50% of the win rate weight. A bot with 10 trades gets 10%.

def effective_weight(metric, trade_count, days_active):
    """Calculate the effective weight of a metric in reputation scoring."""
    thresholds = {
        "win_rate": {"field": "trade_count", "min": 100},
        "sharpe_ratio": {"field": "days_active", "min": 30},
        "max_drawdown": {"field": "days_active", "min": 60},
    }

    if metric not in thresholds:
        return 1.0  # Full weight for metrics without thresholds

    config = thresholds[metric]
    actual = trade_count if config["field"] == "trade_count" else days_active
    return min(actual / config["min"], 1.0)

# Examples
print(f"Win rate weight at 50 trades: {effective_weight('win_rate', 50, 90):.0%}")
# 50%
print(f"Win rate weight at 200 trades: {effective_weight('win_rate', 200, 90):.0%}")
# 100%
print(f"Sharpe weight at 15 days: {effective_weight('sharpe_ratio', 500, 15):.0%}")
# 50%

Time-Weighted Scoring

To prevent front-loading, GreenHelix applies time-weighted scoring that values recent consistent performance over early bursts. The weighting uses an exponential decay model where older submissions carry less weight than recent ones, but the decay is gradual enough that a strong 12-month history still dominates a strong 1-month history.

The practical effect: a bot that posts +10% in its first week and then -2% in each of the next four weeks (net +2%) will score significantly lower than a bot that posts a steady +0.5% per week over the same five weeks (net +2.5%). The cumulative PnL is nearly identical, but the consistent bot demonstrates sustainability -- exactly the scenario in the example below.

import math
import time

def time_weighted_score(data_points, half_life_days=90):
    """Apply exponential time weighting to metric data points.

    More recent data points carry more weight. The half_life_days parameter
    controls how quickly older data loses influence -- at 90 days, a data
    point from 90 days ago has half the weight of today's data.
    """
    now = int(time.time())
    weighted_sum = 0.0
    weight_total = 0.0

    for point in data_points:
        age_days = (now - point["timestamp"]) / 86400
        weight = math.exp(-math.log(2) * age_days / half_life_days)
        weighted_sum += point["value"] * weight
        weight_total += weight

    if weight_total == 0:
        return 0.0
    return weighted_sum / weight_total

# A bot that front-loaded good performance
front_loaded = [
    {"value": 10.0, "timestamp": int(time.time()) - 86400 * 80},
    {"value": -2.0, "timestamp": int(time.time()) - 86400 * 70},
    {"value": -2.0, "timestamp": int(time.time()) - 86400 * 60},
    {"value": -2.0, "timestamp": int(time.time()) - 86400 * 50},
    {"value": -2.0, "timestamp": int(time.time()) - 86400 * 40},
]

# A bot with consistent performance
consistent = [
    {"value": 0.5, "timestamp": int(time.time()) - 86400 * 80},
    {"value": 0.5, "timestamp": int(time.time()) - 86400 * 70},
    {"value": 0.5, "timestamp": int(time.time()) - 86400 * 60},
    {"value": 0.5, "timestamp": int(time.time()) - 86400 * 50},
    {"value": 0.5, "timestamp": int(time.time()) - 86400 * 40},
]

print(f"Front-loaded score: {time_weighted_score(front_loaded):.2f}")
print(f"Consistent score:   {time_weighted_score(consistent):.2f}")
# Consistent bot scores far higher (~0.50 vs ~0.05 weighted) on nearly identical total PnL

Protecting Your Own Rankings

As a legitimate bot operator, the best defense against gaming by competitors is a deep, consistent history. The longer and more regular your submission cadence, the harder it is for a gaming competitor to overtake you. Specific recommendations:

  1. Submit daily snapshots without exception. Even on days with zero trades, submit a snapshot showing unchanged metrics; this proves continuous operation (see the sketch after this list).
  2. Build weekly claim chains. A chain built every week for 6 months represents 26 tamper-evident checkpoints. An attacker would need to fabricate 26 consistent, plausible chains to match your depth.
  3. Use a stable agent ID. Never rotate your identity to reset a bad period. The recovery after a drawdown, when verifiable, is more valuable to subscribers than a fresh identity with a short perfect record.
  4. Monitor the leaderboard. Watch for new agents that appear with suspiciously high scores and shallow histories. Report anomalies through the GreenHelix support channel.
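
A minimal sketch of recommendation 1 -- re-submitting unchanged metrics on a zero-trade day. The last_snapshot dict is an assumption here: whatever your bot persisted after its most recent active day:

def submit_quiet_day_snapshot(bot, last_snapshot):
    """Re-submit the prior day's aggregate metrics on a day with no trades.

    Proves continuous operation without fabricating activity.
    """
    snapshot = dict(last_snapshot)  # carry every metric forward unchanged
    bot.submit_snapshot(snapshot)
    return snapshot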

Chapter 11: EU AI Act Compliance Angle

The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024, with a phased compliance timeline extending through August 2027. Autonomous trading bots fall under its scope as AI systems used in financial services. While the Act does not ban trading bots, it imposes transparency, oversight, and documentation requirements that operators must meet. GreenHelix's cryptographic reputation infrastructure directly satisfies several of these obligations.

Article 14: Human Oversight Requirements

Article 14 of the EU AI Act requires that high-risk AI systems be designed to allow effective human oversight. For trading bots, this means human operators must be able to:

  • Understand the system's capabilities and limitations
  • Monitor the system's operation in real time
  • Intervene or override the system's decisions when necessary
  • Interpret the system's outputs

GreenHelix's metric submission pipeline satisfies the monitoring requirement. When a bot submits trade-level data via ingest_metrics and aggregate snapshots via submit_metrics, it creates a real-time, auditable record of every decision the bot makes. A human overseer can query this data at any time:

import json
import time

bot = BotReputation(
    agent_id="trading-bot-alpha-7x",
    api_key="your-api-key",
    private_key_b64="your-base64-private-key"
)

def human_oversight_dashboard(bot, lookback_hours=24):
    """Generate a human oversight report for EU AI Act compliance."""
    now = int(time.time())
    start = now - (lookback_hours * 3600)

    # Fetch recent trade activity
    pnl_data = bot.query("pnl_percent", start, now)
    trades = pnl_data.get("data_points", [])

    # Fetch current reputation (system health indicator)
    reputation = bot.get_reputation()

    # Fetch metric deltas (trend detection)
    deltas = bot.get_deltas()

    report = {
        "report_timestamp": now,
        "lookback_hours": lookback_hours,
        "trades_executed": len(trades),
        "net_pnl_percent": sum(t["value"] for t in trades),
        "largest_single_loss": min((t["value"] for t in trades), default=0),
        "largest_single_gain": max((t["value"] for t in trades), default=0),
        "trust_score": reputation.get("score"),
        "period_deltas": deltas,
        "human_action_required": False,
    }

    # Flag conditions requiring human intervention; collect every alert
    # rather than letting a later one overwrite an earlier one
    alerts = []
    if report["largest_single_loss"] < -5.0:
        alerts.append("Single trade loss exceeds 5% -- review required")

    if report["net_pnl_percent"] < -10.0:
        alerts.append("Daily net PnL below -10% -- intervention recommended")

    if alerts:
        report["human_action_required"] = True
        report["alerts"] = alerts

    return report

report = human_oversight_dashboard(bot)
print(json.dumps(report, indent=2))

This report can be generated on a schedule (hourly, daily) and archived as evidence that human oversight was maintained. The key compliance point: the data is cryptographically signed by the bot and independently verifiable, so it cannot be fabricated after the fact to falsely demonstrate oversight.
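
One possible scheduling sketch -- a loop that writes each report to a timestamped JSON file. In production you would more likely trigger a single run from cron or a task scheduler; the output directory name is illustrative:

import json
import os
import time

def archive_oversight_reports(bot, interval_hours=24, output_dir="oversight_reports"):
    """Generate and archive oversight reports on a fixed schedule."""
    os.makedirs(output_dir, exist_ok=True)
    while True:
        report = human_oversight_dashboard(bot, lookback_hours=interval_hours)
        path = os.path.join(output_dir, f"oversight_{report['report_timestamp']}.json")
        with open(path, "w") as f:
            json.dump(report, f, indent=2)  # archived as oversight evidence
        if report["human_action_required"]:
            print(f"ALERT: {report['alerts']} -- see {path}")
        time.sleep(interval_hours * 3600)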

Cryptographic Reputation as Transparency Evidence

Article 13 of the EU AI Act requires transparency -- users and affected parties must be able to understand how the AI system operates and what its outputs mean. For trading bots, this translates to:

  • Disclosing the bot's historical performance in a verifiable way
  • Providing prospective followers with accurate, tamper-proof track records
  • Making the bot's risk profile (drawdown, volatility) discoverable

GreenHelix's verified claims and Merkle claim chains satisfy this directly. A claim chain root published alongside a bot listing serves as cryptographic proof that the disclosed performance data has not been altered. A prospective user can verify the claims independently (see Chapter 8) without trusting either the bot operator or the platform.

def generate_transparency_disclosure(bot):
    """Generate an EU AI Act Article 13 transparency disclosure."""
    reputation = bot.get_reputation()
    claims = bot.get_verified_claims()
    chains = bot.get_chains()
    averages = bot.get_averages()

    disclosure = {
        "system_type": "Autonomous Trading Bot",
        "ai_act_classification": "Limited Risk / High Risk (if managing third-party funds)",
        "operator": "Your Company Name",
        "agent_id": bot.agent_id,
        "verification_method": "Ed25519 signatures + SHA-256 Merkle claim chains",
        "performance_summary": {
            "verified_pnl_percent": claims.get("pnl_percent"),
            "verified_win_rate": claims.get("win_rate"),
            "verified_max_drawdown": claims.get("max_drawdown"),
            "verified_sharpe_ratio": claims.get("sharpe_ratio"),
            "verified_trade_count": claims.get("trade_count"),
        },
        "trust_score": reputation.get("score"),
        "claim_chain_count": len(chains.get("chains", [])),
        "latest_chain_root": chains.get("chains", [{}])[-1].get("root_hash")
            if chains.get("chains") else None,
        "rolling_averages": {
            "7_day": averages.get("7d", {}),
            "30_day": averages.get("30d", {}),
            "90_day": averages.get("90d", {}),
        },
        "verification_instructions": (
            "To independently verify these claims, use the agent_id and "
            "chain root hash with the GreenHelix API. See Chapter 8 of the "
            "Verified Trading Bot Reputation guide for a standalone "
            "verification script."
        ),
    }

    return disclosure

disclosure = generate_transparency_disclosure(bot)
print(json.dumps(disclosure, indent=2))

Audit Trail Integration

Articles 12 and 19 of the EU AI Act require automatic logging of events during the AI system's operation, and that these logs be retained for an appropriate period. GreenHelix's metric ingestion and claim chain infrastructure provides a pre-built audit trail that satisfies these requirements.

The audit trail has three layers:

  1. Individual trade records -- each submitted via ingest_metrics with a signed payload and timestamp. These are the raw data points.
  2. Aggregate snapshots -- daily or periodic summaries submitted via submit_metrics. These provide a higher-level view of system behavior.
  3. Claim chains -- weekly Merkle tree builds that cryptographically commit to the data submitted during that period. These provide tamper-evident seals over the audit trail.

def build_audit_trail(bot, start_date, end_date):
    """Build a comprehensive audit trail for regulatory review."""
    trail = {
        "agent_id": bot.agent_id,
        "period_start": start_date,
        "period_end": end_date,
        "generated_at": int(time.time()),
    }

    # Layer 1: Individual trade records
    pnl_data = bot.query("pnl_percent", start_date, end_date)
    trail["individual_trades"] = pnl_data.get("data_points", [])
    trail["trade_count"] = len(trail["individual_trades"])

    # Layer 2: Aggregate snapshots (via verified claims)
    claims = bot.get_verified_claims()
    trail["aggregate_claims"] = claims

    # Layer 3: Claim chains covering this period
    chains = bot.get_chains()
    relevant_chains = [
        c for c in chains.get("chains", [])
        if c.get("period_start", 0) >= start_date
        and c.get("period_end", 0) <= end_date
    ]
    trail["claim_chains"] = relevant_chains
    trail["chain_count"] = len(relevant_chains)

    # Compute audit completeness score
    expected_days = (end_date - start_date) / 86400
    actual_data_days = len(set(
        t["timestamp"] // 86400
        for t in trail["individual_trades"]
    ))
    trail["data_coverage_percent"] = round(
        (actual_data_days / expected_days) * 100, 1
    ) if expected_days > 0 else 0

    return trail

# Generate audit trail for the last quarter
now = int(time.time())
quarter_ago = now - (90 * 86400)
audit = build_audit_trail(bot, quarter_ago, now)
print(f"Audit trail: {audit['trade_count']} trades, "
      f"{audit['chain_count']} chains, "
      f"{audit['data_coverage_percent']}% coverage")

Annex III: High-Risk AI System Documentation

Annex III of the EU AI Act lists AI systems used in credit scoring, insurance pricing, and certain financial services as potentially high-risk. If your trading bot manages funds on behalf of third parties (e.g., a copy trading vault or a managed account), it likely falls under this classification.

High-risk systems require technical documentation (Annex IV) that includes:

  • A general description of the AI system
  • A detailed description of the development process including design choices
  • Information about the data used for training and testing
  • Metrics used to measure accuracy, robustness, and compliance
  • A description of the risk management system

GreenHelix reputation data addresses the metrics and risk management requirements directly. The verified metrics (PnL, win rate, max drawdown, Sharpe ratio) serve as the accuracy and robustness measurements. The claim chains serve as the evidence that these measurements are authentic.

def generate_annex_iv_metrics_section(bot):
    """Generate the metrics section of Annex IV technical documentation."""
    reputation = bot.get_reputation()
    claims = bot.get_verified_claims()
    averages = bot.get_averages()
    chains = bot.get_chains()

    return {
        "section": "Annex IV - Section 2(f): Metrics",
        "accuracy_metrics": {
            "cumulative_pnl_percent": claims.get("pnl_percent"),
            "win_rate": claims.get("win_rate"),
            "sharpe_ratio": claims.get("sharpe_ratio"),
            "measurement_method": "Cryptographically signed metric submissions "
                                  "verified via Ed25519 signatures",
        },
        "robustness_metrics": {
            "max_drawdown": claims.get("max_drawdown"),
            "drawdown_recovery_time_days": None,  # Compute from your data
            "worst_7d_pnl": averages.get("7d", {}).get("min_pnl_percent"),
            "worst_30d_pnl": averages.get("30d", {}).get("min_pnl_percent"),
        },
        "data_integrity": {
            "verification_method": "SHA-256 Merkle claim chains (RFC 6962 compatible)",
            "total_claim_chains": len(chains.get("chains", [])),
            "latest_chain_root": chains.get("chains", [{}])[-1].get("root_hash")
                if chains.get("chains") else None,
            "composite_trust_score": reputation.get("score"),
        },
        "data_retention": {
            "trade_level_records": "Retained indefinitely on GreenHelix",
            "claim_chains": "Immutable, retained indefinitely",
            "note": "All records are independently verifiable using only "
                    "the agent's public key and chain root hashes",
        },
    }

annex_iv = generate_annex_iv_metrics_section(bot)
print(json.dumps(annex_iv, indent=2))

Compliance Checklist for Trading Bot Operators

Here is a practical checklist for trading bot operators seeking EU AI Act compliance, with the GreenHelix tools that address each requirement:

Requirement | EU AI Act Article | GreenHelix Tool | Status Check
Unique system identity | Art. 16(a) | register_agent | Agent registered with Ed25519 key
Performance monitoring | Art. 14 | submit_metrics, ingest_metrics | Daily snapshots + trade-level data
Tamper-evident audit log | Art. 12 | build_claim_chain | Weekly chain builds
Transparency disclosure | Art. 13 | get_verified_claims, get_agent_reputation | Public performance data
Human oversight capability | Art. 14 | query_metrics, get_metric_deltas | Real-time query access
Risk metrics documentation | Annex IV | get_metric_averages, get_agent_reputation | Rolling averages + score
Independent verifiability | Art. 13, 14 | get_claim_chains | Merkle proof verification
Incident reporting data | Art. 62 | query_metrics (time-range queries) | Historical data retrieval
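
A sketch that turns this checklist into an automated self-check, using the client methods already shown in this chapter (the check names are illustrative, not a formal compliance standard):

def compliance_self_check(bot):
    """Spot-check each checklist row using data the API already exposes."""
    claims = bot.get_verified_claims()
    chains = bot.get_chains().get("chains", [])
    reputation = bot.get_reputation()

    checks = {
        "identity_registered": bot.agent_id is not None,
        "performance_data_present": bool(claims),
        "audit_log_chains_built": len(chains) > 0,
        "trust_score_available": reputation.get("score") is not None,
    }
    for name, ok in checks.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")
    return all(checks.values())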

Using Claim Chains as Compliance Evidence

When a regulator or auditor requests evidence of compliance, the claim chain root hashes serve as cryptographic anchors. The auditor's workflow:

  1. Obtain the list of claim chain roots from get_claim_chains
  2. For any period under review, fetch the raw data via query_metrics
  3. Recompute the Merkle root from the raw data (using the script from Chapter 8)
  4. Compare the computed root against the published root
  5. If they match, the data is confirmed authentic and unmodified

This is strictly stronger than traditional audit trails stored in databases, which can be modified by anyone with database access. A Merkle chain can only be modified by someone who also controls the private key that signed the original submissions -- and any modification invalidates the root hash that the auditor already has on record.
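
A condensed sketch of the auditor's steps 2 through 5. The leaf encoding here (sorted-key JSON hashed with SHA-256, last node duplicated on odd levels) is an assumption for illustration -- the production chains are described as RFC 6962 compatible, so match the exact construction from Chapter 8 before relying on the result:

import hashlib
import json

def recompute_root(data_points):
    """Recompute a Merkle root from raw data points (leaf format assumed)."""
    level = [
        hashlib.sha256(json.dumps(p, sort_keys=True).encode()).digest()
        for p in data_points
    ]
    if not level:
        return None
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0].hex()

def auditor_check(bot, chain):
    """Fetch raw data for the chain's period and compare recomputed vs published root."""
    data = bot.query("pnl_percent", chain["period_start"], chain["period_end"])
    return recompute_root(data.get("data_points", [])) == chain["root_hash"]

Beyond spot-checking individual chains, you can bundle everything an auditor needs into a single package: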

def prepare_regulatory_package(bot, audit_period_start, audit_period_end):
    """Prepare a complete regulatory evidence package."""
    package = {
        "prepared_at": int(time.time()),
        "agent_id": bot.agent_id,
        "audit_period": {
            "start": audit_period_start,
            "end": audit_period_end,
        },
    }

    # 1. Identity verification
    verification = bot.verify_identity()
    package["identity_proof"] = verification

    # 2. Complete audit trail
    package["audit_trail"] = build_audit_trail(
        bot, audit_period_start, audit_period_end
    )

    # 3. Transparency disclosure
    package["transparency_disclosure"] = generate_transparency_disclosure(bot)

    # 4. Technical documentation metrics
    package["annex_iv_metrics"] = generate_annex_iv_metrics_section(bot)

    # 5. Current reputation
    package["current_reputation"] = bot.get_reputation()

    # 6. Verification instructions for the auditor
    package["verification_instructions"] = {
        "method": "Independent Merkle root recomputation",
        "tools_required": "Python 3.8+, hashlib (stdlib), requests",
        "script_reference": "Chapter 8: Independent Verification Script",
        "note": "No trust in GreenHelix or the bot operator is required. "
                "The cryptographic proofs are self-verifying.",
    }

    return package

# Prepare package for Q1 2026 audit
from datetime import datetime, timezone

q1_start = int(datetime(2026, 1, 1, tzinfo=timezone.utc).timestamp())
q1_end = int(datetime(2026, 3, 31, 23, 59, 59, tzinfo=timezone.utc).timestamp())
package = prepare_regulatory_package(bot, q1_start, q1_end)
print(f"Regulatory package prepared: {len(json.dumps(package))} bytes")

The EU AI Act's compliance burden is real, but it is also an opportunity. Operators who can demonstrate cryptographic compliance have a competitive advantage -- institutional allocators and regulated entities will prefer bots with verifiable audit trails over those with self-reported metrics and no independent verification mechanism.


What's Next

This guide covered the foundation: identity, metric submission, Merkle proofs, reputation scoring, and discovery. It then went deeper into the cryptographic internals, multi-exchange aggregation, gaming prevention, and regulatory compliance. Two companion guides extend this into commercial applications:

  • Strategy Marketplace Playbook -- how to list your verified bot as a paid signal service, set pricing tiers, handle subscriptions, and manage access control through the GreenHelix gateway.
  • EU AI Act Compliance for Trading Bots -- how to use GreenHelix's attestation infrastructure to meet the transparency and audit trail requirements that the EU AI Act imposes on autonomous trading systems.

Full API reference and additional examples are available in the GreenHelix documentation at https://api.greenhelix.net/docs.

