Install
```
openclaw skills install cogdx
```

Cognitive diagnostics for AI agents: calibration audits, bias detection, reasoning verification, and consensus building. Feedback builds shared reality across agents. External cognitive verification, built by computational cognitive scientists.
Why external verification? You can't audit your own reasoning objectively. Confirmation bias, anchoring, overconfidence — they're invisible from inside. CogDx provides the mirror you can't distort.
| Endpoint | Price | Description |
|---|---|---|
| /calibration_audit | $0.06 | Compare confidence to accuracy across decisions |
| /bias_scan | $0.10 | Detect cognitive biases in your outputs |
| /reasoning_trace_analysis | $0.03 | Analyze reasoning for logical fallacies |
| /deception_audit | $0.25 | Check for self-deception patterns |
| /verify_consensus | $0.25 | Cross-validate reasoning with multiple models |
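Before wiring up the client, it can help to see the shape of a request body. The sketch below builds a hypothetical JSON payload for `/calibration_audit` — the field names mirror the Python client example later in this document, but the actual wire format, authentication, and base URL are assumptions, not documented behavior:

```python
import json

# Hypothetical payload for the /calibration_audit endpoint.
# Field names follow the client example in this document; the real
# wire format is an assumption, so treat this as a sketch only.
payload = {
    "agent_id": "my-agent",
    "predictions": [
        {"prompt": "Will X happen?", "response": "Yes, 80% confident", "confidence": 0.8},
        {"prompt": "Will Y happen?", "response": "No, 60% confident", "confidence": 0.6},
    ],
}

body = json.dumps(payload)
print(body)
```

Each prediction pairs the agent's response with the numeric confidence it claimed, which is what a calibration audit compares against actual outcomes.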
```
clawhub install cogdx
```
```python
from cogdx import CogDxClient

# Initialize with your wallet for payment
client = CogDxClient(wallet="your-wallet-address")

# Calibration audit
result = client.calibration_audit(
    agent_id="my-agent",
    predictions=[
        {"prompt": "Will X happen?", "response": "Yes, 80% confident", "confidence": 0.8},
        {"prompt": "Will Y happen?", "response": "No, 60% confident", "confidence": 0.6},
    ],
)
print(f"Calibration score: {result['calibration_score']}")
print(f"Overconfidence rate: {result['overconfidence_rate']}")

# Bias scan
result = client.bias_scan(
    agent_id="my-agent",
    outputs=[
        {"prompt": "Analyze this data", "response": "The trend is clearly up...", "confidence": 0.9}
    ],
)
print(f"Biases detected: {result['biases_detected']}")

# Reasoning trace analysis
result = client.analyze_reasoning(
    reasoning_trace="Step 1: I noticed the price dropped. Step 2: Therefore I should sell..."
)
print(f"Logical validity: {result['logical_validity']}")
print(f"Flaws: {result['flaws_detected']}")
```
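To build intuition for what a calibration audit measures, here is a self-contained sketch that scores predictions locally once their outcomes are known. The metrics shown (mean Brier score and an overconfidence rate) are standard calibration measures chosen for illustration — they are not necessarily the exact scores the service computes:

```python
def calibration_metrics(predictions, outcomes):
    """Score predictions against known outcomes.

    predictions: list of dicts, each with a "confidence" in [0, 1]
    outcomes: list of booleans, True if the predicted event occurred
    """
    # Brier score: mean squared error between stated confidence
    # and the actual outcome (1.0 if it happened, 0.0 if not).
    brier = sum(
        (p["confidence"] - (1.0 if o else 0.0)) ** 2
        for p, o in zip(predictions, outcomes)
    ) / len(predictions)

    # Overconfidence rate: fraction of high-confidence (> 0.7)
    # predictions that turned out to be wrong.
    confident = [(p, o) for p, o in zip(predictions, outcomes) if p["confidence"] > 0.7]
    over = sum(1 for _, o in confident if not o) / len(confident) if confident else 0.0

    return {"brier_score": brier, "overconfidence_rate": over}


preds = [
    {"prompt": "Will X happen?", "confidence": 0.8},
    {"prompt": "Will Y happen?", "confidence": 0.6},
]
metrics = calibration_metrics(preds, [True, False])
print(metrics)
```

A well-calibrated agent's 80%-confident answers should be right about 80% of the time; the Brier score penalizes every gap between claimed confidence and reality.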
COGDX_WALLET - Required. Your wallet address for credit-based payment.

All endpoints require payment. Payment address: Cerebratech.eth
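Assuming the client reads COGDX_WALLET from the environment, configuration would look like:

```shell
# Set the wallet the client uses for credit-based payment
export COGDX_WALLET="your-wallet-address"
```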
Every diagnosis includes a feedback mechanism:
```python
client.submit_feedback(
    diagnosis_id="rta_xyz123",
    accurate=True,  # Was the detection correct?
    comments="Caught the anchoring bias I missed",
)
```
Feedback earns you credits AND improves detection for everyone. Shared reality across agents.
Most agent failures come from reasoning errors, not capability limits. External verification catches what self-checks miss.
Built by Cerebratech. Dr. Amanda Kavner, Computational Cognitive Scientist.