Strategy Constitutional Memory

v1.0.0

A living knowledge base of hard-earned strategy lessons and banned code patterns — prevents repeating past mistakes across strategy iterations by scanning code for known violations.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tltby12341/strategy-constitutional-memory.

Prompt preview — Install & Setup:
Install the skill "Strategy Constitutional Memory" (tltby12341/strategy-constitutional-memory) from ClawHub.
Skill page: https://clawhub.ai/tltby12341/strategy-constitutional-memory
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install strategy-constitutional-memory

ClawHub CLI


npx clawhub@latest install strategy-constitutional-memory
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the provided code and CLI. Required binary is only python3 and the files (memory_system.py, cli.py) implement the advertised features (lessons, bans, scanning, LLM context). No unrelated services, credentials, or surprising binaries are requested.
Instruction Scope
SKILL.md and the CLI instruct the agent to create/read/write memory/lessons.json and memory/bans.json and to include get_context() output in LLM prompts. This is appropriate for the purpose, but lessons may contain sensitive or proprietary strategy data or code snippets — the skill explicitly persists and recommends feeding that context into an LLM, so users should be aware of potential data exposure when using shared/remote models.
Install Mechanism
No install spec or external downloads; this is an instruction-only skill with included Python source. requirements.txt declares no external deps. No unusual install behavior detected.
Credentials
No environment variables, credentials, or config paths are requested. The skill only writes/reads JSON files in a configurable memory_dir (default is the package's memory/ directory), which is proportional to its function.
Persistence & Privilege
The skill persists its own data to memory_dir and does not request always:true or system-wide config changes. It does not appear to modify other skills or global agent settings. Default autonomous invocation remains allowed (platform default) but is not combined with elevated privileges.
Assessment
This skill appears to do what it says: keep lessons and banned code patterns, scan strategy code, and generate context for an LLM. Before installing, decide where you want the memory stored (the default is a memory/ folder next to the code) and whether that storage could contain confidential strategy details; if so, use a private path, backups, and access controls. Review the persisted lessons.json / bans.json occasionally to ensure no sensitive code or credentials are being recorded, and confirm that your LLM usage of get_context() does not leak proprietary data to a third-party model you don't control. If you want added assurance, inspect the remainder of memory_system.py (the file appears truncated in the review snapshot) to confirm there are no network calls or unexpected subprocess invocations.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📜 Clawdis
Bins: python3
Latest: vk974sq6hrrxxjf605mqk214s558332zc
Downloads: 152 · Stars: 0 · Versions: 1
Updated 1mo ago
v1.0.0
License: MIT-0

Strategy Constitutional Memory

Stop making the same mistakes twice. This skill maintains a "constitutional memory" of lessons learned from past strategy iterations and a list of banned code patterns. Before generating new strategy code, the AI reads the constitution. After writing code, it scans for violations.

When to use

  • Starting a new strategy iteration: "What lessons should I remember?"
  • After writing strategy code: "Scan this code for violations"
  • After a failed backtest: "Add this lesson to the constitution"
  • When reviewing strategy history: "Show me all critical lessons"

Core Concepts

Lessons

Structured records of what went wrong (or right) in past iterations:

{
  "strategy": "v6",
  "category": "death_spiral",
  "description": "Periodic rebalance caused death spiral: sell anchor -> buy options -> expire worthless -> sell more",
  "evidence": "v6(-82%), v7(-71%), v8(-78.5%), v9(-71.8%)",
  "severity": "critical"
}

Severity levels: critical > high > medium > low

Categories: drawdown, selection, position_sizing, timing, survival_structure, ml_failure, success
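This severity ordering is what get_context() uses to surface the most dangerous lessons first. A minimal sketch of that ordering (the rank table and function name here are illustrative assumptions, not the skill's actual implementation):

```python
# Hypothetical sketch: order lessons most-severe-first, mirroring the
# documented ordering critical > high > medium > low.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def sort_by_severity(lessons):
    """Return lessons sorted most-severe-first; unknown severities sink to the end."""
    return sorted(
        lessons,
        key=lambda l: SEVERITY_RANK.get(l.get("severity"), len(SEVERITY_RANK)),
    )

lessons = [
    {"strategy": "v7", "severity": "medium"},
    {"strategy": "v6", "severity": "critical"},
    {"strategy": "v8", "severity": "high"},
]
print([l["strategy"] for l in sort_by_severity(lessons)])  # → ['v6', 'v8', 'v7']
```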

Bans

Code patterns that are absolutely prohibited because they've been proven catastrophic:

["rebalance_qqq", "SetHoldings", "hard_stop_loss", "XGBClassifier"]

The scanner is case-insensitive and skips comments and string literals.
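To make that behavior concrete, here is a rough sketch of what such a scanner might do. This is an illustration only, not the actual memory_system.py code — the real scanner additionally tracks triple-quoted multi-line strings:

```python
import re

def scan_for_bans(code: str, bans: list) -> list:
    """Illustrative scanner: case-insensitive match, ignoring comments and
    single-line string literals (the real scanner also handles triple quotes)."""
    violations = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        # Blank out string literals first, then drop the comment tail, so a
        # '#' inside a string does not truncate the line prematurely.
        stripped = re.sub(r"('[^']*'|\"[^\"]*\")", '""', line)
        stripped = stripped.split("#", 1)[0]
        for pattern in bans:
            if pattern.lower() in stripped.lower():
                violations.append(
                    {"pattern": pattern, "line": lineno, "content": line.strip()}
                )
    return violations

code = 'def rebalance_qqq():\n    x = "SetHoldings"  # harmless string\n'
print(scan_for_bans(code, ["rebalance_qqq", "SetHoldings"]))
# → [{'pattern': 'rebalance_qqq', 'line': 1, 'content': 'def rebalance_qqq():'}]
```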

API

Initialize

from memory_system import ConstitutionalMemory

memory = ConstitutionalMemory(memory_dir="./memory")

Add a lesson

memory.add_lesson(
    strategy_name="v6",
    category="death_spiral",
    description="Periodic equity rebalance caused -82% drawdown",
    evidence="DD: 82%, triggered at 20% progress",
    severity="critical",
    new_ban="rebalance_anchor"  # optionally add a new banned pattern
)

Auto-extract lessons from diagnosis report

memory.add_lesson_from_diagnosis("v30", diagnosis_report_text)
# Automatically detects: high drawdown, high zero rate, negative ROI

Scan code for violations

violations = memory.scan_code(strategy_code_string)
# Returns: [{"pattern": "rebalance_qqq", "line": 42, "content": "def rebalance_qqq():"}]

The scanner:

  • Is case-insensitive
  • Tracks multi-line strings (triple quotes) and skips them
  • Skips comment lines (#)
  • Strips inline strings and comments before matching
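Given the return shape above, a typical gate before submitting new strategy code might look like this (the violations list is hypothetical sample data in the documented shape, not real scanner output):

```python
# Hypothetical violations, in the documented return shape.
violations = [
    {"pattern": "rebalance_qqq", "line": 42, "content": "def rebalance_qqq():"},
]

def gate_submission(violations) -> bool:
    """Report each banned-pattern hit; return False to block submission."""
    for v in violations:
        print(f'BANNED pattern "{v["pattern"]}" at line {v["line"]}: {v["content"]}')
    return not violations

if not gate_submission(violations):
    print("Submission blocked — rewrite without the banned patterns.")
```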

Generate LLM context

context = memory.get_context(max_lessons=30)
# Returns formatted text with lessons sorted by severity,
# banned patterns list, verified blueprints, and core rules

Feed this directly into your LLM system prompt before strategy generation.
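For example, assuming a chat-style API that takes a list of role/content messages (the message format and helper below are assumptions — adapt them to your LLM client):

```python
def build_messages(context: str, task: str) -> list:
    """Prepend the constitutional context to the system prompt."""
    system = (
        "You are a strategy-code generator.\n"
        "Obey the following constitutional memory; never use banned patterns.\n\n"
        + context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": task},
    ]

# 'context' would normally come from memory.get_context(max_lessons=30).
msgs = build_messages("BANNED PATTERNS: rebalance_qqq, SetHoldings", "Write strategy v31.")
print(msgs[0]["role"])  # → system
```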

CLI Usage

# Get decision context (lessons + bans + blueprints)
python3 -m orchestrator briefing

# Scan a strategy file for violations
python3 -m orchestrator scan --code path/to/strategy.py

# Record an iteration result (auto-adds lessons for failures)
python3 -m orchestrator record \
  --name "my_strategy_v2" \
  --blueprint "baseline" \
  --dimension "position_sizing" \
  --hypothesis "Reduce Kelly from 3% to 2%" \
  --status "early_stop" \
  --drawdown 0.55

Storage

  • memory/lessons.json — Growing list of lessons (auto-persisted)
  • memory/bans.json — Banned code patterns (auto-persisted)

Both files are JSON and human-readable. You can manually edit them.

Seeding

For new projects, call memory.seed_from_history() to populate with your initial lessons. The method is idempotent — it won't overwrite existing data.

Why This Matters

In iterative strategy development, the biggest risk isn't finding the right approach — it's re-trying approaches that already failed. With 20+ iterations, no human (or LLM) can remember every lesson. Constitutional memory makes failures permanent knowledge.

Rules

  • Never bypass the code scanner. Always run scan_code() on new strategy code before submission. The scanner exists to prevent known-fatal patterns from being re-tested.
  • Lessons are append-only by design. Do not delete lessons from lessons.json unless you are certain the lesson was recorded in error. Deleting valid lessons re-opens the door to repeating past failures.
  • Severity levels are immutable once assigned. A "critical" lesson should never be downgraded. If you disagree with a severity, add a new lesson with updated context rather than editing the original.
  • Bans are absolute prohibitions. A banned pattern means "this has been proven catastrophic — do not use under any circumstances." If you believe a ban should be lifted, add a new lesson documenting why before removing the ban.
  • Always call get_context() before generating new strategy code. The constitutional context must be in the LLM's prompt to prevent re-exploring failed approaches.
