Strategy Constitutional Memory

v1.0.0

A living knowledge base of hard-earned strategy lessons and banned code patterns — prevents repeating past mistakes across strategy iterations by scanning co...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the provided code and CLI. The only required binary is python3, and the included files (memory_system.py, cli.py) implement the advertised features (lessons, bans, scanning, LLM context). No unrelated services, credentials, or surprising binaries are requested.
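The "scanning" feature described above checks strategy code against the persisted banned patterns. A minimal sketch of how such a scan could work, assuming bans are stored as regular expressions (the real memory_system.py may implement this differently):

```python
import re

# Hypothetical helper, not the skill's actual implementation:
# returns the banned patterns that match the given strategy source.
def scan_code(source: str, banned_patterns: list[str]) -> list[str]:
    return [p for p in banned_patterns if re.search(p, source)]

# Example: flag a negative shift, a common lookahead-bias mistake.
violations = scan_code(
    "signal = prices.shift(-1)  # looks ahead!",
    [r"shift\(-\d+\)", r"eval\("],
)
```

Here `violations` contains only the lookahead pattern, since `eval(` does not appear in the source.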
Instruction Scope
SKILL.md and the CLI instruct the agent to create, read, and write memory/lessons.json and memory/bans.json and to include the output of get_context() in LLM prompts. This is appropriate for the stated purpose, but lessons may contain sensitive or proprietary strategy data or code snippets. Because the skill explicitly persists that data and recommends feeding it into an LLM, users should be aware of potential data exposure when using shared or remote models.
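To make the data-exposure concern concrete, here is a sketch of what a get_context()-style function might do, assuming the file names from the review (lessons.json, bans.json as JSON arrays of strings); the function name and format are taken from the review, not verified against the skill's source:

```python
import json
from pathlib import Path

def get_context(memory_dir: str = "memory") -> str:
    """Build an LLM prompt preamble from persisted lessons and bans.

    Hypothetical sketch: the real memory_system.py may structure its
    JSON and output differently.
    """
    parts = []
    for name, label in [("lessons.json", "Lessons"),
                        ("bans.json", "Banned patterns")]:
        path = Path(memory_dir) / name
        if path.exists():
            entries = json.loads(path.read_text())
            lines = "\n".join(f"- {e}" for e in entries)
            parts.append(f"{label}:\n{lines}")
    return "\n\n".join(parts)
```

Everything this returns is sent verbatim to the model, which is why the storage location and contents of memory_dir matter.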
Install Mechanism
No install spec or external downloads; this is an instruction-only skill with included Python source. requirements.txt declares no external deps. No unusual install behavior detected.
Credentials
No environment variables, credentials, or config paths are requested. The skill only writes/reads JSON files in a configurable memory_dir (default is the package's memory/ directory), which is proportional to its function.
Persistence & Privilege
The skill persists its own data to memory_dir and does not request always:true or system-wide config changes. It does not appear to modify other skills or global agent settings. Default autonomous invocation remains allowed (platform default) but is not combined with elevated privileges.
Assessment
This skill appears to do what it says: keep lessons and banned code patterns, scan strategy code, and generate context for an LLM. Before installing, decide where you want the memory stored (the default is a memory/ folder next to the code) and whether that storage could contain confidential strategy details; if so, use a private path, backups, and access controls. Review the persisted lessons.json / bans.json occasionally to ensure no sensitive code or credentials are being recorded, and confirm that your LLM usage of get_context() does not leak proprietary data to a third-party model you don't control. If you want added assurance, inspect the remainder of memory_system.py (the file appears truncated in the review snapshot) to confirm there are no network calls or unexpected subprocess invocations.
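The periodic review suggested above can be partially automated. A hypothetical audit helper (not part of the skill) that flags secret-like strings in the persisted JSON files before they reach an LLM:

```python
import re
from pathlib import Path

# Illustrative patterns only; tune for your environment.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key"),
    re.compile(r"(?i)password"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def audit_memory(memory_dir: str = "memory") -> list[str]:
    """Return findings for secret-like content in lessons.json / bans.json."""
    findings = []
    for name in ("lessons.json", "bans.json"):
        path = Path(memory_dir) / name
        if not path.exists():
            continue
        text = path.read_text()
        for pat in SECRET_PATTERNS:
            if pat.search(text):
                findings.append(f"{name}: matches {pat.pattern!r}")
    return findings
```

Running this before each LLM session is a cheap guard against accidentally persisting credentials into the memory files.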


Latest version: vk974sq6hrrxxjf605mqk214s558332zc


Runtime requirements

📜 Clawdis
Bins: python3
