Reflexlearn

v1.0.2

Detects repeated queries as implicit negative feedback and non-repetition as positive feedback, enabling continuous learning by writing reflections and patterns.

by Kinvectum (@kaventures)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (detect repeated queries and write reflections/patterns) align with the required binaries (python3, bash), the included Python code, and the files read and written under ~/.openclaw/. No unrelated credentials, external services, or binaries are requested.
Instruction Scope
SKILL.md and the script instruct the agent only to embed queries, compare them against ~/.openclaw/reflex_history.json, and write to MEMORY.md, SOUL.md, and reflexlearn-pending.md under ~/.openclaw/. Optional Ollama calls go to localhost only. The skill reads its own SKILL.md for configuration. All referenced files and operations are consistent with the documented behavior.
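The compare-and-record flow described above can be sketched as follows. This is a hypothetical reconstruction: the function names and the 0.85 threshold are assumptions, and difflib string similarity stands in for the skill's actual embedding model.

```python
import difflib
import json
from pathlib import Path

# Default history location documented by the skill.
HISTORY = Path.home() / ".openclaw" / "reflex_history.json"

def _load(history_path):
    """Load prior queries, tolerating a missing or corrupt file."""
    try:
        return json.loads(Path(history_path).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return []

def is_repeat(query, history_path=HISTORY, threshold=0.85):
    """Flag a near-duplicate of any past query (implicit negative feedback)."""
    return any(
        difflib.SequenceMatcher(None, query.lower(), past.lower()).ratio() >= threshold
        for past in _load(history_path)
    )

def record(query, history_path=HISTORY):
    """Append the query to the history file, creating it if needed."""
    history_path = Path(history_path)
    history_path.parent.mkdir(parents=True, exist_ok=True)
    history_path.write_text(json.dumps(_load(history_path) + [query]))
```

In this sketch a repeat would trigger writing a reflection to the pending file, while non-repetition counts as positive feedback and leaves memory untouched.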
Install Mechanism
install.sh performs pip installs from PyPI and pre-caches a Hugging Face model (~80 MB). These are declared and require explicit user confirmation. Note: the script uses the system 'pip' (no virtualenv), so consider using a virtual environment if you want to avoid modifying global Python packages.
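One way to follow that advice is to create a virtual environment before running the installer; a minimal sketch, assuming the venv path (the skill does not mandate one):

```shell
# Isolate the skill's pip installs from system Python packages.
# The venv location below is an example choice, not part of the skill.
VENV="$HOME/.openclaw/reflexlearn-venv"
python3 -m venv "$VENV"
. "$VENV/bin/activate"
python -c 'import sys; print(sys.prefix)'   # confirms we are inside the venv
# ./install.sh   # then run the declared installer (pip + ~80 MB model cache)
deactivate
```

Any later invocation of the skill would need to use the venv's interpreter rather than the system python3.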
Credentials
No environment variables or external credentials are requested. Network access is confined to the documented install step and an optional local-only Ollama instance; the runtime supports a strict --offline mode and enforces writes only under ~/.openclaw/.
Persistence & Privilege
Skill is not always-enabled, does not request system-wide privileges, and restricts all filesystem writes to ~/.openclaw/. It does not attempt to modify other skills or global agent settings. Autonomous invocation is allowed by default (normal for skills) but not escalated.
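The documented write restriction can also be checked defensively before any write; a minimal sketch, where the helper name and error handling are assumptions rather than part of the skill:

```python
from pathlib import Path

def safe_openclaw_path(name, root=None):
    """Resolve a target file and refuse anything outside ~/.openclaw/.

    Mirrors the documented restriction that all filesystem writes stay
    under ~/.openclaw/ (hypothetical guard, not the skill's own code).
    """
    root = (Path(root) if root else Path.home() / ".openclaw").resolve()
    target = (root / name).resolve()
    if target != root and root not in target.parents:
        raise ValueError(f"refusing to write outside {root}: {target}")
    return target
```

Resolving the path first means traversal attempts like `../` are caught even when the literal string starts under the allowed root.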
Assessment
This skill appears to do what it claims. Before installing:
- Review install.sh, and consider running it inside a Python virtual environment to avoid altering system packages.
- Confirm you have ~80 MB free for the model cache.
- Be aware that reflections and patterns are written to ~/.openclaw/MEMORY.md and ~/.openclaw/reflexlearn-pending.md (default cautious mode), and optionally to ~/.openclaw/SOUL.md if you switch to aggressive mode or manually accept pending entries.
- If you plan to use the Ollama features, verify Ollama runs locally (http://localhost:11434); the skill contacts only localhost for that integration.
- For tighter control, keep MODE=cautious and review reflexlearn-pending.md regularly before promoting changes to SOUL.md.
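A quick way to verify the local-only Ollama integration is reachable before enabling those features; the function name is hypothetical, while the endpoint is the one documented above:

```python
import urllib.error
import urllib.request

def ollama_available(url="http://localhost:11434", timeout=2.0):
    """Return True if a local Ollama instance answers at the documented
    endpoint; the skill contacts only localhost for this integration."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, the skill's Ollama-backed features would simply be unavailable; nothing else about the runtime depends on the network.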

Like a lobster shell, security has layers — review code before you run it.

latest: vk976rkerkh9m7wfz4b8r4yk5y983asmv


Runtime requirements

Bins: python3, bash

Comments