Reflexlearn

v1.0.2

Detects repeated queries as implicit negative feedback and non-repetition as positive feedback, enabling continuous learning by writing reflections and patterns.

by Kinvectum (@kaventures)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kaventures/reflex-learn.

Prompt preview (Install & Setup):
Install the skill "Reflexlearn" (kaventures/reflex-learn) from ClawHub.
Skill page: https://clawhub.ai/kaventures/reflex-learn
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3, bash
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install reflex-learn

ClawHub CLI


npx clawhub@latest install reflex-learn
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (detect repeated queries and write reflections/patterns) align with required binaries (python3, bash), included Python code, and files that read/write ~/.openclaw/*. No unrelated credentials, external services, or binaries are requested.
Instruction Scope
SKILL.md and the script instruct only to embed queries, compare against ~/.openclaw/reflex_history.json, and write to MEMORY.md / SOUL.md / reflexlearn-pending.md under ~/.openclaw. Optional Ollama calls are to localhost. The skill reads its SKILL.md for config. All referenced files and operations are consistent with the documented behavior.
Install Mechanism
install.sh performs pip installs from PyPI and pre-caches a Hugging Face model (~80 MB). These are declared and require explicit user confirmation. Note: the script uses the system 'pip' (no virtualenv), so consider using a virtual environment if you want to avoid modifying global Python packages.
Credentials
No environment variables or external credentials are requested. Network access is confined to the documented install step and an optional local-only Ollama instance; the runtime supports a strict --offline mode and enforces writes only under ~/.openclaw/.
Persistence & Privilege
Skill is not always-enabled, does not request system-wide privileges, and restricts all filesystem writes to ~/.openclaw/. It does not attempt to modify other skills or global agent settings. Autonomous invocation is allowed by default (normal for skills) but not escalated.
Assessment
This skill appears to do what it claims. Before installing: review install.sh and consider running it inside a Python virtual environment to avoid altering system packages; confirm you have ~80 MB free for the model cache; be aware that reflections and patterns will be written to ~/.openclaw/MEMORY.md, ~/.openclaw/reflexlearn-pending.md (default cautious mode) and optionally ~/.openclaw/SOUL.md if you switch to aggressive mode or manually accept pending entries. If you plan to use Ollama features, verify Ollama runs locally (http://localhost:11434) — the skill contacts only localhost for that integration. If you want tighter control, keep MODE=cautious and review reflexlearn-pending.md regularly before promoting changes to SOUL.md.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3, bash
latest: vk976rkerkh9m7wfz4b8r4yk5y983asmv
157 downloads
0 stars
3 versions
Updated 1mo ago
v1.0.2
MIT-0

ReflexLearn

ReflexLearn enables true continuous learning via implicit feedback. It turns repetition of the same question into an automatic "I screwed up" signal and non-repetition into a "user is satisfied" signal — with no explicit rating or feedback required from the user.

v1.1.1 fixes:

  • Path validation enforced in code (all writes restricted to ~/.openclaw/)
  • Model-download guard with an explicit warning and an --offline flag
  • install.sh for the declared one-step PyPI + model-weight setup
  • scikit-learn removed from dependencies (it was unused)

Installation

Step 1 — Run the install script. This is the only step that touches the network. It installs Python packages from PyPI and pre-caches the model weights from Hugging Face (~80 MB, one-time only). After this step the skill can run fully offline.

bash {baseDir}/install.sh

The script explicitly lists every network operation before proceeding and requires confirmation.

Step 2 — Add to soul.md:

## Skills
- reflex-learn

Usage

Run after every agent response (post-response trigger):

python3 {baseDir}/reflex_learn.py \
  --query "<current_user_query>" \
  --memory-file ~/.openclaw/MEMORY.md \
  --soul-file ~/.openclaw/SOUL.md \
  --history-file ~/.openclaw/reflex_history.json \
  --pending-file ~/.openclaw/reflexlearn-pending.md \
  --skill-md {baseDir}/SKILL.md \
  --offline

Run on heartbeat to scan for positive reinforcement candidates:

python3 {baseDir}/reflex_learn.py \
  --heartbeat \
  --memory-file ~/.openclaw/MEMORY.md \
  --soul-file ~/.openclaw/SOUL.md \
  --history-file ~/.openclaw/reflex_history.json \
  --skill-md {baseDir}/SKILL.md \
  --offline

Optionally, use local Ollama for richer AI-generated reflections (no additional network access — Ollama runs locally):

python3 {baseDir}/reflex_learn.py --query "<query>" --use-ollama --ollama-model llama3

Slash commands (pass as --query value):

python3 {baseDir}/reflex_learn.py --query "/reflex status"
python3 {baseDir}/reflex_learn.py --query "/reflex ignore-last"

Configuration

Edit these values directly in this file to tune behaviour. They are parsed at runtime.

  • SIMILARITY_THRESHOLD: 0.85
  • LOOKBACK_INTERACTIONS: 10
  • POSITIVE_REINFORCEMENT_DELAY: 3
  • REPEAT_COUNT_THRESHOLD: 2
  • SESSION_WINDOW_MINUTES: 60
  • MODE: cautious
Option | Default | Description
------ | ------- | -----------
SIMILARITY_THRESHOLD | 0.85 | Cosine similarity above which two queries are considered the same
LOOKBACK_INTERACTIONS | 10 | How many past interactions to compare against
POSITIVE_REINFORCEMENT_DELAY | 3 | Interactions to wait before confirming positive reinforcement
REPEAT_COUNT_THRESHOLD | 2 | Repeats within the session window required to flag as failure
SESSION_WINDOW_MINUTES | 60 | Time window (minutes) within which repeats are counted
MODE | cautious | cautious = stage updates in pending file; aggressive = write directly to SOUL.md
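Because the values are parsed out of this file at runtime, the parser can be a simple line scan. A minimal sketch, assuming the bullet format shown above (the function name and coercion rules are illustrative, not taken from reflex_learn.py):

```python
import re

def parse_config(skill_md_text: str) -> dict:
    """Extract KEY: value pairs such as 'SIMILARITY_THRESHOLD: 0.85' from SKILL.md text."""
    config = {}
    for key, raw in re.findall(r"^\s*[•*-]?\s*([A-Z_]+):\s*(\S+)", skill_md_text, re.MULTILINE):
        # Coerce numeric values; leave strings such as MODE untouched.
        try:
            config[key] = float(raw) if "." in raw else int(raw)
        except ValueError:
            config[key] = raw
    return config

sample = """
  • SIMILARITY_THRESHOLD: 0.85
  • LOOKBACK_INTERACTIONS: 10
  • MODE: cautious
"""
print(parse_config(sample))
# → {'SIMILARITY_THRESHOLD': 0.85, 'LOOKBACK_INTERACTIONS': 10, 'MODE': 'cautious'}
```

Keeping the config inside SKILL.md means there is no extra config file to validate against the ~/.openclaw/ write restriction.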

Signal Types

Signal | Meaning
------ | -------
neutral | No similar query found in history
watching | Similar query found, repeat count below threshold — monitoring
preference | Similar query with modifier words — preference extracted, not a failure
negative | Repeat threshold reached — reflection written to MEMORY.md
reinforced | Query not repeated in next N interactions — positive reinforcement written
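The decision order the table implies can be sketched as a small function. The name and argument shapes here are illustrative, not taken from reflex_learn.py; "reinforced" is excluded because it is produced later, by the heartbeat pass:

```python
def classify(similarity: float, has_modifier: bool, repeat_count: int,
             sim_threshold: float = 0.85, repeat_threshold: int = 2) -> str:
    """Map one history comparison to a signal type from the table above."""
    if similarity <= sim_threshold:
        return "neutral"      # no similar query found in history
    if has_modifier:
        return "preference"   # modifier words present, so not a failure
    if repeat_count >= repeat_threshold:
        return "negative"     # repeat threshold reached
    return "watching"         # similar query, still below threshold
```

Note that the preference check comes before the repeat-count check, so a repeated query with modifier words is always treated as a preference rather than a failure.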

Core Behavior

On every user message, ReflexLearn embeds the query with sentence-transformers (all-MiniLM-L6-v2) and compares it to the last LOOKBACK_INTERACTIONS interactions stored in ~/.openclaw/reflex_history.json.
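The comparison itself reduces to cosine similarity over embedding vectors. A minimal sketch in pure Python; in the skill the vectors come from the all-MiniLM-L6-v2 sentence-transformers model, and the toy vectors below are only for illustration:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Identical directions score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0
```

A score above SIMILARITY_THRESHOLD (default 0.85) is what counts as "the same question" in the steps below.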

If cosine similarity > SIMILARITY_THRESHOLD and the query contains modifier words (e.g., "be more concise", "add examples", "in table format"), it extracts a preference and writes it to MEMORY.md — it does not flag this as a failure.
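Preference extraction might look like the sketch below; the modifier vocabulary and function name are hypothetical stand-ins, not the actual word list in reflex_learn.py:

```python
# Hypothetical modifier vocabulary; the real list lives in reflex_learn.py.
MODIFIER_WORDS = {"concise", "shorter", "longer", "examples", "table", "format", "detailed"}

def extract_preference(query: str):
    """Return a preference note if the repeated query contains modifier words, else None."""
    hits = [w.strip(".,!?") for w in query.lower().split() if w.strip(".,!?") in MODIFIER_WORDS]
    if hits:
        return f"User preference: wants responses adjusted ({', '.join(hits)})"
    return None
```

The point of this branch is that "explain it again, but be more concise" is a steering signal, not a sign the previous answer was wrong.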

If cosine similarity > SIMILARITY_THRESHOLD without modifier words and the repeat count within SESSION_WINDOW_MINUTES reaches REPEAT_COUNT_THRESHOLD, it triggers a reflection and writes it to MEMORY.md.

In cautious mode (default), proposed SOUL.md updates are staged in reflexlearn-pending.md for human review. In aggressive mode, they are written directly to SOUL.md.

On heartbeat, if the same query is NOT repeated in the next POSITIVE_REINFORCEMENT_DELAY interactions, it triggers positive reinforcement.

All memory writes are valid Markdown that OpenClaw already understands.

Security and Network Rules

  • Path enforcement: The code resolves all file paths and aborts with an error if any path falls outside ~/.openclaw/. This is enforced in code, not just documentation.
  • No runtime network access: After install.sh has been run, the skill operates fully offline when invoked with --offline. Without --offline, a warning is printed if the model is not cached.
  • Declared network operations: All network access (PyPI, Hugging Face) is performed exclusively by install.sh, which lists operations and requires user confirmation before proceeding.
  • Local Ollama only: The optional Ollama integration calls localhost:11434 only — no external API.
  • No writes outside ~/.openclaw/: Enforced at runtime; any misconfigured path triggers an immediate exit.
  • Cautious-mode staging: In cautious mode, SOUL.md is never written directly; updates are always staged in the pending file first.
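The path-enforcement rule above can be expressed in a few lines. This is a sketch of the documented behavior, not the skill's actual code:

```python
from pathlib import Path

ALLOWED_ROOT = (Path.home() / ".openclaw").resolve()

def safe_path(candidate: str) -> Path:
    """Resolve a path and abort if it falls outside ~/.openclaw/."""
    resolved = Path(candidate).expanduser().resolve()
    if not resolved.is_relative_to(ALLOWED_ROOT):
        # Immediate exit on any misconfigured path, as documented.
        raise SystemExit(f"refusing to touch path outside {ALLOWED_ROOT}: {resolved}")
    return resolved
```

Resolving before checking matters: it collapses `..` segments and symlink tricks, so a path like `~/.openclaw/../other` is rejected rather than slipping through a prefix comparison.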
