Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Cyber Memory

v1.0.2

Five-layer memory system with automatic fact extraction via local LLM (Ollama). Processes session transcripts locally — no external API required.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (five-layer memory, local fact extraction via Ollama) aligns with the behavior in SKILL.md and the hook code: it reads session transcripts, calls a local LLM endpoint by default, and writes extracted facts into workspace/memory. The declared required config (workspace.dir) matches how the hook resolves where to write memory files.
Instruction Scope
The hook explicitly reads session transcript files, truncates and filters recent messages, sends them to an LLM prompt that extracts preferences/decisions/rules/info, and writes markdown fact and snapshot files to the workspace. This is within the skill's scope, but the prompt requests extraction of potentially sensitive items (accounts, contacts, etc.), so the hook can persist sensitive data locally. The SKILL.md documents local-first behavior and optional external API configuration; the instructions do not instruct reading unrelated files or network endpoints beyond the LLM endpoint.
Install Mechanism
Instruction-only skill with a hook code file; there is no install script or third-party download. The install steps are manual (copy hook directory into ~/.openclaw/hooks/ and enable). No remote code downloads are requested by the skill itself.
Credentials
The code will use an API key if present: resolveApiKey prefers cfg.providers.openai.apiKey then process.env.OPENAI_API_KEY, falling back to 'ollama'. However, requires.env lists no credentials — so the handler reads OPENAI_API_KEY from the environment without that being declared. That is an inconsistency: if you have OPENAI_API_KEY set, the hook will include it in requests (and could route data to a non-local baseUrl if configured). The SKILL.md does document optional baseUrl/apiKey configuration, but the implicit environment fallback should be noted as it can change where data is sent.
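The resolution order described above can be sketched as follows. The function name resolveApiKey comes from the scan finding; the config shape and field names are assumptions for illustration, not the handler's actual source:

```typescript
// Sketch of the API-key resolution order described in the scan finding.
// Config shape and field names are assumptions, not the handler's source.
interface Config {
  providers?: { openai?: { apiKey?: string } };
}

function resolveApiKey(cfg: Config): string {
  // 1. Explicit provider config wins.
  const fromConfig = cfg.providers?.openai?.apiKey;
  if (fromConfig) return fromConfig;
  // 2. Undeclared environment fallback: the inconsistency flagged above.
  const fromEnv = process.env.OPENAI_API_KEY;
  if (fromEnv) return fromEnv;
  // 3. Placeholder value accepted by local Ollama.
  return "ollama";
}
```

The practical consequence: if OPENAI_API_KEY happens to be set in your shell, step 2 applies silently even though requires.env declares no credentials.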
Persistence & Privilege
The skill does not request always: true and does not modify other skills or system-wide configs. It writes files into the configured workspace directory (normal for a memory hook). Autonomous invocation (enable hook) is expected; there is no elevated system persistence requested by the skill.
Assessment
This skill appears to do what it says: it reads session transcripts and extracts facts using an LLM, saving results to your workspace. Before installing, consider the following:

  • Sensitive data persistence: the extractor prompt explicitly targets items like accounts/contacts/important info, so credentials or PII present in conversations can be written into workspace/memory/*.md. Restrict who can read those files (filesystem permissions) and inspect the memory files after running.
  • External API risk: by default the hook targets a local Ollama endpoint (http://localhost:11434), but you or the system config can set baseUrl and apiKey to point to an external provider, and the hook will use process.env.OPENAI_API_KEY if present. If OPENAI_API_KEY is set or baseUrl points to a remote endpoint, conversation snippets will be sent to that external service. For strictly local operation, leave baseUrl at the local default and unset OPENAI_API_KEY.
  • Minor path mismatch: SKILL.md describes reading transcripts under ~/.openclaw/agents/*/sessions/*.jsonl, but the handler resolves sessions relative to the workspace path (it uses path.dirname(workspace.dir)/sessions). This may affect whether it finds your session files as documented; test in a safe environment.
  • Review the code: because the hook reads/writes local data and issues HTTP requests, review handler.ts (included) and test the hook in a sandboxed agent to confirm behavior and file locations before enabling it in production.

If you are comfortable with local storage of extracted facts and keep baseUrl/apiKey pointed at local endpoints, this skill is coherent with its stated purpose. To avoid accidental exfiltration, unset OPENAI_API_KEY and keep baseUrl pointing at your local Ollama instance.
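A quick pre-flight check for strictly local operation can capture the conditions above. The default endpoint is the one documented in SKILL.md; the helper itself is illustrative, not part of the skill:

```typescript
// Pre-flight check: does this environment keep the hook strictly local?
// LOCAL_DEFAULT is the endpoint documented in SKILL.md; the function is
// an illustrative sketch, not the hook's actual logic.
const LOCAL_DEFAULT = "http://localhost:11434/v1/chat/completions";

function isStrictlyLocal(
  baseUrl: string | undefined,
  env: Record<string, string | undefined>,
): boolean {
  const url = baseUrl ?? LOCAL_DEFAULT;
  const local =
    url.startsWith("http://localhost") || url.startsWith("http://127.0.0.1");
  // OPENAI_API_KEY alone does not move the endpoint, but combined with a
  // remote baseUrl it authenticates an off-machine request.
  return local && !env.OPENAI_API_KEY;
}
```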
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

  • hooks/memory-flush/handler.ts:167 · Environment variable access combined with network send.
  • hooks/memory-flush/handler.ts:23 · File read combined with network send (possible exfiltration).

Like a lobster shell, security has layers — review code before you run it.

Tags: agent · latest · memory · openclaw


Runtime requirements

🧠 Clawdis
Config: workspace.dir

SKILL.md

Memory Architecture 🧠

A complete memory system for OpenClaw agents. Five layers of storage, automatic fact extraction via local LLM, hybrid search, and behavioral rules that prevent context loss.

🔒 Local-first by default — all fact extraction runs on your local Ollama instance. No data leaves your machine.

🔒 Privacy & Data Handling

This skill includes a hook (memory-flush) that:

  • Reads session transcripts from disk (~/.openclaw/agents/*/sessions/*.jsonl)
  • Processes content locally via Ollama (default) — no data leaves your machine
  • No external API key required — works out of the box with local LLM

What is processed: Recent user/assistant messages (last 30 messages, each truncated to 500 chars).
Where it runs: Local Ollama endpoint (http://localhost:11434/v1/chat/completions by default).
What is saved locally: Extracted facts as Markdown files in workspace/memory/.
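The windowing described above can be sketched as follows; the limits (30 messages, 500 chars) come from this document, but the code itself is an illustrative assumption, not the handler's source:

```typescript
// Illustrative sketch of the documented preprocessing: keep only the last
// 30 user/assistant messages and truncate each to 500 characters.
interface Message {
  role: string;
  content: string;
}

function windowForExtraction(
  messages: Message[],
  maxMessages = 30,
  maxChars = 500,
): Message[] {
  return messages
    .filter((m) => m.role === "user" || m.role === "assistant")
    .slice(-maxMessages)
    .map((m) => ({ role: m.role, content: m.content.slice(0, maxChars) }));
}
```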

Optional: You can configure an external OpenAI-compatible API by setting baseUrl and apiKey in the hook config, but local Ollama is the default and recommended setup.

Architecture

🔥 Hot    → SESSION-STATE.md (WAL protocol, survives compaction)
🌤 Warm   → memory/YYYY-MM-DD.md (daily event summaries)
🧊 Cold   → MEMORY.md (decisions, preferences, rules — always loaded)
🕸 Graph  → memory/ontology/ (entity relationships)
📚 Learn  → .learnings/ (errors, best practices)

Automation

Mechanism                   Trigger            What it does
session-memory (built-in)   /new /reset        Saves conversation to memory/
memory-flush (this skill)   Compaction + /new  LLM extracts structured facts (local Ollama)
command-logger (built-in)   Any command        Audit log
session indexing            Automatic          Historical sessions searchable

Search

  • Vector: any OpenAI-compatible embedding provider (Ollama, OpenAI, etc.)
  • Keyword: SQLite FTS5 (BM25)
  • Hybrid: weighted vector + keyword fusion
  • Scope: MEMORY.md + daily logs + session transcripts + SESSION-STATE.md
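Weighted fusion of the two score sources might look like this sketch; the alpha weighting is an assumption for illustration, and OpenClaw's actual fusion algorithm may differ:

```typescript
// Hypothetical weighted fusion of a vector-similarity score and a keyword
// (BM25) score. The alpha weighting is illustrative, not OpenClaw's
// actual algorithm.
function hybridScore(
  vectorScore: number,
  keywordScore: number,
  alpha = 0.7,
): number {
  // alpha weights the vector side; (1 - alpha) weights keyword relevance.
  return alpha * vectorScore + (1 - alpha) * keywordScore;
}
```

Results from both retrievers would then be ranked by this combined score.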

Setup

1. Prerequisites

Install Ollama and pull a chat model:

# Install Ollama (https://ollama.ai)
ollama pull qwen2.5:7b   # or any chat model you prefer
ollama serve              # ensure Ollama is running on localhost:11434

2. Enable Built-in Hooks

openclaw hooks enable session-memory
openclaw hooks enable command-logger

3. Install Memory-Flush Hook

Copy the hooks/memory-flush/ directory to ~/.openclaw/hooks/:

cp -r hooks/memory-flush ~/.openclaw/hooks/
openclaw hooks enable memory-flush

4. Configure Fact Extraction (Optional)

By default, the hook uses local Ollama — no configuration needed. To customize:

{
  hooks: {
    internal: {
      enabled: true,
      entries: {
        "memory-flush": {
          enabled: true,
          extractionModel: "qwen2.5:7b",           // Ollama model name
          baseUrl: "http://localhost:11434/v1/chat/completions"  // Ollama endpoint
        }
      }
    }
  }
}

Works with any OpenAI-compatible API (Ollama, LM Studio, vLLM, etc.). Set baseUrl and apiKey to use an external provider.
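For example, a sketch of an external-provider entry; the endpoint, model name, and key below are placeholders, and (as noted in Privacy & Data Handling) this configuration sends conversation snippets off-machine:

```json5
{
  hooks: {
    internal: {
      enabled: true,
      entries: {
        "memory-flush": {
          enabled: true,
          extractionModel: "gpt-4o-mini",   // placeholder model name
          baseUrl: "https://api.example.com/v1/chat/completions",  // placeholder endpoint
          apiKey: "sk-..."                  // placeholder key
        }
      }
    }
  }
}
```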

5. Enable Session Indexing

{
  agents: {
    defaults: {
      memorySearch: {
        provider: "local",  // or "openai", "ollama", "gemini", "voyage", etc.
        experimental: {
          sessionMemory: true
        },
        sources: ["memory", "sessions"],
        extraPaths: ["SESSION-STATE.md"]
      }
    }
  }
}

6. Restart Gateway

openclaw gateway restart

Agent Behavioral Rules

Add these rules to your AGENTS.md:

Memory Writing

  • Important info → MEMORY.md immediately (decisions, preferences, rules)
  • Daily summaries → memory/YYYY-MM-DD.md (event summaries, no raw tool output)
  • Cron report details → skip (already delivered elsewhere)
  • Critical info zero loss — important things must go to MEMORY.md, not just daily logs

Memory Searching

  • Check MEMORY.md + today/yesterday logs at session start
  • Use memory_search for historical queries
  • Ontology queries (relationships, "who is responsible for X") → use ontology skill

Sub-agent Context Injection

When spawning sub-agents, inject relevant context from MEMORY.md:

[Key context from memory, max 500 words]

---

[Actual task]
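Composing that injection programmatically might look like the sketch below; the 500-word cap and separator follow the template above, while the helper itself is illustrative:

```typescript
// Illustrative helper composing the injection template above: memory
// context capped at roughly 500 words, a separator, then the task.
function buildSubagentPrompt(
  memoryContext: string,
  task: string,
  maxWords = 500,
): string {
  const capped = memoryContext
    .split(/\s+/)
    .filter(Boolean)
    .slice(0, maxWords)
    .join(" ");
  return `${capped}\n\n---\n\n${task}`;
}
```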

File Structure

workspace/
├── AGENTS.md              # Behavioral rules (loaded every session)
├── SOUL.md                # Agent personality
├── USER.md                # User preferences
├── TOOLS.md               # Tool notes (keep lean, <2KB)
├── MEMORY.md              # Long-term curated memory
├── SESSION-STATE.md       # Hot working memory (WAL)
├── memory/
│   ├── YYYY-MM-DD.md      # Daily logs (summaries only)
│   ├── YYYY-MM-DD-facts-* # Auto-extracted facts
│   ├── YYYY-MM-DD-compact # Pre-compaction snapshots
│   └── ontology/
│       ├── graph.jsonl    # Knowledge graph
│       └── schema.yaml    # Entity type definitions
├── .learnings/
│   ├── LEARNINGS.md       # Best practices
│   ├── ERRORS.md          # Error log
│   └── FEATURE_REQUESTS.md
└── hooks/
    └── memory-flush/
        ├── HOOK.md
        └── handler.ts     # LLM fact extraction (local Ollama default)

What Gets Loaded When

File                                    When
AGENTS.md, SOUL.md, USER.md, TOOLS.md   Every session start
MEMORY.md                               DM session start
memory/today + yesterday                Every session start
SESSION-STATE.md                        Via memory_search (indexed)
Other memory files                      Via memory_search on demand

Token Optimization

  • Keep TOOLS.md lean (<2KB), move detailed configs to tools/ subdirectory
  • Daily logs: event summaries, not raw tool output
  • Fact extraction: 30 messages → ~10 facts (16:1 compression)

Troubleshooting

Facts not extracted? Check Ollama is running (ollama serve) and the model is available (ollama list).

Session search not working? Verify experimental.sessionMemory: true and sources: ["memory", "sessions"].

Hook not loading? Run openclaw hooks list --verbose and check for errors.

Want to use external API? Set baseUrl and apiKey in the hook config to use OpenAI or any compatible provider.

License

MIT

Files

3 total
