Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Claw Recall

v2.1.2

Searchable conversation memory that survives context compaction. Indexes session transcripts into SQLite with full-text and semantic search so your agent can...

Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (local searchable conversation memory) mostly aligns with the instructions to index transcripts and provide MCP tools. However, the SKILL.md also claims to index Gmail/Google Drive/Slack yet does not declare or explain the OAuth/API credentials or config needed to access those services. It also advertises 'self-hosted, no cloud dependency' but explicitly supports semantic search via OpenAI embeddings (requires OPENAI_API_KEY), which contradicts the 'no cloud' claim. Additionally the registry metadata and the SKILL.md disagree about environment requirements (registry shows none, SKILL.md's metadata requires PYTHONPATH and optionally OPENAI_API_KEY).
Instruction Scope
The instructions tell an operator/agent to git clone a repo and pip install requirements, run python modules, edit agent config files (~/.openclaw/openclaw.json and ~/.claude.json), read local session archives (~/.openclaw/agents-archive/), poll external services, and optionally start an SSE server reachable from the network. These actions go beyond purely local read-only search: they enable network exposure of conversation data, require access to other agents' config files, and imply collection/transmission of external content. The documentation also instructs running commands with no pinned release or checksums.
Install Mechanism
The registry has no formal install spec, but the SKILL.md instructs to git clone https://github.com/rodbland2021/claw-recall and pip install -r requirements.txt. GitHub is a common host (lower risk than arbitrary IPs), but cloning the tip of a repo and running pip install without pinned commits, checksums, or review is a moderate-to-high risk: it results in arbitrary third-party code executing on the machine. There is no guidance to pin a release, verify signatures, or inspect requirements.
Credentials
The registry reported 'Required env vars: none', but SKILL.md metadata declares PYTHONPATH as required and OPENAI_API_KEY as optional. More importantly, indexing Gmail/Drive/Slack will require OAuth client secrets, tokens, or service-account credentials — none of which are declared nor described. Requiring PYTHONPATH as a mandatory env var is unusual and disproportionate (it forces a particular install style). The combination of access to local transcripts plus potential external service credentials increases the risk of data exposure if misconfigured.
Persistence & Privilege
The skill is not 'always:true' and allows user invocation only, which is normal. However, the SKILL.md instructs editing other agent config files to register an MCP server and starting an SSE server that can be reached from anywhere. Those steps modify agent behavior and expose conversation data over the network, which is a higher privilege and an operational risk. The instructions encourage changes to other agents' configurations (OpenClaw/Claude) and starting a network service that could leak sensitive data if left exposed.
What to consider before installing
Key things to consider before installing or running this skill:

- Trust and review the code: The SKILL.md tells you to git clone a GitHub repo and pip install its requirements. Review the repository contents and requirements.txt yourself, and prefer installing a pinned commit or a tagged release (not the repository HEAD).
- Credentials and scope: The skill claims to index Gmail/Google Drive/Slack but does not declare how to supply OAuth tokens or what scopes it needs. Do not provide broad personal credentials until you verify what is stored and how tokens are protected. Create minimally scoped service accounts where possible.
- Network exposure: Starting the SSE server makes your conversation DB reachable over the network. If you run it, bind it to localhost or your private network, use TLS and authentication, and restrict firewall access. Treat the database as sensitive data.
- OpenAI usage: If you enable semantic search, content will be sent to OpenAI (OPENAI_API_KEY) for embeddings, which contradicts the 'no cloud dependency' claim. If you need purely local operation, ensure semantic features are disabled or use a local embedding model.
- Data residency and secrets: The skill reads ~/.openclaw/agents-archive/ and writes a local SQLite DB. Store the database file with appropriate filesystem permissions, and do not index secrets. The SKILL.md explicitly says 'NOT for storing secrets'; follow that guidance.
- Safer deployment: Run in an isolated environment (dedicated VM or container), audit the code and dependencies, pin releases, and test with non-sensitive data first. If you are not comfortable auditing the repository yourself, treat this as higher risk.

Given the mismatches between the registry metadata and the SKILL.md, and the operational risks of cloning and executing third-party code and exposing data over the network, proceed only after code review and conservative deployment choices.
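For the filesystem-permissions point above, a minimal sketch of locking down the database file (the path is a hypothetical default; check where the indexer actually writes before relying on this):

```python
import os
import stat

def restrict_db_permissions(db_path: str) -> None:
    """Make the SQLite database readable and writable by the owner only (0600)."""
    os.chmod(db_path, 0o600)

# Hypothetical location -- verify against the repository's actual default.
db_path = os.path.expanduser("~/.claw-recall/memory.db")
if os.path.exists(db_path):
    restrict_db_permissions(db_path)
    mode = stat.S_IMODE(os.stat(db_path).st_mode)
    assert mode == 0o600
```

Note this only covers the main database file; SQLite may also create `-wal` and `-shm` sidecar files in WAL mode, which deserve the same treatment.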

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧠 Clawdis
Bins: python3, pip3
Tags: compaction, context, conversation, latest, memory, multi-agent, recall, search
322 downloads · 1 star · 3 versions · Updated 6h ago · v2.1.2 · MIT-0

Claw Recall — Searchable Conversation Memory for AI Agents

Your agent just lost context mid-task. The decision you made an hour ago? Gone. What your other agent figured out yesterday? Unreachable. Claw Recall fixes this by indexing every conversation into a searchable database your agents can query anytime.

The Problem

Context compaction drops critical decisions. Cross-session knowledge vanishes. Long conversations push early context out of the window. If you run multiple agents, they can't access each other's conversations at all. MEMORY.md helps with preferences, but it can't answer "what exactly did we discuss about the API last Tuesday?"

What Claw Recall Does

  • Post-compaction recovery: Get the full transcript from before compaction wiped your context
  • Cross-agent search: Any agent can search any other agent's conversations
  • Unified search: Conversations, captured thoughts, Gmail, Google Drive, and Slack in one query
  • Hybrid search: Keyword (FTS5) + semantic (OpenAI embeddings) with automatic detection
  • Self-hosted: Your data stays on your machine. Keyword search has no cloud dependency; optional semantic search sends content to the OpenAI API.

Installation

Claw Recall is an MCP server. Clone the repository, install its dependencies, then connect it to your agent.

git clone https://github.com/rodbland2021/claw-recall.git
cd claw-recall
pip install -r requirements.txt
python3 -m claw_recall.indexing.indexer --source ~/.openclaw/agents-archive/ --incremental
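The indexer command above walks the archive of .jsonl session files and loads them into a searchable SQLite table. As an illustration of the general shape of that step (the `{"role": ..., "content": ..., "ts": ...}` line format and the `messages` table schema are assumptions for this sketch, not the project's actual schema):

```python
import json
import sqlite3

def index_transcript(jsonl_path: str, db: sqlite3.Connection) -> int:
    """Insert each line of a .jsonl transcript into a full-text-searchable table."""
    db.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS messages USING fts5(role, content, ts)"
    )
    count = 0
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines between records
            msg = json.loads(line)
            db.execute(
                "INSERT INTO messages VALUES (?, ?, ?)",
                (msg.get("role", ""), msg.get("content", ""), str(msg.get("ts", ""))),
            )
            count += 1
    db.commit()
    return count
```

An incremental run would additionally track which files (or byte offsets) were already indexed; the sketch above re-reads everything it is given.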

Full setup guide: https://github.com/rodbland2021/claw-recall#quick-start

Connect via MCP (OpenClaw)

Add to your OpenClaw config (~/.openclaw/openclaw.json or agent config):

{
  "mcpServers": {
    "claw-recall": {
      "command": "python3",
      "args": ["-m", "claw_recall.api.mcp_stdio"],
      "env": { "PYTHONPATH": "/path/to/claw-recall" }
    }
  }
}

Connect via MCP (Claude Code)

Add to ~/.claude.json:

{
  "mcpServers": {
    "claw-recall": {
      "command": "python3",
      "args": ["-m", "claw_recall.api.mcp_stdio"],
      "env": { "PYTHONPATH": "/path/to/claw-recall" }
    }
  }
}

Remote agents (SSE)

Start the SSE server on the Claw Recall machine, then connect from anywhere:

python3 -m claw_recall.api.mcp_sse
claude mcp add --transport sse -s user claw-recall "http://your-server:8766/sse"
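Before exposing the SSE endpoint, confirm it only listens where you intend (whether the server offers a bind-address option is something to check in the repository). In plain socket terms, the difference between loopback-only and network-wide exposure is just the bind address:

```python
import socket

# Binding to 127.0.0.1 means only processes on this machine can connect;
# binding to 0.0.0.0 would accept connections from any network interface.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port for this demo
host, port = sock.getsockname()
print(host)  # 127.0.0.1
sock.close()
```

If remote agents genuinely need access, prefer a private network or an authenticated reverse proxy in front of the SSE port rather than exposing it directly.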

MCP Tools Reference

Primary Tools (use these most)

search_memory — The main search tool. Searches ALL sources in one call: conversations, captured thoughts (Gmail, Drive, Slack), and markdown files.

search_memory query="what did we decide about the API" [agent=butler] [days=7]

Optional params: agent (filter by agent name), days (limit to recent), force_semantic (use embeddings), force_keyword (use FTS5 only), convos_only, files_only, limit.
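Keyword mode is backed by SQLite FTS5. A minimal sketch of what such a query looks like under the hood (table name, columns, and sample rows are illustrative, not the project's schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE messages USING fts5(agent, content)")
db.executemany("INSERT INTO messages VALUES (?, ?)", [
    ("butler", "We decided the API should be versioned under /v2"),
    ("kit", "SQLite WAL mode requires a checkpoint"),
])
# FTS5's default tokenizer case-folds, so 'api' matches 'API';
# ORDER BY rank sorts results by BM25 relevance.
hits = db.execute(
    "SELECT agent, content FROM messages WHERE messages MATCH ? ORDER BY rank",
    ("api",),
).fetchall()
print(hits)  # [('butler', 'We decided the API should be versioned under /v2')]
```

This is why keyword search needs no API key: FTS5 ships inside SQLite itself.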

browse_recent — Full transcript of the last N minutes. The go-to tool for context recovery after compaction.

browse_recent [agent=kit] [minutes=30]

Returns the complete conversation with timestamps. Use this FIRST after any context reset.

capture_thought — Save an insight, decision, or finding so any agent can find it later.

capture_thought content="SQLite WAL mode requires checkpoint for readers to see writes" [agent=kit]

Secondary Tools

Tool | Purpose
search_thoughts | Search captured thoughts only (usually search_memory is better)
browse_activity | Session summaries across agents for a time period
poll_sources | Trigger Gmail/Drive/Slack polling on demand
memory_stats | Database statistics (indexed sessions, messages, embeddings)
capture_source_status | Check external source capture health

When to Use Each Tool

Situation | Tool | Example
Just restarted / lost context | browse_recent | "What was I working on?"
Looking for a past decision | search_memory | "What did we decide about pricing?"
Need another agent's work | search_memory with agent= | "What did atlas find about the schema?"
Found something worth sharing | capture_thought | Save a reusable insight
Checking if something was discussed | search_memory with days= | "Did we talk about X this week?"
External source check | poll_sources | Trigger Gmail/Drive/Slack re-scan

How It Works

Session Files (.jsonl) → Indexer → SQLite DB (FTS5 + vectors) → MCP Tools
                                        ↑
Gmail / Drive / Slack → Source Poller ───┘

All data is stored locally in a SQLite database. Keyword search uses FTS5 (zero API keys). Semantic search uses OpenAI embeddings (requires OPENAI_API_KEY in .env).
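Semantic mode stores an embedding vector alongside each message; ranking a query then typically reduces to cosine similarity between the query's vector and each stored vector. A generic sketch of that ranking step (the vectors and IDs are made up; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [0.9, 0.1, 0.0]  # embedding of the user's query (illustrative)
stored = {
    "msg-1": [0.8, 0.2, 0.1],
    "msg-2": [0.0, 0.1, 0.9],
}
# Rank stored messages by similarity to the query, most similar first.
ranked = sorted(stored, key=lambda k: cosine_similarity(query_vec, stored[k]), reverse=True)
print(ranked)  # ['msg-1', 'msg-2']
```

The keyword path never needs this step, which is why only semantic search requires an API key to produce the vectors in the first place.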

Requirements

  • Python 3.10+
  • SQLite 3.35+ (bundled with Python)
  • OpenAI API key (optional, only for semantic search)
