DeepRecall

Pass. Audited by ClawScan on May 10, 2026.

Overview

DeepRecall appears purpose-aligned, but users should know it reads local memory/workspace files and uses local LLM provider credentials to send selected content to configured model APIs.

Install only if you are comfortable with a memory-recall tool reading your OpenClaw workspace and using your configured LLM credentials. Start with narrow scopes, avoid running broad `all` searches on secret-heavy projects, and verify your provider/base URL settings.

Findings (3)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

The skill may use existing local LLM account credentials, which could incur provider usage charges or expose account-linked activity to the configured provider.

Why it was flagged

The provider bridge reads local OpenClaw configuration and cached provider tokens, including a GitHub Copilot token, to authenticate LLM requests.

Skill content

```python
CONFIG_FILE = OPENCLAW_DIR / "openclaw.json"
AUTH_PROFILES_FILE = OPENCLAW_DIR / "agents" / "main" / "agent" / "auth-profiles.json"
CREDENTIALS_DIR = OPENCLAW_DIR / "credentials"
...
token_file = creds_dir / "github-copilot.token.json"
...
token = data.get("token")
```

Recommendation

Use dedicated or limited-scope API keys where possible, review `~/.openclaw` provider settings, and verify any custom model base URL before use.
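
One way to act on this recommendation is to inspect the configured provider before trusting recall output. The snippet below is a hedged sketch: the config path matches the `CONFIG_FILE` constant quoted above, but the `provider` key names are assumptions about the file's layout.

```python
import json
from pathlib import Path

def show_provider_settings(config_file: Path = Path.home() / ".openclaw" / "openclaw.json") -> dict:
    """Report the provider name and base URL from the local config, if readable."""
    if not config_file.exists():
        return {}
    cfg = json.loads(config_file.read_text())
    provider = cfg.get("provider", {})  # assumed key; adjust to your config layout
    return {"name": provider.get("name"), "base_url": provider.get("base_url")}
```

If the base URL is not the provider's official endpoint, treat that as a red flag before enabling the skill.
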

What this means

Private memory or project content may be transmitted to the configured LLM provider when broad scopes are used.

Why it was flagged

The skill discloses that it can search broad workspace scopes and send recall context to an external OpenAI-compatible LLM endpoint.

Skill content

```
Just markdown files and HTTP calls to any OpenAI-compatible LLM endpoint.
...
`project` | All readable workspace files ...
`all`     | Identity + memory + project (everything)
```

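
For context on what "HTTP calls to any OpenAI-compatible LLM endpoint" usually entails, the sketch below builds a standard chat-completions request. The function name, placeholder model, and message layout are illustrative assumptions; the excerpt does not show the skill's real request code. Note that everything placed in `context` leaves the machine.

```python
import json
import urllib.request

def build_recall_request(base_url: str, api_key: str, context: str, query: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completions request carrying recall context."""
    payload = {
        "model": "recall-model",  # placeholder; a real deployment reads this from config
        "messages": [
            {"role": "system", "content": "Answer using the provided recall context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

With the `all` scope, `context` can include any readable workspace file, which is why the scope table above matters.
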
Recommendation

Prefer narrow scopes such as `identity` or `memory` for sensitive work, avoid `all` on workspaces containing secrets, and review the privacy terms of the configured LLM provider.

What this means

Sensitive names or topics from memory files can be summarized into a persistent index, and stale or poisoned memory entries could influence later recall results.

Why it was flagged

The indexer extracts people, topics, and other metadata from memory files and persists them in `MEMORY_INDEX.md`.

Skill content

```python
lines.append("## People")
...
lines.append("## Topics")
...
index_path = workspace / "MEMORY_INDEX.md"
index_path.write_text(index_content)
```

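
The persistence step the finding describes can be pictured with a short sketch. The function below is a hypothetical reconstruction consistent with the excerpt (the section headers and index path match; the extraction logic feeding `people` and `topics` is not shown and is assumed):

```python
from pathlib import Path

def write_memory_index(workspace: Path, people: list[str], topics: list[str]) -> Path:
    """Persist extracted people/topics into a markdown index in the workspace."""
    lines = ["# Memory Index", "", "## People"]
    lines += [f"- {p}" for p in sorted(set(people))]
    lines += ["", "## Topics"]
    lines += [f"- {t}" for t in sorted(set(topics))]
    index_path = workspace / "MEMORY_INDEX.md"
    index_path.write_text("\n".join(lines) + "\n")
    return index_path
```

Because the index persists across sessions, a poisoned memory file only needs to be indexed once to influence later recall, which is why periodic review and regeneration matter.
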
Recommendation

Review generated memory indexes periodically, remove untrusted or sensitive files from the workspace, and regenerate the index after cleanup.