Thought-Retriever

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

The skill’s stated memory purpose is coherent, but it embeds a hard-coded API key, sends conversation content to an external LLM, and persists conversation-derived memories with limited user controls.

Install only if you are comfortable with conversation content being sent to the DashScope/Bailian-compatible API and with extracted memories being saved for future reuse. Before enabling any post-turn hook, replace the hard-coded API key with your own scoped credential, confirm the provider’s data policy, review or disable automatic memory writes, and verify the local ontology dependency and workspace path.

Findings (5)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Hard-coded provider API key

What this means

Requests may be made under someone else’s exposed provider account, and the embedded key can be copied, abused, or revoked without the user’s control.

Why it was flagged

The source embeds and uses a bearer API key even though the registry declares no primary credential or environment-variable requirement, making the provider account boundary and rotation model unclear.

Skill content
SF_KEY = "sk-b841f4b7c91d40ddb12502462708f361" ... headers={"Authorization": f"Bearer {SF_KEY}"
Recommendation

Remove the hard-coded key; require a user-supplied, scoped API key via an environment variable or secure credential store, and document the provider account used.
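The recommendation above can be sketched as a small loader that reads a user-supplied credential from the environment and fails loudly when none is set. The variable name `DASHSCOPE_API_KEY` is an assumption for illustration, not something the skill defines.

```python
import os

def get_api_key() -> str:
    """Load a user-supplied, scoped API key instead of embedding one in source."""
    key = os.environ.get("DASHSCOPE_API_KEY")  # assumed variable name
    if not key:
        raise RuntimeError(
            "DASHSCOPE_API_KEY is not set; supply your own scoped credential"
        )
    return key
```

A secure credential store (OS keychain, secrets manager) is an equally valid backing; the point is that the key never ships inside the skill’s source.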

Finding 2: Conversation content sent to an external LLM

What this means

Conversation content, including potentially sensitive information, can leave the local workspace and be processed by the external provider, especially if configured as an automatic post-turn hook.

Why it was flagged

The skill sends the user query and generated answer to an external LLM endpoint and explicitly disables environment proxy settings, which affects data-boundary and network-control expectations.

Skill content
SF_API = "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions" ... 用户问题:{user_query}\n回答内容:{generated_answer} (i.e., "User question: {user_query}\nAnswer content: {generated_answer}") ... session.trust_env = False
Recommendation

Make external transmission explicit at install and runtime, allow opt-in or per-session disablement, redact sensitive content where possible, document provider retention/privacy terms, and avoid bypassing user-configured proxy controls unless clearly requested.

Finding 3: Automatic persistence of conversation-derived memories

What this means

Private or incorrect information from a conversation may be converted into long-lived memory and reused in future tasks.

Why it was flagged

The documented workflow persistently stores conversation-derived thoughts and original-question snippets after conversations, but does not define retention, deletion, exclusions, or user review controls.

Skill content
Registered in the OpenClaw configuration as a post-turn hook, triggered automatically at the end of every conversation. ... graph.jsonl ← Thought entity storage ... every Thought has a `query` field recording the original question that produced it. (Translated from the skill’s Chinese documentation.)
Recommendation

Require explicit opt-in for automatic memory writes, provide review-before-save and delete/retention controls, avoid storing sensitive query text by default, and mark memories with provenance and confidence.
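The review-before-save and provenance points above can be sketched as a write path that refuses to persist a record until an explicit approval callback accepts it. The record layout and the `approve` callback are assumptions; the skill’s actual graph.jsonl schema may differ.

```python
import json
import time

def save_thought(thought: str, query: str, approve, path: str = "graph.jsonl"):
    """Persist a conversation-derived memory only after explicit user review."""
    record = {
        "thought": thought,
        "query": query,              # consider omitting by default if sensitive
        "provenance": "conversation",  # mark where the memory came from
        "confidence": 0.5,             # start at a neutral confidence
        "created_at": time.time(),
    }
    if not approve(record):          # explicit review gate before any write
        return None
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record
```

Appending one JSON object per line keeps the store compatible with a JSONL file like the skill’s graph.jsonl, while the provenance and confidence fields make later deletion and auditing tractable.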

Finding 4: Unvalidated confidence growth feeding self-evolution

What this means

A mistaken or adversarially influenced memory could become more trusted over time and affect future agent behavior.

Why it was flagged

Persisted, automatically confidence-adjusted thoughts are described as inputs to later retrieval and self-evolution workflows, so a bad or poisoned thought can propagate beyond the original conversation.

Skill content
Confidence is automatically raised whenever a later related query triggers the Thought ... Evolver | reads Thoughts as raw material for self-evolution analysis. (Translated from the skill’s Chinese documentation.)
Recommendation

Add validation before increasing confidence, keep an audit trail, support rollback, and require user approval before memories are used for self-evolution or broad behavior changes.
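A minimal sketch of the validation-plus-audit recommendation: confidence only increases when a validator accepts the reinforcing evidence, and every change is appended to an audit trail that supports rollback. The field names, the validator callback, and the step size are assumptions.

```python
def bump_confidence(thought: dict, evidence: str, validator, audit: list,
                    step: float = 0.1) -> float:
    """Raise a Thought's confidence only after validation, recording an audit entry."""
    if not validator(thought, evidence):   # reject unvalidated reinforcement
        return thought["confidence"]
    old = thought["confidence"]
    thought["confidence"] = min(1.0, old + step)  # cap at full confidence
    audit.append({"old": old, "new": thought["confidence"], "evidence": evidence})
    return thought["confidence"]

def rollback(thought: dict, audit: list) -> None:
    """Undo the most recent confidence change, if any."""
    if audit:
        thought["confidence"] = audit.pop()["old"]
```

Because the audit list records old and new values with the triggering evidence, a poisoned reinforcement can be traced and reversed instead of silently compounding across future retrievals.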

Finding 5: Hard-coded workspace path and undeclared ontology dependency

What this means

The skill may fail or interact with an unexpected workspace on other systems, and its safety also depends on the locally installed ontology module.

Why it was flagged

The runtime depends on local ontology code at a hard-coded workspace path that is not included in this skill’s manifest, so the executed helper code and target workspace must be trusted separately.

Skill content
WORKSPACE = Path("C:/Users/89627/.openclaw/workspace").resolve() ... sys.path.insert(0, str(WORKSPACE / "skills" / "ontology" / "scripts")) ... from ontology import load_graph
Recommendation

Use the current user’s configured OpenClaw workspace path, declare the ontology dependency clearly, and verify the local dependency before enabling the skill.
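The path recommendation above can be sketched as a resolver that prefers an explicit environment setting and otherwise falls back to the current user’s home directory, instead of the hard-coded `C:/Users/89627/...` path. The `OPENCLAW_WORKSPACE` variable name is an assumption for illustration.

```python
import os
from pathlib import Path

def resolve_workspace() -> Path:
    """Resolve the OpenClaw workspace for the current user, not a fixed path."""
    env = os.environ.get("OPENCLAW_WORKSPACE")  # assumed variable name
    ws = Path(env) if env else Path.home() / ".openclaw" / "workspace"
    return ws.resolve()
```

The ontology dependency should additionally be declared in the skill’s manifest so the helper code under `skills/ontology/scripts` can be reviewed before it is ever added to `sys.path`.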