忆时

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

This memory skill is not clearly malicious, but it persistently changes the agent's global behavior, automatically reads and writes memories across chats, and carries unexplained wallet/purchase capability signals.

Install only if you intentionally want a persistent memory layer across conversations. Avoid providing wallets, payment authority, or credentials. Consider using the CLI manually instead of adding the global instruction file, and review or delete the local memory database regularly.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Global instruction file alters behavior in every chat

What this means

If installed as documented, the agent may alter its normal behavior in every chat, run memory recall before responding, and use an imposed writing style even when you did not ask for it.

Why it was flagged

The global instruction file forces memory retrieval on every conversation and changes all agent outputs, including unrelated work, which goes beyond a scoped memory helper.

Skill content
“每次对话,必做:- 用户发言后,先检索记忆...” (“Every conversation, mandatory: after the user speaks, first retrieve memories...”) and “所有输出(对话、思考、文档、技能)必用鲁迅式半文半白” (“all output (dialogue, thinking, documents, skills) must use Lu Xun-style half-literary, half-vernacular prose”)
Recommendation

Only enable the global instruction file if you want memory behavior in all conversations; otherwise use manual invocation or remove the unrelated global style mandate.

Finding 2: Automatic invocation of local memory tooling

What this means

The agent can repeatedly invoke local memory tooling during normal chats, which may update persistent memory metadata and surface old private context unexpectedly.

Why it was flagged

The instructions direct the agent to run a local Python CLI automatically after user messages, not just when the user explicitly invokes the skill.

Skill content
“用户发言后,先检索记忆:`python3 skills/忆时/scripts/memory_core.py recall "关键词" --limit 3 --expand`” (“After the user speaks, first retrieve memories:”, followed by the recall command; "关键词" is a keyword placeholder)
Recommendation

Require explicit user confirmation before automatic recall/store operations, or limit automatic tool use to clearly marked memory sessions.
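One way to implement that confirmation is a thin wrapper that gates the documented recall command behind an explicit prompt. This is a minimal sketch: the wrapper, its parameters, and the prompt text are our own assumptions; only the recall command itself comes from the skill's instructions.

```python
import shlex
import subprocess

def confirmed_recall(keyword, limit=3, *, ask=input, run=subprocess.run):
    """Prompt the user before invoking the memory CLI instead of recalling silently."""
    cmd = [
        "python3", "skills/忆时/scripts/memory_core.py",
        "recall", keyword, "--limit", str(limit), "--expand",
    ]
    if ask(f"Run memory recall? {shlex.join(cmd)} [y/N] ").strip().lower() != "y":
        return None  # declined: no memory tooling is touched
    return run(cmd, capture_output=True, text=True)
```

The `ask` and `run` parameters are injectable so the gate can be tested (or wired to a chat-level confirmation) without actually executing the CLI.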

Finding 3: Unexplained wallet and purchase capability signals

What this means

A user could be asked to grant wallet, purchase, or sensitive-credential access that is not justified by the stated memory function.

Why it was flagged

These high-impact capability signals do not match the documented memory-capsule purpose, and the requirements section declares no primary credential and no required environment variables that would explain them.

Skill content
“crypto”, “requires-wallet”, “can-make-purchases”, “requires-sensitive-credentials”
Recommendation

Do not connect wallets, payment authority, or sensitive credentials for this skill unless the publisher clearly documents why they are needed and scopes them narrowly.

Finding 4: Persistent cross-session memory without retention controls

What this means

Private or sensitive information you share may be stored and later reused in other conversations; incorrect or poisoned memories may also influence future answers.

Why it was flagged

The skill persists and reuses conversation-derived memories automatically across sessions, but the artifacts do not clearly define retention, review, deletion, or per-topic consent boundaries.

Skill content
“配置后 AI 将自动:- 每次对话前检索记忆系统 ... - 对话结束时自动归档重点” (“Once configured, the AI will automatically: query the memory system before every conversation ... automatically archive key points when the conversation ends”)
Recommendation

Review the data directory, disable automatic storage if undesired, and ask for clear delete/export/retention controls before relying on it for sensitive conversations.
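A practical first step for that review is simply enumerating what the skill has persisted. A minimal sketch, assuming the memories live under a `skills/忆时/data` directory; the actual storage layout is not documented in the artifacts, so the path is hypothetical:

```python
from pathlib import Path

def list_memory_artifacts(data_dir="skills/忆时/data"):
    """List persisted memory files (relative paths) so they can be reviewed or deleted."""
    root = Path(data_dir)
    if not root.exists():
        return []  # nothing stored yet, or the assumed path is wrong
    return sorted(str(p.relative_to(root)) for p in root.rglob("*") if p.is_file())
```

Anything listed here survives across conversations, so periodically inspecting (and pruning) this directory is the manual substitute for the retention controls the skill does not provide.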

Finding 5: Background scheduled (cron) maintenance mode

What this means

If you add the suggested scheduled task, memory maintenance may happen in the background without you actively invoking the skill each time.

Why it was flagged

The active mode is documented as a scheduled/cron workflow that can continue checking, archiving, and reporting memory state after setup.

Skill content
“结合定时任务/cron周期性执行” (“run periodically via scheduled tasks/cron”) and “python3 scripts/memory_core.py capsule check-expired && ... forget --low-freq 1 --auto && ... stats”
Recommendation

Do not configure cron/scheduled active mode unless you want background memory maintenance; keep the task visible and easy to disable.
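If you do opt in, keeping the entry in your own crontab (rather than letting the agent install it) makes it visible and trivially reversible. A hypothetical entry: the schedule, working directory, and the repeated CLI prefix are all assumptions, since the skill's documentation elides parts of the command chain.

```shell
# Hypothetical crontab entry for the skill's "active mode".
# Schedule (daily at 03:00) and the repeated "python3 scripts/memory_core.py"
# prefix are assumptions; delete or comment out this line to disable.
0 3 * * * cd /path/to/skills/忆时 && python3 scripts/memory_core.py capsule check-expired && python3 scripts/memory_core.py forget --low-freq 1 --auto && python3 scripts/memory_core.py stats
```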

Finding 6: Packaged model assets appear incomplete

What this means

The skill may fail, fall back to another embedding path, or need dependencies/assets that are not clearly packaged in the provided artifacts.

Why it was flagged

The manifest shown includes tokenizer/config files but not the referenced `models/onnx/model.onnx`, so the packaged model/provenance appears incomplete or inconsistent with the documentation.

Skill content
“本技能自带 all-MiniLM-L6-v2 embedding 模型 (87MB) ... 首次使用无需任何下载” (“This skill bundles the all-MiniLM-L6-v2 embedding model (87MB) ... no download is needed on first use”)
Recommendation

Verify the model file and dependency installation before use, and prefer a package with complete model assets and explicit dependency declarations.
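That verification can be automated with a quick existence-and-size check. The `models/onnx/model.onnx` path and the ~87MB figure come from the skill's own artifacts; the 50% slack threshold is our assumption, chosen to catch empty or truncated placeholder files.

```python
from pathlib import Path

def model_looks_complete(skill_root, expected_mb=87):
    """Check that the documented ONNX model is packaged and plausibly sized."""
    model = Path(skill_root) / "models" / "onnx" / "model.onnx"
    if not model.is_file():
        return False  # documented model not present: manifest is inconsistent
    size_mb = model.stat().st_size / (1024 * 1024)
    return size_mb > expected_mb * 0.5  # truncated/placeholder files fail too
```

If this returns False, the skill may silently fall back to a different embedding path or trigger a download, contradicting the “no download needed” claim.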