Lobster Autodream

v1.0.0

An AI memory-consolidation system inspired by the human sleep memory-consolidation process. It automatically organizes session history, extracts important information, and updates long-term memory, triggered on heartbeat checks or session compaction.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wangxiaofei860208-source/lobster-autodream.

Prompt preview (Install & Setup):
Install the skill "Lobster Autodream" (wangxiaofei860208-source/lobster-autodream) from ClawHub.
Skill page: https://clawhub.ai/wangxiaofei860208-source/lobster-autodream
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lobster-autodream

ClawHub CLI


npx clawhub@latest install lobster-autodream
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (AI memory consolidation) match the actions described in SKILL.md: periodic checks, extracting important items from conversation history, classifying them, and updating long-term memory files. No unrelated credentials, binaries, or external services are requested.
Instruction Scope
Instructions explicitly read local memory files (memory/YYYY-MM-DD.md) and update MEMORY.md; they describe gating rules and classification steps. These file read/write actions are coherent with a memory-management skill. The instructions do not ask to access unrelated system files, environment variables, or external endpoints.
Install Mechanism
There is no install specification and no code files — this is instruction-only, so nothing will be downloaded or written by an installer. Low surface area for supply-chain risk.
Credentials
The skill requires no environment variables, credentials, or config paths beyond reading/writing the project's memory files as described. The requested access is proportional to the stated purpose of updating local memory files.
Persistence & Privilege
Flags: always=false and model invocation is allowed (default). The skill does not request permanent/global presence or to modify other skills' configs. Its need to read/write MEMORY.md is appropriate for a memory tool but does require filesystem write permission in the agent workspace.
Scan Findings in Context
[no-findings] expected: The regex scanner found no code to analyze because this is an instruction-only SKILL.md; that's expected. The primary security surface is the instructions that read/write local memory files.
Assessment
This skill will read your agent's memory files (e.g., memory/YYYY-MM-DD.md) and modify MEMORY.md to add, consolidate, or remove remembered items. That behavior is consistent with its stated purpose, but you should:

1. Confirm that MEMORY.md and the memory/ folder do not contain secrets or sensitive personal data you do not want persisted.
2. Back up the existing MEMORY.md before enabling the skill, in case you want to review or restore changes.
3. Consider limiting automatic triggers (use user-invocable, or raise the time/content thresholds) if you prefer manual control over what gets written.
4. Verify the agent's file-system permissions so the skill cannot modify unrelated files.

If you need stronger guarantees about privacy or provenance, ask the skill author or maintainer (none listed) to provide provenance, or run a copy with logging enabled and review the exact edits before trusting it permanently.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97b482216x81w34jn2z08kmvn846rte
124 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0
MIT-0

AutoDream Memory Consolidation System

Based on the AutoDream system in the Claude Code source, adapted to OpenClaw's memory-management architecture.

Inspired by memory consolidation during human sleep: while you sleep, the brain sorts through the day's memories, storing the important ones in long-term memory and letting the unimportant ones fade.

Core Idea

The AI's "dreaming" process: periodically organize conversation history → extract key information → update long-term memory

Triple Gating Mechanism

To save tokens, checks run in order from cheapest to most expensive:

Gate 1: Time check (zero cost)

  • Has the time since the last AutoDream run exceeded the threshold?
  • Default: at least 4 hours apart
  • Prevents frequent consolidation from wasting resources

Gate 2: Content-volume check (low cost)

  • Is there enough new conversation content?
  • Default: at least 20 new messages
  • Too little content is not worth consolidating

Gate 3: Quality assessment (medium cost)

  • Does the content include anything worth remembering?
  • Exclude pure small talk and repeated content
  • Identify: decisions, preferences, lessons, important facts

Consolidation triggers only when all three gates pass.
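A minimal sketch of the gating order, assuming these function and threshold names (the keyword heuristic for Gate 3 is a crude stand-in; the skill itself judges quality semantically):

```python
from datetime import datetime, timedelta

# Hypothetical signal words for Gate 3; the real check assesses meaning, not keywords.
SIGNAL_WORDS = ("decided", "prefer", "lesson", "always", "never", "remember")

def should_autodream(last_run: datetime, now: datetime, new_messages: list[str],
                     min_interval: timedelta = timedelta(hours=4),
                     min_messages: int = 20) -> bool:
    """Run the three gates in order, cheapest first."""
    # Gate 1: time check (zero cost)
    if now - last_run < min_interval:
        return False
    # Gate 2: content-volume check (low cost)
    if len(new_messages) < min_messages:
        return False
    # Gate 3: quality assessment (medium cost)
    return any(word in msg.lower() for msg in new_messages for word in SIGNAL_WORDS)
```

Ordering matters: the zero-cost time check short-circuits most calls before any content is inspected.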

Memory Consolidation Flow

Step 1: Read the raw material

Read the day's memory/YYYY-MM-DD.md

Step 2: Information extraction

Following the four memory-type classification from Claude Code:

| Type | Scope | Examples | Written to |
|------|-------|----------|------------|
| 👤 user | always private | user persona, preferences, knowledge level | MEMORY.md#关于麻团 |
| 💬 feedback | private by default | practices the user corrected/confirmed | MEMORY.md#经验教训 |
| 📋 project | leans team | project status, goals, work in progress | memory/YYYY-MM-DD.md |
| 📖 reference | private or team | technical references, architecture knowledge, API docs | memory/*.md |

Key Rules

  • Do not store information derivable from code/git/files as memory (just read the file)
  • feedback should capture the why (reason), not just the rule itself
  • Convert relative dates to absolute dates ("next Thursday" → "2026-04-09")
  • Record confirmation-type feedback too (not only corrections)
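The relative-to-absolute date rule can be sketched with a hypothetical helper; the interpretation of "next" (the upcoming occurrence, rolling a full week when today already is that weekday) is an assumption, since conventions vary:

```python
from datetime import date, timedelta

def resolve_next_weekday(today: date, weekday: int) -> date:
    """Resolve 'next <weekday>' to an absolute date (weekday: Mon=0 .. Sun=6).
    'Next' is read as the upcoming occurrence, rolling to the following
    week when today already falls on that weekday."""
    days_ahead = (weekday - today.weekday()) % 7
    if days_ahead == 0:
        days_ahead = 7
    return today + timedelta(days=days_ahead)

# With today = Thu 2026-04-02, "next Thursday" resolves to 2026-04-09
```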

Step 3: Classify and Integrate

  • Group by topic
  • Deduplicate (compare against existing MEMORY.md content)
  • Mark priority (important / normal / optional)
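Deduplication against existing MEMORY.md content could look like the following sketch; the normalization here is a minimal stand-in, and catching paraphrases would require semantic comparison:

```python
def dedup_against_memory(candidates: list[str], memory_lines: list[str]) -> list[str]:
    """Drop candidate memories that MEMORY.md already contains."""
    def norm(s: str) -> str:
        # Casefold, collapse whitespace, strip leading bullet markers.
        return " ".join(s.casefold().split()).lstrip("-*• ").strip()
    seen = {norm(line) for line in memory_lines}
    kept = []
    for cand in candidates:
        if norm(cand) not in seen:
            kept.append(cand)
            seen.add(norm(cand))  # also dedup within the new batch
    return kept
```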

Step 4: Write to Long-Term Memory

Update MEMORY.md, appending new content under the matching category.
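Appending under a category heading can be sketched as follows; it assumes MEMORY.md uses "## " section headings, and the helper name is illustrative:

```python
def append_under_heading(memory_text: str, heading: str, entries: list[str]) -> str:
    """Append bullet entries at the end of the '## <heading>' section,
    creating the section at the end of the file if it is missing."""
    lines = memory_text.splitlines()
    bullets = [f"- {e}" for e in entries]
    try:
        start = lines.index(f"## {heading}")
    except ValueError:
        return "\n".join(lines + ["", f"## {heading}", *bullets]) + "\n"
    # The section ends at the next '## ' heading or at end of file.
    end = len(lines)
    for i in range(start + 1, len(lines)):
        if lines[i].startswith("## "):
            end = i
            break
    # Step back over trailing blank lines so bullets join the section body.
    while end > start + 1 and not lines[end - 1].strip():
        end -= 1
    return "\n".join(lines[:end] + bullets + lines[end:]) + "\n"
```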

Step 5: Cleanup

  • Mark or delete outdated information
  • Archive or clear temporary notes

Trigger Timing

  1. Heartbeat check: if the gates pass, run consolidation
  2. Session compaction: consolidate memory before compacting
  3. Manual trigger: the user says "organize memory" or "autodream"

Memory Quality Standards

Each memory written to MEMORY.md should be:

  • Concise: one sentence that makes the point
  • Searchable: contains the relevant keywords
  • Time-annotated: has an expiry date (where applicable)
  • Sourced: notes which conversation it came from

Worked Example

# Heartbeat triggers AutoDream

## Gate checks
- ✅ Time since last consolidation: 6 hours (> 4-hour threshold)
- ✅ New messages: 35 (> 20-message threshold)
- ✅ Contains decisions and preferences

## Extraction results
- 🎯 麻团 decided not to install Ollama
- 👤 麻团 is action-oriented and likes to move fast
- 📚 Installed two skills: web-learner and cli-anything
- 🔑 Configured imageModel: zai/glm-4.6v

## Write to MEMORY.md
(appended under the matching categories)

Configuration

Add to HEARTBEAT.md:

## AutoDream
- Check whether memory needs consolidating
- Follow skills/autodream/SKILL.md

Notes

  • Do not trigger on every session; respect the time gate
  • Do not delete important memories while consolidating
  • Do not write private information to MEMORY.md
  • Keep MEMORY.md lean: no more than 200 lines
  • If MEMORY.md gets too long, prune outdated content first
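The 200-line budget in the notes above can be checked trivially before each write (names are illustrative):

```python
MEMORY_LINE_BUDGET = 200  # guideline from the notes above

def lines_over_budget(memory_text: str, budget: int = MEMORY_LINE_BUDGET) -> int:
    """Return how many lines MEMORY.md exceeds the budget by (0 if within it).
    A positive result is the cue to prune outdated entries first."""
    return max(0, len(memory_text.splitlines()) - budget)
```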
