Auto Summarization Loop
v1.0.0 · Auto Summarization Loop: automatic context management for long-conversation AI characters. Use it to: (1) build a multi-level memory architecture (core memory / working memory / long-term memory); (2) implement a sliding window with a dual-watermark trigger policy; (3) design an asynchronous background compression flow; (4) produce structured summary output for persona bots. Intended for AI applications that must handle long conversations, reduce API costs, and avoid context overflow.
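The three memory tiers and the sliding window described above could be sketched roughly as follows. All names (MemoryStore, add_turn, the window size) are hypothetical illustrations, not the skill's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Illustrative three-tier memory layout (hypothetical names)."""
    core: str = ""                                 # persona / fixed facts, never evicted
    working: list = field(default_factory=list)    # recent turns, sliding window
    long_term: list = field(default_factory=list)  # compressed summaries

    def add_turn(self, turn: str, window: int = 20) -> None:
        # Append to working memory; evict the oldest turns beyond the window.
        self.working.append(turn)
        if len(self.working) > window:
            evicted = self.working[:-window]
            self.working = self.working[-window:]
            # Evicted turns would be queued for summarization into long-term memory.
            self.long_term.append("SUMMARY_PENDING: " + " | ".join(evicted))

m = MemoryStore(core="You are a helpful assistant.")
for i in range(25):
    m.add_turn(f"turn {i}")
```

After 25 turns with a window of 20, the working tier holds only the most recent 20 turns while the overflow has been staged for compression.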
Security Scan
OpenClaw
Verdict: Benign (high confidence)
Purpose & Capability
The name and description describe multi-level memory and summary-trigger behaviors, and the included Python module (memory_manager.py) implements exactly that: token estimation, watermark triggers, async/sync summarization, prompt construction, and storage layout. No unrelated capabilities (cloud access, SSH, system-level config) are requested.
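The token-estimation and dual-watermark trigger behaviors mentioned here could work along these lines. This is a sketch under stated assumptions: the ~4-characters-per-token heuristic, the watermark values, and the function names are all hypothetical, not taken from memory_manager.py:

```python
def estimate_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token. A real implementation
    # would use the target model's tokenizer.
    return max(1, len(text) // 4)

def should_summarize(history: list[str],
                     high_watermark: int = 3000,
                     low_watermark: int = 1500) -> tuple[bool, int]:
    """Dual-watermark policy: trigger summarization when the estimated
    token count crosses the high watermark, then compress enough of the
    oldest turns to bring the remainder under the low watermark.
    Returns (trigger, number_of_turns_to_compress)."""
    total = sum(estimate_tokens(t) for t in history)
    if total <= high_watermark:
        return False, 0
    # Walk from the oldest turn until the remainder fits under the low mark.
    running = total
    n = 0
    for turn in history:
        if running <= low_watermark:
            break
        running -= estimate_tokens(turn)
        n += 1
    return True, n
```

The gap between the two watermarks gives hysteresis: summarization fires only well above the target size, so it does not re-trigger on every new turn.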
Instruction Scope
SKILL.md focuses on building prompts, triggering summaries, and integrating a user-supplied summarize_fn / call_api. It does not instruct reading arbitrary system files or exfiltrating data. It references session storage paths (a session/ structure) and calls like call_api/call_model which are placeholders — the skill expects the host to implement actual model calls.
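A host wiring its own model call into the skill's placeholder integration points might look like the following. Everything here is hypothetical (make_compactor, the prompt text, the stub); the skill itself leaves call_api/call_model unimplemented for the host to supply:

```python
from typing import Callable

def make_compactor(summarize_fn: Callable[[str], str]):
    """Return a function that compresses a list of turns into one summary
    entry via the host-supplied summarize_fn (hypothetical wiring)."""
    def compact(turns: list[str]) -> str:
        prompt = "Summarize the following conversation turns:\n" + "\n".join(turns)
        return summarize_fn(prompt)
    return compact

# A stub in place of a real model call; a production host would forward
# the prompt to its own audited LLM endpoint instead.
def fake_summarize(prompt: str) -> str:
    return f"[summary of {prompt.count(chr(10))} lines]"

compact = make_compactor(fake_summarize)
```

Keeping the model call behind a host-injected callable is exactly what makes the audit point below meaningful: you can inspect the one function that touches the network.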
Install Mechanism
No install spec or external downloads are provided; the skill is instruction-first with two local Python source files. There are no remote URLs, package installs, or archive extractions that would present an installation risk.
Credentials
The skill declares no required environment variables, credentials, or config paths. The code likewise does not read environment secrets or request API keys; model/API integration points are left to the host and therefore do not demand credentials from the skill itself.
Persistence & Privilege
The skill's always flag is false, and the skill does not request persistent system-wide privileges. It stores session data in a local session/ structure (documented) and does not modify other skills or global agent settings.
Assessment
This skill appears coherent and self-contained for building summarization/memory management. Before installing:
(1) Verify how call_api/call_model and summarize_fn are implemented by your host; ensure they do not send data to external endpoints you don't control.
(2) Decide where session/ data will be stored and whether it may contain sensitive PII (the long-term summary may accumulate personal data).
(3) Replace example persona strings (e.g., ones impersonating real persons) with acceptable system prompts if needed.
(4) If you connect a real LLM provider, provide credentials only to the host runtime (never embedded in the skill) and audit those calls.
(5) Review the included test script (it adjusts sys.path) before running it in production.
Overall, the skill is internally consistent, but you should confirm the host's model-calling and storage implementations before use.
Tags: ai · context · latest · memory · summarization
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
