Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Context Manager
v1.0.0 · Manages long-term memory and files for multiple AI agents; combined vector and time filtering enables efficient semantic retrieval and fewer LLM calls.
⭐ 0 · 39 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md and description claim a full system (ChromaDB, sentence-transformers, Redis, optional OpenAI integration, FastAPI) but the distributed files do not include the core 'context_manager' implementation or vector DB code. Instead, skill.py appends ~/.openclaw-autoclaw/workspace/context-manager to sys.path and attempts to import a context_manager module that is not bundled. That means the skill delegates essential functionality to external code expected to live in the user's home workspace — a coherence gap between claimed capabilities and what's actually provided.
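The import pattern the scan describes can be sketched as follows. This is a reconstruction for illustration, not the actual skill.py; the workspace path is the one named in the scan, and the fallback behavior is assumed.

```python
# Reconstructed sketch of the pattern flagged by the scan: the skill
# extends sys.path with a directory under the user's home and imports
# whatever `context_manager` module happens to live there.
import os
import sys

# Workspace location named in the scan results.
workspace = os.path.expanduser("~/.openclaw-autoclaw/workspace/context-manager")
sys.path.append(workspace)

try:
    import context_manager  # executes arbitrary code found at that path
    print(f"loaded context_manager from {context_manager.__file__}")
except ImportError:
    # The bundle ships no implementation, so a fresh install lands here.
    print("context_manager not found; core functionality unavailable")
```

The key point is that `import context_manager` runs whatever code sits at that path with the skill's privileges, which is why the scan treats the unpopulated workspace as an execution surface rather than a missing feature.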
Instruction Scope
The SKILL.md instructs creating agents, saving files, and running example scripts under ~/.openclaw-autoclaw/workspace/context-manager/ and mentions network usage for embeddings/LLM. The included demo implementation (skill_demo.py) operates only in-memory and does not perform embeddings or network calls. Real embedding/LLM behavior would come from the absent context_manager module; SKILL.md also tells the user to run python example and cat README from that workspace, which implies the agent may read local files during normal operations.
Install Mechanism
There is no install spec and no packaged implementation of the heavy dependencies (ChromaDB, sentence-transformers, etc.). The skill expects external components to be installed in the user's workspace. The absence of a controlled install path means arbitrary or third-party code under ~/.openclaw-autoclaw/workspace/context-manager will be imported and executed when the skill runs.
Credentials
No required environment variables or credentials are declared. SKILL.md documents optional OPENAI_API_KEY and Redis settings that are proportionate to the described (optional) LLM and cache features. Requesting an OpenAI key for optional LLM integration aligns with the stated functionality.
Persistence & Privilege
The skill does not request always:true and has no declared system-wide config access. However, it modifies sys.path to include a directory in the user's home workspace and will import whatever 'context_manager' exists there; that design means the skill will execute code from that workspace if present, which is a behavioral privilege to be aware of (not an automatic platform-level escalation, but still a runtime execution surface).
What to consider before installing:
- The bundle does not include the core 'context_manager' implementation it claims to use. skill.py expects to import that module from ~/.openclaw-autoclaw/workspace/context-manager; verify what code actually exists at that path before enabling the skill.
- Because the skill adds that workspace path to Python's import path, any module named context_manager placed there will be executed by the skill. Only proceed if you trust the code in that directory or you control it.
- The SKILL.md mentions optional use of an OpenAI API key and Redis. Only provide those secrets if you trust the skill and the code it imports; a malicious context_manager implementation could exfiltrate keys or data to remote endpoints.
- The included demo (skill_demo.py) is safe and in-memory; consider running the demo first to verify behavior. Do not run the full skill until you inspect or install a trusted context_manager implementation and its dependencies (chromadb, sentence-transformers, etc.).
- Recommended steps to reduce risk: inspect ~/.openclaw-autoclaw/workspace/context-manager for unexpected files, run the demo in an isolated environment, install third-party dependencies from trusted sources, and monitor network activity when first using the skill.
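The inspection step above can be sketched as a small audit script. The path and checks are illustrative assumptions, not part of the skill; the idea is to record a fingerprint of everything the skill could import before enabling it.

```python
# Sketch of a pre-install audit of the workspace directory the skill
# imports from: list every file with its SHA-256 so unexpected or
# changed files are easy to spot.
import hashlib
import os

workspace = os.path.expanduser("~/.openclaw-autoclaw/workspace/context-manager")

def audit(path: str) -> list[tuple[str, str]]:
    """Return (relative path, SHA-256 hex digest) for every file under path."""
    results = []
    for root, _dirs, files in os.walk(path):
        for name in sorted(files):
            full = os.path.join(root, name)
            with open(full, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            results.append((os.path.relpath(full, path), digest))
    return results

if os.path.isdir(workspace):
    for rel, digest in audit(workspace):
        print(f"{digest}  {rel}")
else:
    print("workspace not present; nothing will be imported")
```

Re-running the audit after installing a trusted context_manager implementation gives a baseline to compare against if the skill's behavior ever changes.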
If you can provide the missing context_manager implementation or confirm its source (e.g., a trusted GitHub repo), I can reassess the coherence and risk with higher confidence.

Like a lobster shell, security has layers — review code before you run it.
agent-management · latest · memory · vector-search