Principles Agent
v0.2.0 · An iterative agent built on first-principles thinking: it decomposes complex goals into atomic tasks, automatically validates and debugs, and executes with dependency awareness to guarantee final delivery quality.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)

Purpose & Capability
The name and description match the code: modules implement goal clarification, truth derivation, task breakdown, refinement, dependency sorting, execution via an injected llm_call, validation, and integration. The skill declares no external credentials or network use, which matches the implementation's expectation of an llm_call injected by the OpenClaw host.
Instruction Scope
SKILL.md states the skill never calls the network or reads environment variables; the code follows that pattern, expecting an injected llm_call and performing no outbound requests or environment reads. There are minor implementation inconsistencies: cli.py expects a global llm_call (it will raise if run outside an OpenClaw session), and some library methods build a prompt and then pass that prompt to the local _parse_response, parsing it as though it were an LLM response. These are likely bugs rather than malicious behavior. The skill writes files only when the user supplies --output.
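The prompt-parsing inconsistency described above would look roughly like the sketch below. All names here (break_down, _parse_response, the line-per-task parser) are hypothetical stand-ins, not taken from the skill's source:

```python
def _parse_response(text: str) -> list:
    # Toy parser: treat each non-empty line of the reply as one task.
    return [line.strip() for line in text.splitlines() if line.strip()]

def llm_call(prompt: str) -> str:
    # Stand-in for the host-injected callable; returns a canned reply.
    return "task 1\ntask 2"

def break_down_buggy(goal: str) -> list:
    prompt = f"Decompose this goal into atomic tasks: {goal}"
    # Bug pattern: parses the prompt itself instead of the model's reply.
    return _parse_response(prompt)

def break_down_fixed(goal: str) -> list:
    prompt = f"Decompose this goal into atomic tasks: {goal}"
    # Likely intended behavior: send the prompt, parse the response.
    return _parse_response(llm_call(prompt))
```

The buggy variant still "works" in the sense that it returns a list, which is why this kind of mistake can survive casual testing.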
Install Mechanism
No install spec provided (instruction-only install), and there are no downloads or package installs. The skill ships source files but does not declare or perform any network-based installation — low install risk.
Credentials
The skill requires no environment variables, no credentials, and no config paths. All LLM access is expected to be provided by the host via a callable 'llm_call'. This is proportionate to the stated orchestration purpose.
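Outside an OpenClaw session, a caller would need to satisfy that expectation manually. The stub below is one plausible approach, not the skill's documented API: assigning into builtins makes a callable visible as a global from any module, which matches the report's note that cli.py looks up llm_call globally.

```python
import builtins

# Hypothetical stand-in for the host-provided LLM callable.
def llm_call(prompt: str) -> str:
    # A real OpenClaw session would route this to a model;
    # here we return a canned reply for sandbox testing.
    return '{"status": "ok"}'

# Make the stub visible as a global so code that references a bare
# llm_call name (as cli.py reportedly does) can resolve it.
builtins.llm_call = llm_call
```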
Persistence & Privilege
The skill does not request persistent or elevated privileges (always: false). It does not modify system configuration or other skills. File writes are user-initiated via --output only.
Assessment
This skill appears coherent and consistent with its description: it orchestrates decomposition and delegates LLM calls to the host. Before installing, note: (1) run it inside OpenClaw or provide a proper llm_call function, since the CLI will raise if llm_call is not injected; (2) the code contains minor implementation issues (some methods build prompts and then parse the prompt as if it were an LLM response), which are likely bugs rather than malicious behavior; (3) the skill writes a report only when you pass --output, so be careful about the output path you supply. If you plan to run this in production, review the code for robustness around JSON extraction and error handling, and test it first in a sandboxed OpenClaw session.

Like a lobster shell, security has layers: review code before you run it.
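On the JSON-extraction point above, a defensive parser along these lines tolerates the prose and markdown fences that models often wrap around JSON. This is a hypothetical helper sketched for review purposes, not code from the skill:

```python
import json
import re

def extract_json(text: str) -> dict:
    """Defensively pull the first JSON object out of an LLM reply,
    tolerating surrounding prose or ```json fences."""
    # Prefer content inside a fenced code block, if one is present.
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = fenced.group(1) if fenced else text
    # Fall back to the first {...} span in the remaining text.
    match = re.search(r"\{.*\}", candidate, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))
```

Wrapping json.loads this way turns "model added commentary around the JSON" from a crash into a handled case, which is the kind of robustness the assessment recommends checking for.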
