Code Review Expert
v1.0.0

Multi-agent code review system using the Manager-Worker pattern. Provides comprehensive code analysis from syntax, logic, security, and performance perspectives.
Security Scan
OpenClaw
Benign · high confidence

Purpose & Capability
Name/description match the implementation: manager-worker pattern, specialized workers for syntax/logic/security/performance, and example usage. No unexpected binaries, env vars, or config paths are requested.
Instruction Scope
SKILL.md and the code instruct the agent to embed the user's source code into prompts sent to the configured LLM. That is coherent for an LLM-based reviewer, but it means the reviewed code will be transmitted to whichever LLM implementation is used (the code expects a provided llm object or a platform LLM). If the code under review is sensitive, this is a privacy/data-leakage consideration.
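As a hypothetical illustration of why the reviewed code reaches the model provider (the names buildReviewPrompt, runWorker, and the complete method are assumptions, not taken from the package), a worker of this kind typically interpolates the user's source verbatim into the prompt:

```javascript
// Hypothetical sketch: a review worker embedding user code in a prompt.
// `llm` is assumed to expose a complete(prompt) method; the real skill's
// interface may differ.
function buildReviewPrompt(role, sourceCode) {
  // The user's code is inlined verbatim, so it is sent to the LLM as-is.
  return [
    `You are a ${role} reviewer.`,
    'Review the following code and report issues:',
    '```',
    sourceCode,
    '```',
  ].join('\n');
}

async function runWorker(llm, role, sourceCode) {
  const prompt = buildReviewPrompt(role, sourceCode);
  return llm.complete(prompt); // the code leaves the local process here
}
```

Anything passed as sourceCode is part of the prompt payload, which is the data-handling point the scan flags.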
Install Mechanism
No install spec; package is instruction-plus-local code only. package.json has no external dependencies and there are no downloads or extract steps. Low install risk.
Credentials
The skill requires no environment variables, credentials, or config paths. It relies on a provided LLM interface (this.llm) which is typical; credential management for an external model provider would be handled by the host platform, not this skill.
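A minimal sketch of the kind of adapter a host platform might inject as this.llm (the class name HostLlm and the complete method are assumptions): the skill only ever sees the adapter object, while API keys and request logging stay behind it inside the host.

```javascript
// Hypothetical host-supplied LLM adapter. Credentials and provider SDK
// calls live inside sendFn; the skill never touches them directly.
class HostLlm {
  constructor(sendFn) {
    this.sendFn = sendFn; // e.g. wraps a provider API call with stored keys
  }
  async complete(prompt) {
    return this.sendFn(prompt);
  }
}

// The skill would receive the adapter, never the credentials:
const llm = new HostLlm(async (prompt) => `stub review for: ${prompt}`);
```

This is why the scan treats credential handling as the host platform's responsibility rather than the skill's.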
Persistence & Privilege
Skill is not always-enabled, does not modify other skills or system config, and does not request persistent elevated privileges. Autonomous invocation is allowed (platform default) but not combined with other concerning flags.
Assessment
This package appears to do what it says: it builds prompts and aggregates LLM-generated reviews from multiple worker roles. Before installing, consider:
1. Any code you submit to the reviewer will be sent to the configured LLM. Do not send sensitive or proprietary code unless you trust the model provider and environment.
2. The skill does not bundle or require model API keys; the host agent supplies the LLM interface, so review how your platform handles model credentials and logging.
3. Review and test the parsing logic (report extraction) on representative outputs, because heuristic parsing can mis-classify or miss issues.
If those data-handling considerations are acceptable, the skill is coherent and low-risk.
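To illustrate why heuristic report extraction deserves testing, here is a hypothetical parser of the sort such skills use (the severity keywords and line format are assumptions, not the package's actual logic). Lines that do not match the expected pattern are silently dropped, which is exactly how findings get missed:

```javascript
// Hypothetical heuristic parser for worker output lines like
// "HIGH: SQL injection in query builder". Any line that does not match
// the pattern is discarded without warning.
function parseReport(text) {
  const issues = [];
  for (const line of text.split('\n')) {
    const m = line.match(/^(LOW|MEDIUM|HIGH)\s*:\s*(.+)$/i);
    if (m) {
      issues.push({ severity: m[1].toUpperCase(), description: m[2].trim() });
    }
  }
  return issues;
}

const report = 'HIGH: hardcoded secret\nNote: style nit\nlow: unused variable';
const issues = parseReport(report);
// "Note: style nit" does not match the pattern and is silently dropped,
// so a model that phrases a finding differently is ignored.
```

Running such a parser against representative model outputs is the concrete test the assessment recommends.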
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
