Intelligent Delegation
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill is mostly coherent, but its scoring tool appears to understate the risk of irreversible tasks and may recommend less human oversight than the guide promises.
Install only if you are comfortable with a delegation framework that creates persistent task logs, may schedule follow-up checks, and may route work to sub-agents. Until the scoring tool is fixed, manually require human approval for irreversible or high-impact actions regardless of the tool's recommendation.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent could be told that an irreversible task is lower risk or does not need human approval, increasing the chance of unsafe autonomous delegation.
SKILL.md defines reversibility as 1=reversible and 5=irreversible, and says that high criticality OR irreversibility should require human approval. The code below instead lowers the risk contribution as the reversibility score increases (so a 5, irreversible, contributes the least) and requires approval only when irreversibility is paired with criticality. Irreversible tasks may therefore receive weaker autonomy and approval recommendations than documented.
```python
risk = (scores["criticality"] + (6 - scores["reversibility"]) + scores["subjectivity"]) / 3
...
if scores["reversibility"] >= 4 and scores["criticality"] >= 3:
```
Fix the scoring logic to treat higher reversibility scores as higher risk, and align the approval rule with the guide, for example requiring human approval for reversibility >= 4 unless the user explicitly overrides it.
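A minimal sketch of the corrected logic, assuming the tool keeps the documented 1-5 scales and a `scores` dict with these keys; the `assess` name and the exact approval thresholds are illustrative assumptions, not part of the skill:

```python
def assess(scores: dict) -> dict:
    """Score a task on the documented 1-5 scales, where reversibility
    5 = irreversible and should carry the MOST risk, not the least."""
    # Use the reversibility score directly, so 5 (irreversible)
    # contributes the most risk and 1 (reversible) the least.
    risk = (scores["criticality"] + scores["reversibility"]
            + scores["subjectivity"]) / 3
    # Match the guide: irreversibility OR high criticality alone is
    # enough to require approval (thresholds here are illustrative).
    needs_approval = scores["reversibility"] >= 4 or scores["criticality"] >= 4
    return {"risk": round(risk, 2), "needs_approval": needs_approval}
```

A user override could still clear `needs_approval`, but per the recommendation it should be an explicit opt-out rather than the default behavior.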
The agent may create scheduled checks or persistent task files that continue to influence later sessions.
The skill explicitly introduces scheduled follow-up behavior and persistent heartbeat/task-tracking state. This fits the delegation purpose, but it can cause agent activity after the original interaction.
For every background task, schedule a one-shot cron job to check on completion ... Update your `HEARTBEAT.md` to check `TASKS.md` first
Approve each scheduled check, keep it one-shot and project-scoped, and periodically remove stale cron entries and task records.
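One way to keep task records from accumulating is a periodic prune. A sketch under assumptions: the skill's `TASKS.md` format is not specified here, so this assumes completed items carry an `@done YYYY-MM-DD` stamp, and the retention window is illustrative:

```python
from datetime import datetime, timedelta
from pathlib import Path

STALE_AFTER = timedelta(days=7)  # illustrative retention window

def prune_stale_tasks(tasks_file: Path, now: datetime) -> list[str]:
    """Drop task lines whose '@done YYYY-MM-DD' stamp is older than the
    retention window; return the removed lines for the audit trail."""
    kept, removed = [], []
    for line in tasks_file.read_text().splitlines():
        if "@done" in line:
            # Assumes the date immediately follows the @done marker.
            stamp = line.rsplit("@done", 1)[1].strip().split()[0]
            if now - datetime.strptime(stamp, "%Y-%m-%d") > STALE_AFTER:
                removed.append(line)
                continue
        kept.append(line)
    tasks_file.write_text("\n".join(kept) + "\n")
    return removed
```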
Task details, lessons learned, or sensitive context may persist and influence later agent behavior.
The skill stores persistent performance notes and later uses them to guide agent routing. This is purpose-aligned, but incorrect or sensitive entries could affect future delegation decisions.
Create `memory/agent-performance.md` to track: Success rate per agent ... Known failure modes ... Before every delegation: Check if this agent has failed on similar tasks
Keep the performance log project-specific, avoid secrets or private data in entries, and review it before relying on its recommendations.
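Before relying on the log, a quick scan for obvious secrets can flag entries that need redaction. The patterns below are illustrative only and no substitute for a real secret scanner:

```python
import re

# Illustrative patterns; a real pass should use a dedicated secret scanner.
SECRET_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{16,}"),
]

def flag_sensitive_entries(log_text: str) -> list[str]:
    """Return performance-log lines that look like they contain secrets,
    so they can be redacted before the log guides another delegation."""
    return [line for line in log_text.splitlines()
            if any(pattern.search(line) for pattern in SECRET_PATTERNS)]
```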
A flawed task request could be retried by multiple agents or methods before a human sees it.
The documented fallback chains intentionally retry work across agents and execution modes. This is central to the framework, but repeated retries can propagate a bad specification or unsafe action if not bounded.
When a task fails, don't just report failure — attempt automatic recovery. ... Retry with tighter scope ... Capable-tier agent ... Main agent directly
Set retry limits, require human approval for irreversible or sensitive actions, and stop fallback chains when the failure suggests a security or privacy issue.
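A bounded fallback chain might look like the following sketch; `ApprovalRequired`, the security-signal list, and the attempt cap are all illustrative assumptions, not part of the skill:

```python
class ApprovalRequired(Exception):
    """Raised when a human must sign off before work continues."""

# Illustrative signals; tune for the actual environment.
SECURITY_SIGNALS = ("permission denied", "credential", "secret", "private key")

def run_with_fallback(task, chain, max_attempts=3, irreversible=False):
    """Try agents in order, but bound retries, gate irreversible work
    on human approval, and halt on security/privacy signals."""
    if irreversible:
        raise ApprovalRequired("irreversible task: human sign-off required")
    errors = []
    for agent in chain[:max_attempts]:
        try:
            return agent(task)
        except Exception as exc:
            message = str(exc).lower()
            if any(signal in message for signal in SECURITY_SIGNALS):
                # Do not keep retrying a request that may be leaking data.
                raise ApprovalRequired(f"chain halted: {exc}") from exc
            errors.append(exc)
    raise RuntimeError(f"all {len(errors)} attempts failed; escalate to a human")
```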
Information included in a delegated task may be shared with another agent or model tier.
The framework is designed to pass task inputs to sub-agents and asks users to specify data sensitivity. This is disclosed and purpose-aligned, but actual agent identities and data boundaries depend on how the user fills out the contract.
- **Delegatee:** agent tier/name ... - **Input:** What the agent receives ... - **Data sensitivity:** Privacy requirements
Specify exactly which agent receives which data, redact unnecessary sensitive information, and require human review for high-sensitivity inputs.
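The contract fields above could be enforced with a small guard; the class below is hypothetical, mirroring the documented contract fields, with sensitivity levels assumed to be "low"/"medium"/"high":

```python
from dataclasses import dataclass, field

@dataclass
class DelegationContract:
    """Hypothetical guard mirroring the contract fields the skill documents."""
    delegatee: str                              # agent tier/name receiving the work
    inputs: dict = field(default_factory=dict)  # exactly what that agent receives
    data_sensitivity: str = "low"               # "low" | "medium" | "high"
    human_reviewed: bool = False

    def ready_to_send(self) -> bool:
        # High-sensitivity inputs never leave without explicit human review.
        return self.data_sensitivity != "high" or self.human_reviewed
```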
