Lobster Santa Method
v1.0.0
Multi-agent adversarial verification with convergence loop. Two independent review agents must both pass before output ships.
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
Name and description match the content: a post-generation verification framework. It requests no binaries, env vars, or installs — proportionate for an instructions-only policy/rubric.
Instruction Scope
SKILL.md stays within the verification use case and does not request unrelated files, credentials, or external endpoints. It does assume the agent runtime can spawn isolated reviewer subagents (example: Claude Code subagents) and enforce strict context isolation and structured JSON outputs; those runtime guarantees cannot be implemented by the skill text itself and must be provided by the platform. The document also leaves its 'escalate' path underspecified and relies on a MAX iteration guard that integrators must implement concretely to avoid unbounded loops.
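The "structured JSON outputs" invariant mentioned above is something an integrator can enforce at the boundary rather than trust. A minimal sketch, assuming a hypothetical reviewer schema (the field names `verdict`, `issues`, and `confidence` are illustrative, not taken from SKILL.md):

```python
import json

# Assumed reviewer-output schema; SKILL.md does not specify exact field names.
REQUIRED_FIELDS = {"verdict", "issues", "confidence"}
ALLOWED_VERDICTS = {"pass", "fail"}

def parse_reviewer_output(raw: str) -> dict:
    """Reject anything that is not a single well-formed JSON object
    matching the expected reviewer schema."""
    try:
        out = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"reviewer output is not valid JSON: {exc}") from exc
    if not isinstance(out, dict):
        raise ValueError("reviewer output must be a JSON object")
    missing = REQUIRED_FIELDS - out.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    if out["verdict"] not in ALLOWED_VERDICTS:
        raise ValueError(f"unexpected verdict: {out['verdict']!r}")
    return out

# A conforming response parses; free-form prose is rejected.
ok = parse_reviewer_output('{"verdict": "pass", "issues": [], "confidence": 0.9}')
```

Validating at this boundary means a reviewer that drifts into prose fails loudly instead of being silently treated as a pass.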
Install Mechanism
No install spec or code files are present (instruction-only), so nothing is written to disk or downloaded.
Credentials
The skill requests no environment variables, credentials, or config paths — consistent with a verification/rubric instruction set.
Persistence & Privilege
always:false and default autonomous invocation are appropriate. Because the skill's runtime behavior is to spawn reviewer subagents and re-run fix cycles, integrators should ensure the platform enforces iteration limits and safe escalation; otherwise the skill could cause runaway autonomous iterations (a functional risk, not evidence of malicious intent).
Assessment
This skill is an instruction-only protocol (a rubric and process) and appears coherent for use as a post-generation review layer. Before installing or enabling it, confirm that your agent runtime actually enforces the critical invariants the document assumes: true context isolation between reviewers, distinct reviewer identities or models to reduce correlated errors, structured JSON-only outputs from reviewers, a concrete and safe escalation path (who is notified and how), and a hard MAX iteration limit to prevent infinite loops.

If you will use this for high-risk outputs (legal, medical, financial, or code deployment), require an explicit human-in-the-loop escalation step, log reviewer outputs for audit, and ensure reviewers do not have access to secrets or external network endpoints unless explicitly needed. Finally, test the method with diverse rubrics and different reviewer configurations to validate that 'independence' is achievable in your runtime; without platform-level isolation, reviewers can fail in correlated ways even when the protocol is followed.

Like a lobster shell, security has layers: review code before you run it.
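The two invariants the assessment flags as integrator responsibilities, a hard MAX iteration limit and a concrete escalation path, can be sketched as a single bounded loop. This is an illustration under stated assumptions, not the skill's actual implementation: `run_reviewers`, `apply_fixes`, and `escalate` stand in for platform-provided behavior, and the reviewer verdict format is hypothetical.

```python
# Hard cap on fix/re-review cycles; the value 5 is an illustrative choice.
MAX_ITERATIONS = 5

def verify_with_convergence(draft, run_reviewers, apply_fixes, escalate):
    """Bounded convergence loop: ship only when every reviewer passes,
    otherwise fix and retry, and escalate instead of looping forever."""
    for _ in range(MAX_ITERATIONS):
        verdicts = run_reviewers(draft)  # e.g. two isolated reviewer subagents
        if all(v["verdict"] == "pass" for v in verdicts):
            return draft  # both reviewers passed: output ships
        draft = apply_fixes(draft, verdicts)
    # Never spin unbounded: hand off via the integrator's escalation path.
    return escalate(draft, reason=f"no convergence after {MAX_ITERATIONS} iterations")
```

The key property is that `escalate` is an explicit, named handoff (a human queue, a ticket, an alert) rather than an implicit failure mode, which is exactly the gap the scan notes in the skill text.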
latest: vk9759sj97rwd6v1n1q4x7c9z458478hp
