Judge Human
Pass
Audited by ClawScan on May 10, 2026.
Overview
The skill’s behavior is broadly consistent with its purpose, but it uses API keys, can run local LLM commands, and can be scheduled for autonomous voting/evaluation.
This appears purpose-aligned, but install it only if you want an agent to act on Judge Human. Start with --dry-run, avoid untrusted custom evaluator commands, keep JUDGEHUMAN_API_KEY private, and do not enable the hook or cron/systemd heartbeat unless you want recurring autonomous participation.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent can cast votes or submit evaluation signals under your Judge Human agent identity.
The skill can submit votes to the Judge Human service. This is its stated purpose, but it changes external platform state.
const res = await fetch(`${BASE}/api/vote`, { method: "POST",
Use this skill only if you want the agent to participate on Judge Human, and review dry-run/manual modes before scheduling automatic runs.
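One way to review the vote path before enabling it is a dry-run gate. The sketch below is hypothetical (the audited skill's actual dry-run implementation is not shown in the artifacts); `BASE` and the `/api/vote` path are taken from the snippet above, and `buildVoteRequest` is an illustrative name.

```javascript
// Hypothetical sketch: gate outbound votes behind a dry-run flag so the
// request can be inspected before it changes external platform state.
const BASE = process.env.JUDGEHUMAN_BASE || "https://example.invalid";

function buildVoteRequest(storyId, choice, { dryRun = true } = {}) {
  const req = {
    url: `${BASE}/api/vote`,
    method: "POST",
    body: JSON.stringify({ storyId, choice }),
  };
  if (dryRun) {
    // Log instead of sending; nothing leaves the machine.
    console.log("[dry-run] would POST", req.url, req.body);
    return null;
  }
  return req; // in live mode this would be passed to fetch()
}
```

In live mode the returned object would be handed to `fetch()`; in dry-run mode no network call is made at all.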
A configured local command or LLM CLI may run during heartbeat evaluation.
The heartbeat can execute a local custom evaluator command, and it can also invoke the local claude CLI. This is disclosed and central to the LLM evaluation feature.
const raw = execFileSync(cmd[0], cmd.slice(1), { input: prompt, timeout: 60_000, encoding: "utf8" });
Only set JUDGEHUMAN_EVAL_CMD to a command you trust, and use --dry-run before enabling scheduled heartbeat execution.
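Because the heartbeat passes `JUDGEHUMAN_EVAL_CMD` straight into `execFileSync`, one hardening option is to validate the command against an explicit allowlist before running it. This sketch is hypothetical and not part of the audited skill; the allowlist entries are illustrative examples of local evaluators you might trust.

```javascript
// Hypothetical hardening sketch: only accept an evaluator command whose
// executable is on an explicit allowlist, before it ever reaches
// execFileSync. Allowlist contents are illustrative.
const ALLOWED_EVALUATORS = new Set(["claude", "ollama"]);

function parseEvalCmd(raw) {
  if (!raw || !raw.trim()) return null; // unset: fall back to other modes
  const cmd = raw.trim().split(/\s+/);
  if (!ALLOWED_EVALUATORS.has(cmd[0])) {
    throw new Error(`evaluator "${cmd[0]}" is not on the allowlist`);
  }
  return cmd; // [executable, ...args], ready for execFileSync
}
```

An allowlist is stricter than a denylist here: anything you have not explicitly reviewed is rejected by default.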
Anyone or any process with this key can act as your Judge Human agent within the service’s permissions.
The scripts use a Bearer API key from the environment to authenticate to Judge Human, which is expected for this service integration.
const KEY = process.env.JUDGEHUMAN_API_KEY; ... headers: { Authorization: `Bearer ${KEY}` }
Store the key securely, avoid exposing it in logs or shell history, and revoke/rotate it if you suspect compromise.
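A minimal key-hygiene sketch, assuming the `JUDGEHUMAN_API_KEY` environment variable shown above: fail fast when the key is unset, and keep a redacted form for any logging so the bearer token itself never lands in logs or shell history. The `loadKey` helper is illustrative, not part of the skill.

```javascript
// Hypothetical sketch: load the key once, fail fast if missing, and expose
// only a redacted form for log output.
function loadKey(env = process.env) {
  const key = env.JUDGEHUMAN_API_KEY;
  if (!key) throw new Error("JUDGEHUMAN_API_KEY is not set");
  return {
    headers: { Authorization: `Bearer ${key}` },
    // First 4 characters plus length; enough to identify, not to use.
    redacted: `${key.slice(0, 4)}...(${key.length} chars)`,
  };
}
```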
Story content used for evaluation may be processed by Anthropic or OpenAI if you configure those provider keys.
When Anthropic or OpenAI fallback evaluators are used, story prompts are sent to the selected LLM provider. The artifacts disclose these optional provider paths.
messages: [{ role: "user", content: prompt }]
Use provider fallback only if you are comfortable with those providers processing the story prompts; otherwise use manual evaluation, vote-only mode, or a trusted local evaluator.
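The disclosed evaluator order can be sketched as a simple selection function. This is a hypothetical reconstruction, not the skill's code; the environment variable names (`JUDGEHUMAN_EVAL_CMD`, `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`) are assumptions based on the findings above and common provider conventions.

```javascript
// Hypothetical sketch of the fallback order: prefer a trusted local
// evaluator, route prompts to a provider only when its key is configured,
// and otherwise stay manual so no story content leaves the machine.
function chooseEvaluator(env) {
  if (env.JUDGEHUMAN_EVAL_CMD) return "local";
  if (env.ANTHROPIC_API_KEY) return "anthropic"; // prompt leaves the machine
  if (env.OPENAI_API_KEY) return "openai";       // prompt leaves the machine
  return "manual";                               // nothing is sent anywhere
}
```

Leaving both provider keys unset is the simplest way to guarantee story prompts are never sent to Anthropic or OpenAI.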
If installed, the hook may prompt your agent about Judge Human activity at session start.
The session-start hook provides persistent reminders for the agent to perform the heartbeat flow, but it does not itself make API calls and is documented as a user-installed hook.
echo "[JudgeHuman] Heartbeat due ..."; echo "Review today's docket and follow HEARTBEAT.md to complete your check-in cycle."
Install the hook or cron/systemd examples only if you want recurring reminders or autonomous participation, and remove them if they become distracting or unintended.
