Ask a Human

Review

Audited by ClawScan on May 10, 2026.

Overview

The skill matches its stated purpose, but it may send your task details to anonymous humans and later reuse their feedback, so you should review it carefully before enabling it.

Install it only if you are comfortable with the agent sending selected task details to an external service and to anonymous human reviewers. Configure a rule requiring the agent to ask before each submission, to redact confidential information and secrets, and to avoid saving crowd feedback as long-term memory unless you explicitly approve it.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Data exposure to anonymous human reviewers

What this means

Private client details, unreleased work, code-review context, or personal information could be shared with anonymous humans if the agent includes it in a question.

Why it was flagged

The core workflow sends agent-supplied task context to an external service and random human reviewers. The artifacts disclose this, but they do not bound what data may be sent or require explicit user consent/redaction before submission.

Skill content
This skill connects you to a **global pool of random humans**... The strangers answering have **no context beyond what you provide**... `prompt` (required): The question to ask. Include all necessary context.
Recommendation

Require explicit user approval before every submission, redact secrets and sensitive details, and clearly show what will be sent to the external human pool.
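One way to act on this recommendation is a redaction pass plus an explicit confirmation gate before anything leaves the machine. This is a minimal sketch: the secret patterns below are illustrative assumptions, not an exhaustive detector, and `send` stands in for whatever submission mechanism the agent actually uses.

```python
import re

# Illustrative patterns only; a real deployment should tune and extend these.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|token|secret|password)\b\s*[:=]\s*\S+"),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),     # AWS access key ID shape
    re.compile(r"\b[A-Za-z0-9+/=]{40,}\b"),  # long base64-like blobs
]

def redact(prompt: str) -> str:
    """Replace likely secrets with a placeholder before submission."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def confirm_and_submit(prompt: str, send) -> bool:
    """Show the exact redacted text and require explicit approval."""
    cleaned = redact(prompt)
    print("About to send to the external human pool:")
    print(cleaned)
    if input("Send? [y/N] ").strip().lower() != "y":
        return False  # nothing was sent
    send(cleaned)
    return True
```

Showing the post-redaction text verbatim, rather than a summary, is the part that satisfies "clearly show what will be sent."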

Persistence of crowd feedback in agent memory

What this means

Old question metadata or anonymous human responses could persist across tasks and influence future agent behavior in ways the user did not intend.

Why it was flagged

The skill instructs persistent memory use for asynchronous questions and later reuse of random-human feedback, without scoping, expiration, or guidance to treat responses as untrusted opinions.

Skill content
**IMPORTANT: Store the `question_id` in your memory. You need it to check responses.** ... If answers contradict your guess, note this for future similar decisions.
Recommendation

Keep question IDs and results task-scoped where possible, expire stale entries, avoid storing full prompts unless needed, and require user review before saving human feedback as a future preference.
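A minimal sketch of what task-scoped storage with expiry could look like, assuming a simple in-memory store; the `QuestionStore` class, the `task_id` scoping, and the 24-hour TTL are illustrative choices, not part of the skill:

```python
import time

TTL_SECONDS = 24 * 3600  # assumed expiry window for stale question IDs

class QuestionStore:
    def __init__(self):
        self._entries = {}  # question_id -> (task_id, created_at)

    def add(self, question_id: str, task_id: str) -> None:
        # Store only the ID and its task scope, not the full prompt text.
        self._entries[question_id] = (task_id, time.time())

    def ids_for_task(self, task_id: str) -> list:
        self.expire_stale()
        return [q for q, (t, _) in self._entries.items() if t == task_id]

    def expire_stale(self, now=None) -> None:
        # Drop entries older than the TTL so they cannot leak across tasks.
        now = time.time() if now is None else now
        self._entries = {
            q: (t, created)
            for q, (t, created) in self._entries.items()
            if now - created < TTL_SECONDS
        }
```

Keying every lookup by `task_id` is what prevents one task's crowd answers from silently steering another.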

Raw outbound API calls via the exec tool

What this means

Malformed prompts or unsafe quoting could cause incorrect requests, and the agent can make external API calls when it chooses to use the skill.

Why it was flagged

Raw command/API invocation is central to this instruction-only integration and is purpose-aligned, but it means the agent will construct and run outbound curl commands containing user/task text.

Skill content
Use the `exec` tool to make API calls. The base URL is `https://api.ask-a-human.com`.
Recommendation

Constrain use to the documented Ask-a-Human endpoints, use safe JSON construction/quoting, and prefer user confirmation before sending any prompt.
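A sketch of safe command construction under these constraints: `json.dumps` handles JSON escaping and `shlex.quote` handles shell quoting, so quotes and newlines in the prompt cannot break the command or inject shell syntax. The `/questions` path and `X-Agent-Id` header are hypothetical placeholders; use the endpoints and auth scheme the skill actually documents.

```python
import json
import shlex

BASE_URL = "https://api.ask-a-human.com"  # documented base URL

def build_submit_command(prompt: str, agent_id: str) -> str:
    """Build a curl command with safe JSON and shell quoting.

    The /questions path and X-Agent-Id header are guesses for
    illustration only.
    """
    payload = json.dumps({"prompt": prompt})  # correct JSON escaping
    return " ".join([
        "curl", "-sS", "-X", "POST",
        shlex.quote(f"{BASE_URL}/questions"),
        "-H", shlex.quote("Content-Type: application/json"),
        "-H", shlex.quote(f"X-Agent-Id: {agent_id}"),
        "--data", shlex.quote(payload),  # survives quotes and newlines
    ])
```

The point of the sketch is the construction order: serialize to JSON first, then shell-quote the finished string, never interpolate raw task text into a command template.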

Shared credential: the agent ID

What this means

Anyone with the agent ID may be able to submit or check questions for that Ask-a-Human agent, depending on the service's authorization model.

Why it was flagged

The skill requires an agent ID credential to authorize API calls. This is expected for the integration and there is no artifact evidence of credential logging or unrelated use.

Skill content
requires:
  env: ["ASK_A_HUMAN_AGENT_ID"]
primaryEnv: ASK_A_HUMAN_AGENT_ID
Recommendation

Store the agent ID as a secret, do not include it in prompts or logs, and rotate it if it may have been exposed.
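A small sketch of credential handling consistent with this advice: read the ID from the documented environment variable, fail loudly if it is missing, and expose only a masked form for diagnostics. The `masked` helper is an illustrative convention, not part of the skill.

```python
import os

def get_agent_id() -> str:
    """Read the credential from the environment; never print or log it."""
    agent_id = os.environ.get("ASK_A_HUMAN_AGENT_ID")
    if not agent_id:
        raise RuntimeError("ASK_A_HUMAN_AGENT_ID is not set")
    return agent_id

def masked(agent_id: str) -> str:
    """Safe display form for diagnostics, e.g. 'ab...gh'."""
    if len(agent_id) <= 4:
        return "****"
    return agent_id[:2] + "..." + agent_id[-2:]
```

If the masked form ever appears in a transcript that was shared externally, treat that as possible exposure and rotate the ID.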