Ask a Human
v1.0.1 · Request judgment from random humans when uncertain about subjective decisions. Crowdsourced opinions on tone, style, ethics, and reality checks. CRITICAL: responses take minutes to hours (or may never arrive).
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Benign (high confidence)

Purpose & Capability
The name and description map directly to the behavior in SKILL.md: submitting agent questions to a crowdsourced human pool. The single required env var (ASK_A_HUMAN_AGENT_ID) is the expected credential for that API. No unrelated binaries or credentials are requested.
Instruction Scope
The instructions explicitly require sending whatever prompt and context you provide to an external API, and they direct the agent to store the question_id and poll for results. This is coherent with the stated purpose, but it means any data you include (potentially sensitive user, code, or proprietary data) is transmitted to external humans. The SKILL.md warns about async delays but does not enforce redaction or prompt-sanitization rules, so the agent must avoid including secrets or PII when using this skill.
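Since the skill enforces no sanitization of its own, a caller could scrub obvious secrets before submitting a prompt. The sketch below is a minimal illustration, not a vetted secret scanner; the patterns and the redact helper are assumptions, not part of the skill or the Ask a Human API.

```python
import re

# Illustrative patterns only -- a real deployment would need a fuller
# secret scanner and a PII detector, not just these few regexes.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),               # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def redact(prompt: str) -> str:
    """Replace likely secrets with a placeholder before the prompt leaves the agent."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt
```

Running the prompt through such a filter before every submission reduces, but does not eliminate, the risk of leaking credentials to unknown human reviewers.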
Install Mechanism
This skill is instruction-only (no install spec, no code files to execute). The README suggests copying SKILL.md into the user's skills directory or installing via ClawHub; both are normal and not suspicious.
Credentials
Only ASK_A_HUMAN_AGENT_ID is required and is the declared primaryEnv. That single credential is proportionate to making authenticated agent calls to the external API.
Persistence & Privilege
The skill does not request always:true and has no install-time hooks or system-wide config modifications beyond standard skill enablement instructions. It requires storing question_id in agent memory (normal).
Assessment
This skill appears to do exactly what it says: it sends your prompt and context to a third-party crowdsourced human pool and returns asynchronous responses. Before enabling or using it, consider:
(1) Privacy: do NOT send passwords, API keys, proprietary code, PHI, or other sensitive data. The humans see only the context you provide and may be unauthenticated and unknown.
(2) Data retention and policy: check https://app.ask-a-human.com for the privacy policy, the terms, and whether prompts and responses are logged or stored.
(3) Fallbacks and UX: design and test fallback behavior for when responses are slow or never arrive.
(4) Agent config storage: if you put ASK_A_HUMAN_AGENT_ID in shell or OpenClaw config, store it securely and limit access.
(5) Rate limits: follow the documented limit of 60 requests/hour and implement exponential backoff.
If you need the agent to handle sensitive or regulated content, do not use this skill without explicit data-handling assurances from the service.
Latest: vk978mzd3wwjmn7ztx1bh87mm5180c151
Runtime requirements
🙋 Clawdis
Env: ASK_A_HUMAN_AGENT_ID
Primary env: ASK_A_HUMAN_AGENT_ID
