Glitchward Shield
Audited by ClawScan on May 10, 2026.
Overview
The skill’s purpose is coherent, but its default shell-based workflow may interpolate untrusted prompt text directly into a curl command, and it sends scanned content broadly to Glitchward’s API.
Review this skill before installing. Its security-scanning purpose is legitimate, but use safe request construction instead of raw shell interpolation for user prompts, and make sure you are comfortable sending scanned prompts, documents, emails, or agent outputs to Glitchward’s API.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A malicious prompt could break the command structure if the agent inserts it unsafely, risking local command misuse or failed scanning.
The documented workflow is to scan user-provided, potentially adversarial text by substituting it into a shell curl command. The artifact does not show safe JSON construction or shell escaping, so a prompt containing quotes or shell metacharacters could be mishandled if followed literally.
Use this to check user input before passing it to an LLM... -d '{"texts": ["USER_INPUT_HERE"]}' | jq .
Use a safer pattern such as building the JSON with jq or passing data from a file or stdin, and explicitly instruct agents not to interpolate raw user text into shell commands.
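A minimal sketch of the safer pattern the finding recommends: let `jq --arg` JSON-encode the user text, then send the body from stdin instead of splicing it into the command line. The endpoint URL below is a placeholder, not Glitchward's documented API; the header name follows the snippet quoted elsewhere in this review.

```shell
# Untrusted input containing quotes and shell metacharacters.
user_input='He said "hi"; $(echo pwned)'

# jq --arg JSON-encodes arbitrary text, so nothing here can break
# the request body or the shell command structure.
payload=$(jq -n --arg t "$user_input" '{texts: [$t]}')
echo "$payload"

# The payload is then piped from stdin rather than interpolated inline
# (URL is illustrative):
#   printf '%s' "$payload" | curl -sS -X POST "$SHIELD_URL" \
#     -H "X-Shield-Token: $GLITCHWARD_SHIELD_TOKEN" \
#     -H "Content-Type: application/json" \
#     --data-binary @- | jq .
```

Because the user text only ever travels through `--arg` and stdin, it is never re-parsed by the shell, which closes the injection path described above.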
The skill needs access to your Glitchward Shield account quota and API authorization.
The skill requires a provider API token and sends it in an authentication header. This is expected for the service and is disclosed, with no evidence of token logging or unrelated use.
All requests require your Shield API token... -H "X-Shield-Token: $GLITCHWARD_SHIELD_TOKEN"
Use a dedicated token for this service, store it as an environment variable, and rotate it if exposed.
Prompts and context that may contain private or sensitive information can be transmitted to Glitchward for scanning.
The skill broadly recommends sending prompts and workflow content to Glitchward’s external API. This is central to its purpose and disclosed, but users should recognize the third-party data boundary.
Before every LLM call: Validate user-provided prompts... When processing external content: Scan documents, emails, or web content... Check tool outputs and intermediate results that flow between agents.
Review Glitchward’s privacy and retention terms, avoid sending secrets or regulated data unless approved, and consider redaction or allowlists for sensitive workflows.
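One hedged way to apply the redaction advice above is a small filter that masks obviously sensitive patterns before text leaves the local boundary. The patterns below are illustrative only, not a complete PII or secret filter, and the `sk_`/`tok_` prefixes are assumptions rather than any real token format.

```shell
# Redact email addresses and token-like strings before sending text
# to the external scanning API. GNU sed extended-regex syntax.
redact() {
  sed -E -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/[EMAIL]/g' \
         -e 's/(sk|tok)_[A-Za-z0-9]{8,}/[TOKEN]/g'
}

echo 'Contact bob@example.com, key sk_abc12345678' | redact
```

A filter like this can sit in front of any call to the scanning endpoint, so sensitive workflows only ever transmit masked text.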
