Mind Security Review
Audited by ClawScan on May 10, 2026.
Overview
Prompt-injection indicators were detected in the submitted artifacts (ignore-previous-instructions); human review is required before treating this skill as clean.
Aside from that signal, the skill appears benign and purpose-aligned. Before installing: decide which modules you actually need and provide only those API keys; avoid scanning confidential content unless third-party processing is acceptable; and install the optional llm-guard dependency only from a trusted environment.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Private text, media, or URLs may leave the local environment and be processed under those providers' terms.
The skill clearly discloses that user-selected content is sent to third-party APIs for analysis.
**External endpoints** — this skill sends user-provided data ... BitMind: Image/video files or URLs; GPTZero: Text content; VirusTotal/URLScan.io/Google Safe Browsing: URLs
Only scan data you are comfortable sharing with the named providers, and review their privacy/retention policies for sensitive material.
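One way to follow that recommendation for the text-analysis path is to mask obvious identifiers before any content leaves the local environment. The sketch below is a minimal illustration, not part of the skill: the `redact` helper and its patterns are assumptions, and the patterns are deliberately simple rather than exhaustive.

```python
import re

# Hypothetical pre-send scrubber: mask obvious identifiers before text
# is submitted to a third-party analysis API. Patterns are illustrative.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or +1 (555) 010-9999."))
# -> Contact [EMAIL] or [PHONE].
```

Redaction like this reduces, but does not eliminate, exposure; it is a complement to the provider-policy review, not a substitute.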
Images or videos submitted for deepfake detection may be retained on the provider side rather than handled transiently.
Deepfake analysis may involve provider-side caching and storage of media after upload.
Background (fire-and-forget, no added latency) ... Cache result in Redis ... Upload media to R2 storage
Avoid submitting highly sensitive media unless provider-side caching/storage is acceptable; prefer local-only checks when suitable.
Running the scripts can consume account quota or use paid API access for those providers.
The skill uses several provider credentials to access external scanning APIs.
Required env vars: BITMIND_API_KEY, GPTZERO_API_KEY, VIRUSTOTAL_API_KEY, URLSCAN_API_KEY, GOOGLE_SAFE_BROWSING_KEY
Use limited-scope or quota-limited keys where possible, and set only the keys needed for the module being used.
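The "set only the keys needed" recommendation can be enforced with a small startup check that derives the enabled modules from the environment. The env var names come from the finding above; the module names and the mapping itself are assumptions for illustration.

```python
import os

# Map each scanning module to the provider keys it needs. Env var names
# are from the skill's documented requirements; module names are assumed.
MODULE_KEYS = {
    "deepfake": ["BITMIND_API_KEY"],
    "ai_text": ["GPTZERO_API_KEY"],
    "url_scan": ["VIRUSTOTAL_API_KEY", "URLSCAN_API_KEY",
                 "GOOGLE_SAFE_BROWSING_KEY"],
}

def enabled_modules(env=os.environ) -> list[str]:
    """Return only the modules whose required keys are all present."""
    return [m for m, keys in MODULE_KEYS.items()
            if all(env.get(k) for k in keys)]
```

With this pattern, an unset key silently disables its module instead of failing mid-scan, which also keeps unneeded credentials out of the environment entirely.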
Installing the optional ML scanner adds normal package/model supply-chain risk and local cache usage.
The optional ML layer depends on an unpinned third-party package and model download.
`pip install llm-guard` ... Downloads ~500MB model on first run (cached in `~/.cache/huggingface/`).
Install optional dependencies in a virtual environment, pin versions if possible, and use the regex-only mode if you do not need the ML layer.
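The recommendation above might look like the following shell fragment. The version string is a placeholder, not a vetted release; check the llm-guard release history and substitute a version you have reviewed. `HF_HOME` is the standard Hugging Face environment variable for relocating the model cache.

```shell
# Isolate the optional ML layer in its own virtual environment.
python3 -m venv .venv-llm-guard
. .venv-llm-guard/bin/activate

# Pin to a specific vetted release; X.Y.Z is a placeholder, not a real version.
pip install "llm-guard==X.Y.Z"

# Optional: keep the ~500MB model cache in a dedicated, inspectable directory
# instead of the default ~/.cache/huggingface/.
export HF_HOME="$PWD/.hf-cache"
```

If you only need the regex-only mode, skip this entirely and the supply-chain surface of the finding does not apply.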
