Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Ghostprint

v3.0.1

LLM fingerprint noise injector. Sends behaviorally realistic randomized queries to Anthropic, Z.ai, and any OpenAI-compatible provider on a schedule to deper...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (LLM noise injector) match the included code and runtime instructions: the package ships both a standalone Python script and an OpenClaw plugin that schedule queries to configured providers. No unrelated services or binaries are requested.
Instruction Scope
SKILL.md instructs cloning the repo into OpenClaw extensions and enabling the plugin so it runs as a background scheduler inside the gateway. The plugin claims 'no API keys needed' because it reuses your OpenClaw provider keys via runtime resolution; that behavior is not explicit in the high-level summary and may surprise users. The plugin also exposes commands to fire rounds and view stats, and the standalone script provides an --install-cron helper that creates scheduled background execution. All of these create persistent outbound network activity using your credentials.
Install Mechanism
There is no remote arbitrary binary download or obscure installer: the SKILL.md recommends cloning a GitHub repository and enabling the plugin. Code is included in the package (Python and TypeScript). No URL shorteners, personal IPs, or extracts from unknown archives were observed in the provided files.
Credentials
The registry metadata lists no required env vars, but both plugin and Python implementations expect API keys to exist (standalone uses config or ${ENV_VAR}, plugin reuses OpenClaw provider credentials via runtime resolution). This is proportionate to the stated purpose (the skill must call LLM providers), but it is important: reusing the same API key links noise and real traffic (the README explicitly documents this risk). Users may be surprised the plugin will access provider keys without adding separate credentials.
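The ${ENV_VAR} resolution mentioned above is a common config pattern; a minimal sketch of how such placeholder expansion typically works (the function name and regex are illustrative assumptions, not Ghostprint's actual code):

```python
import os
import re

# Matches ${VAR_NAME} placeholders in config values (assumed convention).
_ENV_PATTERN = re.compile(r"\$\{([A-Z_][A-Z0-9_]*)\}")

def resolve_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Raises KeyError for undefined variables so a missing key fails
    loudly instead of sending requests with an empty credential.
    """
    def _sub(match):
        var = match.group(1)
        resolved = os.environ.get(var)
        if resolved is None:
            raise KeyError(f"config references undefined env var: {var}")
        return resolved
    return _ENV_PATTERN.sub(_sub, value)

# e.g. config.yaml contains: api_key: ${GHOSTPRINT_API_KEY}
os.environ["GHOSTPRINT_API_KEY"] = "sk-throwaway"
print(resolve_env("${GHOSTPRINT_API_KEY}"))  # → sk-throwaway
```

This is exactly why a dedicated throwaway key is the safer choice: whatever variable the config names is what gets sent with every noise request.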
Persistence & Privilege
always:false (normal). However the plugin runs as a background service in the gateway and schedules recurring network requests (Poisson timing). That background capability is expected for the stated purpose but increases blast radius: the skill will autonomously make network requests on a schedule and write logs to its extension directory (ghostprint.log).
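Poisson timing means inter-request gaps are drawn from an exponential distribution, so traffic looks irregular rather than fixed-interval. A sketch of what such a scheduler's delay generation looks like (function name and parameters are hypothetical, not taken from the package):

```python
import random

def poisson_delays(rate_per_hour, n, seed=None):
    """Draw n inter-arrival delays (in seconds) for a Poisson process.

    Inter-arrival times of a Poisson process with rate r are
    exponentially distributed with mean 1/r, so a rate of 6
    requests/hour yields gaps averaging 600 seconds.
    """
    rng = random.Random(seed)
    mean_gap_s = 3600.0 / rate_per_hour
    return [rng.expovariate(1.0 / mean_gap_s) for _ in range(n)]

# 1000 simulated gaps at 6 queries/hour; sample mean clusters near 600 s.
delays = poisson_delays(rate_per_hour=6, n=1000, seed=42)
```

The practical takeaway for review: a rate parameter in config.yaml directly sets how much of your API credit the scheduler burns per day, so check it before enabling anything long-running.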
Scan Findings in Context
[system-prompt-override] unexpected: The static pre-scan flagged a prompt-injection pattern in SKILL.md. I did not find an explicit system-prompt override command in the visible SKILL.md text, so this may be a false positive from the scanner, but it warrants a careful human review of SKILL.md and any files that construct prompts (the plugin and Python code both build system/user messages).
What to consider before installing
- Understand the behavior: Ghostprint will send scheduled LLM requests using API keys available to OpenClaw or the ones you put in config.yaml. Those calls are real network requests that consume credit and are logged by the provider under whatever API key is used.
- Use separate credentials if you want to avoid account-level correlation: the README itself recommends dedicated throwaway API keys for noise, because using your primary key means providers can easily link noise and real queries.
- If your goal is stronger separation, run noise through a proxy/VPN or on a separate host as recommended in ANTI-FINGERPRINT.md. The tool warns that same-IP and same-account correlation remain risks.
- Check the scheduler/cron installer: review the script path and cron entries before running --install-cron so you know what will run and as which user.
- Inspect logs and config: ghostprint writes ghostprint.log and uses config.yaml. Confirm logs contain only metadata (as claimed) and that config.yaml does not accidentally store secrets in plain text.
- Test safely: run python3 ghostprint.py --run-once with a throwaway key and monitor the outbound requests to confirm behavior before enabling long-term scheduling or installing into OpenClaw.
- Review code for hidden endpoints or exfil: the provided files appear to validate provider URLs and restrict non-HTTPS/private-IP targets, and I saw no other external endpoints, but given the pre-scan flag and the power this plugin has (background network requests using your keys) you should review the full code (especially the truncated parts of ghostprint.py in this package) or run it in an isolated environment.
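For reviewers checking the URL-validation claim above, a minimal sketch of what "restrict non-HTTPS/private-IP targets" typically looks like (function name and exact checks are assumptions for illustration, not the package's actual implementation):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_allowed_provider_url(url):
    """Accept only HTTPS URLs whose host resolves to public addresses.

    Rejecting private, loopback, and link-local ranges blocks a
    malicious config from pointing 'noise' traffic at internal
    services on your network.
    """
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_allowed_provider_url("http://api.example.com/v1"))  # → False (not HTTPS)
print(is_allowed_provider_url("https://127.0.0.1/v1"))       # → False (loopback)
```

When auditing the real code, confirm the check runs on every request (not just at config load) so a config rewritten at runtime cannot bypass it.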
If you are not comfortable with a plugin that will autonomously make network calls using your existing OpenClaw provider keys, treat this as risky: do not install or run it until you have created separate keys/accounts and validated the code and scheduling behavior.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978zj4zmewgz1dkp33vqcpqmh83rv32

