Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Prepper Skill
v1.0.0
Consult the ollama dolphin-llama3 model for survival, emergency preparedness, and prepper-related questions. Use when users ask about disaster recovery, emergency supplies, survival techniques, water purification, shelter, food preservation, first aid, medical emergencies, repair, agriculture, electrical systems, chemistry, communications, or any other survival/prepper topics where practical, uncensored knowledge is needed.
⭐ 0 · 985 · 3 current · 3 all-time
by Josh Levitsky, CISSP (@jlevitsk)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (medium confidence)

Purpose & Capability
The skill's stated purpose (query a local Ollama survival model) matches the included script and documentation: it expects a locally running Ollama instance and the dolphin-llama3 model. However, the registry metadata claims 'Required binaries: none' while SKILL.md and references/setup.md clearly require the 'ollama' binary and a locally pulled model. That metadata omission is an inconsistency users should notice.
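A preflight check can compensate for the missing metadata. This is a minimal sketch, not part of the skill itself, assuming only that the 'ollama' binary must be on PATH as the setup docs state:

```python
# Sketch: verify the binaries the registry metadata omits to declare.
import shutil

def required_binaries_present(binaries=("ollama",)):
    """Return the list of required binaries missing from PATH."""
    return [b for b in binaries if shutil.which(b) is None]

missing = required_binaries_present()
if missing:
    print(f"Missing required binaries: {missing}; see references/setup.md")
```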
Instruction Scope
SKILL.md and the script are scoped to querying a local Ollama service and then (optionally) using the agent's active model (Claude) to validate/merge results. The instructions explicitly recommend using Ollama as an 'uncensored fallback' when the other model refuses (including weapons/unconventional medicine), which is coherent with the skill's stated intent but also intentionally bypasses typical content filters — this is a design choice with safety implications.
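The fallback flow described above can be sketched as follows. The refusal markers and the primary/fallback callables are hypothetical stand-ins, not the skill's actual implementation:

```python
# Sketch of the 'uncensored fallback' policy: try the primary model,
# and route to the local Ollama model only when the answer looks like
# a refusal. REFUSAL_MARKERS is an illustrative guess, not real data.
REFUSAL_MARKERS = ("I can't help", "I cannot assist")

def looks_like_refusal(text):
    return any(m.lower() in text.lower() for m in REFUSAL_MARKERS)

def answer(prompt, primary, fallback):
    """primary/fallback are callables returning text (hypothetical)."""
    first = primary(prompt)
    if looks_like_refusal(first):
        return fallback(prompt)  # local Ollama, per the skill's policy
    return first
```

Note the safety implication: any refusal heuristic like this deliberately converts a content-filter decision into a trigger for the uncensored model.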
Install Mechanism
There is no install spec in the skill itself (instruction-only + helper script). Setup docs instruct installing Ollama via its official site/Homebrew or using the official install.sh; those are standard approaches. The skill does not download arbitrary third-party code itself. Users must still install Ollama and pull the large model separately.
Credentials
The skill declares no required environment variables or credentials and the included script contacts a local http://localhost:11434 endpoint only. No credentials or external endpoints are requested by the skill. (Note: references/setup.md mentions an OLLAMA_HOST possibility, but the script uses a hardcoded OLLAMA_HOST constant.)
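The local-only behavior is easy to enforce explicitly. This sketch mirrors the script's hardcoded constant and Ollama's documented /api/generate endpoint; the helper names are illustrative, not the script's own:

```python
# Sketch: query a local Ollama instance, refusing any non-loopback host.
import json
import urllib.request
from urllib.parse import urlparse

OLLAMA_HOST = "http://localhost:11434"  # hardcoded, as in the script

def build_payload(prompt, model="dolphin-llama3"):
    """Build a non-streaming generate request body."""
    return {"model": model, "prompt": prompt, "stream": False}

def is_local(url):
    """True only for loopback hosts, so queries cannot leave the machine."""
    return urlparse(url).hostname in ("localhost", "127.0.0.1", "::1")

def query_ollama(prompt):
    assert is_local(OLLAMA_HOST), "refusing to contact a non-local host"
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # needs a running Ollama
        return json.load(resp)["response"]
```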
Persistence & Privilege
The skill does not request persistent/autostart privileges or alter other skills’ configs. always:false and default invocation settings are appropriate. There is no evidence the skill attempts to persist or escalate privileges.
What to consider before installing/using this skill:
- Metadata mismatch: The skill metadata says 'no required binaries' but the README and script require the 'ollama' binary and a locally pulled model (dolphin-llama3). Make sure you install Ollama from its official source and pull the model before use.
- Local-only model: The script contacts only localhost:11434, so model queries stay local if Ollama is local. That reduces remote exfiltration risk, but also means you must trust the model's provenance and content because it can return uncensored instructions.
- Safety and legality: The skill intentionally provides an 'uncensored fallback' (including topics mainstream models refuse such as weapons or unconventional medical instructions). This is coherent with the skill's stated purpose (prepper knowledge) but increases the chance it will produce harmful/legal-risk content. Only use if you understand and accept those risks.
- Verify model source: 'dolphin-llama3' is not accompanied by a homepage or provenance here. If you care about safety/trust, obtain models from known sources or verify checksums and training provenance before running locally.
- Installation caution: The setup docs suggest a curl | sh install on Linux, to be used only if you trust https://ollama.ai. Prefer a package-manager install (Homebrew) or review any install script before executing it.
- Audit and limits: Consider running the model in an isolated environment (VM/container) and auditing outputs before using them for safety-critical decisions. If you don't want uncensored/harmful outputs, do not rely on the 'use Ollama when Claude refuses' policy described here.
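On the model-provenance point: if you can obtain a trusted digest for the dolphin-llama3 weights (the skill itself ships none, so the reference value is your responsibility), a checksum re-check is straightforward. A minimal sketch:

```python
# Sketch: stream a model blob and compare its SHA-256 against a
# trusted reference digest obtained out of band.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_trusted_digest(path, trusted_hex):
    return sha256_of(path) == trusted_hex.lower()
```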
If you want to proceed: install Ollama from its official site, pull the dolphin-llama3 model only if you trust its source, and be aware the skill's design explicitly prioritizes uncensored/local answers in some cases.
latest: vk9728g2eaqajxd03sedhsh9pxd810w8f
