Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Clarity AI

v2.2.0

AI-Powered Intent Parser - Transform messy input into crystal-clear instructions. Converts verbose Chinese/English input into structured intent. Supports 7 i...

MIT-0
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (intent parser) match the included code: a rule-based transformer with an optional call to a local Ollama model. No unrelated credentials, binaries, or paths are requested.
Instruction Scope
SKILL.md only instructs use of the rule engine and, optionally, installing and using Ollama locally. The runtime instructions and code operate on the provided input and call only localhost (Ollama) when available; they do not read arbitrary files or request unrelated secrets.
Install Mechanism
The skill is instruction-only (no official install spec), but the README includes a high-risk one-liner: curl -sL https://raw.githubusercontent.com/.../install.sh | bash. Running an unknown remote install script is dangerous and unnecessary for the rule-engine behavior; the optional Ollama commands (brew install, ollama pull) are more reasonable but still download models from external sources.
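A safer pattern than piping curl to bash is to download the script, review it, and pin the reviewed copy to a checksum before executing it. The sketch below is illustrative, not part of the skill: the function name and workflow are ours, and the install-script URL is left as a placeholder because the README's path is not shown in full here.

```shell
# Sketch: run a downloaded script only if it matches a checksum you recorded
# while reviewing it. Refuses to execute anything that changed since review.
verify_and_run() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | cut -d' ' -f1)
  if [ "$actual" = "$expected" ]; then
    bash "$file"
  else
    echo "checksum mismatch for $file; refusing to run" >&2
    return 1
  fi
}

# Workflow (commented out; substitute the install-script URL from the README):
# curl -fsSL <install-script-url> -o install.sh
# less install.sh                 # read what it actually does
# sha256sum install.sh            # record the hash of the reviewed copy
# verify_and_run install.sh <recorded-hash>
```

The point of the checksum is that the remote file can change between your review and a later run; pinning the hash makes "I read this script" and "I ran this script" refer to the same bytes.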
Credentials
No environment variables, credentials, or config paths are required by the skill. The code only targets a localhost Ollama endpoint and does not access system secrets.
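Because the only network target is the local Ollama endpoint, its availability is easy to check before use. A minimal probe, assuming Ollama's default port 11434 and its /api/tags model-listing endpoint:

```shell
# Probe the local Ollama endpoint the skill optionally calls.
# If nothing is listening, the skill's rule engine is used on its own.
if curl -fsS --max-time 2 http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama: reachable on localhost:11434"
else
  echo "ollama: not running; skill falls back to its rule engine"
fi
```

Either outcome is safe: the scan found no other network destinations, so a refused connection simply means the optional enhancement is unavailable.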
Persistence & Privilege
The skill does not request permanent presence (always: false) and does not modify other skills or system configs. Autonomous invocation is allowed (platform default) but is not combined with other high privileges.
What to consider before installing
The skill itself is coherent: it parses intents with a local rule engine and optionally uses a local Ollama model at localhost:11434. However, do NOT run the README's curl | bash installer without inspecting it first: that single command fetches and executes a remote script and is the primary risk here. If you want the Ollama enhancement, prefer installing Ollama and pulling models through verified package managers (brew, official Ollama docs) and inspect any model sources.

Recommended steps before installing:

1) Inspect the repository and install script linked in the README (https://github.com/yjin94606/clarity-ai) to see what it does.
2) Avoid piping unknown remote scripts to bash; download and review them first.
3) Run the skill in a sandboxed environment if you must test it.
4) If privacy is important, verify that model downloads and Ollama are configured to run entirely locally and that no network proxies are used.

Overall, the code calls only localhost and requests no secrets, but the provided install instruction is the main red flag.

Like a lobster shell, security has layers — review code before you run it.

latest · vk971ex4et9y5ttv9s68vp3sces83qm9k

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
