Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using it.

Pywayne Llm Chat Ollama Gradio

v0.1.0

Gradio-based chat interface for Ollama with multi-session management. Use when working with pywayne.llm.chat_ollama_gradio module to create a web-based chat...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (a Gradio UI for Ollama) matches the instructions, but the registry metadata lists no required binaries or environment variables, while the SKILL.md explicitly requires the 'ollama' CLI, the 'gradio' package, and the 'pywayne.llm.chat_bot' module. This omission is an inconsistency between the skill's declared requirements and what its instructions actually tell the agent to use.
Instruction Scope
The instructions stay within the stated purpose: they import a pywayne module, launch a Gradio server, and call 'ollama list' to discover local models. The SKILL.md does not instruct reading unrelated system files, exfiltrating data, or contacting external endpoints beyond the local Ollama API and a locally hosted Gradio UI. It does, however, expect a local service and CLI to be present.
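The model-discovery step described above can be sketched as follows. This is a hypothetical illustration of what calling 'ollama list' involves, not the actual pywayne.llm.chat_ollama_gradio code, which may parse the output differently:

```python
import shutil
import subprocess

def list_local_models():
    """Enumerate locally installed Ollama models via the 'ollama list' CLI.

    Sketch of the discovery step the SKILL.md describes; returns an empty
    list when the CLI is not installed rather than raising.
    """
    if shutil.which("ollama") is None:
        return []  # 'ollama' CLI not on PATH; nothing to enumerate
    out = subprocess.run(["ollama", "list"], capture_output=True, text=True)
    # 'ollama list' prints a header row first; model names are the first
    # column of the remaining lines.
    lines = out.stdout.strip().splitlines()[1:]
    return [line.split()[0] for line in lines if line.strip()]
```

Note that even this benign-looking step executes an external binary, which is why the provenance of the 'ollama' CLI matters.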
Install Mechanism
This is an instruction-only skill with no install spec, so nothing will be downloaded or written by the skill itself. That lowers install risk, but the lack of an install spec combined with unmet runtime dependencies (ollama CLI, gradio, pywayne package) means a user may have to install components manually — verify sources before doing so.
Credentials
The skill does not request environment variables or credentials in metadata. The SKILL.md mentions an 'api_key' parameter for Ollama compatibility but does not require secrets or other unrelated credentials. No disproportionate credential access is requested.
Persistence & Privilege
The skill does not request always: true and is user-invocable only. It describes running a local Gradio server (default port 7870) and keeping session history in memory; it does not indicate modifying other skills or system-wide settings.
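Since the skill binds a local web server, one low-effort check before launching it is whether the port the scan notes mention (7870) is already in use. A minimal sketch, assuming a localhost bind:

```python
import socket

def port_in_use(port: int = 7870) -> bool:
    """Return True if something is already listening on the given
    localhost port (7870 is the default this skill reportedly uses)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. the
        # port is already bound by another process.
        return s.connect_ex(("127.0.0.1", port)) == 0
```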
What to consider before installing
- The SKILL.md requires the 'ollama' CLI, 'gradio', and the 'pywayne.llm.chat_bot' module, but the registry metadata lists none of these. Verify you have these dependencies from trusted sources before running anything.
- The skill will call 'ollama list' (a local CLI command) and launch a local web server (default port 7870). Run it in an isolated environment (container or VM) if you are unsure of its origin.
- Inspect the actual pywayne.llm.chat_ollama_gradio and pywayne.llm.chat_bot code (or obtain them from a reputable source) to confirm there are no unexpected network calls, file reads, or credential handling.
- Confirm your Ollama CLI was installed from a trusted release (official site or GitHub) so that 'ollama list' isn't an unexpected binary.
- If you need higher confidence, ask the publisher for the source repository, a checksum for any required packages or binaries, or a clear install spec. If those cannot be provided, treat the skill as untrusted and run it in isolation.
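The dependency checks above can be automated with a small preflight script. The dependency names below come straight from the SKILL.md; the function itself is a suggested helper, not part of the skill:

```python
import importlib.util
import shutil

def preflight():
    """Report which of the skill's undeclared dependencies are present.

    Checks the 'ollama' CLI on PATH and the 'gradio' and 'pywayne'
    Python packages, per the SKILL.md requirements.
    """
    report = {
        "ollama_cli": shutil.which("ollama") is not None,
        "gradio": importlib.util.find_spec("gradio") is not None,
        "pywayne": importlib.util.find_spec("pywayne") is not None,
    }
    missing = [name for name, present in report.items() if not present]
    return report, missing
```

Running this before installing anything tells you exactly which components you would be pulling in manually, which is where the real install risk lives for an instruction-only skill.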

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bxn6h2r412t9es13zthc5998195eb

