Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Prompt Engineering

v0.1.5

Master prompt engineering for AI models: LLMs, image generators, video models. Techniques: chain-of-thought, few-shot, system prompts, negative prompts. Mode...

by Ömer Karışman (@okaris)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description match the SKILL.md: the file is a prompt-engineering guide with examples for LLM, image, and video prompting and demonstrates usage of an external CLI (infsh). There are no unrelated declared requirements (no unexpected cloud creds, etc.).
Instruction Scope
The instructions repeatedly direct the user/agent to install and use the third‑party CLI (infsh) and to call external inference endpoints (openrouter, falai, google/veo, and inference.sh). That means user prompts and any data sent to those commands will be transmitted to third parties — a data‑exfiltration risk if sensitive data is used. The guide also instructs a curl | sh install flow (download-and-execute), which expands the agent's attack surface and should be treated cautiously.
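One lightweight mitigation for the exfiltration risk is a pre-flight check on prompt files before they reach any remote command. The helper below is purely an illustrative sketch and is not part of the skill or the infsh CLI; the secret patterns shown are a minimal, non-exhaustive assumption.

```shell
# Hypothetical pre-flight check -- not part of the skill or infsh.
# Refuses to pass along a prompt file that contains secret-shaped strings.
check_prompt() {
  # Matches AWS access-key IDs, PEM private-key headers, and generic
  # api_key / api-key tokens. Extend the pattern list for your own secrets.
  if grep -Eq 'AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY|api[_-]?key' "$1"; then
    echo "refusing to send $1: possible secret detected" >&2
    return 1
  fi
}

# Usage sketch: run the check before any remote inference call, e.g.
#   check_prompt prompt.txt && infsh run ... < prompt.txt
```

A check like this cannot catch every secret, but it turns "whatever is in the prompt gets transmitted" into an explicit, inspectable gate.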
Install Mechanism
Although the skill bundle itself has no install spec, the SKILL.md instructs running a remote installer via curl -fsSL https://cli.inference.sh | sh and downloading binaries from dist.inference.sh. Piping a remote script to sh is a high-risk pattern; the file claims checksums are available (a partial mitigation), but the installer is served from an external domain not documented in the skill metadata, and the skill neither embeds the checksum-verification steps nor demonstrates verifying the script before executing it.
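The flagged pattern and a safer download-inspect-verify-run alternative can be sketched as follows. A local printf stands in for the remote download, and the checksum is computed locally purely so the sketch runs end to end; in real use the checksum must come from a trusted channel separate from the server that served the script.

```shell
# Risky pattern the SKILL.md instructs (do not run blindly):
#   curl -fsSL https://cli.inference.sh | sh
#
# Safer pattern: fetch to disk, inspect, verify, then execute deliberately.
# Local stand-in for `curl -fsSL -o install.sh https://cli.inference.sh`:
printf '#!/bin/sh\necho "installer ran"\n' > install.sh

# In real use this checksum file comes from the provider over a separate
# trusted channel; computing it locally (as here, for demonstration only)
# verifies nothing by itself.
sha256sum install.sh > install.sh.sha256

# `sha256sum -c` exits nonzero on mismatch, so the installer only runs
# if the file matches the recorded checksum. Read install.sh first.
sha256sum -c install.sh.sha256 && sh install.sh
```

The key property is that execution becomes a separate, conditional step instead of an automatic side effect of the download.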
Credentials
The registry metadata lists no required environment variables or credentials, yet the guide calls out 'infsh login' and demonstrates invoking third‑party models that will require authentication/keys. This mismatch reduces transparency: users/agents must supply credentials (or interactively login) to use the CLI, but the skill does not declare what secrets will be needed or how they are used/stored.
Persistence & Privilege
The skill itself is instruction-only and not marked 'always'. However, following the guide installs a third‑party CLI binary that persists on disk and can be invoked later. The skill does not request system-wide config changes or modify other skills, but installation is an explicit action with potential long-lived effects.
What to consider before installing
This guide is coherent for learning prompt engineering, but exercise caution before following its runtime instructions. Do not run curl | sh blindly: inspect the install script at https://cli.inference.sh first and verify checksums from the claimed checksum URL.

Understand that using the shown commands will send whatever you include in prompts to third-party inference services (inference.sh/openrouter/falai/google), so never include secrets, passwords, private keys, or sensitive data in prompts.

If you prefer less risk, consider a vetted package manager, a self-hosted/local model, or reading the guide without installing the CLI. If you do need the CLI, confirm the provider's privacy/security policy and where credentials are stored (local config, environment variables, or remote storage) before logging in.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97f06wepvd0kxaq8eyx8f84vn81dfmy

