Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
AI Content Pipeline
v0.1.5 · Build multi-step AI content creation pipelines combining image, video, audio, and text. Workflow examples: generate image -> animate -> add voiceover -> merg...
⭐ 0 · 773 · 2 current · 2 all-time
by Ömer Karışman (@okaris)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw · Verdict: Suspicious (high confidence)

Purpose & Capability
The declared purpose (orchestration of multi-step AI content pipelines) aligns with the workflow examples, but the metadata lists no required binaries, no install steps, and no credentials while the SKILL.md clearly assumes the infsh CLI is installed and that you will log into the inference.sh platform and call many hosted apps. The lack of declared requirements (especially authentication) is inconsistent with the workflows shown.
Instruction Scope
SKILL.md explicitly instructs running curl -fsSL https://cli.inference.sh | sh, running infsh login, and executing many infsh app run commands that will transmit content to third-party services. These instructions direct the agent (or user) to download and execute code and to send potentially sensitive or proprietary content to external endpoints. The docs also rely on interactive login and implicit saved credentials, none of which are declared in the metadata.
Install Mechanism
Although there is no formal install spec in the registry, the instructions recommend piping a remote installer (cli.inference.sh) to sh and downloading binaries from dist.inference.sh. These are download-and-execute operations from a domain that is not a well-known package host; while checksums are referenced, running arbitrary remote install scripts is a higher-risk install mechanism and should be validated manually.
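The safer path is to fetch the installer, inspect it, verify its checksum, and only then run it. A minimal sketch of that sequence follows; the installer contents here are a local stand-in (the real file would come from `curl -fsSL https://cli.inference.sh -o install.sh`, fetched without piping to `sh`), and the checksum file name is illustrative:

```shell
# Local stand-in for a downloaded installer script (illustrative contents):
printf '#!/bin/sh\necho hello from installer\n' > install.sh

# Record its checksum; in practice, compare this against the value the
# publisher lists (e.g. alongside the binaries on dist.inference.sh):
sha256sum install.sh > install.sh.sha256

# Verify the checksum first, and execute only if it matches:
sha256sum -c install.sh.sha256 && sh install.sh
```

Reading the script before running it (e.g. with `less install.sh`) also reveals what it downloads and where it writes on disk.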
Credentials
The skill metadata declares no required environment variables or primary credential, but the workflows require running infsh login and will likely need API keys/tokens (for inference.sh and for the many backend apps like openrouter, falai, bytedance, etc.). This omission means the skill does not declare the credentials it will need or store, which is a meaningful mismatch and raises the risk of undocumented credential usage or accidental exposure.
Persistence & Privilege
always:false (good), and the skill is not requesting special platform privileges. However, the install instruction will place a binary on disk and infsh login will typically persist credentials locally; these side effects are not declared by the registry metadata and could create persistent agent-facing credentials/configuration on the host.
What to consider before installing
This skill appears to be an orchestrator that expects you to install and log into the inference.sh CLI and then call many third-party services. Before installing:
1. Do not blindly run curl | sh: download the installer, inspect it, and verify checksums manually.
2. Confirm you trust cli.inference.sh / dist.inference.sh and read their privacy/security docs.
3. Be prepared that logging in will persist credentials locally and that your content will be sent to external inference providers.
4. Avoid supplying high-privilege or broad tokens; use least privilege and ephemeral keys where possible.
5. If possible, trial the workflow in an isolated/sandboxed environment or VM to limit the blast radius.
6. Ask the publisher for an explicit list of required credentials and what the CLI stores and transmits.
The inconsistencies (no declared install steps or credentials despite clear install/login instructions) make this skill risky until you validate the external tooling.

Like a lobster shell, security has layers: review code before you run it.
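The least-privilege advice above can be approximated by keeping any token in the shell environment for the current session only, rather than letting it land in a config file or shell history. `INFSH_API_KEY` is a hypothetical variable name; check the CLI's documentation for what it actually reads:

```shell
# INFSH_API_KEY is a hypothetical name, used here only to illustrate the pattern.
export INFSH_API_KEY="example-ephemeral-token"   # lives only in this shell session

# ... run the workflow here ...

unset INFSH_API_KEY                              # drop the token when finished
env | grep -q '^INFSH_API_KEY=' || echo "token cleared"
```

This does not remove whatever `infsh login` itself persists on disk, so also check the CLI's config directory after logging in.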
latest: vk9710379hwyzz6kytkkme9mvbx81cp4j
