Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Engineer

v1.0.0

AI/ML engineering specialist for building intelligent features, RAG systems, LLM integrations, data pipelines, vector search, and AI-powered applications. Us...

0 · 354 · 1 current · 2 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (AI/ML engineering, RAG, vector DBs, LLM integrations) aligns with the content: examples cover embeddings, Chroma, Pinecone, Qdrant, OpenAI, local models, etc. That breadth is consistent for an "AI Engineer" reference skill. However, the skill references multiple external providers and tools (OpenAI, Pinecone, Qdrant, Weaviate, Ollama, etc.) but the metadata does not declare any required credentials or binaries — an implementation/manifest gap.
Instruction Scope
Runtime instructions include concrete code that reads env vars (e.g., os.environ['OPENAI_API_KEY']), calls cloud APIs, stores documents to persistent DBs, spawns sub-agents, and logs prompts and usage. While each of these actions is expected when building RAG/agent systems, the SKILL.md gives the agent broad discretion (memory saving, sub-agent delegation) that could cause sensitive data to be stored or transmitted if used with real credentials or production data. The instructions do not limit or specify where secrets or logs go.
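The combination described above can be sketched as follows. This is an illustration of the pattern, not code taken from the skill; the function name, log path, and stubbed API call are all hypothetical:

```python
import os
import json

def call_llm_and_log(prompt: str, log_path: str = "usage_log.jsonl") -> None:
    """Illustrates the three behaviors flagged above: an env-var credential
    read, a cloud API call (stubbed here so the sketch runs offline), and
    unrestricted persistence of prompt data."""
    api_key = os.environ["OPENAI_API_KEY"]  # secret read from the environment
    # A real skill would send `prompt` to a remote API here; stubbed out.
    completion = f"<completion for {len(prompt)} chars>"
    with open(log_path, "a") as f:
        # "Log everything": prompt text is persisted to disk with no
        # redaction and no storage target declared in the metadata.
        f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")
```

Nothing in the sketch is malicious on its own; the concern is that none of these side effects (credential reads, network egress, on-disk persistence) are declared in the skill's metadata.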
Install Mechanism
This is an instruction-only skill with no install spec and no code files — minimal on-disk impact and no third-party binaries are installed by the skill itself. That lowers supply-chain risk.
Credentials
The SKILL.md explicitly uses environment variables (OPENAI_API_KEY in the examples) and references many services that require credentials (Pinecone, Qdrant, Weaviate, Ollama, etc.), but the skill metadata declares no required env vars or primary credential. Because of this mismatch, the skill could prompt the agent to access secrets that aren't declared up front, and a user might inadvertently provide high-privilege credentials when enabling it. The logging guidance ('Log everything — prompts, completions, latency, token usage') increases the risk that sensitive content, including secrets, will be persisted.
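One mitigation for the logging risk is to redact secret-shaped values before any log line is persisted. A minimal sketch, assuming log entries are plain strings; the variable names and key pattern are illustrative, not from the skill:

```python
import os
import re

# Env vars whose current values should never appear in persisted logs
# (illustrative list; extend to match the credentials actually in use).
SECRET_ENV_VARS = ["OPENAI_API_KEY", "PINECONE_API_KEY"]

# Matches OpenAI-style "sk-..." keys; other providers need their own patterns.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{20,}")

def redact(text: str) -> str:
    """Strip secret-shaped substrings from a log line before it is written."""
    text = KEY_PATTERN.sub("[REDACTED]", text)
    for var in SECRET_ENV_VARS:
        value = os.environ.get(var)
        if value:
            text = text.replace(value, "[REDACTED]")
    return text

print(redact("prompt used key sk-abcdefghijklmnopqrstuv"))
# → prompt used key [REDACTED]
```

Pattern-based redaction is best-effort; it reduces, but does not eliminate, the chance of a key reaching disk.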
Persistence & Privilege
always:false and default autonomous invocation are appropriate. The skill references spawning OpenClaw sub-agents and saving memories to stores, which are normal for the domain, but there is no evidence it requests permanent platform-level privileges or modifies other skills' configs.
What to consider before installing
This is an instruction-only reference for building RAG/agent systems and appears coherent in purpose, but the metadata omits the environment variables and credential requirements shown in the examples (e.g., OPENAI_API_KEY). Before installing or running it with real data or real credentials:

(1) Treat the skill as documentation/examples only; it contains code that will read env vars and call external APIs.
(2) Do not provide high-privilege or production credentials until you confirm which specific keys the skill will use.
(3) Run any code snippets in an isolated/sandboxed environment, and review logging/storage targets (memory_store, persistent DB paths) to avoid accidental data retention.
(4) Ask the publisher or maintainer to declare required env vars and scopes, and to limit instructions that persist user data.
(5) Consider limiting or auditing sub-agent delegation and automatic memory persistence when testing.

The absence of scan findings is not evidence of safety; the main issue is the manifest/instruction mismatch and the potential for sensitive data to be logged or stored.
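Which specific keys an instruction-only skill will read can be checked mechanically before enabling it. A minimal sketch (the regex and function name are hypothetical) that lists the env vars referenced in a skill's instruction text, for comparison against its declared metadata:

```python
import re

# Matches common Python env-var reads such as os.environ['NAME'],
# os.environ["NAME"], and os.environ.get("NAME") in code examples.
ENV_READ = re.compile(r"os\.environ(?:\.get)?\(?\[?['\"]([A-Z0-9_]+)['\"]")

def referenced_env_vars(skill_md: str) -> set[str]:
    """Return every env-var name a skill's embedded code examples read."""
    return set(ENV_READ.findall(skill_md))

sample = "client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])"
print(referenced_env_vars(sample))
# {'OPENAI_API_KEY'}
```

Any name this audit finds that the skill's metadata does not declare is exactly the manifest/instruction mismatch described above.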

Like a lobster shell, security has layers — review code before you run it.

latest · vk97eq6w8r1ktvyn5ggm0e0waad82jx96

