Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Rag Accuracy Optimizer
v1.3.0 · Optimize accuracy for RAG (Retrieval-Augmented Generation) systems. Covers: DB schema design, chunking strategies, retrieval optimization, accuracy testing, ...
⭐ 0 · 80 · 0 current · 0 all-time
by Eddie Luong (@eddieluong)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw verdict: Suspicious (medium confidence)

Purpose & Capability
The name, description, and included reference docs/scripts align with an end-to-end RAG accuracy optimizer: chunking, embeddings, hybrid retrieval, reranking, evaluation frameworks, and orchestrator patterns. The presence of examples for multiple embedding providers, rerankers, and vector DBs is coherent with the stated purpose.
Instruction Scope
SKILL.md and the reference files contain concrete runtime instructions and code to call embedding/LLM providers and DB clients and to run evaluation pipelines. That scope is appropriate for a RAG optimizer, but parts of SKILL.md include prompt-injection examples/patterns and LLM prompts; these are probably defensive (detection) patterns, but they were flagged by the pre-scan. The instructions and examples also reference environment variables and external endpoints (OpenAI/Gemini/Anthropic/Cohere, Qdrant, Postgres) that are not declared in the registry metadata.
Install Mechanism
This skill is instruction-first with no install spec. No remote downloads or install scripts are specified, which reduces supply-chain risk. Python scripts are included in the skill bundle, but nothing in the manifest indicates an installer that would fetch remote code at install time.
Credentials
The registry lists no required env vars or credentials, yet numerous code examples and scripts reference provider keys and connection strings (e.g., OPENAI_API_KEY, GEMINI_API_KEY, ANTHROPIC_API_KEY, a Cohere key, and Qdrant/Postgres connection info). That mismatch is an incoherence risk: the skill needs multiple external credentials to run as documented but does not declare them. The examples also suggest connecting to DBs and external services; ensure any keys you supply are least-privilege and that you understand where network traffic will go.
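One way to surface that mismatch yourself is to scan the bundled scripts for environment-variable references and compare the result against what the registry declares. A minimal sketch, assuming the skill unpacks its code under scripts/*.py (the path and the access patterns checked are assumptions, not confirmed details of this package):

```python
# Enumerate environment variables referenced by the bundled Python scripts,
# so undeclared credentials can be spotted before anything runs.
import re
from pathlib import Path

# Matches os.getenv("KEY"), os.environ.get("KEY"), and os.environ["KEY"].
ENV_REF = re.compile(
    r"(?:os\.getenv\(|os\.environ\.get\(|os\.environ\[)\s*['\"]([A-Z][A-Z0-9_]*)['\"]"
)

def referenced_env_vars(script_dir="scripts"):
    """Return a sorted list of env var names referenced in script_dir/*.py."""
    found = set()
    for path in Path(script_dir).glob("*.py"):
        found.update(ENV_REF.findall(path.read_text(errors="ignore")))
    return sorted(found)
```

Anything this reports that the registry metadata does not declare (e.g., OPENAI_API_KEY or a Qdrant URL) is exactly the incoherence described above.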
Persistence & Privilege
The skill does not request always:true, does not declare changes to platform-wide configuration, and is user-invocable. No elevated persistence or forced-inclusion privileges are requested in the manifest.
Scan Findings in Context
[prompt-injection:ignore-previous-instructions] expected: SKILL.md explicitly documents unsafe/prompt-injection patterns and includes them in a rule list (UNSAFE_PATTERNS). The pre-scan flagged this token, but its presence appears to be defensive (examples of injections to detect) rather than an instruction to ignore model prompts.
[prompt-injection:system-prompt-override] expected: Similar to above: the SKILL.md contains references to 'system prompt' and 'prompt injection' as part of adversarial testing and orchestrator safety rules. The static scanner flagged these strings; in context they are used for detection/defense.
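A defensive rule list of the kind described in these two findings might look like the sketch below. The UNSAFE_PATTERNS name comes from the scan report, but the entries and the helper are illustrative, not the actual contents of SKILL.md; the point is that the flagged strings exist to detect injections in retrieved text, not to issue them.

```python
# Defensive prompt-injection detection: pattern strings that a static scanner
# will flag, but whose purpose is to catch injections in retrieved chunks.
import re

# Illustrative entries; the real SKILL.md rule list is not reproduced here.
UNSAFE_PATTERNS = [
    r"ignore\s+(all\s+)?previous\s+instructions",
    r"disregard\s+the\s+system\s+prompt",
    r"you\s+are\s+now\s+in\s+developer\s+mode",
]

def flag_injection(chunk):
    """Return the patterns a retrieved chunk matches, case-insensitively."""
    return [p for p in UNSAFE_PATTERNS if re.search(p, chunk, re.IGNORECASE)]
```

In context, a static regex scanner cannot tell this apart from an actual injection payload, which is why the findings are marked "expected" rather than malicious.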
What to consider before installing
This skill contains substantial, plausible RAG guidance and runnable Python scripts, but the package metadata currently omits the many provider keys and connection strings the code references. Before installing or running:
(1) Ask the publisher to list required environment variables and config (OpenAI/Gemini/Anthropic/Cohere API keys, Qdrant/Postgres URLs, DB credentials, etc.).
(2) Review the included scripts (scripts/*.py) locally to confirm there are no unexpected network endpoints or hardcoded URLs.
(3) Run the code in a sandbox or isolated environment and provide only least-privilege credentials (use test accounts or scoped API keys).
(4) If you plan to let the agent call the skill autonomously, be cautious: the skill's LLM prompts mention system prompts and injection testing, so verify the orchestration code won't leak sensitive content (system prompts, secrets) to external models.
(5) Scan the code for outbound network calls (requests.post, socket, subprocess, etc.) and restrict network access if you cannot audit thoroughly.
If the owner cannot or will not clarify the missing credential declarations, treat the omission as a red flag and prefer not to enable the skill in production.

Flagged locations:
references/testing-frameworks.md:295: Prompt-injection style instruction pattern detected.
SKILL.md:355: Prompt-injection style instruction pattern detected.
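The scan for outbound network calls suggested in point (5) can be approximated with a small AST walk over the bundled scripts. A minimal sketch: the watch-list below is an assumption drawn from the module names mentioned above, and an AST pass only catches direct call sites, so a clean result is necessary but not sufficient.

```python
# Statically list call sites suggesting outbound network or process activity
# (requests.post, socket.socket, subprocess.run, etc.) in a Python file.
import ast
from pathlib import Path

# Module roots worth reviewing; extend to match your own audit policy.
SUSPECT_ROOTS = {"requests", "socket", "subprocess", "urllib", "httpx"}

def suspicious_calls(path):
    """Return (line_number, call_name) pairs for calls rooted in SUSPECT_ROOTS."""
    tree = ast.parse(Path(path).read_text(errors="ignore"))
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            name = ast.unparse(node.func)
            if name.split(".")[0] in SUSPECT_ROOTS:
                hits.append((node.lineno, name))
    return hits
```

Dynamic imports, exec, and aliased modules slip past this check, which is why the sandboxing advice in point (3) still applies even after a clean scan.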
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.
Like a lobster shell, security has layers: review code before you run it.
latest · vk97152kfq9yxex5cbwcq8mtc4183jneg
