N8N QDrant workflow expert
v1.0.0

Expertise in designing, building, and troubleshooting production-grade n8n workflows for Qdrant ingestion, retrieval, hybrid search, and RAG pipelines.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw · Benign (high confidence)

Purpose & Capability
The name/description (n8n workflows for Qdrant ingestion, retrieval, hybrid search, RAG) match the provided SKILL.md and example workflows. References to Qdrant, LangChain vector store, OpenAI/Gemini embeddings, Slack, and optional sparse encoders are expected for this use case.
Instruction Scope
SKILL.md and the docs are comprehensive and stay within the domain of designing n8n workflows. They include instructions to call external APIs (OpenAI, Qdrant, Google Gemini), to deploy or call a sparse-encoder service, and to use JavaScript Code nodes inside n8n. No instructions attempt to read unrelated system files or secretly exfiltrate data, but the workflows explicitly send user content to third-party services; review that carefully before enabling in production.
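To illustrate the kind of logic such Code nodes typically contain, here is a minimal, hypothetical sketch of pre-embedding text chunking. The function name and the 500/50 sizes are illustrative assumptions, not taken from the skill's workflows; inside n8n the input would come from `$input.all()` rather than a literal string.

```javascript
// Hypothetical sketch: split incoming text into overlapping chunks so each
// piece stays within an embedding model's context budget. Chunk size and
// overlap are illustrative defaults, not values from the skill's workflows.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

// 1200 characters at 500-char chunks with 50-char overlap → 3 chunks
console.log(chunkText("a".repeat(1200)).length);
```

A real workflow would map each chunk to an item with metadata (source, position) before handing it to the embeddings node.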
Install Mechanism
There is no install spec and no code shipped that would be downloaded or executed by the platform. This is instruction-only documentation and example JSON for n8n — lowest install risk.
Credentials
The registry metadata lists no required env vars/credentials, but the workflows and docs clearly expect credentials and environment variables for OpenAI, Qdrant, Slack, Google/Gemini, and possibly a sparse-encoder service. Those credentials are appropriate for the skill's purpose, but the metadata omission means users must manually supply and configure them in n8n; verify all credential placeholders (e.g., CONFIGURE_ME_*, $env.OPENAI_API_KEY) before use.
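One simple way to catch the placeholder problem is a pre-flight scan of the exported workflow JSON before importing it into n8n. This is a hypothetical helper, not part of the skill; the CONFIGURE_ME_ prefix matches the placeholder convention the docs mention.

```javascript
// Hypothetical pre-flight check: scan an exported n8n workflow JSON string
// for unreplaced CONFIGURE_ME_* placeholders before importing it.
function findPlaceholders(workflowJson) {
  return [...workflowJson.matchAll(/CONFIGURE_ME_[A-Z0-9_]+/g)].map(m => m[0]);
}

const sample = '{"qdrantUrl": "CONFIGURE_ME_QDRANT_URL", "apiKey": "CONFIGURE_ME_OPENAI_KEY"}';
console.log(findPlaceholders(sample)); // lists both unreplaced placeholders
```

Running such a check in CI, or before any manual import, prevents a workflow from silently shipping with dummy credentials.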
Persistence & Privilege
The skill is declared always: false, and no code modifies other skills or system-wide settings. It does not request persistent platform privileges.
Assessment
This skill is documentation and example workflows for n8n→Qdrant RAG pipelines and appears internally consistent. Before importing or running any of the example workflows:

1) Replace all CONFIGURE_ME_* placeholders and verify the intended credential method (n8n credentials vs. environment variables).
2) Confirm where sensitive data will be sent: the examples call OpenAI, Qdrant, Gemini, and Slack, and show an optional sparse-encoder HTTP endpoint. Avoid sending confidential data to third-party endpoints you don't control.
3) Don't paste production secrets into workflow JSON; configure them via n8n credentials or secure environment storage.
4) If you run the sparse encoder as a service, host it in your own environment or add authentication to avoid leaking queries.
5) Test in a sandbox account and audit logs, rate limits, and costs before enabling scheduled ingestion.

The only metadata inconsistency: the registry declares no required env vars while the examples depend on external service credentials. That is expected for instruction-only skills but worth noting.
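For context on the hybrid-search side of these workflows, the following sketch builds a request body for Qdrant's Query API (POST /collections/&lt;name&gt;/points/query), fusing a dense and a sparse candidate list with reciprocal-rank fusion. The vector names "dense" and "sparse" are assumptions and must match your collection schema; this is not code shipped with the skill.

```javascript
// Hedged sketch of a Qdrant hybrid-search request body: prefetch dense and
// sparse candidates separately, then merge them with reciprocal-rank fusion.
// Vector names "dense"/"sparse" are assumed; match them to your collection.
function buildHybridQuery(denseVector, sparseIndices, sparseValues, limit = 10) {
  return {
    prefetch: [
      { query: denseVector, using: "dense", limit: limit * 2 },
      { query: { indices: sparseIndices, values: sparseValues }, using: "sparse", limit: limit * 2 },
    ],
    query: { fusion: "rrf" }, // merge both candidate lists by reciprocal rank
    limit,
  };
}

console.log(JSON.stringify(buildHybridQuery([0.1, 0.2, 0.3], [7, 42], [0.8, 0.5]), null, 2));
```

In an n8n HTTP Request node, this object would become the JSON body, with the Qdrant URL and API key supplied through n8n credentials rather than inline values.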
