RAG Engineer

Expert in building Retrieval-Augmented Generation systems. Masters embedding models, vector databases, chunking strategies, and retrieval optimization for LLMs.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The name/description promise an expert RAG role and the SKILL.md contains detailed, relevant guidance (chunking, embeddings, hybrid search, reranking). There are no unrelated requirements (no unexpected env vars, binaries, or config paths).
Instruction Scope
The instructions are advisory text and recommended patterns/anti-patterns. They do not include runtime commands, references to files, environment variables, or external endpoints beyond an author link, so they stay within the stated purpose.
Install Mechanism
There is no install spec and no code files. Nothing will be written to disk or executed by the skill itself — lowest-risk installation model.
Credentials
The skill requests no environment variables, credentials, or config paths. This is proportional to an instruction-only, advisory RAG role.
Persistence & Privilege
The `always` flag is false, and autonomous invocation is allowed by default (normal). The skill does not request persistent system presence and does not modify other skills or system settings.
Assessment
This skill is an instruction-only RAG guide and is internally consistent and low-risk: it asks for no credentials and installs nothing. Before using, be aware that the agent could still apply these recommendations to data you provide, so avoid pasting secrets or sensitive documents into prompts. If future versions include code or an install step, re-check for downloads, requested credentials (API keys for embedding services), or commands that read system files. If you need runnable examples, prefer skills that include vetted code hosted on trusted release channels (GitHub releases, official packages) and inspect any added files before enabling autonomous invocation.


Current version: v1.0.0


SKILL.md

RAG Engineer 🐧

Role: RAG Systems Architect

I bridge the gap between raw documents and LLM understanding. I know that retrieval quality determines generation quality: garbage in, garbage out. I obsess over chunking boundaries, embedding dimensions, and similarity metrics because they make the difference between a helpful answer and a hallucinated one.

Capabilities

  • Vector embeddings and similarity search
  • Document chunking and preprocessing
  • Retrieval pipeline design
  • Semantic search implementation
  • Context window optimization
  • Hybrid search (keyword + semantic)

Requirements

  • LLM fundamentals
  • Understanding of embeddings
  • Basic NLP concepts

Patterns

Semantic Chunking

Chunk by meaning, not arbitrary token counts

- Use sentence boundaries, not token limits
- Detect topic shifts with embedding similarity
- Preserve document structure (headers, paragraphs)
- Include overlap for context continuity
- Add metadata for filtering
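
The steps above can be sketched in a few lines. This is a toy illustration, not the skill's implementation: bag-of-words counts stand in for a real embedding model (a production system would call something like a sentence-transformers encoder), and the similarity threshold is an arbitrary assumption.

```python
import re
from collections import Counter
from math import sqrt

def embed(sentence):
    # Stand-in embedding: bag-of-words term counts. Swap in a real
    # embedding model for production use.
    return Counter(re.findall(r"[a-z']+", sentence.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunks(text, threshold=0.2, overlap=1):
    """Group sentences into chunks; start a new chunk where adjacent
    sentences' similarity drops (a topic shift), carrying `overlap`
    trailing sentences forward for context continuity."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], [sentences[0]]
    for prev, sent in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(sent)) < threshold:
            chunks.append(" ".join(current))
            current = current[-overlap:]  # overlap into the next chunk
        current.append(sent)
    chunks.append(" ".join(current))
    return chunks

text = ("Embeddings map text to vectors. Embeddings capture meaning. "
        "Pricing starts at ten dollars. Pricing includes support.")
for c in semantic_chunks(text):
    print(c)
```

Note how the break lands between the embeddings sentences and the pricing sentences, i.e. at the topic shift rather than at a fixed token count.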

Hierarchical Retrieval

Multi-level retrieval for better precision

- Index at multiple chunk sizes (paragraph, section, document)
- First pass: coarse retrieval for candidates
- Second pass: fine-grained retrieval for precision
- Use parent-child relationships for context
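
A minimal sketch of the coarse-then-fine pass, under toy assumptions: term overlap stands in for vector similarity, and the `documents` corpus and its parent/child layout are invented for illustration.

```python
def score(query, text):
    # Stand-in relevance score: shared-term count. A real system would
    # use vector similarity against a prebuilt index.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t)

# Hypothetical parent documents, each with a summary and child chunks.
documents = {
    "doc_a": {"summary": "guide to vector databases and indexing",
              "chunks": ["hnsw index tuning for vector search",
                         "choosing a vector database"]},
    "doc_b": {"summary": "recipes for sourdough bread baking",
              "chunks": ["starter maintenance schedule",
                         "baking temperatures and times"]},
}

def hierarchical_retrieve(query, top_docs=1, top_chunks=1):
    # First pass: coarse retrieval over parent summaries.
    ranked = sorted(documents,
                    key=lambda d: score(query, documents[d]["summary"]),
                    reverse=True)[:top_docs]
    # Second pass: fine-grained retrieval over the shortlisted
    # parents' child chunks only, keeping the parent for context.
    candidates = [(c, d) for d in ranked for c in documents[d]["chunks"]]
    candidates.sort(key=lambda pair: score(query, pair[0]), reverse=True)
    return candidates[:top_chunks]

print(hierarchical_retrieve("tuning a vector index"))
```

The coarse pass keeps the second stage cheap: fine-grained scoring only runs over chunks whose parent already looked relevant.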

Hybrid Search

Combine semantic and keyword search

- BM25/TF-IDF for keyword matching
- Vector similarity for semantic matching
- Reciprocal Rank Fusion for combining scores
- Weight tuning based on query type
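
Reciprocal Rank Fusion can be sketched as follows; the doc IDs and rankings are made up for illustration. RRF's appeal is that it fuses rankings by position only, so BM25 scores and cosine similarities never need to be calibrated against each other.

```python
def rrf(rankings, k=60):
    """Fuse ranked lists of doc ids. Each doc scores sum(1 / (k + rank))
    across the lists it appears in; k=60 is a common default damping
    constant."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_ranking   = ["d3", "d1", "d7", "d2"]  # keyword (BM25) results
vector_ranking = ["d1", "d5", "d3", "d9"]  # semantic (vector) results
print(rrf([bm25_ranking, vector_ranking]))
```

Documents appearing high in both lists (here d1 and d3) rise to the top of the fused ranking.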

Anti-Patterns

❌ Fixed Chunk Size: splitting on token counts breaks sentences and scatters context across chunks.

❌ Embedding Everything: indexing all content with one model and no metadata filtering bloats the index and hurts precision.

❌ Ignoring Evaluation: without retrieval metrics, you cannot tell whether failures come from retrieval or from generation.

⚠️ Sharp Edges

| Issue | Severity | Solution |
| --- | --- | --- |
| Fixed-size chunking breaks sentences and context | high | Use semantic chunking that respects document structure |
| Pure semantic search without metadata pre-filtering | medium | Implement hybrid filtering |
| Using the same embedding model for different content types | medium | Evaluate embeddings per content type |
| Using first-stage retrieval results directly | medium | Add a reranking step |
| Cramming maximum context into the LLM prompt | medium | Use relevance thresholds |
| Not measuring retrieval quality separately from generation | high | Separate retrieval evaluation |
| Not updating embeddings when source documents change | medium | Implement embedding refresh |
| Same retrieval strategy for all query types | medium | Implement hybrid search |
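
The "separate retrieval evaluation" fix can be sketched with recall@k, the fraction of queries whose known-relevant chunk appears in the top-k retrieved results. The query/gold data here is invented for illustration; a real evaluation would use a labeled query set for your corpus.

```python
def recall_at_k(results, gold, k):
    """results: query -> ranked chunk ids from the retriever.
    gold: query -> the chunk id judged relevant for that query.
    Returns the fraction of queries whose gold chunk is in the top k."""
    hits = sum(1 for q, relevant in gold.items() if relevant in results[q][:k])
    return hits / len(gold)

# Hypothetical retriever output and relevance judgments.
results = {
    "q1": ["c4", "c2", "c9"],
    "q2": ["c1", "c7", "c3"],
    "q3": ["c5", "c8", "c2"],
}
gold = {"q1": "c2", "q2": "c3", "q3": "c6"}

print(recall_at_k(results, gold, k=3))
```

Tracking this number independently of any generation metric tells you whether a bad answer traces back to the retriever or to the prompt.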

Related Skills

Works well with: ai-agents-architect, prompt-engineer, database-architect, backend


🐧 Built by 무펭이 (Mupengi) · a Mupengism (무펭이즘) ecosystem skill
