LangChain
Avoid common LangChain mistakes — LCEL gotchas, memory persistence, RAG chunking, and output parser traps.
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 2 · 981 · 13 current installs · 14 all-time installs
by Iván (@ivangdavila)
Security Scan (OpenClaw)
Benign · high confidence

Purpose & Capability
Name/description (LangChain gotchas and best practices) matches the content of SKILL.md. The only declared runtime requirement is python3, which is reasonable for LangChain-related advice; no unexpected credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md contains high-level usage guidance and warnings only — it does not instruct the agent to read files, call external endpoints, or access credentials. Note: instructions are advisory; they could be used to guide actions if an agent is later asked to execute code, but the skill itself does not command execution or data collection.
Install Mechanism
No install spec and no code files are present, so nothing is written to disk or downloaded. This is the lowest-risk install profile (instruction-only).
Credentials
The skill declares no required environment variables, credentials, or config paths. There is no disproportionate request for secrets or unrelated service tokens.
Persistence & Privilege
`always` is false and the skill is user-invocable; it does not request permanent presence or elevated platform privileges. Autonomous model invocation remains enabled by platform default but is not a special property of this skill.
Assessment
This skill is a read-only guide for LangChain best practices and appears coherent and low-risk: it asks for nothing sensitive and contains only advisory text. Before installing, confirm you trust the skill source (source/homepage are unknown). Because it's instruction-only, there is no code to execute now, but if the skill is later updated to include install steps or code files, re-check those for downloads, required credentials, or instructions that run arbitrary commands. If you plan to have an agent execute LangChain code using these tips, ensure the agent's execution environment (python3, installed packages) and access to data/credentials are controlled and limited to what you expect.

Like a lobster shell, security has layers — review code before you run it.
Current version: v1.0.0
Runtime requirements
🦜 Clawdis
OS: Linux · macOS · Windows
Bins: python3
SKILL.md
LCEL Basics
- `|` pipes output to next — `prompt | llm | parser`
- `RunnablePassthrough()` forwards input unchanged — use in parallel branches
- `RunnableParallel` runs branches concurrently — `{"a": chain1, "b": chain2}`
- `.invoke()` for single, `.batch()` for multiple, `.stream()` for tokens
- Input must match expected keys — `{"question": x}`, not just `x`, if the prompt expects `{question}`
Memory Gotchas
- Memory doesn't auto-persist between sessions — save/load explicitly
- `ConversationBufferMemory` grows unbounded — use `ConversationSummaryMemory` for long chats
- Memory key must match prompt variable — `memory_key="chat_history"` needs `{chat_history}` in prompt
- `return_messages=True` for chat models — `False` returns a string for completion models
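Since memory objects live only in process, persisting a conversation means serializing it yourself. A hedged plain-Python sketch of the save/load pattern (the file path and message shape are hypothetical, not a LangChain API):

```python
# Explicit persistence for chat history: nothing survives a restart unless
# you write it somewhere. Plain-Python stand-in, not ConversationBufferMemory.
import json
from pathlib import Path

HISTORY_PATH = Path("chat_history.json")  # hypothetical location

def save_history(messages):
    HISTORY_PATH.write_text(json.dumps(messages))

def load_history():
    if HISTORY_PATH.exists():
        return json.loads(HISTORY_PATH.read_text())
    return []  # fresh session: start empty

history = load_history()
history.append({"role": "human", "content": "hello"})
save_history(history)
```

The same shape works with a database instead of a file; the point is that the load step runs at session start and the save step runs after every turn.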
RAG Chunking
- Chunk size affects retrieval quality — too small loses context, too large dilutes relevance
- Chunk overlap prevents cutting mid-sentence — 10-20% overlap typical
- `RecursiveCharacterTextSplitter` preserves structure — splits on paragraphs, then sentences
- Embedding dimension must match vector store — mixing models causes silent failures
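The size/overlap trade-off is easiest to see in a bare chunker. A minimal sketch of fixed-size chunking with overlap (plain Python, not the real `RecursiveCharacterTextSplitter`, which additionally prefers paragraph and sentence boundaries):

```python
# Fixed-size chunking with overlap: each chunk repeats the tail of the
# previous one so sentences cut at a boundary still appear whole somewhere.
def chunk_text(text, chunk_size=100, overlap=15):
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

chunks = chunk_text("a" * 250, chunk_size=100, overlap=20)
# each chunk shares its last 20 characters with the next chunk's first 20
```

With `chunk_size=100` the 20-character overlap is the 20% upper end of the typical range; larger overlap costs storage and retrieval noise, smaller overlap risks stranding fragments.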
Output Parsers
- `PydanticOutputParser` needs format instructions in prompt — call `.get_format_instructions()`
- Parser failures aren't always loud — malformed JSON may partially parse
- `OutputFixingParser` retries with LLM — wraps another parser, fixes errors
- `with_structured_output()` on chat models — cleaner than manual parsing for supported models
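The `OutputFixingParser` pattern is "strict parse, then one repaired retry". A stdlib sketch of that control flow, with a hypothetical repair function standing in for the LLM fix-up call:

```python
# Sketch of the OutputFixingParser idea: try a strict parse; on failure,
# hand the raw text to a "fixer" (in real LangChain, an LLM call; here a
# hypothetical rule-based repair) and retry exactly once.
import json

def naive_fixer(raw):
    # stand-in for an LLM repair step; here just drop trailing commas
    return raw.strip().replace(",}", "}").replace(",]", "]")

def parse_with_fix(raw, fixer=naive_fixer):
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(fixer(raw))  # one retry; let it raise if still bad

result = parse_with_fix('{"answer": "42",}')  # malformed trailing comma gets fixed
```

Keeping the retry to a single, visible step is the point: silent partial parses are what make malformed output dangerous.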
Retrieval
- `similarity_search` returns documents — `.page_content` for text
- `k` parameter controls results count — more isn't always better, noise increases
- Metadata filtering before similarity — `filter={"source": "docs"}` in most vector stores
- `max_marginal_relevance_search` for diversity — avoids redundant similar chunks
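Filtering on metadata before ranking by similarity keeps irrelevant sources out of the candidate pool entirely. A plain-Python sketch of that order of operations (a real vector store does this against an index; the document shape and names here are illustrative):

```python
# Metadata filter first, cosine ranking second: the filter shrinks the
# candidate set before any similarity math runs.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def similarity_search(query_vec, docs, k=2, filter=None):
    candidates = [
        d for d in docs
        if filter is None
        or all(d["metadata"].get(key) == val for key, val in filter.items())
    ]
    candidates.sort(key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return candidates[:k]

docs = [
    {"page_content": "guide", "metadata": {"source": "docs"}, "vector": [1.0, 0.0]},
    {"page_content": "blog",  "metadata": {"source": "blog"}, "vector": [1.0, 0.1]},
    {"page_content": "faq",   "metadata": {"source": "docs"}, "vector": [0.0, 1.0]},
]
hits = similarity_search([1.0, 0.0], docs, k=1, filter={"source": "docs"})
# hits[0]["page_content"] == "guide"
```

Note the `blog` document is nearly as similar as `guide`; without the filter it would rank second and add noise, which is the `k`-vs-noise trade-off in miniature.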
Agents
- Agents decide tool order dynamically — chains are fixed sequence
- Tool descriptions matter — agent uses them to decide when to call
- `handle_parsing_errors=True` — prevents crash on malformed agent output
- Max iterations prevents infinite loops — `max_iterations=10` default may be too low
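Both guards above are properties of the agent loop itself. A hedged sketch of such a loop, with a hypothetical scripted "model" standing in for an LLM (this mirrors the `max_iterations` cap and the `handle_parsing_errors` feedback behavior, not LangChain's actual executor code):

```python
# Agent loop with an iteration cap and tolerant output handling.
# The "model" is any callable returning ("tool", name, args) or ("final", answer).
def run_agent(model, tools, task, max_iterations=10):
    observation = task
    for _ in range(max_iterations):
        action = model(observation)
        if not isinstance(action, tuple):
            # handle_parsing_errors analogue: feed the error back, don't crash
            observation = f"Could not parse action: {action!r}. Use a valid action."
            continue
        if action[0] == "final":
            return action[1]
        _, name, args = action
        observation = tools[name](args)  # tool result becomes next observation
    raise RuntimeError("agent stopped: max_iterations reached")

# scripted "model" for illustration: one tool call, then a final answer
steps = iter([("tool", "add", (2, 3)), ("final", "the sum is 5")])
answer = run_agent(lambda obs: next(steps), {"add": sum}, "add 2 and 3")
```

Raising on the iteration cap, rather than returning a partial answer, makes runaway loops visible instead of silently expensive.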
Common Mistakes
- Prompt template variables are case-sensitive — `{Question}` ≠ `{question}`
- Chat models need message format — `ChatPromptTemplate`, not `PromptTemplate`
- Callbacks not propagating — pass `config={"callbacks": [...]}` through the chain
- Rate limits sometimes crash silently — wrap in retry logic
- Token count exceeds context — use `trim_messages` or summarization for long histories
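The last point amounts to keeping only the most recent messages that fit a token budget. A plain-Python sketch of that trimming logic (LangChain's `trim_messages` uses real token counters; the 4-characters-per-token heuristic here is a rough assumption for illustration):

```python
# Keep the newest messages whose combined approximate token count fits the
# budget, then restore chronological order. Stand-in for trim_messages.
def approx_tokens(text):
    return max(1, len(text) // 4)  # rough heuristic: ~4 chars per token

def trim_history(messages, max_tokens=1000):
    kept, total = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = approx_tokens(msg["content"])
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # oldest surviving message first
```

Walking newest-first matters: trimming from the front of the list would drop the recent turns the model most needs.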
Files
1 total