Install
openclaw skills install langchain

Avoid common LangChain mistakes — LCEL gotchas, memory persistence, RAG chunking, and output parser traps.
LCEL

- `|` pipes output to the next runnable — `prompt | llm | parser`
- `RunnablePassthrough()` forwards input unchanged — useful in parallel branches
- `RunnableParallel` runs branches concurrently — `{"a": chain1, "b": chain2}`
- `.invoke()` for a single input, `.batch()` for many, `.stream()` for tokens
- Pass `{"question": x}`, not bare `x`, when the prompt expects `{question}`

Memory

- `ConversationBufferMemory` grows unbounded — use `ConversationSummaryMemory` for long chats
- `memory_key="chat_history"` requires a matching `{chat_history}` variable in the prompt
- `return_messages=True` for chat models — `False` returns a single string, for completion models

Text splitting

- `RecursiveCharacterTextSplitter` preserves structure — it splits on paragraphs first, then sentences

Output parsing

- `PydanticOutputParser` needs format instructions in the prompt — call `.get_format_instructions()`
- `OutputFixingParser` retries with an LLM — it wraps another parser and repairs its errors
- `with_structured_output()` on chat models — cleaner than manual parsing for supported models

Retrieval

- `similarity_search` returns documents — read `.page_content` for the text
- The `k` parameter controls how many results come back — more isn't always better; noise increases
- `filter={"source": "docs"}` is supported by most vector stores
- `max_marginal_relevance_search` for diversity — avoids redundant, near-identical chunks

Agents

- `handle_parsing_errors=True` — prevents a crash on malformed agent output
- The `max_iterations=10` default may be too low for multi-step tasks

Common traps

- `{Question}` ≠ `{question}` — template variables are case-sensitive
- Use `ChatPromptTemplate` with chat models, not `PromptTemplate`
- Pass `config={"callbacks": [...]}` through the chain at invoke time
- Use `trim_messages` or summarization to keep long histories bounded