Langchain Skill Vmisep 2026
Review
Audited by ClawScan on May 10, 2026.
Overview
This is a mostly coherent LangChain assistant, but it uses undeclared LLM provider credentials, including a hardcoded DeepSeek key-like value, so users should review it before installing.
Only install if you are comfortable with your prompts being sent to Gemini and/or DeepSeek. Before use, replace any hardcoded key with your own declared environment variable, pin the Python dependencies, and avoid entering secrets or private data unless the provider configuration is clear.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Your requests may run under an unknown provider account or fail unpredictably, and the embedded key-like value could be exposed or misused.
The skill is designed to call DeepSeek with an embedded API key-like value rather than a declared, user-supplied credential. This creates unclear account, billing, scope, and revocation boundaries.
openai_api_key="sk-e7ec5...39506694",  # the boss's DeepSeek key
Remove hardcoded provider keys and require a clearly declared environment variable or credential setting that the user controls.
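A minimal sketch of the recommended pattern, assuming the credential is exposed through an environment variable named DEEPSEEK_API_KEY (the exact variable name is an assumption and should be declared by the skill):

```python
import os

def get_deepseek_key() -> str:
    """Fetch the DeepSeek API key from a user-controlled environment
    variable instead of a hardcoded literal in the source.

    The variable name DEEPSEEK_API_KEY is illustrative; the skill should
    document whatever name it actually declares.
    """
    key = os.environ.get("DEEPSEEK_API_KEY")
    if not key:
        raise RuntimeError(
            "DEEPSEEK_API_KEY is not set; export your own key before running."
        )
    return key
```

Failing fast when the variable is unset keeps the account, billing, and revocation boundary with the user rather than with whoever owned the embedded key.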
Prompts, including any sensitive text the user types, can be processed by external LLM providers.
The full user query is sent to Gemini for routing before the final model is chosen, and may also be sent to DeepSeek or Gemini for the final answer.
router_llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash") ... selected_model = router_chain.run(query=query)
Avoid entering secrets or private data unless you are comfortable with the configured providers, and document the provider data flow clearly.
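Since the full query is forwarded to an external router before the final model is chosen, one mitigation a user or maintainer could add is a redaction pass in front of the routing call. This is an illustrative sketch, not part of the skill; the single regex below only catches one obvious secret shape and is no substitute for real secret scanning:

```python
import re

# Illustrative pattern for OpenAI/DeepSeek-style "sk-..." tokens.
# Real secret detection needs far broader coverage than this.
SECRET_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{8,}\b")

def redact_query(query: str) -> str:
    """Strip obvious key-shaped tokens from a query before it is sent
    to an external routing model such as Gemini."""
    return SECRET_PATTERN.sub("[REDACTED]", query)
```

Calling redact_query on the user input before router_chain.run would keep at least key-shaped strings out of the provider data flow.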
Different dependency versions could change behavior or introduce upstream package risk.
The setup guidance uses unpinned Python packages and there is no install spec or lockfile in the provided artifacts. This is common for a LangChain skill but weakens reproducibility and dependency review.
pip install langchain langchain-community langchain-core
Provide a reviewed install spec or requirements file with pinned versions, including all packages imported by the code.
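A pinned install spec could take the shape below. The version numbers are placeholders, not recommendations; substitute the exact versions you have actually reviewed, and include every package the code imports (the artifacts also reference langchain-google-genai for the Gemini router):

```
# requirements.txt -- pin every imported package to a reviewed version.
# Versions shown are illustrative placeholders only.
langchain==0.2.17
langchain-community==0.2.19
langchain-core==0.2.43
langchain-google-genai==1.0.10
```

Installing with `pip install -r requirements.txt` then reproduces the reviewed dependency set instead of whatever is latest at install time.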
Earlier conversation content can influence later answers during the same run, and sensitive details may be included in model context.
Conversation history is intended to be summarized and reintroduced into the model context. The artifacts do not show persistent storage, so this appears scoped to runtime memory.
ConversationSummaryBufferMemory(llm=llm, max_token_limit=2000, memory_key="chat_history", return_messages=True)
Be cautious about sharing secrets in chat and clearly document how long memory is retained.
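The retention behavior the finding describes can be illustrated with a dependency-free sketch of the summary-buffer idea: recent turns stay verbatim, and older turns collapse into a running summary once a token budget is exceeded. Whitespace-based token counting and the condensation step are simplifying assumptions; the real ConversationSummaryBufferMemory uses the model's tokenizer and asks the LLM to write the summary:

```python
class SummaryBufferMemory:
    """Toy model of summary-buffer memory: verbatim recent messages plus
    a condensed summary of anything pushed out of the token budget."""

    def __init__(self, max_tokens: int = 2000):
        self.max_tokens = max_tokens
        self.summary = ""            # stands in for an LLM-written summary
        self.buffer: list[str] = []  # verbatim recent messages

    def _tokens(self) -> int:
        # Assumption: approximate tokens by whitespace-separated words.
        return sum(len(m.split()) for m in self.buffer)

    def add(self, message: str) -> None:
        self.buffer.append(message)
        while self._tokens() > self.max_tokens and self.buffer:
            oldest = self.buffer.pop(0)
            # A real implementation folds `oldest` into the summary via
            # the LLM; here we only mark that it was condensed.
            self.summary += f"[condensed: {oldest[:20]}...] "

    def context(self) -> str:
        return (self.summary + " ".join(self.buffer)).strip()
```

The point for users: even after a message leaves the verbatim buffer, a condensed trace of it can remain in model context for the rest of the run, which is why secrets typed early in a conversation can still influence later calls.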
