Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

langgraph-for-agents

v1.0.2

Use LangGraph/LangChain to build agents

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description and the reference code files all focus on LangGraph/LangChain agent patterns, which is coherent. However, the bundle contains many example scripts that import numerous third-party libraries (langchain_openai, langgraph, langchain_community, pydantic, bs4, etc.) even though the skill declares no required binaries, packages, or env vars in its metadata; this is a mismatch between claimed minimal requirements and the examples' real dependency surface.
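One way to surface that gap (a sketch for reviewers, not part of the skill itself) is to statically collect the top-level imports from each reference script and compare them against whatever the metadata declares:

```python
import ast

def imported_packages(source: str) -> set[str]:
    """Statically collect top-level package names imported by Python source."""
    pkgs: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # "import bs4.element" contributes just "bs4"
            pkgs.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            pkgs.add(node.module.split(".")[0])
    return pkgs
```

Running this over the bundle's reference files would reveal the real dependency surface (langchain_openai, langgraph, bs4, and so on); any package it reports that the metadata omits is exactly the mismatch described above.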
Instruction Scope
SKILL.md instructs the agent to read the ./references examples (README + sample .py files) and to use placeholder API_KEY values rather than hard-coding secrets. It also suggests using search/browse/fetch tools if available and points to a specific external URL (https://context7.com/...). The instructions do not explicitly tell the agent to execute the example code, but they do encourage reading and potentially fetching external content; that external fetch target is not the official project docs and may be a scraped/third-party mirror.
Install Mechanism
There is no install spec (instruction-only), which minimizes install-time risk. However, the included reference scripts require many runtime packages; the absence of an install specification or dependency list in the metadata is surprising and could mislead users about what must be installed to actually run the examples.
Credentials
The skill metadata declares no required environment variables, yet every reference file calls load_dotenv() and uses os.getenv('your-api-key') repeatedly. SKILL.md mentions a generic API_KEY placeholder but doesn't declare the exact env var name. This mismatch is misleading: the examples clearly require credentials to call LLMs but the registry entry doesn't request or document any specific credential names. Additionally, some example code uses eval() (calculator tool) which could execute untrusted input if the examples were run.
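As a sketch of the safer pattern (the variable name below is hypothetical; the skill never declares one), an example script could read the key from a named environment variable at runtime and fail loudly instead of passing a placeholder through:

```python
import os

API_KEY_VAR = "OPENAI_API_KEY"  # hypothetical name; not declared by the skill

def load_api_key(var_name: str = API_KEY_VAR) -> str:
    """Fetch the credential from the environment at runtime.

    Raising on a missing or placeholder value beats silently sending
    'your-api-key' to an LLM endpoint.
    """
    key = os.getenv(var_name)
    if not key or key == "your-api-key":
        raise RuntimeError(f"Set {var_name} in your environment (or .env) first")
    return key
```

A registry entry that named the exact variable would let users configure it via secrets management rather than editing example files.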
Persistence & Privilege
The skill does not request always:true, does not declare system config paths or attempt to modify other skills, and is user-invocable with normal agent invocation allowed. No elevated persistence or privilege is requested in the metadata.
What to consider before installing
This skill is a collection of example code and a guidance document for building agents with LangGraph/LangChain. Things to consider before installing or using it:

- Missing credential declaration: The registry metadata lists no required env vars, but the examples repeatedly call load_dotenv() and use os.getenv('your-api-key'). Confirm which exact environment variables you must set (and where) before running anything. Do not paste real API keys into example files; follow the SKILL.md advice to use placeholders and let the user supply keys at runtime.
- Dependencies are not documented: The references import many third-party packages (langchain, langgraph, bs4, pydantic, etc.). Expect to install these to execute the examples; the skill metadata does not provide an install spec. Only read the examples unless you intentionally install and run their dependencies.
- Exercise caution running examples: Several example scripts demonstrate risky patterns if executed as-is. For example, references/langchain_chatmodel_custom_tool.py defines a calculator tool using eval(), which can execute arbitrary Python code. Avoid running provided tools or code directly on untrusted input.
- External fetch URL: SKILL.md suggests fetching content from a context7.com URL. That is not an official project domain and may be a mirrored or scraped page. Verify any fetched content before using it.
- If you want to use this skill safely: treat it as documentation only (read-only). If you intend to run examples, inspect them carefully, create a controlled environment (isolated VM/container), install only the dependencies you trust, and provide credentials via secure secrets management rather than embedding them in files.
If you want stronger assurance, ask the author to (1) list exact environment variables required and their purpose, (2) provide a requirements.txt or install spec, (3) remove or flag code that uses eval or other risky constructs, and (4) replace third-party fetch links with official documentation URLs.
references/langchain_chatmodel_custom_tool.py:24
Dynamic code execution detected.
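For context, a restricted arithmetic evaluator built on the standard-library ast module (a sketch, not the skill's actual code) shows how an eval()-based calculator tool can be replaced without losing functionality:

```python
import ast
import operator

# eval("__import__('os').system(...)") would run arbitrary code.
# This evaluator accepts only arithmetic AST nodes instead.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_calc(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"Disallowed expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))
```

Anything outside plain arithmetic (function calls, attribute access, imports) raises ValueError rather than executing, which is the behavior a calculator tool actually needs.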
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

latest · vk9765cbrtj5xqtayqjhjkhnd9d83dg5y

