Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

contract-review-cn

v1.0.1

Professional Chinese contract review: automatically identifies legal risks, suggests revisions, and generates a three-column comparison table (original text, revision, rationale). Supports PDF, Word, and TXT formats.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Pending
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code and docs match the stated purpose (PDF/Word/TXT parsing, extracting clauses, calling an LLM to produce risks/revisions). However, there are configuration mismatches: SKILL.md and config.example.json ask you to put an API key in config.json and set api_provider='openai', while the default model in the config is 'zai/glm-4.7-flash' (not an OpenAI model). The analyzer constructs a LangChain ChatOpenAI client but never reads the api_key from config.json; it appears to rely on the environment (e.g., OPENAI_API_KEY). These inconsistencies will surprise a user following the docs.
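These mismatches can be checked mechanically before running the skill. Below is a minimal sketch (the check_config helper is hypothetical, not part of the skill) that loads config.json and flags the two inconsistencies described above:

```python
import json

# Hypothetical helper: flag the config inconsistencies described above.
def check_config(path="config.json"):
    with open(path) as f:
        cfg = json.load(f)
    problems = []
    provider = cfg.get("api_provider", "")
    model = cfg.get("model", "")
    # api_provider='openai' paired with a namespaced model id like
    # 'zai/glm-4.7-flash' suggests the wrong provider is configured.
    if provider == "openai" and "/" in model:
        problems.append(f"provider 'openai' but model '{model}' looks third-party")
    # SKILL.md asks for api_key in config.json, but the analyzer never reads it.
    if "api_key" in cfg:
        problems.append("api_key set in config.json is ignored by the analyzer")
    return problems
```

Run against the shipped config.example.json (with an api_key added as SKILL.md instructs), this should report both issues.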
Instruction Scope
Runtime instructions are narrow and limited to installing Python deps, creating config.json, and running local parsers/analyzer. They do instruct you to place an API key into config.json, but the code does not propagate that key to the LLM client; instead the LLM client will likely rely on environment variables. The skill will send full contract text to an external LLM provider when analyzing — a privacy/collection risk that is consistent with its purpose but should be explicitly highlighted to users.
Install Mechanism
No install spec is provided (instruction-only install). The README/QUICKSTART ask to run pip install -r requirements.txt. That is a standard, low-risk approach; dependencies come from PyPI and are expected for the functionality (PyPDF2, python-docx, langchain, etc.).
Credentials
The skill requires an API key to call an external LLM, which is proportional to its purpose. However, the metadata declares no required environment variables, while SKILL.md asks for an API key in config.json — and the code doesn't use config['api_key'], instead constructing a ChatOpenAI instance that typically expects credentials from environment variables. The model/provider fields are also inconsistent (api_provider='openai' vs. model='zai/glm-4.7-flash'), raising the risk of misconfiguration or accidental use of the wrong provider. Users may unknowingly expose contract text to whichever provider the runtime actually uses.
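If you settle on the OpenAI provider, a consistent configuration would pair it with an actual OpenAI model id. The fragment below is illustrative only (field names follow config.example.json as described above; the model value is an assumption, not the skill's default):

```json
{
  "api_provider": "openai",
  "model": "gpt-4o-mini",
  "api_key": "sk-..."
}
```

Note that until the code is patched, the api_key field is ignored, so you must also export the key as OPENAI_API_KEY in the environment.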
Persistence & Privilege
The skill does not request persistent platform privileges (always=false). It doesn't modify other skills or system-wide settings. It only reads files you provide and writes report files in the current working directory.
What to consider before installing
This skill's purpose and code largely align (it parses files and sends text to an LLM to produce risks and suggested edits), but there are configuration inconsistencies you should fix before use:
- API key handling: SKILL.md asks you to put the API key into config.json, but the code never reads config['api_key'] when constructing the LLM client. The LLM client will likely fall back to environment variables (e.g., OPENAI_API_KEY). Either set the expected environment variable or modify the code to read the key securely from your configuration or secret store.
- Model vs. provider mismatch: config.example.json sets api_provider='openai', but the default model value is 'zai/glm-4.7-flash' (not an OpenAI model). Confirm which provider and model you intend to use and update both the config and the code accordingly.
- Privacy: the analyzer sends the full extracted contract text to an external LLM. Do not run confidential or highly sensitive contracts through this skill until you have confirmed the provider, where data is sent, and that provider's data-retention policy.
- Test safely: run the demo on non-sensitive sample contracts locally first. Inspect and, if desired, patch the _call_ai_model method to explicitly pass credentials from config (or to target a local/private model endpoint) so behavior matches the docs.
- Dependency hygiene: pip-installing requirements is normal, but audit dependency versions (langchain/langchain-openai) and run in an isolated environment (virtualenv) to reduce supply-chain risk.

Like a lobster shell, security has layers — review code before you run it.

latest · vk976zb65rdcetexbn4vv3dvctn83zkq3

