Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
钉钉 AI 表格跨表格洞察分析
v1.6.10钉钉 AI 表格跨表格洞察分析。支持按关键词筛选特定业务/项目的 AI 表格,进行综合分析,识别风险点、数据异常、业务洞察。Use when user wants to analyze multiple AI tables by keyword/topic for insights, risks, and ano...
⭐ 0· 58·0 current·0 all-time
MIT-0
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidencePurpose & Capability
Purpose (cross-table analysis of DingTalk AI tables) aligns with requesting a DingTalk MCP token and depending on dingtalk-ai-table. However the manifest only declares python3 as a required binary while the runtime instructions and scripts also call mcporter and the openclaw CLI (openclaw agent / sessions_send). Those additional CLI tools are necessary for the described functionality but are not listed in required binaries — a documentation/manifest mismatch that should be clarified.
Instruction Scope
SKILL.md and the included scripts perform the expected tasks (list tables, read sheets, sample records, build prompts, call LLMs), but the runtime code uses shell-based mcporter invocations (run_dingtalk_command builds and executes a shell command with interpolated arguments) and subprocess calls. Some arguments originate from user-provided keywords and other inputs — those are embedded into shell strings without clear sanitization, which creates a shell-injection risk. The skill also constructs large prompts and forcibly manipulates LLM outputs (inserting headers/sections), and a prompt-injection pattern ('system-prompt-override') was detected in SKILL.md — this indicates the repo contains content that could attempt to influence LLM/system prompts. The scripts read MCP config files (which can contain tokens) and may access DINGTALK_MCP_CONFIG or default config paths not listed in requires.env.
Install Mechanism
No install spec is provided (instruction-only install), which reduces supply-chain risk from remote downloads. The skill includes two substantial Python scripts in-repo — those will run locally. There are no external archive downloads or obscure URLs in the install docs. Still, the code relies on external CLIs (mcporter, openclaw) that users must install manually; the manifest omission of these tools is noteworthy.
Credentials
The declared primary environment variable DINGTALK_MCP_TOKEN is appropriate for reading DingTalk AI tables. However the scripts also reference and may read DINGTALK_MCP_CONFIG (a config file path) and default MCP config files that can contain server URLs/keys. Those additional env/config accesses are not listed in requires.env. The skill also retains UIDs in some internal summaries (used for stats) per docs; while not a secret, that is additional personal data. Overall credential access is not excessive, but the undeclared config path and potential reading of local config files containing tokens warrant caution.
Persistence & Privilege
The skill is not marked always:true and does not request permanent platform privileges. It appears to run locally and use temporary files and local caches. It does call OpenClaw agent to invoke a large model (normal for its purpose) but it does not claim to modify other skills or system settings.
Scan Findings in Context
[system-prompt-override] unexpected: SKILL.md and the LLM-integration docs include strong system/user prompt templates and explicit instructions that force header/version insertion and sectioning of LLM outputs. The static scanner flagged a system-prompt-override pattern — for a skill that calls an LLM this is suspicious because the repo attempts to control system-level prompt content and to post-process (and enforce) LLM outputs. This is not necessarily malicious, but it increases the attack surface for prompt-injection or for covertly influencing agent/system prompts.
What to consider before installing
What to check before installing or running this skill:
1) Confirm required CLIs and declare them: the scripts call mcporter and the OpenClaw CLI (openclaw agent), but the manifest only lists python3. Install mcporter/openclaw from trusted sources and ensure the skill's docs are updated to list them.
2) Inspect shell command usage: review run_dingtalk_command in scripts/analyze_tables.py — it builds a shell command string with user-supplied arguments and runs it via shell redirection. If you plan to use untrusted keywords or inputs, this is a shell-injection risk. Prefer running in a sandbox or patch the code to use subprocess with argument lists (no shell) and proper escaping.
3) Protect credentials and config: the skill reads MCP config files (DINGTALK_MCP_CONFIG or workspace/default paths). Ensure those config files do not contain unrelated secrets, and do not run the skill with high-privilege tokens. Use a token with minimal read-only scope and consider rotating it after testing.
4) Prompt-injection / LLM behavior: the repo contains templates and code that strongly control system prompts and force inserted headers/sections. If you share this workspace with others or run untrusted inputs, LLM outputs may be manipulated. Review analyze_with_llm.py logic and the prompt templates to ensure they don't accidentally leak sensitive data or override global agent/system prompts.
5) Run in an isolated environment first: test the skill in a disposable workspace or VM with a minimal read-only token and sample data. Verify which files it reads/writes (~/.cache/dingtalk-ai-table-insights/ and temp files) and confirm no unexpected network endpoints are contacted beyond your configured MCP server and OpenClaw.
6) Request fixes from the maintainer (or patch locally): (a) declare mcporter/openclaw in required binaries; (b) avoid shell=True and properly sanitize/escape user inputs; (c) declare DINGTALK_MCP_CONFIG in requires.env if used; (d) remove or clearly document forced system-prompt overrides.
If you don't have the skills to audit the code, avoid using this skill with production secrets or run it only with least-privilege test tokens and isolated data.references/llm_integration.md:305
Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.Like a lobster shell, security has layers — review code before you run it.
latestvk975m4nwyabgfay9vrxnqw3f4x83f5e9
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
Runtime requirements
📊 Clawdis
Binspython3
EnvDINGTALK_MCP_TOKEN
Primary envDINGTALK_MCP_TOKEN
Dependencies
dingtalk-ai-tableother
