Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Android Static Analyzer
v1.1.1 — Analyzes Android project source code and uses an LLM to generate, across multiple dimensions, the prior-knowledge documents that AI-driven automated testing needs, then packages and reports them to a test platform. Core value: the AI test agent knows, before it runs, "what to test, how to assert, and which pitfalls exist." Trigger phrases: "analyze my Android project", "generate a test profile", "understand this app's business logic", "extract test prior knowledge", "help me analyze...
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious · medium confidence

Purpose & Capability
The name and description (static analysis that produces test priors) align with the included scripts: analyze.py reads the manifest and source, extract_nav.py parses navigation graphs, and SKILL.md describes generating multiple JSON documents. However, the description and SKILL.md both say the static profile will be "打包上报测试平台" (packaged and reported to a test platform), while no upload code, API endpoints, or required credentials are declared; the scripts only print metadata and prompts and optionally accept a platform URL argument, but perform no network upload. This is an unexplained mismatch: the upload claim has no corresponding credentials or upload steps.
Instruction Scope
SKILL.md instructs the agent to read the user's project (find and read Activity files, manifests, and Gradle files) and to build full LLM prompts containing project source. analyze.py explicitly builds a prompt containing source code and tells the agent to call an LLM with it. This is coherent for analysis, but it means the user's source code will be sent to whatever LLM the agent uses: a privacy/exfiltration risk if the user doesn't expect that. SKILL.md also says the skill accepts an APK, a GitHub link, or pasted code, but the provided scripts operate on a local filesystem path; the agent would need to clone or fetch external repos, or unpack APKs, which expands its actions beyond the scripts. Finally, SKILL.md claims to produce 9 documents and to "never duplicate AITestSDK structure", while the scripts do extract structural info (manifest, nav graphs): a minor scope/overlap ambiguity.
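The exfiltration path described above can be illustrated with a minimal sketch of a prompt builder in the style the report attributes to analyze.py. The function name, file glob, and file limit here are assumptions for illustration, not the skill's actual code:

```python
from pathlib import Path


def build_analysis_prompt(project_root: str, max_files: int = 5) -> str:
    """Collect Activity sources and embed them verbatim in an LLM prompt.

    Hypothetical sketch mirroring the pattern the scan describes; the
    name, glob, and limit are illustrative, not the skill's real API.
    """
    sources = []
    for path in sorted(Path(project_root).rglob("*Activity*.kt"))[:max_files]:
        sources.append(f"// file: {path}\n{path.read_text(encoding='utf-8')}")
    # Everything below -- including any hard-coded secrets in these files --
    # goes to whichever LLM endpoint the agent is configured to call.
    return (
        "Analyze the following Android sources and produce test priors:\n\n"
        + "\n\n".join(sources)
    )
```

The point is that nothing filters the file contents: whatever is in the source tree, secrets included, lands in the prompt.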
Install Mechanism
No install spec or external binaries are required. The skill consists of instructions plus two small Python scripts; the bundle performs no network installs or archive downloads. Installation risk is low.
Credentials
The skill declares no required environment variables or credentials, but the text references reporting to a test platform and includes an optional platform_url argument; there is no mechanism for supplying or using API keys or tokens. Moreover, because analyze.py embeds portions of source code in an LLM prompt, the agent will send repository contents to its configured LLM endpoint. Even without declared secrets, that is sensitive: source files, hard-coded test data, and any secrets in code could be transmitted to external services. The lack of an explicit, justified credential requirement for uploading/reporting is an inconsistency.
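For the "reporting" claim to be auditable, the bundle would need to ship something like the following: visible upload code that reads an explicitly declared token. The endpoint behavior and the TEST_PLATFORM_TOKEN variable name are hypothetical; nothing like this exists in the skill:

```python
import json
import os
import urllib.request


def upload_profile(profile: dict, platform_url: str) -> bytes:
    """POST a test-prior profile to the platform with a declared token.

    Hypothetical sketch of the missing upload step; the env var name and
    request shape are assumptions, not part of the reviewed skill.
    """
    token = os.environ["TEST_PLATFORM_TOKEN"]  # fail loudly if undeclared
    req = urllib.request.Request(
        platform_url,
        data=json.dumps(profile).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

With code like this in the bundle, a reviewer could see exactly which endpoint receives the data and which credential gates it; its absence is what makes the reporting claim unverifiable.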
Persistence & Privilege
The skill does not request always:true and does not modify other skills. It may write nav-graphs.json into the project's res parent directory (extract_nav.py saves an output file inside the project), which is a reasonable local action for a project-focused tool. No elevated platform privileges or persistent autonomous inclusion are requested.
What to consider before installing
This skill reads and packages source code into LLM prompts, so running it will send your project source (including any hard-coded test credentials) to the agent's configured LLM. It also claims to upload results to a test platform but provides no upload code, API endpoint, or required credentials; verify how uploads are intended to occur before using it. Recommendations:
1) Only run it on non-sensitive or sanitized projects unless you trust the LLM endpoint and understand its data-retention and privacy terms.
2) Confirm where the agent will call LLMs (cloud provider or on-prem) and whether you want that.
3) If you expect automatic reporting to a platform, require the skill to declare and use an explicit platform_url and API-token env vars, and to show its upload code; otherwise treat the "reporting" claim as unimplemented.
4) If you need to avoid sending source externally, run the included scripts locally and review their stdout (they print the LLM prompt) rather than letting the agent call an external model automatically.
5) Test on a small sample project first and inspect any files written (nav-graphs.json) to ensure no unexpected behavior.
Like a lobster shell, security has layers: review code before you run it.
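Recommendation 4 can be partly automated: capture the scripts' stdout locally, then scan the would-be prompt for credential-like lines before anything leaves the machine. A rough pre-flight check; the patterns are illustrative, not exhaustive:

```python
import re

# Credential-ish patterns to flag in a captured prompt (illustrative only).
SECRET_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (r"password\s*[:=]", r"api[_-]?key", r"client_secret", r"bearer\s+\S+")
]


def find_secret_hits(prompt_text: str) -> list:
    """Return lines from a captured LLM prompt that look like credentials.

    Run the skill's scripts locally, save their stdout to a file, and pass
    its contents here before deciding whether the prompt is safe to send.
    """
    hits = []
    for line in prompt_text.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

An empty result is not a guarantee, but any hit is a strong reason to sanitize the project before letting an agent forward its source to an external model.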
Latest version: vk973qeskhx3s4mt0rk6cxf8pks83ms8h
