Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Openclaw Talk Analyzer
v1.0.0 · An AI-powered conversation analysis tool that automatically extracts meeting takeaways, sales objections, customer-satisfaction signals, action items, and strategy recommendations, supporting multiple use cases.
by Justin Liu (@zhenstaff)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence

Purpose & Capability
The described purpose — analyzing conversations and producing summaries/action items — is coherent with the skill's instructions. However, the SKILL.md and README state explicit technical requirements (Node.js 18+, npm/pnpm, and at least one AI API key such as ANTHROPIC_API_KEY or OPENAI_API_KEY) while the registry metadata declares no required env vars or required binaries. That mismatch means the manifest does not accurately reflect what the skill actually needs to operate.
Instruction Scope
The instructions expect the presence of a CLI/programming package (openclaw-talk / openclaw-talk-analyzer) and tell the agent to read input transcript files and call external AI APIs (Claude/OpenAI) or a local LLM. Those behaviors are consistent with the stated purpose, but the skill also includes examples of cloning a GitHub repo and running npm install, even though there is no install spec and no bundled code. The instructions therefore assume installing or running external code/binaries that the skill package itself does not provide, which is an inconsistency and a deployment risk if followed blindly.
Install Mechanism
This is an instruction-only skill with no install spec or code files (lowest immediate risk from the registry). The README and SKILL.md refer to installing from GitHub or npm, but that would require fetching external code — the skill does not include or declare those steps. Absence of an install spec is not dangerous by itself, but combined with the instructions it means a user/agent would need to fetch and run third-party code; verify provenance before doing so.
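If you do decide to fetch the external package, one low-effort provenance check is to look for npm lifecycle scripts before installing, since preinstall/postinstall hooks execute arbitrary code at install time. A minimal sketch (the function name is ours and the grep is deliberately crude, not part of any npm tooling):

```shell
# Print any npm lifecycle install scripts declared in a package.json.
# preinstall/install/postinstall hooks run arbitrary code at `npm install` time.
list_install_scripts() {
  # $1: path to a package.json
  grep -E '"(preinstall|install|postinstall)"' "$1" \
    || echo "no install lifecycle scripts found"
}
```

Running `npm install --ignore-scripts` also skips those hooks entirely, at the cost of breaking packages that legitimately need a build step.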
Credentials
The skill's text explicitly requires at least one AI service API key (Anthropic/OpenAI) and suggests storing keys in a .env file (ANTHROPIC_API_KEY / OPENAI_API_KEY). Yet the registry metadata lists no required environment variables or primary credential. This is a substantive inconsistency: the runtime behavior requires secrets (API keys) but the manifest does not declare them, which could cause surprise and potential accidental exfiltration if the agent supplies keys without explicit declaration and user consent.
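Because the keys are not declared in the manifest, an agent would only discover the requirement at runtime. A defensive wrapper can fail fast instead. A minimal sketch, where the function name is ours but the variable names are the ones the skill's own README documents:

```shell
# Refuse to proceed unless one of the documented API keys is present.
require_api_key() {
  if [ -n "${ANTHROPIC_API_KEY:-}" ] || [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "ok"
  else
    echo "error: set ANTHROPIC_API_KEY or OPENAI_API_KEY (or use a local LLM)" >&2
    return 1
  fi
}
```

Failing early with an explicit message avoids the surprise of a tool silently prompting for, or assuming access to, undeclared secrets.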
Persistence & Privilege
The skill does not request persistent presence (always:false), does not declare modifying other skills or system-wide config, and requests no special config paths. It simply describes running analysis on provided transcript files. No elevated persistence or privilege escalation is evident from the manifest.
What to consider before installing
This skill appears to do what it says (conversation analysis), but the package metadata omits important operational details found in the SKILL.md/README. Before installing or using it:
1) Verify the source repository (the README links to a GitHub repo) and inspect the code there; do not blindly run npm install from an unknown repo.
2) Expect to provide an AI API key (Anthropic/OpenAI) or run a local LLM; treat those keys as sensitive and do not expose them to untrusted code or services.
3) The skill assumes a CLI/binary (openclaw-talk) that is not bundled; confirm how that binary is delivered and who maintains it.
4) If your transcripts are sensitive, prefer the documented local LLM option and audit any network calls the tool makes.
5) If you rely on an organization security policy, get approval before installing external packages or supplying API keys.
Like a lobster shell, security has layers: review code before you run it.
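If you do supply an API key, keep it in an untracked .env file (the storage location the skill's own README suggests) rather than in shell history or the skill's files. A placeholder sketch; the values below are dummies, not real keys:

```shell
# .env -- keep out of version control (add ".env" to .gitignore)
# Values are placeholders, not real keys.
ANTHROPIC_API_KEY=sk-ant-placeholder
# or
OPENAI_API_KEY=sk-placeholder
```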
latest: vk97ejam47wszd3cawgvcaj32ss82nks1
