Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sales Oratory Master

v1.0.0

Handles B2B presales and renewal negotiations with value-shift tactics and compliance checks. Use when the user mentions customer pushback on price, budget c...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign · high confidence
Purpose & Capability
The name and description match the code and SKILL.md: this is a prompt/template-driven sales negotiation helper that formats inputs and calls a provided LLM client. It does not request unrelated credentials, binaries, or config paths.
Instruction Scope
The instructions and code stay within the stated purpose (diagnose customer quote, assemble prompt, call LLM). One important runtime behavior: the skill sends the assembled prompt (including customer quotes) to the provided llm_client.chat call, so generated content and input text will transit to whatever LLM endpoint the agent supplies. This is expected for an LLM-wrapper skill but worth noting for data-sharing/privacy concerns.
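The data flow described above can be sketched as follows. This is an illustrative stand-in, not the skill's actual code: the class, function names, and message format are assumptions about how a typical LLM-wrapper skill behaves.

```python
# Hypothetical sketch of the runtime behavior noted above: the skill
# formats the customer's quote into a prompt and forwards it to the
# agent-supplied llm_client. Names and message shapes are illustrative.

class StubLLMClient:
    """Stand-in for whatever client object the agent injects."""
    def chat(self, messages):
        # A real client would transmit `messages` to its model endpoint here.
        return {"role": "assistant", "content": "[model reply]"}

def build_messages(customer_quote: str) -> list:
    # The customer's verbatim text ends up inside the prompt payload.
    return [
        {"role": "system",
         "content": "You are a B2B sales negotiation assistant."},
        {"role": "user",
         "content": f"Customer said: {customer_quote}\nDraft a compliant response."},
    ]

def run_skill(llm_client, customer_quote: str) -> str:
    messages = build_messages(customer_quote)
    reply = llm_client.chat(messages)  # quote transits to the endpoint here
    return reply["content"]

print(run_skill(StubLLMClient(), "Your renewal price is 20% too high."))
```

The point for reviewers: whatever text the user supplies is embedded in the outbound payload, so the privacy posture of the skill is entirely that of the injected client's endpoint.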
Install Mechanism
No install spec (instruction-only + small Python package) and no external downloads. Files are included in the bundle; nothing extracts or installs arbitrary remote code.
Credentials
The skill declares no required environment variables, credentials, or config paths. The runtime only needs an llm_client object passed in (as intended) — there are no disproportionate secret requests.
Persistence & Privilege
The always flag is false; the skill does not modify other skills or system configuration. It runs as a Python script and returns LLM output with no persistent side effects.
Assessment
This skill appears coherent and limited to producing compliant sales talktracks via a supplied LLM client. Before installing:
1) Review where your agent's llm_client sends data: customer quotes and prompts will be transmitted to that model endpoint, so confirm data handling and privacy with your provider.
2) Because the package owner and homepage are unknown, consider running the included tests in an isolated environment to verify behavior.
3) The PROMISE_GUARD enforces helpful redlines, but also manually review outputs for legal and industry compliance in your jurisdiction.
4) Avoid sending sensitive PII or contract terms to third-party LLM endpoints unless you have approved that data sharing.
If those checks are acceptable, the skill is proportionate to its stated purpose.
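One way to act on the PII recommendation above is to scrub obvious patterns from text before it reaches a third-party endpoint. A minimal regex-based sketch follows; the two patterns are examples only, not a complete PII detector, and nothing here comes from the skill itself.

```python
import re

# Illustrative pre-send scrubber: replace obvious email and phone
# patterns before text is forwarded to an LLM endpoint.
# These patterns are assumptions for demonstration, not exhaustive.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} redacted]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-2299."))
```

A pass like this reduces accidental leakage but is no substitute for an approved data-sharing review; names, contract terms, and free-form identifiers will slip through simple patterns.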

Like a lobster shell, security has layers — review code before you run it.

latest · vk972fbmcbwr1m7dr8g363gdams823x67

