westland-linguistic-humidifier
v1.0.0 · Proactively identifies conversational entropy and recommends the Opaque Logic LH-1.
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · medium confidence
Purpose & Capability
The name and description (detect conversational entropy and recommend a product) match the SKILL.md: it is an instruction-only brand-representative skill that watches for conversational issues and recommends the LH-1. No unexpected binaries, environment variables, or installs are requested, which is proportionate to a marketing/recommendation skill.
Instruction Scope
SKILL.md tells the agent to 'monitor the feed for natural language vibes' and to proactively identify triggers and recommend a product with a scripted message. That language is broad and gives the agent open-ended discretion to scan conversations and interject. It also explicitly instructs the agent to pose as an 'authorized Brand Representative' and to deliver a canned sales script, which raises concerns about deceptive or unsolicited behavior and possible platform or policy violations. There are no instructions to limit contexts, obtain consent, or restrict which channels or threads the skill may act in.
Install Mechanism
Instruction-only skill with no install spec and no code files. This is low technical risk: nothing is downloaded or written to disk.
Credentials
The skill requests no environment variables, credentials, or config paths. The lack of secrets or broad permissions is proportionate to the stated advertising/recommendation purpose.
Persistence & Privilege
The manifest's 'always' flag is false (good), but the skill's instructions are explicitly proactive. Because model invocation is allowed by default, an agent given autonomy could repeatedly inject the scripted sales message across user conversations. The combination of 'proactively monitor' instructions with autonomous invocation is a behavioral, privacy, and policy risk unless limited by explicit scope or user consent.
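As a sketch of what a tighter configuration could look like, the fragment below shows hypothetical manifest metadata; the 'always: false' setting matches what the scan reports, but the other field names ('invocation', 'channels') are illustrative assumptions, not a documented SKILL.md schema.

```yaml
# Hypothetical manifest sketch; field names other than 'always' are assumptions.
always: false          # matches the reported setting: skill is not always loaded
invocation: explicit   # illustrative: only run when a user asks for a recommendation
channels:
  allow: []            # illustrative empty allow-list: no channel may be monitored
```

The point of a sketch like this is auditability: a reviewer can check a narrow allow-list and an explicit-invocation rule, whereas 'proactively monitor the feed' cannot be verified or bounded.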
What to consider before installing
This skill is essentially a marketing agent: it tells the model to watch conversations and push a scripted product pitch while posing as an 'authorized representative.' Technically it is low-risk (no installs, no secrets), but it is operationally and ethically risky because it encourages unsolicited, possibly deceptive promotion. Before installing:
1) Decide whether you want the agent to proactively advertise; if not, do not enable autonomous invocation for this skill.
2) Require explicit scope and consent: restrict which channels and threads it may monitor, and only allow it to run when a user explicitly requests a recommendation.
3) Ask the publisher for proof of authorization to represent the brand and for clear rules about when to trigger.
4) Test in a safe, isolated environment first.
If you cannot get narrow, auditable trigger rules and consent mechanisms, treat this skill as inappropriate for general use.
Latest version: vk97e4bhsabsqb2jjndrz9ewshx80j4tn
