Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Oasis Audio

v1.0.8

Oasis Audio is an AI audio narration generator that reads your local conversation history (from ~/.qclaw/, ~/.easyclaw/, or ~/.openclaw/ session directories)...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious (high confidence)
Purpose & Capability
The name/description (generate personalized audio from local conversation history) matches the code and runtime instructions: context_collector.py reads ~/.qclaw|~/.easyclaw|~/.openclaw session, memory, and USER.md, and xplai_gen_audio.py sends a distilled prompt to eagle-api.xplai.ai. Minor incoherence: registry metadata at the top declares no required config paths/vars, while SKILL.md explicitly lists those config_paths — the registry record and the runtime instructions disagree.
Instruction Scope
SKILL.md and the execution policy instruct the agent to auto-collect local conversation history and infer emotional state/need, then proceed to generate audio without asking for confirmation. The code does local collection (context_collector.py), composes prompts locally, and posts them to https://eagle-api.xplai.ai. While the skill claims 'raw conversation data ... are never transmitted', that guarantee depends on correct prompt construction and redaction — the enforcement is heuristic (regex-based redaction) and may not catch all sensitive data. The execution policy's 'do NOT need to ask for confirmation' is high-risk for privacy-sensitive content.
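To illustrate why regex-based redaction is a heuristic rather than a guarantee, here is a minimal sketch of the approach. The pattern set and function name are assumptions for illustration, not the actual contents of xplai_gen_audio.py:

```python
import re

# Hypothetical redaction pass modeled on the heuristic approach the scan
# describes; the real patterns in xplai_gen_audio.py may differ.
REDACTION_PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key|token|secret|password)\s*[:=]\s*\S+"),
     "[REDACTED-CREDENTIAL]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?:/[\w.-]+){2,}"), "[REDACTED-PATH]"),
]

def redact(text: str) -> str:
    """Apply each regex in turn; anything no pattern matches passes through."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    print(redact("api_key = sk-abc123"))     # recognized shape: redacted
    print(redact("My SSN is 078-05-1120"))   # no pattern matches: leaks through
```

The failure mode is structural: substitution only removes what a pattern anticipates, so any sensitive content in an unanticipated shape (free-form health details, identifiers without a keyword prefix) reaches the outbound prompt untouched.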
Install Mechanism
No install spec; this is instruction-plus-code bundled in the skill. No external installers or downloads are performed. The code is pure Python, uses standard libs, writes a local audit.log, and makes HTTPS calls to the declared API host. No high-risk install URLs or archived downloads were observed.
Credentials
The skill requests access to local config/session paths (explicit in SKILL.md) but asks for no environment variables or external credentials. That is proportional to its purpose (it needs local chat history). Still, local session files and USER.md are sensitive: the skill reads them and extracts fragments/user_profile. The redaction rules in xplai_gen_audio.py cover common tokens (keys, passwords, emails, file paths) but are heuristic and not exhaustive.
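The collection step described above, walking the candidate config roots and reading session and memory markdown, can be sketched roughly as follows. The directory layout, file selection, and caps are assumptions, not the actual logic of context_collector.py:

```python
from pathlib import Path

# Illustrative sketch of the collection behavior the scan describes; the
# real context_collector.py may select and filter files differently.
CANDIDATE_ROOTS = [Path.home() / name for name in (".qclaw", ".easyclaw", ".openclaw")]

def collect_context(max_files_per_root: int = 20) -> list[str]:
    """Gather raw text fragments from whichever session roots exist."""
    fragments: list[str] = []
    for root in CANDIDATE_ROOTS:
        if not root.is_dir():
            continue  # skip runtimes that are not installed
        for path in sorted(root.rglob("*.md"))[:max_files_per_root]:
            # USER.md and memory *.md files are read verbatim here; every
            # byte collected is a candidate for the outbound prompt.
            fragments.append(path.read_text(errors="replace"))
    return fragments
```

The point of the sketch: everything read at this stage sits upstream of redaction, so restricting what lives in these directories is the only control that does not depend on the regex pass catching the content later.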
Persistence & Privilege
The skill is not force-included (always:false) and does not request system-wide configuration changes, which is good. However, it instructs autonomous invocation behavior (normal default) combined with an explicit policy to proceed without user confirmation when a user asks for audio. That combination increases the risk that sensitive local data will be collected and transmitted automatically. The skill also writes an audit.log containing the prompt text (redacted version), which persists locally and may itself contain sensitive content.
What to consider before installing

- It will read your local conversation session files (~/.qclaw, ~/.easyclaw, ~/.openclaw), memory .md files, and USER.md to personalize audio. That behavior is required for its stated purpose but is privacy-sensitive.
- It sends a composed prompt to an external API (eagle-api.xplai.ai). The code tries to redact obvious secrets and logs the sent prompt locally (audit.log), but redaction is regex-based and can miss sensitive content. Assume any personalized data included in the prompt may be exposed to the API operator.
- SKILL.md instructs the agent to proceed without asking for confirmation when the user requests audio. If you want explicit consent before any external transmission, disallow autonomous invocation or modify the policy to require confirmation.
- Discrepancy: registry metadata lists no config paths, but SKILL.md enumerates them. Verify which metadata is authoritative.

Practical steps:

- Review the code yourself (context_collector.py and xplai_gen_audio.py) to confirm how prompts are assembled and what gets redacted.
- Test in dry-run mode: run xplai_gen_audio.py --dry-run to see the sanitized prompt the skill would send.
- Restrict or sanitize USER.md and memory files if they contain PII before use.
- If possible, disable autonomous invocation, or require the agent to ask for confirmation before sending any external request.
- If you must use it, consider running in an isolated environment or VM so that the session files it can read are limited and auditable.

If you want, I can: (1) point out the exact lines where sensitive fields are included in prompt construction (this requires the full context_collector output beyond the truncated snippet), (2) suggest a safer execution-policy text that forces confirmation, or (3) draft a short patch to make redaction more conservative (e.g., drop any fragments containing named entities or phone numbers) to reduce leakage risk.
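The more conservative redaction mentioned in point (3), dropping any fragment that might contain a name or phone number instead of scrubbing it in place, could look roughly like this. The patterns and helper name are illustrative assumptions, not a patch against the skill's actual code:

```python
import re

# Sketch of a deletion-over-substitution policy: discard any fragment that
# even *might* contain sensitive material. Patterns here are illustrative.
SUSPECT = re.compile(
    r"\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}"    # phone/SSN-like digit runs
    r"|[\w.+-]+@[\w-]+\.[\w.]+"           # email addresses
    r"|[A-Z][a-z]+ [A-Z][a-z]+"           # naive two-word proper-noun pairs
)

def filter_fragments(fragments: list[str]) -> list[str]:
    """Keep only fragments in which no suspect pattern appears at all."""
    return [f for f in fragments if not SUSPECT.search(f)]

if __name__ == "__main__":
    print(filter_fragments([
        "loves jazz playlists",
        "call Jane Doe at 555-867-5309",
    ]))
```

Deletion trades recall for safety: the naive proper-noun pattern will also drop benign fragments such as "New York playlists", and that over-blocking is the intended bias when the alternative is leaking PII to an external API.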

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fee3yc8bwps328kekwfz4w98455dh


Runtime requirements

🎧 Clawdis
