Speak

Configure TTS in OpenClaw. Adapt speech output to user preferences.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
9 current installs · 9 all-time installs
by Iván (@ivangdavila)
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (configure/adapt TTS) match the provided materials: SKILL.md instructs the agent to learn user voice preferences, and config.md describes TTS providers and settings. Nothing requested (no env vars, binaries, or installs) is out of scope for a TTS config helper.
Instruction Scope
The instructions direct the agent to observe user feedback, mirror the user's style, confirm preferences after 2+ consistent signals, and fill compact entries in SKILL.md. Observing conversation history to infer preferences is expected for this purpose. However, the instructions imply the agent will write and update preference entries ("Observe and fill") and check config.md/criteria.md; verify whether the agent is allowed to persist edits and what storage is used.
Install Mechanism
There is no install spec and there are no code files: this is an instruction-only skill. Risk is low because nothing is downloaded or written by an installer step.
Credentials
The skill declares no required env vars or credentials. config.md documents optional provider API keys (OpenAI/ElevenLabs) as examples; asking for or using such keys would be reasonable for TTS but is not required by the skill itself. If you supply keys, ensure they are limited to TTS use and stored securely.
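If you do supply a provider key, a setup in the spirit of config.md's examples might look like the sketch below; the field names and structure are assumptions for illustration, not the skill's actual schema:

```yaml
# Hypothetical TTS config sketch — field names are assumed, not OpenClaw's real schema
tts:
  provider: elevenlabs              # or openai, per config.md's examples
  voice: Rachel                     # provider-specific voice id (illustrative)
  api_key_env: ELEVENLABS_API_KEY   # read the key from an env var, never inline it
```

Keeping the key in an environment variable rather than in the config file limits exposure if the file is shared or versioned.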
Persistence & Privilege
always:false (normal). The docs reference applying changes via a gateway config.patch to update TTS settings — that is coherent for a TTS configuration skill but is a system-level operation. Confirm the agent's permissions for making gateway/config changes and where preference entries will be persisted (SKILL.md, a user profile, or system config).
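The gateway config.patch referenced here is a system-level write, which is why it deserves a permission check before install. A hypothetical payload (the operation name comes from the docs; every field name below is assumed):

```yaml
# Hypothetical shape only — consult OpenClaw's gateway docs for the real patch schema
op: config.patch
target: tts
patch:
  voice: "openai: alloy"   # assumed field; a patch like this would change spoken output globally
```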
Assessment
This skill is coherent for adjusting TTS preferences and does not request secrets or install code. Before installing: (1) confirm whether the agent will be permitted to edit skill files or call gateway config.patch (these can change global TTS behavior); (2) if you plan to use OpenAI/ElevenLabs voices, provide API keys only if you trust the agent to use them solely for TTS and confirm where they will be stored; (3) decide whether you want the skill to auto-update preferences or ask for confirmation before persisting changes.


Current version: v1.0.0
latest: vk975erfv6zmq7mdx0awm17mneh80zfb6


Runtime requirements

🗣️ Clawdis
OS: Linux · macOS · Windows

SKILL.md

Voice Output Adaptation

This skill auto-evolves. Learn how the user wants to be spoken to and configure TTS accordingly.

Rules:

  • Detect patterns from user feedback on voice output
  • Mirror user's communication style when generating spoken text
  • Confirm preferences after 2+ consistent signals
  • Keep entries ultra-compact
  • Check config.md for OpenClaw TTS setup, criteria.md for format
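The "confirm preferences after 2+ consistent signals" rule above can be sketched as a small tracker. This is a minimal illustration of the logic, not code shipped with the skill; the class and section names are made up:

```python
from collections import Counter


class PreferenceTracker:
    """Track voice-preference signals per SKILL.md section and
    confirm a value only after 2+ consistent observations."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.signals = {}  # section name -> Counter of observed values

    def observe(self, section, value):
        """Record one signal; return True once this value is confirmed."""
        counts = self.signals.setdefault(section, Counter())
        counts[value] += 1
        return counts[value] >= self.threshold

    def confirmed(self, section):
        """Return the confirmed value for a section, or None if unconfirmed."""
        counts = self.signals.get(section)
        if not counts:
            return None
        value, n = counts.most_common(1)[0]
        return value if n >= self.threshold else None


tracker = PreferenceTracker()
tracker.observe("Style", "concise")     # first signal: not yet confirmed
print(tracker.confirmed("Style"))       # None
tracker.observe("Style", "concise")     # second consistent signal: confirmed
print(tracker.confirmed("Style"))       # concise
```

A single contradictory signal does not reset the count here; a real implementation would need a policy for conflicting feedback.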

Voice

<!-- Preferred voice/provider. Format: "provider: voice" -->

Style

<!-- How they want to be spoken to. Format: "trait" -->

Spoken Text

<!-- Formatting for TTS output. Format: "rule" -->

Avoid

<!-- What doesn't work for them spoken -->

Empty sections = no preference yet. Observe and fill.
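Once signals are confirmed, filled sections might look like the following; all values are hypothetical, shown only to illustrate the "provider: voice" / "trait" / "rule" formats above:

```markdown
Voice

elevenlabs: Rachel

Style

concise, no filler

Spoken Text

expand abbreviations before speaking

Avoid

reading URLs aloud
```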

Files

3 total
