jm-call

v1.0.3

Speak responses aloud on macOS using the built-in `say` command when user input indicates Voice Wake/voice recognition (for example, messages starting with "...

0 · 231 · 2 current · 2 all-time
by kiefer (@kieferhuan)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kieferhuan/voice-wake.

Prompt Preview: Install & Setup
Install the skill "jm-call" (kieferhuan/voice-wake) from ClawHub.
Skill page: https://clawhub.ai/kieferhuan/voice-wake
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install voice-wake

ClawHub CLI


npx clawhub@latest install voice-wake
Security Scan
VirusTotal
Pending
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description match the SKILL.md instructions: the skill only asks the agent to call the local macOS `say` command when the incoming message begins with the exact trigger phrase. No unrelated credentials, binaries, or install steps are requested.
Instruction Scope
Instructions are narrow and specify exactly when to use `say` and how to format spoken text. Minor issues: the SKILL.md references optional environment variables (SAY_VOICE, SAY_RATE) that are not declared in the skill metadata, and the skill does not declare an OS restriction even though it only works on macOS. The runtime snippets run a simple local shell pipeline (printf | say); this is expected for the stated purpose but will fail on non-macOS hosts.
Install Mechanism
No install spec and no code files — instruction-only skill. Nothing is written to disk and no external packages or downloads are requested.
Credentials
The skill declares no required environment variables or credentials, which is proportionate. It does reference optional env vars (SAY_VOICE, SAY_RATE) in SKILL.md; these are harmless for functionality but are not listed in requires.env, which is a minor metadata mismatch.
Persistence & Privilege
always is false and the skill does not request persistent installation or elevated privileges. It does not modify other skills or system-wide settings according to the provided files.
Assessment
This skill is coherent and low-risk: it only runs the local macOS `say` command when a message exactly starts with the trigger phrase. Before installing, confirm that (1) the agent will run on macOS (otherwise `say` won't be available), (2) you are comfortable with the agent executing a simple shell command that will speak whatever response it generates (including potentially sensitive text), and (3) you accept that optional env vars SAY_VOICE and SAY_RATE control voice/pace but are not declared in the metadata. If you want stronger safety, restrict the skill to macOS hosts and verify it will not be invoked for messages that might expose secrets or private content.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c2z11a7epn99pgfjg395r4n83gasz
231 downloads
0 stars
4 versions
Updated 1mo ago
v1.0.3
MIT-0

Voice Wake Say

Overview

Use macOS `say` to read the assistant's response out loud whenever the conversation comes from Voice Wake/voice recognition. Do not use the `tts` tool (it calls cloud providers).

When to Use say (CHECK EVERY MESSAGE INDIVIDUALLY)

IF the user message STARTS WITH: User talked via voice recognition

  • Step 1: Acknowledge with say first (so the user knows you heard them)
  • Step 2: Then perform the task
  • Step 3: Optionally speak again when done if it makes sense

IF the user message does NOT start with that exact phrase

  • THEN: Do NOT use say. Text-only response only.

Critical:

  • Check EACH message individually — context does NOT carry over
  • The trigger phrase must be at the VERY START of the message
  • For tasks that take time, acknowledge FIRST so the user knows you're working
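The trigger check above is a strict, case-sensitive prefix match. A minimal POSIX-sh sketch (the helper name `is_voice_wake` is hypothetical, not part of the skill):

```shell
# Hypothetical helper: returns 0 only when the message begins with
# the exact Voice Wake trigger phrase at the very start.
is_voice_wake() {
  case "$1" in
    "User talked via voice recognition"*) return 0 ;;
    *) return 1 ;;
  esac
}

is_voice_wake "User talked via voice recognition: what's the weather?" \
  && echo "speak with say"
is_voice_wake "Earlier, user talked via voice recognition" \
  || echo "text-only response"
```

Note that a message merely mentioning the phrase mid-sentence does not match; only a true prefix does.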

Workflow

  1. Detect Voice Wake context
  • Trigger ONLY when the latest user/system message STARTS WITH User talked via voice recognition
  • If the message instructs "repeat prompt first", keep that behavior in the response.
  2. Prepare spoken text
  • Use the final response text as the basis.
  • Strip markdown/code blocks; if the response is long or code-heavy, speak a short summary and mention that details are on screen.
  3. Speak with say (local macOS TTS)
printf '%s' "$SPOKEN_TEXT" | say

Optional controls (use only if set):

printf '%s' "$SPOKEN_TEXT" | say -v "$SAY_VOICE"
printf '%s' "$SPOKEN_TEXT" | say -r "$SAY_RATE"
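Both controls can also be combined in a single invocation. A sketch that appends a flag only when its variable is actually set (the helper name `build_say_args` is an assumption for illustration; `-v` and `-r` are the standard `say` flags for voice and rate):

```shell
# Build the say argument list from whichever optional controls are set.
build_say_args() {
  args="say"
  [ -n "${SAY_VOICE:-}" ] && args="$args -v $SAY_VOICE"
  [ -n "${SAY_RATE:-}" ] && args="$args -r $SAY_RATE"
  echo "$args"
}

# Usage: printf '%s' "$SPOKEN_TEXT" | $(build_say_args)
```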

Failure handling

  • If say is unavailable or errors, still send the text response and note that TTS failed.
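The fallback can be sketched as a guard around the pipeline (a minimal sketch; the helper name and the wording of the failure note are hypothetical):

```shell
# Speak if say exists on this host; otherwise note the TTS failure
# and stay text-only, as the failure-handling rule requires.
speak_or_note() {
  if command -v say >/dev/null 2>&1; then
    printf '%s' "$1" | say
  else
    echo "(TTS failed: say unavailable; responding in text only)"
  fi
}

speak_or_note "Task complete. Details are on screen."
```

On non-macOS hosts `command -v say` fails, so the guard keeps the agent from surfacing a shell error to the user.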
