Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

voice

v1.0.0

Real-time voice conversations in Discord voice channels with Claude AI


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kirkraman/kirk-voice.

Prompt preview: Install & Setup
Install the skill "voice" (kirkraman/kirk-voice) from ClawHub.
Skill page: https://clawhub.ai/kirkraman/kirk-voice
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install kirk-voice

ClawHub CLI

Package manager switcher

npx clawhub@latest install kirk-voice
Security Scan
Capability signals
Crypto · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code, SKILL.md, and plugin JSONs implement a Discord voice assistant (STT/TTS providers, agent integration), which is consistent with the name and description. However, the registry metadata lists no required env/config, while the SKILL.md and plugin manifests require a discord.token config value and optional provider API keys (OPENAI_API_KEY, ELEVENLABS_API_KEY, DEEPGRAM_API_KEY, plus optional AWS Polly credentials). This metadata mismatch is an inconsistency that should be resolved.
Instruction Scope
The instructions and code do more than transcribe and play audio: they dynamically load the host OpenClaw extension API, resolve and write session store files, create or ensure agent workspaces, and build and inject an extraSystemPrompt for the agent. Routing transcripts through the embedded agent and persisting session state is reasonable for the feature, but it expands the plugin's access to the host agent's filesystem and prompts, which increases risk (especially because SKILL.md contains a prompt-injection pattern).
Install Mechanism
No install specification is present (no remote download step), and the package includes full source and a package-lock.json listing many npm dependencies (Discord voice, STT/TTS SDKs, AWS SDK, etc.). No external arbitrary URL downloads were found in the install metadata, but the plugin will install and compile native modules (opus/libsodium) and pull many npm packages when npm install runs; verify dependencies before installing.
Credentials
Requested credentials and config (Discord bot token, OpenAI/ElevenLabs/Deepgram API keys, optional AWS Polly credentials) are expected for the supported STT/TTS providers. The main proportionality concerns are the metadata inconsistency (the registry lists none as required) and a permissive default config: allowedUsers defaults to [], meaning all users in joined channels can trigger the bot, which can lead to unbounded API use or privacy exposure unless restricted. The plugin also accepts AWS credentials for Polly, which is consistent with offering Polly TTS but should be treated as privileged.
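
As a concrete mitigation for the permissive default described above, here is a minimal sketch of a config override that restricts allowedUsers to an explicit allowlist (the user IDs shown are placeholders, not real accounts):

```json
{
  "plugins": {
    "entries": {
      "discord-voice": {
        "enabled": true,
        "config": {
          "allowedUsers": ["111111111111111111", "222222222222222222"]
        }
      }
    }
  }
}
```

With a non-empty list, only the listed Discord user IDs can trigger transcription and agent calls, bounding both API spend and privacy exposure.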
Persistence & Privilege
The plugin does not request 'always: true' and allows autonomous invocation (the platform default). It can auto-join voice channels (if configured), and the agent/tool registration allows programmatic join/speak actions. This is functionally required but raises the blast radius: an installed plugin plus an autonomous agent could join and speak in voice channels. Combined with an empty allowedUsers list and injected system prompts, this increases the overall risk.
Scan Findings in Context
[system-prompt-override] expected: The plugin explicitly constructs and injects an extraSystemPrompt for the embedded agent to make responses TTS-friendly. Injecting a system prompt is expected for a voice assistant, but the pre-scan flag indicates the text-pattern used could be abused if any of the prompt components are controllable by untrusted parties. The SECURITY.md claims sanitization and admin-only control; verify that at runtime user-controlled inputs cannot alter this prompt.
What to consider before installing
Key points to check before installing:

  • Metadata mismatch: the registry shows no required env vars, but the plugin requires a Discord bot token (config) and may use provider API keys (OpenAI, ElevenLabs, Deepgram, optional AWS Polly). Don't rely on the registry metadata alone; inspect the plugin config and set only the keys you need.
  • Privileged access: the plugin dynamically loads host OpenClaw APIs and reads/writes session and agent workspace files. This is needed to route transcripts through the agent, but it means the plugin can access host agent state. Only install from a trusted source and review the core-bridge implementation (loadCoreAgentDeps).
  • System-prompt injection: the plugin injects an extraSystemPrompt into agent calls. Ensure that prompt components (noEmojiHint, agentName, userId) are admin-controlled and cannot be set by end users in your environment.
  • Restrict who can use it: change allowedUsers from the default empty list to a small allowlist to avoid unauthorised or costly API usage and to reduce privacy exposure.
  • Least-privilege credentials: create API keys limited to TTS/STT scope where possible and avoid reusing high-privilege keys. For AWS Polly, prefer a dedicated IAM user with minimal permissions.
  • Run in a controlled environment first: install on a staging or sandboxed host to observe behavior, inspect network calls (e.g., which endpoints are contacted), and confirm TLS verification is enabled (NODE_TLS_REJECT_UNAUTHORIZED should not be 0).
  • Verify third-party dependencies: review the package-lock, consider auditing or pinning dependencies, and ensure native modules (opus/sodium) compile safely on your platform.

If you cannot verify these points, treat the plugin as potentially risky and avoid installing it on production or sensitive systems.
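
The TLS check mentioned above can be scripted before installing. This is an illustrative sketch for a POSIX shell, not part of the plugin:

```shell
# Fail fast if TLS certificate verification has been disabled.
# NODE_TLS_REJECT_UNAUTHORIZED=0 turns off verification for all Node TLS traffic.
if [ "${NODE_TLS_REJECT_UNAUTHORIZED:-1}" = "0" ]; then
  echo "WARNING: TLS verification is disabled; re-enable it before installing." >&2
  exit 1
fi
echo "TLS verification: OK"
```

Run this in the same shell session you will use for npm install and gateway startup, since the variable only matters in the environment the Node process inherits.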
index.ts:156
Environment variable access combined with network send.
src/streaming-tts.ts:45
Environment variable access combined with network send.
src/stt.ts:37
Environment variable access combined with network send.
src/tts.ts:48
Environment variable access combined with network send.
src/tts.ts:5
File read combined with network send (possible exfiltration).
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Plugin bundle (nix)
Skill pack · CLI binary · Config
Config requirements
Content hash: vk97emfgn16qpe5k6g2kqe7s3xh84yas7 (latest)
60 downloads · 0 stars · 1 version · Updated 1w ago
v1.0.0
MIT-0

Config example

{
  "plugins": {
    "entries": {
      "discord-voice": {
        "enabled": true,
        "config": {
          "sttProvider": "local-whisper",
          "ttsProvider": "openai",
          "ttsVoice": "nova",
          "vadSensitivity": "medium",
          "streamingSTT": true,
          "bargeIn": true,
          "allowedUsers": []
        }
      }
    }
  }
}

Discord Voice Plugin for Clawdbot

Real-time voice conversations in Discord voice channels. Join a voice channel, speak, and have your words transcribed, processed by Claude, and spoken back.

Features

  • Join/Leave Voice Channels: Via slash commands, CLI, or agent tool
  • Voice Activity Detection (VAD): Automatically detects when users are speaking
  • Speech-to-Text: Whisper API (OpenAI), Deepgram, or Local Whisper (Offline)
  • Streaming STT: Real-time transcription with Deepgram WebSocket (~1s latency reduction)
  • Agent Integration: Transcribed speech is routed through the Clawdbot agent
  • Text-to-Speech: OpenAI TTS, ElevenLabs, or Kokoro (Local/Offline)
  • Audio Playback: Responses are spoken back in the voice channel
  • Barge-in Support: Stops speaking immediately when user starts talking
  • Auto-reconnect: Automatic heartbeat monitoring and reconnection on disconnect

Requirements

  • Discord bot with voice permissions (Connect, Speak, Use Voice Activity)
  • API keys for STT and TTS providers
  • System dependencies for voice:
    • ffmpeg (audio processing)
    • Native build tools for @discordjs/opus and sodium-native
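
A quick preflight can confirm these prerequisites are on PATH before installing. This is an illustrative sketch (the tool list is an assumption derived from the requirements above), not part of the plugin:

```shell
# Check each required system tool and report what is missing.
# ffmpeg handles audio processing; python3 and make are needed to
# compile the native @discordjs/opus and sodium-native modules.
for tool in ffmpeg python3 make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any line reporting MISSING should be resolved via the platform-specific install commands in step 1 below.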

Installation

1. Install System Dependencies

# Ubuntu/Debian
sudo apt-get install ffmpeg build-essential python3

# Fedora/RHEL
sudo dnf install ffmpeg gcc-c++ make python3

# macOS
brew install ffmpeg

2. Install via ClawdHub

clawdhub install discord-voice

Or manually:

cd ~/.clawdbot/extensions
git clone <repository-url> discord-voice
cd discord-voice
npm install

3. Configure in clawdbot.json

{
  plugins: {
    entries: {
      "discord-voice": {
        enabled: true,
        config: {
          sttProvider: "local-whisper",
          ttsProvider: "openai",
          ttsVoice: "nova",
          vadSensitivity: "medium",
          allowedUsers: [], // Empty = allow all users
          silenceThresholdMs: 1500,
          maxRecordingMs: 30000,
          openai: {
            apiKey: "sk-...", // Or use SKILLBOSS_API_KEY env var
          },
        },
      },
    },
  },
}

4. Discord Bot Setup

Ensure your Discord bot has these permissions:

  • Connect - Join voice channels
  • Speak - Play audio
  • Use Voice Activity - Detect when users speak

Add these to your bot's OAuth2 URL or configure in Discord Developer Portal.

Configuration

| Option | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Enable/disable the plugin |
| `sttProvider` | string | `"local-whisper"` | `"whisper"`, `"deepgram"`, or `"local-whisper"` |
| `streamingSTT` | boolean | `true` | Use streaming STT (Deepgram only, ~1s faster) |
| `ttsProvider` | string | `"openai"` | `"openai"` or `"elevenlabs"` |
| `ttsVoice` | string | `"nova"` | Voice ID for TTS |
| `vadSensitivity` | string | `"medium"` | `"low"`, `"medium"`, or `"high"` |
| `bargeIn` | boolean | `true` | Stop speaking when user talks |
| `allowedUsers` | string[] | `[]` | User IDs allowed (empty = all) |
| `silenceThresholdMs` | number | `1500` | Silence before processing (ms) |
| `maxRecordingMs` | number | `30000` | Max recording length (ms) |
| `heartbeatIntervalMs` | number | `30000` | Connection health check interval |
| `autoJoinChannel` | string | `undefined` | Channel ID to auto-join on startup |
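
For example, a minimal sketch of a config fragment overriding a few of these defaults (the channel and user IDs are placeholders):

```json
{
  "vadSensitivity": "high",
  "heartbeatIntervalMs": 15000,
  "autoJoinChannel": "123456789012345678",
  "allowedUsers": ["111111111111111111"]
}
```

These keys belong under the plugin's config object, as shown in the full configuration examples on this page; any option you omit keeps its default from the table.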

Provider Configuration

OpenAI (Whisper + TTS)

{
  openai: {
    apiKey: "sk-...",
    whisperModel: "whisper-1",
    ttsModel: "tts-1",
  },
}

ElevenLabs (TTS only)

{
  elevenlabs: {
    apiKey: "...",
    voiceId: "21m00Tcm4TlvDq8ikWAM", // Rachel
    modelId: "eleven_multilingual_v2",
  },
}

Deepgram (STT only)

{
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}

Usage

Slash Commands (Discord)

Once registered with Discord, use these commands:

  • /discord_voice join <channel> - Join a voice channel
  • /discord_voice leave - Leave the current voice channel
  • /discord_voice status - Show voice connection status

CLI Commands

# Join a voice channel
clawdbot discord_voice join <channelId>

# Leave voice
clawdbot discord_voice leave --guild <guildId>

# Check status
clawdbot discord_voice status

Agent Tool

The agent can use the discord_voice tool:

Join voice channel 1234567890

The tool supports actions:

  • join - Join a voice channel (requires channelId)
  • leave - Leave voice channel
  • speak - Speak text in the voice channel
  • status - Get current voice status

How It Works

  1. Join: Bot joins the specified voice channel
  2. Listen: VAD detects when users start/stop speaking
  3. Record: Audio is buffered while user speaks
  4. Transcribe: On silence, audio is sent to STT provider
  5. Process: Transcribed text is sent to Clawdbot agent
  6. Synthesize: Agent response is converted to audio via TTS
  7. Play: Audio is played back in the voice channel

Streaming STT (Deepgram)

When using Deepgram as your STT provider, streaming mode is enabled by default. This provides:

  • ~1 second faster end-to-end latency
  • Real-time feedback with interim transcription results
  • Automatic keep-alive to prevent connection timeouts
  • Fallback to batch transcription if streaming fails

To use streaming STT:

{
  sttProvider: "deepgram",
  streamingSTT: true, // default
  deepgram: {
    apiKey: "...",
    model: "nova-2",
  },
}

Barge-in Support

When enabled (default), the bot will immediately stop speaking if a user starts talking. This creates a more natural conversational flow where you can interrupt the bot.

To disable (let the bot finish speaking):

{
  bargeIn: false,
}

Auto-reconnect

The plugin includes automatic connection health monitoring:

  • Heartbeat checks every 30 seconds (configurable)
  • Auto-reconnect on disconnect with exponential backoff
  • Max 3 attempts before giving up
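
The exact retry schedule is not documented beyond "3 attempts with exponential backoff". As an illustration only (the 1-second base delay and doubling factor are assumptions, not the plugin's confirmed values), a doubling backoff over three attempts looks like this:

```shell
# Illustrative exponential backoff: delay doubles after each attempt.
delay=1
for attempt in 1 2 3; do
  echo "Reconnection attempt ${attempt}/3 (waiting ${delay}s)"
  delay=$((delay * 2))
done
```

The point of the doubling is to avoid hammering Discord's voice gateway during an outage while still recovering quickly from brief network blips.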

If the connection drops, you'll see logs like:

[discord-voice] Disconnected from voice channel
[discord-voice] Reconnection attempt 1/3
[discord-voice] Reconnected successfully

VAD Sensitivity

  • low: Picks up quiet speech, may trigger on background noise
  • medium: Balanced (recommended)
  • high: Requires louder, clearer speech

Troubleshooting

"Discord client not available"

Ensure the Discord channel is configured and the bot is connected before using voice.

Opus/Sodium build errors

Install build tools:

npm install -g node-gyp
npm rebuild @discordjs/opus sodium-native

No audio heard

  1. Check bot has Connect + Speak permissions
  2. Check bot isn't server muted
  3. Verify TTS API key is valid

Transcription not working

  1. Check STT API key is valid
  2. Check audio is being recorded (see debug logs)
  3. Try adjusting VAD sensitivity

Enable debug logging

DEBUG=discord-voice clawdbot gateway start

Environment Variables

| Variable | Description |
|---|---|
| `DISCORD_TOKEN` | Discord bot token (required) |
| `SKILLBOSS_API_KEY` | SkillBoss API key (Whisper/TTS via Hub) |
| `DEEPGRAM_API_KEY` | Deepgram API key (streaming STT only) |

Limitations

  • Only one voice channel per guild at a time
  • Maximum recording length: 30 seconds (configurable)
  • Requires stable network for real-time audio
  • TTS output may have slight delay due to synthesis

License

MIT
