Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

senseaudio-voice-ab-lab

v1.0.1

Use when a team wants to generate multiple ad, spoken-copy, sales, or promo voice variants from one typed or spoken creative brief, transcribe voice memos wi...

0 · 236 · 0 current · 0 all-time
by Wu Ruixiao (@kikidouloveme79)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kikidouloveme79/senseaudio-voice-ab-lab.

Prompt Preview: Install & Setup
Install the skill "senseaudio-voice-ab-lab" (kikidouloveme79/senseaudio-voice-ab-lab) from ClawHub.
Skill page: https://clawhub.ai/kikidouloveme79/senseaudio-voice-ab-lab
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install senseaudio-voice-ab-lab

ClawHub CLI


npx clawhub@latest install senseaudio-voice-ab-lab
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
⚠ Purpose & Capability
The name and description (generate A/B voice variants, transcribe briefs) match the code's core behavior. However, the registry metadata claims no required environment variables or config paths, while the scripts clearly use SENSEAUDIO_API_KEY, optionally SENSEAUDIO_PLATFORM_TOKEN and SENSEAUDIO_ASR_MODEL, and rely on local config (audioclaw_paths.get_config_path/get_workspace_root) and a Feishu helper to fetch tenant tokens. Those credentials and config access are expected for the stated purpose, but the metadata omission is a coherence problem that could mislead users about which secrets and config are needed.
⚠ Instruction Scope
SKILL.md instructs the agent to save user audio, run ASR, build variants, synthesize via SenseAudio TTS, and (optionally) send audio into Feishu. The code implements exactly that. Concerns: (1) the SKILL.md and agent prompt encourage automatically sending variants to Feishu when the user asks to '试听/发语音/飞书' (preview / send voice / Feishu), which will post user audio to an external chat service; (2) scripts read local config and helper modules (audioclaw_paths, _shared/*, the Feishu sender) that are not listed in the metadata; (3) scripts call system tools (ffmpeg, afinfo) and run subprocesses. These actions are within the stated purpose but expand the skill's access surface and require explicit credentials and config that are not declared.
Install Mechanism
No install spec (instruction-only) and all bundled code is local. There are no remote downloads in the install. That lowers supply-chain risk. However the package depends on helper modules in a parent _shared directory and on local environment/tooling (ffmpeg, afinfo), so runtime failures or implicit path traversal may occur if the expected repository layout isn't present.
⚠ Credentials
Registry shows 'no required env vars', but the code uses and/or checks: SENSEAUDIO_API_KEY (default for TTS/ASR open API), SENSEAUDIO_PLATFORM_TOKEN (platform upload mode), SENSEAUDIO_ASR_MODEL, and expects Feishu app_id/app_secret via a feishu config loaded from get_config_path(). The skill will fetch tenant tokens and upload audio to Feishu and post to SenseAudio endpoints (https://api.senseaudio.cn and https://platform.senseaudio.cn). Requesting these secrets is reasonable for the described functionality, but the metadata omission is misleading and increases risk if users supply broad-scoped credentials without understanding where they go.
Persistence & Privilege
This skill is not marked always:true and is user-invocable; it does not request persistent platform privileges. It can be invoked autonomously (allowed by default), which is normal for skills; weigh that against the credential/config mismatches above if you want extra caution, but there is no evidence it modifies other skills or system-wide settings.
What to consider before installing
This skill performs the advertised tasks, but the package metadata understates what it needs. Before installing or running it:

  1. Expect to provide a SenseAudio API key (SENSEAUDIO_API_KEY) and possibly a SENSEAUDIO_PLATFORM_TOKEN for platform uploads; the skill will call https://api.senseaudio.cn and https://platform.senseaudio.cn.
  2. If you want Feishu delivery, the skill expects Feishu app credentials/config (app_id/app_secret) accessible via its local config path; review where those are stored and how tenant tokens are fetched.
  3. Review the missing shared helpers (audioclaw_paths, senseaudio_env, senseaudio_api_guard, feishu_audio_sender) before trusting runtime behavior; they may be in a parent repo in expected deployments but are not included in the manifest.
  4. Limit API key scopes and use test/isolated credentials first; avoid giving production-wide keys until you audit the code paths.
  5. Be aware the scripts will transcode (ffmpeg) and may call system utilities (afinfo); run in an environment where those binaries are safe and available.
  6. If you need a definitive safety assessment, ask the publisher for corrected metadata listing required env vars and for the missing _shared modules, or run the skill in an isolated container and observe the network endpoints it contacts.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e0z83zjepx89f4kr2y2enpx83czcr
236 downloads
0 stars
2 versions
Updated 2h ago
v1.0.1
MIT-0

AudioClaw Voice AB Lab

What this skill is for

This skill is for commercial teams who need to test which spoken script performs best, while keeping the same voice across all variants.

That matters because otherwise too many variables change at once:

  • copy
  • tone
  • rhythm
  • voice persona

This skill keeps the voice fixed and lets you vary:

  • ad tone
  • hook style
  • urgency level
  • trust level
  • conversational warmth
  • regional wording style

Best business scenarios

1. Short-video ad hooks

Generate 4 to 8 spoken openers for the same product:

  • trust-first
  • benefit-first
  • urgency-first
  • concise-direct

Then synthesize all of them with the same voice for fast creative screening.

2. Livestream and promo voiceovers

Use the same host-like voice to test:

  • stronger urgency
  • softer recommendation
  • more premium wording
  • more sales-driven wording

3. Sales or private-domain follow-up

Generate multiple voice-note versions for:

  • reopening a lead
  • reminding a customer
  • sending a soft CTA
  • reducing pushiness while keeping conversion intent

4. Regional wording experiments

This skill can generate regional phrasing styles for comparison, while keeping the same voice.

Important:

  • this is wording-level regional style, not guaranteed full dialect TTS
  • it is useful for testing “which phrasing feels closer to the target audience”

Workflow

  1. Start from either:
    • a typed campaign brief
    • or a spoken voice memo that follows labeled fields such as 产品 (product) / 人群 (audience) / 卖点 (key message) / 优惠 (offer) / 行动 (CTA)
  2. If the input is audio, run scripts/senseaudio_asr.py, then scripts/extract_spoken_brief.py.
  3. If the input is already typed and structured enough, run scripts/run_typed_brief_pipeline.py directly, or call scripts/build_voice_ab_variants.py yourself.
  4. Run scripts/build_voice_ab_variants.py to generate variants.
  5. Pick one fixed voice_id.
    • If you have already created a cloned voice on the AudioClaw platform, use that cloned voice_id.
    • A prepared cloned voice id commonly looks like vc-..., and can be passed directly with --clone-voice-id.
    • If not, use one validated system voice.
  6. If you want faster perceived processing for spoken briefs, enable stream ASR in scripts/senseaudio_asr.py or scripts/run_spoken_brief_pipeline.py.
  7. Run scripts/batch_tts_variants.py to synthesize every variant with the same voice. This skill already uses AudioClaw streaming TTS under the hood and now records stream chunk metadata.
    • If the chosen voice is a clone id like vc-..., the batch TTS step now auto-routes to SenseAudio-TTS-1.5.
  8. If the user wants to hear the results directly in Feishu or AudioClaw, run scripts/send_ab_variants_to_feishu.py after synthesis, or use scripts/run_spoken_brief_pipeline.py --send-feishu-audio / scripts/run_typed_brief_pipeline.py --send-feishu-audio.
    • This step reuses the previously built Feishu voice-reply path instead of sending plain files.
    • It transcodes the generated .mp3 variants into .ogg/.opus and sends them one by one as real audio messages.
  9. Review:
    • generated copy
    • estimated points
    • output audio files
    • variant metadata for A/B tracking
    • optional Feishu send results
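The deterministic extraction in step 2 can be sketched roughly as below. This is an illustrative stand-in, not the actual scripts/extract_spoken_brief.py; the label-to-field mapping and the regex are assumptions about how labeled fields like 产品:… are pulled out of a transcript.

```python
import re

# Hypothetical mirror of the labeled-field extraction step; the real logic
# lives in scripts/extract_spoken_brief.py and may differ.
LABELS = {
    "产品": "product",      # product
    "人群": "audience",     # target audience
    "卖点": "key_message",  # key selling point
    "优惠": "offer",        # promotional offer
    "行动": "cta",          # call to action
}

def extract_brief(transcript: str) -> dict:
    """Pull labeled fields like '产品:轻量保温杯' out of an ASR transcript."""
    brief = {}
    for label, field in LABELS.items():
        # Accept both the full-width '：' and the ASCII ':' after a label,
        # and stop at newlines, commas, or a Chinese full stop.
        m = re.search(rf"{label}[：:]\s*([^\n,，。]+)", transcript)
        if m:
            brief[field] = m.group(1).strip()
    return brief

print(extract_brief("产品:轻量保温杯，人群:通勤上班族，卖点:轻便保温不漏水"))
```

Keeping the extraction this literal is what makes the "structured enough for deterministic extraction" design rule below workable.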

AudioClaw Trigger Pattern

Use this skill as an explicit task mode, not as a hidden background guess.

Recommended user trigger:

Use $senseaudio-voice-ab-lab to process the voice memo I just sent.
产品 (product): lightweight insulated mug
人群 (audience): commuting office workers
卖点 (key message): light, keeps drinks hot, leak-proof
优惠 (offer): second item half price
行动 (CTA): click to order now
clone voice_id:your_clone_voice_id
Generate 4 spoken-ad variants, output to /tmp/voice_ab_run

If the user already sent a voice memo, the agent should:

  1. Save the audio locally.
  2. Run scripts/run_spoken_brief_pipeline.py.
  3. Return:
    • a short summary of the extracted brief
    • the output directory
    • the best 2 to 4 audio variants for review

If the user says "一条一条发语音给我听" ("send me the voice notes one by one") or "直接发到飞书里试听" ("send them straight to Feishu so I can preview"), the agent should:

  1. Run the normal A/B pipeline first.
  2. Then run scripts/send_ab_variants_to_feishu.py, or add --send-feishu-audio to scripts/run_spoken_brief_pipeline.py.
  3. Prefer sending the variants one by one as Feishu audio messages instead of replying with local paths.
  4. If the user only wants part of the set, use --limit or --variant-ids.
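The subset selection (--limit / --variant-ids) and the mp3-to-ogg/opus transcode can be sketched as follows. This is a hypothetical mirror of logic in scripts/send_ab_variants_to_feishu.py, not the script itself; the flag semantics and exact ffmpeg arguments are assumptions.

```python
from pathlib import Path

def select_variants(variants, limit=None, variant_ids=None):
    """Mimic --variant-ids / --limit: explicit ids win, then an optional cap."""
    if variant_ids:
        variants = [v for v in variants if v["id"] in set(variant_ids)]
    if limit is not None:
        variants = variants[:limit]
    return variants

def ffmpeg_transcode_cmd(mp3_path):
    """Build an mp3 -> ogg/opus command of the kind used before sending Feishu audio."""
    out = str(Path(mp3_path).with_suffix(".ogg"))
    return ["ffmpeg", "-y", "-i", str(mp3_path), "-c:a", "libopus", out]

variants = [{"id": f"v{i}"} for i in range(1, 5)]
print(select_variants(variants, limit=2))
print(ffmpeg_transcode_cmd("variant_a.mp3"))
```

Sending each .ogg as a real audio message (rather than a file attachment) is what the "voice-reply path" above refers to.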

If the user gave a typed brief and also says "直接一条一条发语音给我听" ("just send me the voice notes one by one"), the agent should:

  1. Extract or confirm these fields:
    • campaign_name
    • product
    • audience
    • key_message
    • cta
    • optional offer
    • optional proof
  2. Run scripts/run_typed_brief_pipeline.py.
  3. Add --send-feishu-audio.
  4. Do not stop at returning local audio paths unless the user explicitly asked for files only.
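A minimal sketch of the field check in step 1, assuming the required/optional split shown above; the validation helper itself is hypothetical and not part of the bundled scripts.

```python
# Field names come from the typed-brief list in this skill's docs;
# the required/optional split and this helper are illustrative assumptions.
REQUIRED = ("campaign_name", "product", "audience", "key_message", "cta")
OPTIONAL = ("offer", "proof")

def missing_fields(brief: dict) -> list:
    """Return which required fields still need to be extracted or confirmed."""
    return [f for f in REQUIRED if not brief.get(f)]

brief = {"product": "travel mug", "audience": "commuters", "cta": "order now"}
print(missing_fields(brief))  # fields the agent should confirm before running the pipeline
```

Confirming the gaps before running scripts/run_typed_brief_pipeline.py avoids a half-filled manifest reaching synthesis.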

If the user does not provide a cloned voice, ask for either:

  • a prepared clone voice_id
  • or permission to fall back to a validated system voice_id

Design rules

  • Keep each script short enough to test quickly.
  • Change one creative dimension at a time if possible.
  • For spoken briefs, keep the input structured enough for deterministic extraction.
  • For real A/B testing, keep:
    • the same voice
    • the same audio format
    • the same sample rate
    • similar script length
  • Treat regional_style as a wording choice, not an official dialect model.
  • Official clone support is a two-step chain:
    • create the clone on the AudioClaw platform first
    • then pass the prepared clone voice_id into this skill for generation
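The clone routing mentioned in the workflow (vc-... ids auto-route to SenseAudio-TTS-1.5) reduces to a prefix check. The sketch below is illustrative; only the clone route's model name appears in this document, so the default model name here is a placeholder.

```python
CLONE_MODEL = "SenseAudio-TTS-1.5"        # clone route named in this skill's docs
DEFAULT_MODEL = "senseaudio-tts-default"  # placeholder; the real default is not documented here

def pick_tts_model(voice_id: str) -> str:
    """Route prepared clone ids (vc-...) to the clone-capable model, else the default."""
    return CLONE_MODEL if voice_id.startswith("vc-") else DEFAULT_MODEL

print(pick_tts_model("vc-abc123"))   # clone route
print(pick_tts_model("system-01"))   # system-voice route
```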

API key lookup

For the generation side of this skill:

  • TTS-oriented scripts now default to SENSEAUDIO_API_KEY

Practical rule:

  • scripts/run_spoken_brief_pipeline.py, scripts/run_typed_brief_pipeline.py, and scripts/batch_tts_variants.py now default to SENSEAUDIO_API_KEY
  • If the host app injects SENSEAUDIO_API_KEY as a login token such as v2.public..., the shared bootstrap replaces it with the real sk-... value from ~/.audioclaw/workspace/state/senseaudio_credentials.json before the synthesis step starts
  • The ASR scripts keep their own existing defaults and are intentionally not changed here
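The bootstrap swap described above can be sketched as follows, assuming the credentials file holds the real key under an "api_key" field; the actual file layout is not documented here, so that field name is an assumption.

```python
import json
import os
from pathlib import Path

# Default location named in this skill's docs; the JSON field name below is assumed.
CRED_PATH = Path.home() / ".audioclaw/workspace/state/senseaudio_credentials.json"

def resolve_api_key(env=None, cred_path=CRED_PATH):
    """If SENSEAUDIO_API_KEY looks like a v2.public... login token,
    swap in the real sk-... key from the credentials file when it exists."""
    env = os.environ if env is None else env
    key = env.get("SENSEAUDIO_API_KEY")
    if key and key.startswith("v2.public") and Path(cred_path).exists():
        creds = json.loads(Path(cred_path).read_text())
        key = creds.get("api_key", key)  # "api_key" is an assumed field name
    return key
```

A key that already looks like sk-... passes through untouched, matching the "replaces it before the synthesis step starts" behavior above.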

Resources

  • scripts/build_voice_ab_variants.py
    • Builds an A/B manifest from one campaign brief
  • scripts/senseaudio_asr.py
    • Calls AudioClaw ASR using either the official open API host or the official platform endpoint
    • Defaults to the official sense-asr-deepthink model for spoken briefs
  • scripts/extract_spoken_brief.py
    • Extracts a structured campaign brief from an ASR transcript
  • scripts/run_spoken_brief_pipeline.py
    • Runs the full spoken-brief pipeline end to end
    • Supports --stream-asr, --clone-voice-id, and --send-feishu-audio
  • scripts/run_typed_brief_pipeline.py
    • Runs the full typed-brief pipeline end to end
    • Supports --clone-voice-id and --send-feishu-audio
  • scripts/batch_tts_variants.py
    • Generates all audio variants with the same voice_id
  • scripts/send_ab_variants_to_feishu.py
    • Reuses the Feishu voice-reply delivery path to transcode and send the generated variants one by one as audio messages
  • scripts/export_ab_review_csv.py
    • Produces a review sheet for creative, growth, or Feishu-based internal scoring
  • references/commercial_ab_patterns.md
    • High-value use cases, testing advice, and regional-style notes
  • references/asr_brief_pipeline.md
    • Official ASR findings, constraints, and the recommended spoken brief format
