Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nex Voice

v1.0.0

Voice note transcription and intelligent action item extraction for capture and organization of verbal communication. Record and transcribe voice notes, voic...

by Nex AI (@nexaiguy)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Crypto: Can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name, description, and required binaries (python3, whisper, ffmpeg) match the provided code and CLI behavior. The code implements local transcription, storage under ~/.nex-voice, search, and optional LLM-based extraction as advertised.
Instruction Scope
Runtime instructions and the CLI stay within the stated domain (transcribe, extract actions, search, manage tasks). However, the LLM integration sends transcripts to the configured API endpoint via a subprocess curl call: the code constructs a curl command containing both the API key and the transcript. When enabled, that transmits user data externally and also exposes the API key on the system command line (visible in the process list), which is a credential-leak risk.
Install Mechanism
There is no remote download/install spec (no arbitrary network installs). setup.sh is included and is idempotent and only creates local directories. One minor code/installation mismatch: setup.sh attempts to initialize the DB by running lib/storage.py --init, but storage.py does not expose a CLI entrypoint for --init, so the DB initialization step may not work as intended (an engineering bug, not evidence of malice).
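The broken DB-initialization step could be fixed by giving storage.py the CLI entrypoint that setup.sh expects. A minimal sketch, assuming an illustrative database path and schema (neither is taken from the skill's actual code):

```python
# Hypothetical CLI entrypoint for lib/storage.py, so that the
# `python3 lib/storage.py --init` call in setup.sh works as intended.
# DB_PATH and SCHEMA are illustrative assumptions.
import argparse
import os
import sqlite3

DB_PATH = os.path.expanduser("~/.nex-voice/notes.db")  # assumed location

SCHEMA = """
CREATE TABLE IF NOT EXISTS recordings (
    id INTEGER PRIMARY KEY,
    path TEXT NOT NULL,
    transcript TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
"""

def init_db(db_path: str = DB_PATH) -> None:
    """Create the data directory and tables if they do not exist."""
    os.makedirs(os.path.dirname(db_path), exist_ok=True)
    with sqlite3.connect(db_path) as conn:
        conn.executescript(SCHEMA)

def main(argv=None):
    parser = argparse.ArgumentParser(description="nex-voice storage")
    parser.add_argument("--init", action="store_true",
                        help="initialize the SQLite database")
    args = parser.parse_args(argv)
    if args.init:
        init_db()

if __name__ == "__main__":
    main()
```

With an entrypoint like this in place, the existing setup.sh line would succeed; until then, the database is only created the first time the CLI saves a recording.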
Credentials
The skill declares no required environment variables, but the code optionally reads AI_API_KEY/AI_API_BASE/AI_MODEL and stores an api_key in ~/.nex-voice/config.json when you run config set-api-key. Storing API keys in a plaintext config file and passing the key on the command line to curl are disproportionate risks relative to the feature: the LLM feature requires a key, but the implementation leaks it both to process listings and to disk.
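The plaintext-config risk can at least be narrowed with restrictive file permissions. A minimal sketch of how `config set-api-key` could write the file; the function name and config shape are illustrative, not taken from the skill's code:

```python
# Hypothetical hardening for `config set-api-key`: write
# ~/.nex-voice/config.json with owner-only permissions so other local
# users cannot read the plaintext key.
import json
import os

def save_config(config: dict, path: str) -> None:
    """Write config atomically with owner-only read/write permissions."""
    directory = os.path.dirname(path)
    if directory:
        os.makedirs(directory, exist_ok=True)
    tmp_path = path + ".tmp"
    # Open with mode 0o600 from the start, so there is no window in
    # which the file exists with looser permissions.
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(config, f)
    os.replace(tmp_path, path)  # atomic rename on POSIX
```

Even with 0o600 permissions the key remains readable to root and to anything running as the same user, so an ephemeral, rotatable key is still the safer choice.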
Persistence & Privilege
The skill does not request always:true and only writes to its own data directory (~/.nex-voice) and creates a local SQLite DB. It does not modify other skills or system-wide settings. Audio and transcripts are stored locally by default; external transmission only occurs if the optional LLM feature is configured and used.
What to consider before installing
- The core functionality is coherent: the skill transcribes audio using Whisper and stores transcripts/actions under ~/.nex-voice.
- If you enable the optional LLM feature (set an API key and use --use-llm), the skill sends transcripts to the configured API base. That is expected behavior, but be aware it uploads your transcript data to that external service.
- Implementation risk: the skill invokes curl with the API key on the command line, which can expose the key to other users/processes on the same machine via process listings. If you plan to use LLM features, prefer to (a) not set an API key unless necessary, (b) use an ephemeral API key you can rotate, or (c) inspect and modify the code to use a secure HTTP client (requests) that sends the key in headers without exposing it on the command line.
- The config file (~/.nex-voice/config.json) stores the API key in plaintext when you use config set-api-key; treat that file as sensitive and protect it with filesystem permissions, or avoid storing keys there.
- The setup.sh DB initialization step may be nonfunctional; verify the database is created after setup, or initialize it manually by running the CLI to save a recording. Review setup.sh before running it.

If you want to reduce risk:
- Do not configure an LLM/API key (use purely local Whisper).
- Audit or patch lib/action_extractor.py:_extract_actions_llm to call the API via a library (requests) using environment variables or secure header handling rather than passing the key and transcript on the command line.
- Keep the data and config file on a single-user, trusted machine; do not enable LLM features for sensitive transcripts unless you trust the external provider.

Confidence note: medium — the code is readable and mostly matches its description, but the command-line curl usage and config storage choices raise clear security concerns that justify the 'suspicious' classification.
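A minimal sketch of the header-based patch suggested above, assuming an OpenAI-compatible chat endpoint; the URL path, payload shape, and function name are illustrative, not taken from the skill's code:

```python
# Hypothetical replacement for the curl subprocess in
# lib/action_extractor.py:_extract_actions_llm. The request is built so
# the API key travels only in an HTTP header, never on the command line
# or in the URL, so it cannot leak via process listings.

def build_llm_request(transcript, api_key, api_base, model):
    """Return (url, headers, payload) for a chat-completion call."""
    url = f"{api_base.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # key stays out of argv
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [
            {"role": "user",
             "content": "Extract action items from this transcript:\n"
                        + transcript},
        ],
    }
    return url, headers, payload

# The actual call would then use a proper HTTP client, e.g.:
#   import requests
#   url, headers, payload = build_llm_request(text, key, base, model)
#   resp = requests.post(url, headers=headers, json=payload, timeout=30)
#   resp.raise_for_status()
```

Because the key is confined to the headers dict, nothing secret appears in the process list or shell history; the transcript still leaves the machine, which is inherent to the feature.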
If the LLM call used a secure HTTP client (no API key on the command line) and the setup.sh DB step was corrected, confidence would increase toward benign.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97e6et00h4f01zc4208p1m8cs848a5k


Runtime requirements

🎤 Clawdis
Bins: python3, whisper, ffmpeg
