Percept Speaker ID

v1.0.0

Identifies and tracks speakers in multi-person conversations, mapping speaker labels to names and managing voice command authorization levels.

0 · 441 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description (speaker identification, mapping labels to names, authorization gating) line up with the SKILL.md. The skill expects percept-listen and an Omi pendant — both are plausible dependencies for this feature and explain why no other credentials or binaries are required.
Instruction Scope
SKILL.md stays on-topic: it describes how transcripts with SPEAKER_x labels are resolved using a local speakers registry and how the is_user flag is used. It references a local JSON registry (percept/data/speakers.json) and a local dashboard for management; these are consistent with the stated behaviour and do not ask the agent to read unrelated system data.
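The resolution flow described here can be sketched in a few lines. This is illustrative only: the registry schema, field names, and transcript line format below are assumptions, not taken from the skill's actual files.

```python
import json

# Hypothetical registry shape, assumed for illustration:
# {"SPEAKER_0": {"name": "Alice", "is_user": true}, ...}
def resolve_speakers(transcript_lines, registry_path="percept/data/speakers.json"):
    """Map SPEAKER_x labels in transcript lines to registered names.

    Returns a list of (name, text, is_user) tuples; unknown labels
    fall back to the raw SPEAKER_x label with is_user=False.
    """
    with open(registry_path) as f:
        registry = json.load(f)
    resolved = []
    for line in transcript_lines:
        label, _, text = line.partition(": ")
        entry = registry.get(label, {})
        resolved.append((entry.get("name", label), text, entry.get("is_user", False)))
    return resolved
```

Nothing here touches the network or data outside the registry file, which is consistent with the scope the SKILL.md claims.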
Install Mechanism
No install spec and no code files are present, which matches the skill being instruction-only. There are no downloads or package installs referenced in the instructions.
Credentials
The skill requests no environment variables, credentials, or external secrets. Its use of a local data file and a companion skill/hardware is proportionate to its function.
Persistence & Privilege
always:false and default model invocation settings are reasonable. The skill does not request permanent or elevated platform privileges, nor does it claim to modify other skills or system-wide settings.
Assessment
This skill appears internally consistent, but before installing consider:

1. It assumes percept-listen and an Omi pendant; verify you actually run those components.
2. The speaker registry is a local JSON file (percept/data/speakers.json); make sure only intended, non-sensitive mappings are stored there and check file permissions.
3. Management is via a local dashboard on port 8960; ensure that dashboard is not exposed to untrusted networks.
4. Future voice-embedding functionality mentions pyannote but is not implemented; if that feature is added later it will require additional packages and potentially model/data downloads.
5. Verify the referenced GitHub repo (https://github.com/GetPercept/percept) yourself if you want source confirmation.

If any of these assumptions do not hold in your environment, treat the skill as incompatible or require additional review.
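The file-permission check in point (2) can be automated. A minimal sketch, assuming the default registry path mentioned in the skill:

```python
import os
import stat

def registry_is_private(path="percept/data/speakers.json"):
    """Return True if the registry exists and is readable/writable
    only by its owner (no group or world access bits set)."""
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        return False
    group_other = stat.S_IRGRP | stat.S_IWGRP | stat.S_IROTH | stat.S_IWOTH
    return not (mode & group_other)
```

Run it from the skill's install directory, or pass the registry path explicitly.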

Like a lobster shell, security has layers — review code before you run it.

latest: vk974trvr4eh4mr3zenh158q7y181njg5

