Percept Ambient
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
This skill asks for always-on background capture and long-term indexing of ambient conversations, with unclear consent, retention, and external embedding boundaries.
Only install this skill if you intentionally want an always-on ambient conversation memory system. Before enabling it: verify the companion skills, confirm microphone consent and recording indicators, choose local-only embeddings where possible, set a short retention window, and test that the pause, delete, and export controls actually work.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
1. Private or offhand conversations could be stored, searched, and used to influence future agent behavior even when the user did not explicitly provide that context for a task.
The skill proposes persistent summaries, embeddings, and retrieval over all ambient conversations, then reuses that context across future agent actions without clearly bounded sources, approvals, or trust handling.
Evidence: "All conversations are continuously captured and summarized ... Context packets assembled on demand for any agent action ... Full-text search (FTS5) + vector search (LanceDB) for retrieval"
Recommendation: Require explicit opt-in, visible recording status, source exclusions, per-action confirmation before using ambient context, clear retention defaults, and treatment of overheard content as untrusted.
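To make the recommended per-action confirmation concrete, here is a minimal sketch of the retrieval surface this finding describes: ambient utterances indexed in SQLite FTS5, with a mandatory confirmation gate before any context packet is released to an agent action. The table, function names, and gating flag are illustrative assumptions, not taken from the skill's actual implementation.

```python
import sqlite3

def build_index(utterances):
    # Hypothetical local store: ambient utterances in an FTS5 virtual table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE VIRTUAL TABLE utterances USING fts5(text)")
    db.executemany("INSERT INTO utterances(text) VALUES (?)",
                   [(u,) for u in utterances])
    return db

def context_packet(db, query, user_confirmed):
    # Per-action confirmation: without it, no ambient context leaves the store.
    if not user_confirmed:
        return []
    rows = db.execute(
        "SELECT text FROM utterances WHERE utterances MATCH ?", (query,))
    return [r[0] for r in rows]

db = build_index(["we should rotate the API keys on friday",
                  "lunch at noon works for me"])
print(context_packet(db, "keys", user_confirmed=False))  # → []
print(context_packet(db, "keys", user_confirmed=True))
```

The point of the sketch is that the confirmation check sits in front of retrieval itself, not in the agent's prompt, so overheard content cannot reach an action without an explicit decision.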
2. The agent may continue learning from conversations after the immediate task is over, creating privacy and consent risks for the user and nearby people.
The documented behavior is long-running background monitoring rather than a bounded user-invoked task, and the artifacts do not clearly define stop controls, recording indicators, or consent boundaries for people being captured.
Evidence: "Runs in the background ... passively learns context from ambient speech ... without needing explicit commands"
Recommendation: Provide clear enable/disable controls, microphone indicators, session-based activation, consent guidance, and a reliable purge/export mechanism before using this skill.
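A purge control with a short retention default can be sketched as follows, assuming a local SQLite store with a `captured_at` epoch-seconds column. The schema and retention value are assumptions for illustration; the reviewed artifacts do not define either.

```python
import sqlite3
import time

RETENTION_SECONDS = 7 * 24 * 3600  # assumed short default: 7 days

def purge_expired(db, now=None):
    # Delete every utterance older than the retention window and report
    # how many rows were removed, so the purge is verifiable by the user.
    now = time.time() if now is None else now
    cur = db.execute("DELETE FROM utterances WHERE captured_at < ?",
                     (now - RETENTION_SECONDS,))
    db.commit()
    return cur.rowcount

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE utterances (text TEXT, captured_at REAL)")
now = time.time()
db.executemany("INSERT INTO utterances VALUES (?, ?)",
               [("old remark", now - 30 * 24 * 3600),
                ("recent remark", now - 3600)])
print(purge_expired(db, now))  # → 1
```

Returning the deleted-row count matters: a purge mechanism the user cannot verify is exactly the gap this finding flags.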
3. Conversation transcripts or derived text could be sent outside the local machine for embedding unless the user explicitly configures a local NIM endpoint.
The primary embedding path may involve an external provider for utterance embeddings, while the privacy section only states that data is stored locally; the data boundary and credential requirements are not clearly disclosed.
Evidence: "Semantic search over utterances using NVIDIA NIM embeddings (primary) with all-MiniLM-L6-v2 as offline fallback"
Recommendation: Clarify whether NVIDIA NIM is local or remote, make offline/local embedding the default for ambient transcripts, declare any required credentials, and obtain explicit consent before sending transcript-derived data to a provider.
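The "local by default" policy this recommendation asks for reduces to a small piece of provider-selection logic: the remote path is unreachable unless the user has explicitly opted in. The configuration key and provider labels below are hypothetical; the skill's real configuration surface is not disclosed in the reviewed artifacts.

```python
# Assumed provider labels for illustration only.
LOCAL_PROVIDER = "all-MiniLM-L6-v2 (on-device)"
REMOTE_PROVIDER = "NVIDIA NIM endpoint (network)"

def select_embedding_provider(config):
    # Ambient transcripts never leave the machine unless the user has
    # explicitly opted in to a remote endpoint; absence of the key,
    # or any non-True value, falls through to the local model.
    if config.get("remote_embeddings_opt_in") is True:
        return REMOTE_PROVIDER
    return LOCAL_PROVIDER

print(select_embedding_provider({}))                                 # local
print(select_embedding_provider({"remote_embeddings_opt_in": True}))  # remote
```

Inverting the documented default (remote primary, local fallback) into local-primary is the whole fix: misconfiguration then fails closed rather than leaking transcripts.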
4. The actual microphone capture, transcript generation, dashboard, and retention controls are implemented outside this reviewed artifact.
The high-sensitivity capture and summarization behavior depends on other skills that are not included in the provided artifact set; this is not proof of harm, but users cannot verify those components here.
Evidence: "Requirements: - percept-listen skill installed and running - percept-summarize skill installed"
Recommendation: Review the referenced Percept skills and their install sources, permissions, microphone handling, storage paths, and network behavior before enabling ambient mode.
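One way to act on this recommendation is to pin each companion skill's package to a digest you reviewed out of band, and refuse to enable ambient mode on a mismatch. The digests and byte payloads below are placeholders, not real Percept artifacts.

```python
import hashlib

# Hypothetical pins, produced by hashing packages you reviewed by hand.
PINNED = {
    "percept-listen": "sha256:" + hashlib.sha256(b"listen-v1").hexdigest(),
    "percept-summarize": "sha256:" + hashlib.sha256(b"summarize-v1").hexdigest(),
}

def verify_skill(name, package_bytes):
    # Recompute the package digest and compare against the reviewed pin;
    # an unknown skill name or changed bytes both fail the check.
    digest = "sha256:" + hashlib.sha256(package_bytes).hexdigest()
    return digest == PINNED.get(name)

print(verify_skill("percept-listen", b"listen-v1"))  # → True
print(verify_skill("percept-listen", b"tampered"))   # → False
```

This does not make the unreviewed components safe, but it does ensure the code you audited is the code that runs.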
