Percept Ambient

Verdict: Warn. Audited by ClawScan on May 10, 2026.

Overview

This skill requests always-on background capture and long-term indexing of ambient conversations, with unclear boundaries around consent, retention, and external embedding of transcript data.

Only install this if you intentionally want an always-on ambient conversation memory system. Before enabling it, verify the companion skills, confirm microphone consent and recording indicators, choose local-only embeddings if possible, set short retention, and test that pause, delete, and export controls work.
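The pre-enable checks above can be sketched as a small preflight script. Every config key here (`embeddings_backend`, `retention_days`, `recording_indicator`, `tested_controls`) is hypothetical; Percept defines no such schema. This only illustrates what a cautious review might assert before enabling the skill.

```python
# Hypothetical pre-enable checklist for an ambient-capture skill.
# None of these config keys come from Percept; they model the checks
# recommended above: local embeddings, short retention, a visible
# recording indicator, and verified pause/delete/export controls.

def preflight(config: dict) -> list[str]:
    """Return a list of problems that should block enabling the skill."""
    problems = []
    if config.get("embeddings_backend") != "local":
        problems.append("embeddings are not local-only")
    if config.get("retention_days", 0) > 7:
        problems.append("retention exceeds a short default (7 days)")
    if not config.get("recording_indicator"):
        problems.append("no visible recording indicator")
    for control in ("pause", "delete", "export"):
        if control not in config.get("tested_controls", []):
            problems.append(f"{control} control not verified")
    return problems

safe = {
    "embeddings_backend": "local",
    "retention_days": 3,
    "recording_indicator": True,
    "tested_controls": ["pause", "delete", "export"],
}
print(preflight(safe))  # → []
```

An empty result means no blocking issues were found; anything else should be resolved before the skill is enabled.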

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Ambient context reused across future agent actions

What this means

Private or offhand conversations could be stored, searched, and used to influence future agent behavior even when the user did not explicitly provide that context for a task.

Why it was flagged

The skill proposes persistent summaries, embeddings, and retrieval over all ambient conversations, then reuses that context across future agent actions without clearly bounded sources, approvals, or trust handling.

Skill content
All conversations are continuously captured and summarized ... Context packets assembled on demand for any agent action ... Full-text search (FTS5) + vector search (LanceDB) for retrieval
Recommendation

Require explicit opt-in, visible recording status, source exclusions, per-action confirmation before using ambient context, clear retention defaults, and treatment of overheard content as untrusted.
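The per-action confirmation and untrusted-by-default handling recommended above can be sketched as a gate over context packets. The `ContextPacket` type, the source-exclusion set, and the `confirm` callback are all illustrative assumptions; Percept exposes no such API.

```python
# Sketch of a per-action gate over ambient context. Nothing here is
# Percept's actual API; it models the recommendation: exclude sources,
# confirm each use, and keep overheard content marked as untrusted.

from dataclasses import dataclass

@dataclass
class ContextPacket:
    source: str
    text: str
    trusted: bool = False  # overheard content stays untrusted by default

def assemble_context(packets, excluded_sources, confirm):
    """Include an ambient packet only if its source is allowed and the
    user explicitly confirms its use for this specific action."""
    approved = []
    for p in packets:
        if p.source in excluded_sources:
            continue  # excluded sources never reach the agent
        if confirm(p):
            approved.append(p)  # approved for this action, still untrusted
    return approved

packets = [
    ContextPacket("meeting-room", "discussed Q3 planning"),
    ContextPacket("personal-call", "private matter"),
]
ctx = assemble_context(packets, {"personal-call"}, confirm=lambda p: True)
```

The key design point is that confirmation grants use for one action only, and approval never upgrades overheard content to trusted input.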

Finding 2: Always-on background monitoring without clear stop controls

What this means

The agent may continue learning from conversations after the immediate task is over, creating privacy and consent risks for the user and nearby people.

Why it was flagged

The documented behavior is long-running background monitoring rather than a bounded user-invoked task, and the artifacts do not clearly define stop controls, recording indicators, or consent boundaries for people being captured.

Skill content
Runs in the background ... passively learns context from ambient speech ... without needing explicit commands
Recommendation

Provide clear enable/disable controls, microphone indicators, session-based activation, consent guidance, and a reliable purge/export mechanism before using this skill.
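The session-based activation and purge mechanism recommended above might look like the following minimal sketch. The `AmbientSession` class and its method names are assumptions for illustration, not part of Percept.

```python
# Illustrative session-scoped capture store with explicit enable/disable,
# purge, and export. Class and method names are assumptions; the point is
# that nothing is recorded outside an explicitly started session.

class AmbientSession:
    def __init__(self):
        self.active = False
        self.transcripts = []

    def start(self):
        self.active = True  # a real UI would also show a mic indicator here

    def stop(self):
        self.active = False

    def capture(self, utterance: str) -> bool:
        if not self.active:
            return False  # nothing is recorded outside an active session
        self.transcripts.append(utterance)
        return True

    def purge(self):
        self.transcripts.clear()  # reliable, user-triggered deletion

    def export(self) -> list[str]:
        return list(self.transcripts)  # user-facing copy of stored data
```

Session-scoped activation replaces the "runs in the background ... without needing explicit commands" behavior with a bounded, user-invoked task.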

Finding 3: Possible external embedding of transcript-derived data

What this means

Conversation transcripts or derived text could be sent outside the local machine for embedding unless the user explicitly configures a local NIM endpoint.

Why it was flagged

The primary embedding path may involve an external provider for utterance embeddings, while the privacy section only states that data is stored locally; the data boundary and credential requirements are not clearly disclosed.

Skill content
Semantic search over utterances using NVIDIA NIM embeddings (primary) with all-MiniLM-L6-v2 as offline fallback
Recommendation

Clarify whether NVIDIA NIM is local or remote, make offline/local embedding the default for ambient transcripts, declare any required credentials, and obtain explicit consent before sending transcript-derived data to a provider.
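A local-by-default backend selection, as recommended above, can be sketched as follows. The backend names mirror the skill's own description (NVIDIA NIM primary, all-MiniLM-L6-v2 fallback), but the selection logic, consent flag, and credential key are assumptions.

```python
# Sketch of choosing an embedding backend for ambient transcripts.
# Remote embedding requires both an explicit opt-in and a declared
# credential; everything else falls back to the local model. The
# config keys are hypothetical.

def choose_embedding_backend(config: dict) -> str:
    remote_ok = (
        config.get("allow_remote_embeddings") is True  # explicit consent
        and config.get("nim_api_key")                  # declared credential
    )
    if config.get("backend") == "nim" and remote_ok:
        return "nvidia-nim"
    # Default: local sentence-transformers model; no transcript data
    # leaves the machine.
    return "all-MiniLM-L6-v2"
```

Note that merely requesting the NIM backend is not enough: without the consent flag and credential, the selection silently stays local rather than failing open to a remote provider.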

Finding 4: Core capture behavior lives in unreviewed companion skills

What this means

The actual microphone capture, transcript generation, dashboard, and retention controls are implemented outside this reviewed artifact.

Why it was flagged

The high-sensitivity capture and summarization behavior depends on other skills that are not included in the provided artifact set; this is not proof of harm, but users cannot verify those components here.

Skill content
Requirements:
- percept-listen skill installed and running
- percept-summarize skill installed
Recommendation

Review the referenced Percept skills and their install sources, permissions, microphone handling, storage paths, and network behavior before enabling ambient mode.
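The companion-skill check recommended above can be sketched as a simple gate before enabling ambient mode. The required skill names come from the artifact's requirements list; the registry shape and the `reviewed` flag are assumptions.

```python
# Sketch of verifying companion skills before enabling ambient mode.
# "percept-listen" and "percept-summarize" are named in the skill's
# requirements; the installed-skill registry format is hypothetical.

REQUIRED_SKILLS = {"percept-listen", "percept-summarize"}

def can_enable_ambient(installed: dict) -> tuple[bool, list[str]]:
    """installed maps skill name -> metadata dict with a (hypothetical)
    'reviewed' flag set after the user audits its source and permissions."""
    blockers = []
    for name in sorted(REQUIRED_SKILLS):
        info = installed.get(name)
        if info is None:
            blockers.append(f"{name} not installed")
        elif not info.get("reviewed"):
            blockers.append(f"{name} install source not reviewed")
    return (not blockers, blockers)
```

The gate refuses to enable ambient mode until each dependency is both present and explicitly reviewed, matching the recommendation to audit install sources, permissions, and network behavior first.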