Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Percept Ambient

v1.0.0

Continuously captures and summarizes ambient conversations to build a local knowledge graph for context-aware assistance without explicit commands.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the prompt exactly as shown, then paste it into OpenClaw to install jarvis563/percept-ambient.

Prompt preview: Install & Setup
Install the skill "Percept Ambient" (jarvis563/percept-ambient) from ClawHub.
Skill page: https://clawhub.ai/jarvis563/percept-ambient
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install percept-ambient

ClawHub CLI


npx clawhub@latest install percept-ambient
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description (continuous ambient capture / knowledge graph) align with the SKILL.md. It correctly references complementary skills (percept-listen, percept-summarize) needed to capture and summarize audio. However, the skill claims components and runtime services (LanceDB, NVIDIA NIM embeddings, FTS5, local dashboard on port 8960) without declaring required binaries, credentials, or host requirements (GPU, model access, storage). Those undeclared resource/credential needs are a material mismatch.
Instruction Scope
Instructions direct passive, continuous capture of conversations and building of searchable transcripts and entity graphs — a significant privacy action. The SKILL.md describes assembling context packets and serving a dashboard, but provides no enforcement or explicit steps for obtaining user consent, access controls for the dashboard/API, transcript retention/encryption, or how 'no audio stored' is guaranteed. As an instruction-only skill, there is nothing in the package ensuring the described privacy guarantees.
Install Mechanism
No install spec (instruction-only), which reduces direct supply-chain risk from this package itself. But the instructions rely on external components (LanceDB, NVIDIA NIM embeddings, FTS5, percept-listen/summarize skills) and a GitHub project link; those components may require downloads, GPU support, or network access that are not documented here. The lack of an install spec leaves those potentially risky installs out-of-band and unvetted.
Credentials
Registry metadata lists no required env vars or credentials, yet SKILL.md references NVIDIA NIM embeddings (likely requiring model access, a server, or credentials) and a local HTTP dashboard. The skill will write transcripts and vectors to local storage (SQLite + LanceDB) but does not declare config paths, encryption settings, or retention parameters beyond saying 'TTL auto-purge (configurable)'. That mismatch between claimed operations and declared environment/access is disproportionate and under-specified.
Persistence & Privilege
Although always:false, the skill's purpose is continuous background listening and context accumulation — a persistent capability with high privacy impact. Autonomous invocation is allowed (platform default), meaning an agent could run this continuously without repeated explicit user prompts. There are no explicit safeguards in the instructions to limit when or how long listening runs, or to require per-session consent.
What to consider before installing
This skill enables always-on passive capture of conversations and creates searchable transcripts and vectors — a high privacy and attack-surface item. Before installing, do the following:

  1. Review and audit the referenced components (percept-listen, percept-summarize, LanceDB, any embedding provider) — inspect their source code and network behavior.
  2. Confirm how transcripts are stored: where files live, whether they are encrypted at rest, and whether TTL/auto-purge is actually implemented and enforced.
  3. Verify whether NVIDIA NIM embeddings require remote API access, credentials, or GPU hardware; the skill does not declare these needs.
  4. Ensure the dashboard is bound to localhost, protected by authentication, and not inadvertently exposed to the LAN/Internet.
  5. Require explicit, revocable user consent and per-session controls for listening; log and surface when recording/transcription occurs.
  6. If you cannot audit the complementary skills and runtime components, avoid installing this skill — consider alternatives that explicitly document consent, storage, encryption, and data-flow guarantees.
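The dashboard check in item 4 can be done mechanically. A minimal sketch, assuming the dashboard's bind address can be read from its (undocumented) configuration; the address values and function name here are illustrative, not part of the skill:

```python
# Minimal sketch for check 4: verify the dashboard's bind address is
# loopback-only before trusting the "local only" claim. The address
# values and function name are illustrative; percept-ambient does not
# document where its bind address is configured.

LOOPBACK_ADDRS = {"127.0.0.1", "::1", "localhost"}

def is_loopback_only(bind_addr: str) -> bool:
    """True if the dashboard only accepts connections from this machine."""
    return bind_addr.lower() in LOOPBACK_ADDRS

# 0.0.0.0 (or ::) listens on every interface, exposing the transcript
# search API to the whole LAN.
for addr in ("127.0.0.1", "0.0.0.0"):
    status = "ok" if is_loopback_only(addr) else "EXPOSED beyond localhost"
    print(f"{addr}: {status}")
```

Binding to 0.0.0.0 would hand the transcript search API to every host on the network, which is exactly the failure mode this check guards against.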

Like a lobster shell, security has layers — review code before you run it.

latest: vk97drt2gmgpkyt37jbtmf209g981n9g7
589 downloads
0 stars
1 version
Updated 15h ago
v1.0.0
MIT-0

percept-ambient

Ambient intelligence mode — continuous context awareness without explicit commands.

What it does

Runs in the background, building a knowledge graph of conversations, entities, and relationships over time. Your agent passively learns context from ambient speech — who you talk to, what projects are active, what decisions were made — without needing explicit commands.

When to use

  • User wants always-on context awareness
  • Agent needs background knowledge from daily conversations
  • User asks "what do you know about [person/project]?" based on overheard context

Requirements

  • percept-listen skill installed and running
  • percept-summarize skill installed (for entity extraction)

How it works

  1. All conversations are continuously captured and summarized
  2. Entities (people, companies, projects, topics) extracted automatically
  3. Relationships mapped between entities (works_on, client_of, mentioned_with)
  4. Context packets assembled on demand for any agent action
  5. Full-text search (FTS5) + vector search (LanceDB) for retrieval
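Steps 2 and 3 above can be sketched as a tiny in-memory edge store. This is a hypothetical illustration: the real extraction is delegated to percept-summarize, and the entity names and relation labels below are stand-ins.

```python
# Hypothetical sketch of steps 2-3: record entities extracted by
# percept-summarize and map relationships between them. The entity
# names and relation labels below are illustrative stand-ins.
from collections import defaultdict

edges = defaultdict(set)  # entity -> {(relation, other_entity), ...}

def add_relationship(subject: str, relation: str, obj: str) -> None:
    """Store a directed edge, e.g. ('Alice', 'works_on', 'Project Apollo')."""
    edges[subject].add((relation, obj))

def neighbors(entity: str) -> set:
    """All entities directly linked from the given entity."""
    return {obj for _, obj in edges[entity]}

# Edges a summarizer might emit for one overheard conversation:
add_relationship("Alice", "works_on", "Project Apollo")
add_relationship("Acme Corp", "client_of", "Project Apollo")
add_relationship("Alice", "mentioned_with", "Acme Corp")

print(sorted(neighbors("Alice")))  # ['Acme Corp', 'Project Apollo']
```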

Context packets

When your agent needs context, Percept assembles a Context Packet:

{
  "recent_conversations": [...],
  "resolved_entities": [...],
  "relationships": [...],
  "relevant_history": [...]
}

This gives the agent rich situational awareness without loading entire conversation histories.
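A minimal sketch of how such a packet might be assembled, modeled on the JSON shape above. The in-memory store and its field names are assumptions; the actual skill reads from SQLite (FTS5) and LanceDB.

```python
# Minimal sketch of context-packet assembly, modeled on the JSON shape
# above. The in-memory store and its field names are assumptions; the
# real skill reads from SQLite (FTS5) and LanceDB.

def assemble_context_packet(store: dict, query: str, limit: int = 5) -> dict:
    q = query.lower()
    return {
        "recent_conversations": store["conversations"][-limit:],
        "resolved_entities": [e for e in store["entities"] if e.lower() in q],
        "relationships": store["relationships"],
        "relevant_history": [c for c in store["conversations"] if q in c.lower()],
    }

store = {
    "conversations": ["Discussed the Apollo deadline with Alice."],
    "entities": ["Alice", "Apollo"],
    "relationships": [("Alice", "works_on", "Apollo")],
}
packet = assemble_context_packet(store, "Apollo")
print(packet["resolved_entities"])  # ['Apollo']
```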

Vector search

Semantic search over utterances using NVIDIA NIM embeddings (primary) with all-MiniLM-L6-v2 as offline fallback. Stored in LanceDB (local, zero-infra).

# Search via dashboard (port 8960) or API; quote the URL so the shell
# does not treat "&" as a background operator
curl "localhost:8960/api/search?q=project+deadline&mode=hybrid"
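The mode=hybrid parameter implies the FTS5 and LanceDB result lists are merged somehow; reciprocal rank fusion is one common strategy. Assuming RRF (the skill does not document its merge), a sketch:

```python
# Sketch of hybrid-search merging via reciprocal rank fusion (RRF).
# Whether percept-ambient actually uses RRF is an assumption; it only
# documents FTS5 + LanceDB retrieval with a "hybrid" mode.

def rrf_merge(fts_ranked: list, vector_ranked: list, k: int = 60) -> list:
    """Combine two ranked ID lists; items high in either list score well."""
    scores = {}
    for ranked in (fts_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fts = ["u3", "u1", "u7"]    # keyword matches from FTS5
vec = ["u1", "u9", "u3"]    # nearest neighbors from LanceDB
print(rrf_merge(fts, vec))  # u1 and u3 rise to the top
```

The constant k damps the influence of any single list, so utterances found by both keyword and vector search outrank ones found by only one.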

Privacy controls

  • All data stored locally in SQLite + LanceDB
  • TTL auto-purge (configurable retention periods)
  • No audio stored — only transcripts
  • Dashboard → Settings → Privacy for granular controls
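TTL auto-purge over the SQLite store can be sketched as a periodic DELETE. The table name, schema, and 7-day window below are assumptions; the skill only says retention is "configurable" without documenting where:

```python
# Sketch of a TTL auto-purge pass over a transcript table. The table
# name, schema, and 7-day retention window are assumptions; the skill
# does not document its retention configuration.
import sqlite3
import time

RETENTION_SECONDS = 7 * 24 * 3600  # hypothetical 7-day retention

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE transcripts (id INTEGER PRIMARY KEY, text TEXT, created_at REAL)"
)
now = time.time()
db.execute("INSERT INTO transcripts (text, created_at) VALUES (?, ?)",
           ("old", now - 30 * 24 * 3600))
db.execute("INSERT INTO transcripts (text, created_at) VALUES (?, ?)",
           ("fresh", now))

def purge_expired(conn: sqlite3.Connection, now: float) -> int:
    """Delete transcripts older than the retention window; return count."""
    cur = conn.execute("DELETE FROM transcripts WHERE created_at < ?",
                       (now - RETENTION_SECONDS,))
    conn.commit()
    return cur.rowcount

print(purge_expired(db, now))  # 1 row purged
```

Before trusting the privacy claim, verify that the installed skill actually runs a pass like this on the schedule its settings promise.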

Real-time dashboard

Monitor ambient intelligence at http://localhost:8960:

  • Live conversation feed
  • Entity graph visualization
  • Search across all conversations
  • Analytics and usage stats
