Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

sam-faces

v1.1.0

Face recognition and identity memory for AI assistants. Enroll known people with reference photos, then automatically identify faces in inbound images — with...

1 ★ · 88 downloads · 1 current · 1 all-time
by Sam Cox (@jasonacox-sam)

Install


Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jasonacox-sam/sam-faces.

Prompt preview: Install & Setup
Install the skill "sam-faces" (jasonacox-sam/sam-faces) from ClawHub.
Skill page: https://clawhub.ai/jasonacox-sam/sam-faces
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: sam-faces
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.


CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install sam-faces

ClawHub CLI


npx clawhub@latest install sam-faces
Security Scan
VirusTotal: Benign · View report →
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name and description match the required binary (sam-faces) and CLI invocations. Asking for a sam-faces binary is coherent for face recognition. However, the skill references a workspaceDir and persistent databases for encodings without declaring any required config/path or permissions; that is an unstated requirement.
Instruction Scope
Runtime instructions tell the agent to automatically identify faces when users send images and to prepend the produced llm_context (names, confidences, positions) to image descriptions used in LLM prompts. That creates a clear privacy/exfiltration vector: even though inference is local, identity data would be injected into LLM prompts, which may be sent to remote model APIs. The instructions also assume the agent can write temp files and write to {workspaceDir}/faces/, yet the skill does not declare or request explicit file paths or permissions.
Install Mechanism
The registry lists no install spec, but SKILL.md suggests a pip install (PyPI package 'sam-faces'). Installing from PyPI is plausible for this tool, but pip installs can run arbitrary package install scripts. Because the pip step exists only as an instruction rather than a declared install spec, installers may not automatically vet or run it; review the PyPI package source before installing.
Credentials
No environment variables or credentials are requested (which is appropriate). However, the skill stores face encodings and unknown face crops on-disk ({workspaceDir}/faces/people.db and /faces/unknown/) yet doesn't declare or require a workspace path or explicit file permissions. The combination of stored identities plus the instruction to include identities in LLM context is disproportionate without explicit safeguards or consent controls.
Persistence & Privilege
The skill sets always:false (no forced always-on presence), and autonomous invocation is allowed by default. Autonomous invocation combined with automatic face identification is potentially risky: the agent could insert PII into prompts without per-message consent. This is not a configuration bug by itself, but it amplifies privacy risk and should be managed via user-facing controls (e.g., opt-in, per-image consent).
What to consider before installing
Before installing or enabling this skill, consider the following:

  • Review the sam-faces PyPI project and source code before pip installing (pip packages can run arbitrary code at install and runtime; see the snippet below).
  • Understand where face data will be stored on disk (the SKILL.md references {workspaceDir}/faces/) and ensure that location is acceptable and secured; the skill does not declare or request these paths explicitly.
  • The skill's inference is local, but it instructs the agent to prepend identified names into LLM prompts (llm_context). If your assistant uses remote LLM APIs, that will transmit personally identifiable information to third parties; disable automatic injection or require explicit user consent per image.
  • If you need stricter privacy, require explicit user confirmation before identification or enrollment, and audit what is being sent to models or external services.
  • If you proceed, install only on trusted devices, limit filesystem access to a dedicated workspace, and consider the legal/privacy implications of storing and processing biometric identifiers in your jurisdiction.
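
One way to review the package is to fetch it without installing, using pip's download command. A sketch (the download directory name is arbitrary; note that an sdist-only package may still execute build code during download, so reading the repository source is safest):

pip download sam-faces --no-deps -d ./sam-faces-review

Unpack and read the source before running pip install.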

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

👤 Clawdis
Bins: sam-faces
Latest: vk975cznte183ceq6fe9wgjmkyn85nqhg
88 downloads · 1 star · 7 versions
Updated 1d ago
v1.1.0 · MIT-0

sam-faces

Give your OpenClaw agent a real face memory. Enroll people with reference photos, then automatically identify them in inbound images.

Automatic behavior

When the user intentionally sends a photo or image to the assistant:

The user sharing an image with their AI colleague is an act of intentional sharing — they want the assistant to see and understand what's in the image. In this context, face identification should happen automatically as part of understanding the image content.

  1. Save the image to a temp path.
  2. Run face identification: sam-faces identify <path>
  3. Parse the JSON response.
  4. If face_count > 0, prepend llm_context to your image description.
  5. If any face has "unknown": true, ask the user who they are, then offer to enroll them.

This should happen as part of normal image processing — the user has already chosen to share the image.
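
Concretely, the steps above map onto a few lines of shell. The following is only a sketch, not part of the skill: it assumes jq is available for JSON parsing, and the image path and description string are placeholders.

img=/tmp/inbound.jpg                            # step 1: image saved to a temp path
description="a photo shared by the user"        # produced by normal image understanding
result=$(sam-faces identify "$img")             # step 2: run identification
count=$(echo "$result" | jq '.face_count')      # step 3: parse the JSON
if [ "$count" -gt 0 ]; then
  # step 4: prepend the identity context to the image description
  description="$(echo "$result" | jq -r '.llm_context') $description"
fi
# step 5: surface any unknown faces so the user can be asked who they are
echo "$result" | jq -r '.faces[] | select(.unknown) | .position_desc'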

Installation

pip install sam-faces

Or use the OpenClaw skill installer:

openclaw skills install sam-faces

Setup

The sam-faces command is available after installation:

sam-faces --help

Enroll a person

sam-faces enroll --name "Jane Smith" --photo /path/to/photo.jpg
  • One reference photo is usually enough (default threshold: 0.55).
  • Enroll 2–3 photos across different lighting for best accuracy (a loop for this is sketched below).
  • Encodings are stored in {workspaceDir}/faces/people.db.
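
Enrolling several reference photos is just repeated calls; the filenames here are hypothetical.

for photo in jane-office.jpg jane-outdoors.jpg jane-lowlight.jpg; do
  sam-faces enroll --name "Jane Smith" --photo "$photo"
done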

Identify faces

sam-faces identify /path/to/image.jpg

Returns JSON with names, confidence scores, bounding boxes, and an llm_context string:

{
  "face_count": 2,
  "faces": [
    {
      "name": "Jane Smith",
      "confidence": 0.646,
      "unknown": false,
      "bounding_box": {
        "top": 220,
        "right": 340,
        "bottom": 350,
        "left": 210
      },
      "center": [275, 285],
      "position_desc": "middle-left"
    }
  ],
  "llm_context": "2 faces detected: Jane Smith (at 22% left, 33% down, 64% confidence); John Smith (at 92% left, 31% down, 57% confidence)."
}
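
Because the output is plain JSON, it pipes cleanly into standard tooling. For example, assuming jq is installed:

sam-faces identify /path/to/image.jpg | jq -r '.llm_context'

sam-faces identify /path/to/image.jpg | jq -r '.faces[] | select(.unknown | not) | "\(.name): \(.confidence)"'

The first command prints only the context string; the second lists recognized (non-unknown) faces with their confidence scores.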

Visualize faces (draw bounding boxes + labels)

sam-faces visualize /path/to/image.jpg

Creates image_faces.jpg with boxes and name labels overlaid.

sam-faces visualize /path/to/image.jpg -o /path/to/output.jpg

List enrolled people

sam-faces list

Manage unknown faces

sam-faces unknowns

Shows all unknown face crops waiting to be enrolled.
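
Since unknown crops are saved under {workspaceDir}/faces/unknown/ (see Notes below), enrollment can reuse a saved crop once the user names the person. A sketch; the crop filename is hypothetical, and $WORKSPACE_DIR stands in for your actual workspace path:

sam-faces unknowns
sam-faces enroll --name "John Smith" --photo "$WORKSPACE_DIR/faces/unknown/face_001.jpg"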

Thresholds

  • Default: --threshold 0.55 (good balance of precision and recall)
  • Stricter: --threshold 0.45 — fewer false positives (example below)
  • Looser: --threshold 0.65 — better recall in varied lighting
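
The README does not say which subcommands accept --threshold; assuming identify does, a stricter run would look like this:

sam-faces identify /path/to/image.jpg --threshold 0.45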

Notes

  • All inference runs locally via face_recognition (dlib). Nothing leaves the machine.
  • Database: {workspaceDir}/faces/people.db
  • Unknown face crops saved to: {workspaceDir}/faces/unknown/
  • Works with existing face databases — no migration needed.

When to use

  • User sends a photo with people in it
  • Adding a new person to the face database
  • Checking who is enrolled

When NOT to use

  • Images with no faces (skip automatically)
  • Processing large batches of images (if you must, run one at a time; see the loop below)
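
If you do need to cover many images, invoke the tool sequentially rather than in parallel; the glob below is a placeholder.

for img in ~/photos/*.jpg; do
  sam-faces identify "$img"
done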
