Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Amber Hunter

Amber-Hunter is a local AI memory engine that encrypts, stores, and summarizes OpenClaw/Claude sessions and files for instant recall and optional cloud sync.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 142 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Code and documentation largely match the stated purpose (local capture/recall, AES-256-GCM encryption, optional cloud sync to huper.org, OpenClaw session parsing, vector/keyword search). Use of OS keychain helpers, secret-tool, and curl is expected for this functionality. However, README/SKILL.md repeatedly claim the master password is never written to disk and 'zero user interruption' for proactive capture, while the code includes headless/config.json fallbacks and silent proactive capture — a substantive mismatch between claims and implementation.
Instruction Scope
SKILL.md and README instruct running install.sh, setting LaunchAgents/systemd/cron for proactive capture, and expose endpoints for automatic /ingest and /recall. The package contains proactive scripts that scan OpenClaw session history and workspace files and can write capsules automatically. Although related to the stated purpose, the instructions and included proactive tooling allow silent, periodic capture and background writes (cron/LaunchAgent), which expands scope beyond single-user, on-demand captures and raises privacy concerns.
Install Mechanism
Registry has no formal install spec, but the package includes install.sh, freeze.sh, release scripts and requirements.txt (sentence-transformers, numpy, cryptography). Installing involves pip/OS tooling and may create auto-start agents; the presence of an installer script that the docs tell users to run means arbitrary shell commands will be executed on the host if the user follows docs. That is expected for a local Python service but requires manual review of install.sh before execution.
Credentials
The skill declares no required env variables, which is consistent with the registry metadata, but the code reads/writes credentials from multiple sources: system keychains, AMBER_TOKEN env var, and ~/.amber-hunter/config.json. Critically, core/keychain.py implements fallbacks that place credentials (including in some cases the master_password) into config.json in headless Linux or Windows fallback scenarios—contradicting the repeated claim that master_password 'never' leaves the OS keychain. Optional cloud sync to huper.org is justified by purpose, but storing master credentials in plaintext config.json plus automatic sync increases exfiltration risk if auto-sync is enabled or install scripts are misused.
Persistence & Privilege
The skill is not marked always:true, but it includes proactive components and explicit instructions to install LaunchAgent/systemd/cron jobs that run every 10–15 minutes and capture session data silently. That gives the skill persistent background behavior beyond user-invoked runs. Autonomous agent invocation is allowed by default, which combined with background cron/agents raises the blast radius for unwanted data capture if misconfigured.
What to consider before installing
This package mostly does what its description promises, but there are important mismatches you should address before installing:

  • Inspect install.sh, freeze.sh, and any LaunchAgent/systemd/cron instructions before running them. These scripts can create persistent background jobs that will scan OpenClaw sessions and workspace files automatically.
  • Verify how your master password is stored on your OS: the docs claim it stays in the keychain, but core/keychain.py contains headless and Windows fallbacks that write credentials to ~/.amber-hunter/config.json in plaintext in some environments. Do not enable cloud auto-sync until you are satisfied that the master password and config.json will not be exposed.
  • If you plan to use this on a headless VPS, assume credentials may be written to disk by default; consider manual key management or disabling auto-sync/proactive capture.
  • If you do install, set review_required to true and disable proactive auto-capture if you want explicit user review before any memory is written or synced.
  • Run the service with least privilege (do not run as root) and consider testing in an isolated environment first.

If you want, I can: (1) summarize the contents of install.sh and the proactive scripts line by line, (2) point out the exact lines that write config.json or schedule jobs, or (3) suggest a minimal safe-install checklist for this skill.
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

proactive/hooks/openclaw/handler.js:7
Environment variable access combined with network send.
proactive/hooks/openclaw/handler.ts:21
Environment variable access combined with network send.
proactive/hooks/openclaw/handler.js:65
File read combined with network send (possible exfiltration).
proactive/hooks/openclaw/handler.ts:66
File read combined with network send (possible exfiltration).
proactive/scripts/proactive-check.js:43
File read combined with network send (possible exfiltration).

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.2.2
Download zip
Tags: ai-memory, cross-platform, encryption, freeze, huper, latest, local, memory, privacy, session

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Amber-Hunter Skill

Universal AI memory backend for Huper琥珀. Version: 1.2.1 | 2026-03-31


amber-hunter runs on the user's local machine (Mac / Linux / Windows). Local AI clients communicate via localhost:18998. External AI clients (ChatGPT, Claude.ai) use the cloud API at huper.org/api.


What It Does

Amber-Hunter is the capture and recall layer of Huper琥珀 — a personal memory protocol that works across any AI client and any platform.

  • Free & open — works immediately after install, no account needed
  • Universal capture — works for developers AND everyday life memories
  • AI-initiated writes — any AI can push memories via /ingest; user reviews and approves
  • Active recall — /recall?q=<query> retrieves relevant past memories before responding
  • E2E encrypted — AES-256-GCM, master_password stored in OS keychain, never uploaded
  • Cross-platform — macOS / Windows / Linux (desktop + headless server)
  • Cloud sync — optional, encrypted upload to huper.org for cross-device access

Memory Category System (v1.1.9+)

Huper琥珀 uses a two-level taxonomy: category (8 fixed domains) + tags (specific labels).

The 8 Categories

category | emoji | Label | Covers
thought | 💭 | 想法 (ideas) | Fleeting ideas, insights, eureka moments
learning | 📖 | 学习 (learning) | Reading notes, courses, new knowledge
decision | 🎯 | 决策 (decisions) | Choices made, directions set
reflection | 🌱 | 成长 (growth) | Reflections, reviews, emotional records
people | 🤝 | 关系 (relationships) | Conversations with others, notes about people
life | 🏃 | 生活 (life) | Health, food, daily observations
creative | 🎨 | 创意 (creativity) | Design ideas, things to build
dev | 💻 | 开发 (development) | All developer-specific content (code, errors, APIs, etc.)

Auto-detection Keywords

The system auto-tags based on content keywords. AI clients should also suggest category when calling /ingest:

thought    → "想到", "突然想", "realize", "just thought"
learning   → "读了", "看了", "reading", "book says"
decision   → "决定", "选择了", "decided", "going with"
reflection → "反思", "复盘", "reflecting", "looking back"
people     → "和...聊", "talked to", "met with"
life       → "运动", "睡眠", "sleep", "exercise"
creative   → creative/design keywords
dev        → python/js/git/docker/api/sql/error keywords (all existing dev rules)
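A first-hit keyword scan is one simple way to implement these rules. The sketch below is hypothetical: the keyword table mirrors the rules above, but detect_category is an illustrative helper, and the real matcher (plus its LLM fallback via /classify) may differ.

```python
# Hypothetical keyword-based category detection.
# The keyword lists mirror the auto-detection rules above; the actual
# amber-hunter matcher (and its /classify LLM fallback) may differ.
CATEGORY_KEYWORDS = {
    "thought":    ["想到", "突然想", "realize", "just thought"],
    "learning":   ["读了", "看了", "reading", "book says"],
    "decision":   ["决定", "选择了", "decided", "going with"],
    "reflection": ["反思", "复盘", "reflecting", "looking back"],
    "people":     ["talked to", "met with"],
    "life":       ["运动", "睡眠", "sleep", "exercise"],
}

def detect_category(memo: str, default: str = "thought") -> str:
    """Return the first category whose keyword appears in the memo."""
    lowered = memo.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return category
    return default

print(detect_category("We decided to go with SQLite"))  # → decision
```

AI clients calling /ingest can use a scan like this to suggest a category, while leaving the final say to the server-side classifier.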

Multi-Client Integration Guide

Which endpoint to use

AI Client | Network | Endpoint | Auth
openclaw | localhost | POST /ingest | Bearer token
Claude Code | localhost | POST /ingest | Bearer token
Claude in Cowork | localhost (Desktop Commander) | POST /ingest | Bearer token
ChatGPT | internet (cloud) | POST https://huper.org/api/ingest | User JWT / API key
Claude.ai | internet (cloud) | POST https://huper.org/api/ingest | User JWT / API key

Get the local API token

curl http://localhost:18998/token
# → {"api_key": "ahk_xxxx..."}

What's Worth Capturing — Judgment Rules

Use these rules when deciding whether to call /ingest during a conversation:

Signal | Example | confidence | review_required
User explicitly asks to save | "记住这个" / "save this" | 1.0 | false
Clear decision made | "决定用 SQLite" ("decided on SQLite") / "we're going with plan B" | 0.9 | true
Preference expressed | "我更喜欢..." / "I prefer TypeScript" | 0.85 | true
Personal fact revealed | name, location, job, relationship | 0.8 | true
End-of-conversation summary | AI extracts 1-2 key takeaways | 0.7 | true
AI judges it might be useful | general insight or observation | 0.6 | true

Default behavior: when in doubt, set review_required: true. The user reviews in the dashboard and accepts/rejects. Accepted/rejected history improves future judgment.

Never capture: conversation scaffolding ("can you help me"), ephemeral context ("right now I need"), common knowledge, task details that won't recur.
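Applied mechanically, the signal table above might look like this in client code. This is a hypothetical helper: build_ingest_payload and the signal names are illustrative, while the payload fields match the documented /ingest request format.

```python
# Hypothetical mapping of the judgment-rule table to /ingest payloads.
# Signal names are illustrative; confidence/review values come from
# the table above, and unknown signals default to "when in doubt".
import json

SIGNALS = {
    "explicit_save":  (1.0,  False),
    "clear_decision": (0.9,  True),
    "preference":     (0.85, True),
    "personal_fact":  (0.8,  True),
    "summary":        (0.7,  True),
    "maybe_useful":   (0.6,  True),
}

def build_ingest_payload(memo: str, signal: str, source: str, **extra) -> str:
    """Build a JSON body for POST /ingest; defaults to requiring review."""
    confidence, review = SIGNALS.get(signal, (0.6, True))
    payload = {"memo": memo, "source": source,
               "confidence": confidence, "review_required": review}
    payload.update(extra)
    return json.dumps(payload)

body = build_ingest_payload(
    "User decided to use SQLite", "clear_decision",
    source="claude_code", category="decision", tags="decided")
print(body)
```

The unknown-signal default of (0.6, True) implements the "when in doubt, set review_required: true" rule.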


API Endpoints (v1.1.9)

Core

Endpoint | Method | Auth | Description
/status | GET | none | Service health
/memories | GET | none (localhost) | Local memory snapshot
/token | GET | localhost only | Get local API key
/recall | GET | Bearer / ?token= | Retrieve relevant memories (?q=<query>&limit=3&rerank=true for LLM reranking)
/rerank | POST | Bearer / ?token= | Re-rank memory candidates with LLM; POST body: {query, memories}
/freeze | GET/POST | Bearer / ?token= | Capture current dev session context
/capsules | GET | Bearer | List local capsules
/capsules | POST | Bearer | Create capsule manually
/sync | GET | Bearer / ?token= | Sync to huper.org cloud
/config | GET/POST | Bearer / ?token= | Read/set config (auto_sync etc.)
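Query text containing spaces or CJK characters must be URL-encoded when passed as ?q=. A minimal sketch of building a /recall URL with Python's standard library (recall_url is an illustrative helper, not part of the skill):

```python
from urllib.parse import urlencode

def recall_url(token: str, query: str, limit: int = 3, rerank: bool = False) -> str:
    """Build a GET /recall URL per the endpoint table above."""
    params = {"token": token, "q": query, "limit": limit}
    if rerank:
        params["rerank"] = "true"  # opt-in LLM reranking
    return "http://localhost:18998/recall?" + urlencode(params)

print(recall_url("ahk_xxxx", "database choice", rerank=True))
# → http://localhost:18998/recall?token=ahk_xxxx&q=database+choice&limit=3&rerank=true
```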

New in v1.1.9 — AI Memory Writes

Endpoint | Method | Auth | Description
/ingest | POST | Bearer / ?token= | AI pushes a memory → queue or direct capsule
/queue | GET | Bearer / ?token= | List pending memories awaiting user review
/queue/{id}/approve | POST | Bearer / ?token= | Accept → writes to capsules
/queue/{id}/reject | POST | Bearer / ?token= | Dismiss → status=rejected
/queue/{id}/edit | POST | Bearer / ?token= | Edit then accept → writes modified to capsules

Localhost-only (security restricted)

Endpoint | Method | Description
/master-password | POST | Set master_password (stored in OS keychain)
/bind-apikey | POST | Update huper.org API key in config

/ingest Request Format

POST http://localhost:18998/ingest?token={api_key}
Content-Type: application/json

{
  "memo": "Anke prefers SQLite over Postgres for simpler deployment",
  "context": "During database selection discussion for amber project",
  "category": "decision",
  "tags": "decided",
  "source": "claude_cowork",
  "confidence": 0.9,
  "review_required": true
}

Response:

// Goes to review queue:
{"queued": true, "queue_id": "abc123", "message": "Added to review queue"}

// Written directly (confidence≥0.95 and review_required=false):
{"queued": false, "capsule_id": "xyz456", "message": "Saved directly"}
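A client can branch on the queued flag to distinguish the two documented response shapes. A minimal sketch (handle_ingest_response is illustrative, not part of the skill):

```python
import json

def handle_ingest_response(raw: str) -> str:
    """Dispatch on the two documented /ingest response shapes."""
    resp = json.loads(raw)
    if resp["queued"]:
        # Memory is pending user review in the dashboard queue
        return f"pending review as {resp['queue_id']}"
    # High-confidence memory was written straight to a capsule
    return f"saved as capsule {resp['capsule_id']}"

print(handle_ingest_response(
    '{"queued": true, "queue_id": "abc123", "message": "Added to review queue"}'))
# → pending review as abc123
```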

Usage Patterns by Client

openclaw / Claude Code

# 1. At conversation start — retrieve relevant context
TOKEN=$(curl -s http://localhost:18998/token | python3 -c "import sys,json; print(json.load(sys.stdin)['api_key'])")
curl "http://localhost:18998/recall?token=$TOKEN&q=YOUR_QUERY&limit=3"

# 2. During conversation — push a memory when something worth keeping surfaces
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User decided to use SQLite for simpler ops",
    "category": "decision",
    "tags": "decided",
    "source": "claude_code",
    "confidence": 0.9,
    "review_required": true
  }'

# 3. End of conversation — auto-extract 1-2 key takeaways (confidence=0.7)
curl -X POST "http://localhost:18998/ingest?token=$TOKEN" \
  -d '{"memo":"Summary: ...", "source":"claude_code", "confidence":0.7, "review_required":true}'

Claude in Cowork

Claude in Cowork uses Desktop Commander to call localhost:

# Push a memory
mcp__Desktop_Commander__start_process(
  command='curl -X POST "http://localhost:18998/ingest?token=TOKEN" \
    -H "Content-Type: application/json" \
    -d \'{"memo":"...","category":"decision","confidence":0.9,"review_required":true,"source":"claude_cowork"}\''
)

ChatGPT (via GPT Action / cloud API)

Users configure their huper.org API key in the GPT:

curl -X POST https://huper.org/api/ingest \
  -H "Authorization: Bearer USER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "memo": "User mentioned they prefer async Python patterns",
    "category": "dev",
    "tags": "python",
    "source": "chatgpt",
    "confidence": 0.8,
    "review_required": true
  }'

GPT Action OpenAPI spec: see tasks/gpt-action-schema.yaml in the amber-site repo.


LLM Provider Abstraction (v1.2.0)

amber-hunter uses a unified LLM interface — configure the provider once, all LLM-powered features use it.

Supported Providers

Provider | Config key | Notes
MiniMax | minimax | Default; reads from ~/.openclaw/openclaw.json → models.providers.minimax-cn.apiKey
OpenAI | openai | Set api_key and base_url in ~/.amber-hunter/config.json → llm
Local (Ollama/LM Studio) | local | Set base_url to your local server URL

Config Location

// ~/.amber-hunter/config.json
{
  "llm": {
    "provider": "minimax",
    "model": "MiniMax-M2.7-highspeed",
    "api_key": "sk-cp-...",
    "base_url": "https://api.minimaxi.com/anthropic/v1/messages"
  }
}

For MiniMax, amber-hunter auto-detects the API key from:

  1. MINIMAX_API_KEY env var
  2. ~/.openclaw/openclaw.json → models.providers.minimax-cn.apiKey
  3. Legacy ~/.amber-hunter/config.json → root-level api_key (if it looks like an LLM key)
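That lookup order can be sketched as follows. The paths and precedence come from the list above; the "looks like an LLM key" check (an sk- prefix) is an assumption about the heuristic, not the actual implementation.

```python
# Hypothetical sketch of the three-step MiniMax key lookup above.
# Paths and precedence match the docs; the "sk-" prefix check is an
# assumed stand-in for "looks like an LLM key".
import json
import os
from pathlib import Path
from typing import Optional

def resolve_minimax_key(home: Path) -> Optional[str]:
    # 1. Environment variable wins
    key = os.environ.get("MINIMAX_API_KEY")
    if key:
        return key
    # 2. OpenClaw config: models.providers.minimax-cn.apiKey
    openclaw = home / ".openclaw" / "openclaw.json"
    if openclaw.exists():
        cfg = json.loads(openclaw.read_text())
        key = (cfg.get("models", {}).get("providers", {})
                  .get("minimax-cn", {}).get("apiKey"))
        if key:
            return key
    # 3. Legacy root-level api_key in ~/.amber-hunter/config.json
    legacy = home / ".amber-hunter" / "config.json"
    if legacy.exists():
        key = json.loads(legacy.read_text()).get("api_key")
        if key and key.startswith("sk-"):  # assumed heuristic
            return key
    return None
```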

LLM-powered features: /classify (LLM fallback), /rerank (LLM reordering), proactive extraction.


Platform Support

Platform | Auto-start | Keychain
macOS | LaunchAgent (launchctl) | macOS Keychain
Windows | Task Scheduler | Windows Credential Manager (pywin32)
Linux desktop | systemd user service | GNOME Keyring (secret-tool)
Linux headless (VPS) | systemd | N/A; config.json fallback (plaintext)

Installation

# macOS / Linux
bash ~/.openclaw/skills/amber-hunter/install.sh

# Verify
curl http://localhost:18998/status
curl http://localhost:18998/memories

Auto-start commands

Platform | Command
macOS | launchctl load ~/Library/LaunchAgents/com.huper.amber-hunter.plist
Linux | systemctl --user start amber-hunter
Windows | Configured automatically by install.sh via schtasks

Config & Storage

  • ~/.amber-hunter/config.json — API key, Huper URL, other settings
  • ~/.amber-hunter/hunter.db — local SQLite (capsules + memory_queue)
  • ~/.amber-hunter/amber-hunter.log — service log
  • OS keychain — stores master_password, never written to disk in production

Troubleshooting

# Service not running
curl http://localhost:18998/status
tail -f ~/.amber-hunter/amber-hunter.log

# Linux: secret-tool not found
sudo apt install libsecret-tools        # Ubuntu/Debian
sudo dnf install libsecret             # Fedora
sudo pacman -S libsecret               # Arch

# Windows: pywin32 not installed (Credential Manager fallback to config.json)
pip install pywin32

# Check pending memories
curl "http://localhost:18998/queue?token=$(curl -s localhost:18998/token | python3 -c 'import sys,json; print(json.load(sys.stdin)["api_key"])')"

Version History

  • v1.2.0 (2026-03-31): LLM abstraction layer (core/llm.py) — unified interface for MiniMax/OpenAI/Local; auto-detects API key from OpenClaw config. /classify LLM fallback — keyword matching primary, LLM triggers when results insufficient. /rerank endpoint — LLM re-ranks recall candidates with relevance scores. Proactive capture fixes — session selection by message count (not mtime), filters .deleted. files, deduplicates by session_id. Cron path corrected to ~/.openclaw/skills/amber-hunter/proactive/.
  • v1.1.9 (2026-03-31): Universal memory taxonomy (8 life categories + tags); /ingest endpoint for AI-initiated writes; memory_queue table + approve/reject/edit flow; source_type + category DB fields; dashboard review queue card; ChatGPT GPT Action schema; SKILL.md multi-client guide; _background_sync() + 30min periodic scheduler; Private Network Access CORS headers.
  • v0.9.6 (2026-03-28): /bind-apikey localhost endpoint; dashboard retry-on-401 token refresh; sync timeout 120s.
  • v0.9.5 (2026-03-28): amber-proactive V4 — self-contained cron, LLM extraction, 15min interval.
  • v0.9.2 (2026-03-26): Fix semantic search — sentence-transformers + numpy; remove unused mac-keychain.
  • v0.9.1 (2026-03-26): Remove hardcoded personal Telegram session ID; generic session capture.
  • v0.8.4 (2026-03-22): Cross-platform support (macOS/Linux/Windows), E2E encryption, /memories no-auth, Claude Cowork session.

Built with 🔒 by Anke Chen for the Huper琥珀 ecosystem.

Files

22 total
