Has Anonymizer

v1.0.3

HaS (Hide and Seek) on-device text and image anonymization. Text: 8 languages (zh/en/fr/de/es/pt/ja/ko), open-set entity types. Image: 21 privacy categories...

11 · 804 · 3 current · 3 all-time
by Huiming Liu (@xuanwuskill)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The name and description match the included artifacts: CLI wrappers, Python implementations for text and image anonymization, model downloads for a text GGUF model and a YOLO .pt image model, and references to llama-server for local inference. The required binaries (uv, llama-server) are reasonable for the described on-device workflow.
Instruction Scope
SKILL.md and the CLI scripts narrowly instruct the agent to run local scan/hide/restore and image-mask operations. The runtime guidance focuses on scanning and masking and explicitly warns against overwriting originals. There are no instructions to read unrelated system secrets or to forward user data to unexpected external endpoints.
Install Mechanism
The install spec downloads two large model files from HuggingFace (a well-known host) and references two brew formulas (uv, llama.cpp). Downloading models from HuggingFace is expected for on-device ML. Two caveats: (1) the brew install entries are macOS-specific, but the skill declares no OS restriction, so users on Linux/Windows must provide the binaries themselves; (2) the scripts use 'uv run', which installs Python dependencies from PyPI at runtime; this is normal, but it means pip packages will be fetched and installed on the machine.
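For the Linux caveat, one way to provide the two binaries manually is sketched below. This is not part of the skill; it assumes uv's official install script and a from-source llama.cpp build, so adapt it to your environment.

```shell
# Sketch: manual setup on Linux, where the brew formulas don't apply.
# Assumes uv's official installer and building llama-server from source.
curl -LsSf https://astral.sh/uv/install.sh | sh   # installs uv into ~/.local/bin

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release --target llama-server
# the binary lands under build/bin/; put it on PATH yourself
```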
Credentials
The skill does not request credentials or secrets. Optional environment variables relate to model paths and concurrency (HAS_TEXT_MODEL_PATH, HAS_IMAGE_MODEL, HAS_TEXT_MAX_PARALLEL_REQUESTS) and are appropriate for runtime configuration.
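As an illustration, the documented variables could be set like this before invoking the skill. The variable names come from the skill's docs; the paths and the concurrency value are placeholders, not values shipped with the skill.

```shell
# Hypothetical configuration; substitute wherever the models were downloaded.
export HAS_TEXT_MODEL_PATH="$HOME/models/has-text.gguf"
export HAS_IMAGE_MODEL="$HOME/models/has-image-yolo.pt"
export HAS_TEXT_MAX_PARALLEL_REQUESTS=4   # cap concurrent text-model requests
```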
Persistence & Privilege
The skill is not always-enabled and is user-invocable. Install writes model files to disk (expected for offline models) but it does not request persistent elevated privileges or attempt to modify other skills or system-wide agent configurations.
Scan Findings in Context
[subprocess-run] expected: references/eval/eval.py uses subprocess.run to invoke the CLI during evaluation — expected for a test harness that exercises the local CLI.
[download-url-huggingface] expected: Install spec downloads model files from huggingface.co. This is expected for on-device ML models; HuggingFace is a known host.
[uv-run] expected: Wrapper scripts invoke code via 'uv run' to manage Python dependencies. This will fetch/install Python packages from PyPI, which is expected but worth noting.
Assessment
This skill appears coherent and implements local text/image anonymization using on-device models. Before installing:
(1) The model files are large and will be downloaded to disk from HuggingFace; verify that you trust and want those models locally and that you have enough disk space.
(2) The install entries provide brew formulas for macOS only; on Linux/Windows you'll need to install 'uv' and 'llama-server' yourself.
(3) The runtime uses 'uv run', which will install Python packages (ultralytics, opencv, etc.) from PyPI; review those dependencies if you require pinning or vetting.
(4) No credentials are requested and the CLI is local, but treat models and any outputs as sensitive when processing private data.
If you need higher assurance, review the full source files (they are included) and the downloaded model checksums, or run the tool in an isolated environment (VM/container) before use.


latest: vk9716rd8a1mn5fbnyjzwm1hp1x83k4n3

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🔒 Clawdis
Bins: llama-server, uv

Install

Install uv (brew)
Bins: uv
brew install uv
Install llama.cpp (brew)
Bins: llama-server
brew install llama.cpp
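After installation, a quick sanity check that both binaries from the "Bins" list resolve on PATH:

```shell
# Confirm the two required binaries are available before first use.
for bin in uv llama-server; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found at $(command -v "$bin")"
  else
    echo "$bin: missing; install it before using the skill"
  fi
done
```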
