humanize

v0.1.6

Use this skill when the user wants to generate or optimize Chinese communication copy so it sounds more human, more natural, less templated, and less like po...

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The skill is a local CLI-driven copy-optimization tool that bootstraps a Python runtime, downloads a local reranker model, calls a host LLM (via CoPaw bridge or a local HTTP endpoint), generates candidates, and scores them. The files, scripts, and optional environment variables align with that purpose. There are no unrelated credentials or surprising external services required by default.
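The generate-then-rerank pipeline described above can be sketched as follows. This is a minimal illustration, not the skill's actual code: the function names are hypothetical stubs standing in for the host LLM call and the BAAI/bge-reranker-v2-m3 scorer.

```python
# Hypothetical sketch of the generate-then-score loop. Function and
# parameter names are illustrative, not the skill's real API.

def generate_candidates(prompt, llm, n=4):
    """Ask the backing LLM for n rewrite candidates (llm is a stub here)."""
    return [llm(prompt) for _ in range(n)]

def rerank(query, candidates, scorer):
    """Score each candidate against the query and return them best-first."""
    scored = [(scorer(query, c), c) for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored]
```

In the real skill, `scorer` would be backed by the downloaded reranker model and `llm` by the CoPaw bridge or configured HTTP endpoint.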
Instruction Scope
The runtime instructions require agents to invoke the CLI with the user's full request verbatim and to relay any final-marked output exactly. The code also attempts to autodiscover a local model endpoint (by parsing a local copaw.log) and reads and writes run folders under the CoPaw working directory. These behaviors are coherent for an offline/local optimization tool, but they do mean the skill forwards the raw user input to the local model pipeline and reads local CoPaw logs to discover endpoints.
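Endpoint autodiscovery of the kind described above might look like the sketch below. The log format and regex are assumptions for illustration; the real CoPaw log layout may differ.

```python
import re

# Hypothetical sketch: scan copaw.log-style text for the most recently
# logged local HTTP endpoint. The log format is assumed, not taken from
# the real CoPaw implementation.
ENDPOINT_RE = re.compile(r"(https?://(?:127\.0\.0\.1|localhost):\d+)")

def discover_endpoint(log_text):
    """Return the last local endpoint mentioned in the log, or None."""
    matches = ENDPOINT_RE.findall(log_text)
    return matches[-1] if matches else None
```

Note that any logic like this only ever needs read access to the log file, which matches the behavior described above.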
Install Mechanism
The skill manifest contains no remote binary install spec; bootstrapping creates a dedicated venv and downloads the specified Hugging Face scoring model (BAAI/bge-reranker-v2-m3). The model source (Hugging Face) and the Python dependencies are pinned explicitly in requirements.lock. No obscure download URLs or URL shorteners appear in the code reviewed.
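A bootstrap of this shape typically boils down to two commands: create a venv, then install pinned dependencies into it. The sketch below only builds that command plan; the function name and the `.venv` location are assumptions, not taken from the skill's bootstrap_runtime.py.

```python
import sys
from pathlib import Path

# Hypothetical sketch of a bootstrap plan: the commands a
# bootstrap_runtime.py-style script would run. Names are illustrative.
def bootstrap_commands(workdir: Path, requirements: str = "requirements.lock"):
    env_dir = workdir / ".venv"
    # Windows venvs put executables under Scripts/, POSIX under bin/.
    bin_dir = env_dir / ("Scripts" if sys.platform == "win32" else "bin")
    return [
        [sys.executable, "-m", "venv", str(env_dir)],
        [str(bin_dir / "pip"), "install", "-r", requirements],
    ]
```

Keeping everything inside a dedicated venv under the working directory is what makes the install easy to audit and remove.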
Credentials
The skill declares no required environment credentials. It reads optional environment variables (HUMANIZE_GENERATION_BACKEND, HUMANIZE_LLM_BASE_URL, HUMANIZE_LLM_MODEL, COPAW_WORKING_DIR, etc.) to select generation backends; these are reasonable for choosing a local model endpoint. There are no requests for unrelated cloud secrets. Note: if you point HUMANIZE_LLM_BASE_URL at a remote HTTP endpoint, user input will be sent to that endpoint.
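The environment variables listed above could drive backend selection roughly as sketched below. The variable names come from the skill's description; the selection logic and the returned structure are assumptions for illustration.

```python
import os

# Hypothetical sketch of env-driven backend selection. Variable names
# match the skill's documented options; the logic itself is assumed.
def select_backend(env=os.environ):
    backend = env.get("HUMANIZE_GENERATION_BACKEND", "copaw")
    base_url = env.get("HUMANIZE_LLM_BASE_URL")
    model = env.get("HUMANIZE_LLM_MODEL", "default")
    if backend == "http" and base_url:
        # Anything sent here leaves the machine if base_url is remote.
        return {"kind": "http", "url": base_url, "model": model}
    return {"kind": "copaw", "model": model}
```

The comment in the `http` branch is the key privacy point: the variables are harmless for local endpoints, but a remote base URL turns the skill into something that transmits user input off-host.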
Persistence & Privilege
The manifest sets always:false, and the skill does not demand elevated platform privileges. The optional installer script (scripts/install_to_copaw.py) copies the repo into the ~/.copaw skill_pool and workspace skills directories and modifies the workspace skill manifest to (optionally) mark the skill enabled. That is expected behavior for an installer, but it does write to the host agent's workspace configuration, so run that step only if you trust the skill and want it installed into your CoPaw workspace.
Assessment
This skill appears to do what it claims: it bootstraps a local Python venv, downloads a Hugging Face reranker, invokes a local or host LLM (CoPaw bridge or a local HTTP endpoint), and iteratively optimizes Chinese copy. Before installing or running:

- If you care about privacy, avoid passing secrets or sensitive data in the user prompt. The skill's preservation rule explicitly forwards the entire user request verbatim to the local generation/scoring pipeline (and to any configured HTTP endpoint). If you set HUMANIZE_LLM_BASE_URL to a remote service, your input will be sent there.
- The bootstrap step downloads models and installs Python packages into a dedicated venv under your CoPaw working dir; ensure you have disk space and trust the source. The repository's default model is from Hugging Face (BAAI/bge-reranker-v2-m3).
- The install_to_copaw.py script copies files into ~/.copaw skill_pool and workspace skills and can enable the skill in the workspace manifest. Only run it if you want the skill persistently installed in CoPaw.
- The skill reads local CoPaw logs to autodetect endpoints and will attempt to call your host's active model. This is normal for local integration, but it means the skill reads files under your CoPaw working dir.

If you want minimal exposure, run the CLI locally in an isolated environment (skip install_to_copaw.py) and avoid configuring a remote HUMANIZE_LLM_BASE_URL. Review the repository (especially scripts/bootstrap_runtime.py, scripts/install_to_copaw.py, and scripts/local_generation.py) before running to verify it matches your security expectations.


Tags: ai-writing · chinese · copaw · copywriting · humanize · latest · openclaw · zh
