Pro Zh Summary
Review
Audited by ClawScan on May 10, 2026.
Overview
This is a purpose-aligned local Chinese summarizer, but it auto-starts a persistent background local service and uses unpinned external ML components, so users should review it before installing.
Install only if you are comfortable with a local ML server being started in the background. Check the dependency and model sources, monitor RAM/GPU use, and make sure you know how to stop the localhost service after use.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A first summary request may leave a local model server running after the task, consuming CPU/RAM/GPU and occupying a localhost port until the user manually finds and stops it.
The client automatically launches a resident background server when the health check fails, and the provided artifacts do not show a stop command, idle timeout, PID tracking, or user confirmation before keeping the model service alive.
# Check whether the backend service is alive; if it is dead, silently relaunch it ... subprocess.Popen([sys.executable, "server.py"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
Require explicit first-run consent, document how to stop the service, add an idle shutdown or lifecycle manager, and avoid suppressing logs needed for troubleshooting.
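The remediation above can be sketched as a minimal lifecycle manager. This is an illustrative pattern, not the skill's actual code: the pid-file location and function names are assumptions, and logs are deliberately left visible instead of being sent to DEVNULL.

```python
# Minimal lifecycle sketch: record the server's PID so the user can stop it,
# and never suppress its output. PID_FILE and the helpers are hypothetical.
import os
import signal
import subprocess
import sys
from pathlib import Path

PID_FILE = Path("server.pid")  # hypothetical location, kept next to the skill

def start_server(cmd):
    """Launch the server and record its PID for later shutdown."""
    proc = subprocess.Popen(cmd)  # no DEVNULL: keep logs for troubleshooting
    PID_FILE.write_text(str(proc.pid))
    return proc

def stop_server():
    """Stop the server recorded in the pid file, if any. Returns True if acted."""
    if not PID_FILE.exists():
        return False
    pid = int(PID_FILE.read_text())
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        pass  # process already exited
    PID_FILE.unlink()
    return True
```

An idle timeout could build on the same pid file: a watchdog that calls `stop_server()` after N minutes without requests. The demo below uses a harmless sleeping process in place of the real `server.py`.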
If run from an unexpected or writable directory, the skill could fail or execute an unintended local `server.py` file.
The background process is started using the relative path `server.py`, so the executed file depends on the current working directory unless the platform guarantees execution from the skill directory.
subprocess.Popen([sys.executable, "server.py"], creationflags=subprocess.CREATE_NO_WINDOW)
Resolve server.py relative to the skill file location, for example with `Path(__file__).with_name("server.py")`, and verify the target before launching it.
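The path fix above is small; a sketch of it, with the verification step made explicit (function names are illustrative, not from the skill):

```python
# Resolve server.py against the skill file's own directory, so the launched
# script no longer depends on the caller's current working directory.
import subprocess
import sys
from pathlib import Path

def resolve_server(anchor=__file__):
    """Return the server.py sitting next to this file, verified to exist."""
    server = Path(anchor).resolve().with_name("server.py")
    if not server.is_file():
        raise FileNotFoundError(f"expected server script at {server}")
    return server

def launch_server():
    server = resolve_server()
    # Absolute path: the same file runs regardless of os.getcwd().
    return subprocess.Popen([sys.executable, str(server)])
```

Because the path is absolute and checked before launch, a writable or unexpected working directory can no longer substitute a different `server.py`.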
The model or dependency content may change over time, and first use may download external ML artifacts.
The skill uses an external Hugging Face model by name without a pinned revision or checksum. This is expected for the summarization purpose, but it creates provenance and reproducibility risk.
MODEL_NAME = "heack/HeackMT5-ZhSum100k" ... T5Tokenizer.from_pretrained(MODEL_NAME) ... MT5ForConditionalGeneration.from_pretrained(MODEL_NAME)
Pin model revisions and package versions, publish hashes where practical, and document the external sources used during setup or first run.
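As a sketch of the pinning recommendation: `transformers` `from_pretrained` accepts a `revision` argument for pinning to a specific commit (the hash in the comment is a placeholder, not a real commit of `heack/HeackMT5-ZhSum100k`), and a stdlib checksum helper can verify downloaded artifacts against published hashes.

```python
# Sketch: verify a downloaded model artifact against a published sha256.
# A transformers revision pin would look like (placeholder commit hash):
#   MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, revision="<commit-sha>")
import hashlib
from pathlib import Path

def sha256_of(path, chunk=1 << 20):
    """Stream a file through sha256 so large model shards fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path, expected_hex):
    """Raise if the file on disk does not match the published checksum."""
    actual = sha256_of(path)
    if actual != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: {actual}")
    return True
```

Publishing the expected hashes alongside the skill lets users confirm that first-run downloads match what was reviewed.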
On a shared or compromised machine, another local process could call the summarization service or consume its resources.
The skill sends full text to a local HTTP API. It is bound to localhost, which limits exposure, but the endpoint has no authentication or caller identity checks.
@app.post("/summarize") ... uvicorn.run(app, host="127.0.0.1", port=28199)
Keep the service bound to localhost, consider adding a random local token or per-session port, and avoid processing highly sensitive text on shared machines.
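The random-local-token idea can be sketched with the stdlib alone. The file name and helper names are illustrative, not from the skill; a handler for the `/summarize` endpoint would read a header such as `X-Session-Token` (assumed name) and reject mismatches with 401.

```python
# Sketch: a per-session shared token for the localhost API. The server writes
# the token to a user-only file at startup; local callers must present it.
import hmac
import secrets
from pathlib import Path

def issue_token(token_file: Path) -> str:
    """Generate a random session token and store it readable only by the user."""
    token = secrets.token_hex(32)
    token_file.write_text(token)
    token_file.chmod(0o600)  # best effort; permission semantics differ on Windows
    return token

def check_token(token_file: Path, presented: str) -> bool:
    """Constant-time comparison against the stored session token."""
    expected = token_file.read_text()
    return hmac.compare_digest(expected, presented)
```

Because only the user who started the server can read the token file, other local accounts on a shared machine cannot call the endpoint, while the legitimate client remains a one-line change.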
