Essay Humanize Iterator

v1.0.2

Iteratively rewrite essays to reduce AI detection scores while preserving meaning, complexity, and natural human writing style within defined linguistic metrics.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (iterative humanization of essays) match the included artifacts: measurement scripts (measure.py), an iteration engine (iterate.py), pattern docs, and weights. The skill does not request unrelated credentials or cloud APIs. The rewrite work is explicitly delegated to the orchestrating LLM, which is consistent with the skill's scope.
Instruction Scope
SKILL.md and skill.yaml instruct the agent to run local measurement scripts and to have the orchestrating LLM perform paragraph-by-paragraph rewrites using generated feedback. The instructions do not direct reading of unrelated system files, exfiltration to remote endpoints, or collection of secrets. They require the agent to preserve citations and avoid inventing sources.
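The measure-then-rewrite cycle described above can be sketched as a small driver loop. This is a hypothetical illustration only, not the actual API of measure.py or iterate.py; every name here (humanize, score, rewrite) is an assumption.

```python
# Hypothetical sketch of the measure -> feedback -> rewrite loop the
# instructions describe. Function names are assumptions, not the real
# API of measure.py / iterate.py.

def humanize(paragraphs, rewrite, score, threshold=0.3, max_iters=5):
    """Rewrite each paragraph until its detection score drops below
    `threshold` or `max_iters` passes are exhausted.

    `score(text)` stands in for the local measurement script and
    returns (score, feedback); `rewrite(text, feedback)` stands in
    for the orchestrating LLM's paragraph-level rewrite.
    """
    out = []
    for para in paragraphs:
        for _ in range(max_iters):
            s, feedback = score(para)
            if s < threshold:
                break  # paragraph already reads as sufficiently human
            para = rewrite(para, feedback)
        out.append(para)
    return out
```

The key property, consistent with the review above, is that both callables run locally: no secrets or remote endpoints are involved in the loop itself.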
Install Mechanism
There is no automated install spec (instruction-only skill), which lowers risk. However, measure.py depends on spaCy and the 'en_core_web_sm' model; SKILL/README instruct the user to install it manually. No network calls or remote downloads are present in the code; no obscure or remote install URLs are used.
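Because measure.py fails when the model is absent, a caller might want to check for it up front. A minimal sketch of such a guard (the guard itself is an assumption, not part of the shipped scripts; it relies only on the fact that spaCy models install as importable packages):

```python
import importlib.util

def model_available(model_name: str = "en_core_web_sm") -> bool:
    """Return True if the named spaCy model is installed.

    spaCy models are distributed as importable packages, so we can
    check for one without importing spaCy itself.
    """
    return importlib.util.find_spec(model_name) is not None

# measure.py reportedly raises if the model is missing; a guard like
# this lets a wrapper fail early with a clearer message.
if not model_available():
    print("en_core_web_sm is missing; install it with:")
    print("  pip install spacy")
    print("  python -m spacy download en_core_web_sm")
```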
Credentials
The skill declares no required environment variables, credentials, or config paths. All files needed for scoring and feedback are included. The lack of secret access requests is proportional to the stated purpose.
Persistence & Privilege
always:false and no persistent privileges are requested. The skill.yaml includes 'permissions: - shell', which allows running local shell commands (needed to invoke the included Python scripts). This is expected for a skill that runs local scripts but is a capability the user should be aware of.
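For orientation, a declaration like the one described might look as follows. This is a hypothetical fragment; only the `permissions: - shell` entry and `always: false` are taken from the review, and the remaining field names are assumptions about the schema.

```yaml
# Hypothetical skill.yaml fragment; fields other than `always` and
# `permissions` are assumptions, not the actual file contents.
name: essay-humanize-iterator
always: false        # no persistent activation
permissions:
  - shell            # needed to invoke the included Python scripts
```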
Assessment
This package is internally coherent: it ships local measurement code and produces feedback for an LLM to rewrite essays locally. Before installing or using it, consider:

1. Ethical/legal risk: the tool explicitly aims to reduce AI-detection scores, so it can be used to evade detection in contexts where that is dishonest or policy-violating. Use it in accordance with your institution's rules.
2. Runtime requirements: you must install Python, spaCy, and the en_core_web_sm model; measure.py will raise an error if the model is missing.
3. Shell permission: the skill requests shell execution to run the included scripts; run it in a trusted or sandboxed environment if you are cautious.
4. Local code review: there are no network calls in the provided scripts, but always inspect third-party code before running it.
5. For additional assurance, run the scripts in an isolated environment (container or VM) and ensure the orchestrating LLM's rewrites are performed locally, with no external API keys provided.

Like a lobster shell, security has layers — review code before you run it.

Tags: ai-detection, corpus-linguistics, education, iteration, latest, style-editing, writing

