Essay Humanize Iterator

v1.0.2

Iteratively rewrite essays to reduce AI detection scores while preserving meaning, complexity, and natural human writing style within defined linguistic metrics.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kevin0818-lxd/essay-humanize-iterator.

Prompt Preview: Install & Setup
Install the skill "Essay Humanize Iterator" (kevin0818-lxd/essay-humanize-iterator) from ClawHub.
Skill page: https://clawhub.ai/kevin0818-lxd/essay-humanize-iterator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install essay-humanize-iterator

ClawHub CLI


npx clawhub@latest install essay-humanize-iterator

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (iterative humanization of essays) match the included artifacts: measurement scripts (measure.py), an iteration engine (iterate.py), pattern and metrics documentation, and corpus-derived weights. The skill does not request unrelated credentials or cloud APIs. The rewrite work is explicitly delegated to the orchestrating LLM, which is consistent with the skill's scope.
Instruction Scope
SKILL.md and skill.yaml instruct the agent to run local measurement scripts and to have the orchestrating LLM perform paragraph-by-paragraph rewrites using generated feedback. The instructions do not direct reading of unrelated system files, exfiltration to remote endpoints, or collection of secrets. They require the agent to preserve citations and avoid inventing sources.
Install Mechanism
There is no automated install spec (instruction-only skill), which lowers risk. However, measure.py depends on spaCy and the 'en_core_web_sm' model; the SKILL.md and README instruct the user to install these manually. No network calls or remote downloads are present in the code, and no obscure or remote install URLs are used.
Credentials
The skill declares no required environment variables, credentials, or config paths. All files needed for scoring and feedback are included. The lack of secret access requests is proportional to the stated purpose.
Persistence & Privilege
The manifest sets always: false, and no persistent privileges are requested. The skill.yaml includes 'permissions: - shell', which allows running local shell commands (needed to invoke the included Python scripts). This is expected for a skill that runs local scripts but is a capability the user should be aware of.
Assessment
This package is internally coherent: it ships local measurement code and produces feedback for an LLM to rewrite essays locally. Before installing or using it, consider:

1. Ethical/legal risk: the tool explicitly aims to reduce AI-detection scores, so it can be used to evade detection in contexts where that is dishonest or policy-violating. Use it in accordance with your institution's rules.
2. Runtime requirements: you must install Python, spaCy, and the en_core_web_sm model; measure.py will raise an error if the model is missing.
3. Shell permission: the skill requests shell execution to run the included scripts; run it in a trusted or sandboxed environment if you are cautious.
4. Local code review: there are no network calls in the provided scripts, but always inspect third-party code before running it.
5. Additional assurance: run the scripts in an isolated environment (container or VM) and ensure the orchestrating LLM's rewrites are performed locally (no external API keys are provided).

Like a lobster shell, security has layers: review code before you run it.

latest: vk97f7v51rq91zfdyb0zfpecsts838nhx
133 downloads
0 stars
3 versions
Updated 1mo ago
v1.0.2
MIT-0

Essay Humanize Iterator — Skill Specification

Purpose

Iteratively refine essays to minimize false positives from oversensitive AI detectors by removing stereotypical AI writing patterns and aligning semantic density and syntactic complexity with native human writing norms.

When to Use

  • User submits an essay and wants to reduce AI stylistic patterns that trigger false positives
  • User asks to rehumanize, iterate humanize, or improve writing naturalness
  • User wants to improve semantic density or syntactic complexity to match human writing norms
  • User mentions AI风格优化 (AI-style optimization), 减少AI痕迹 (reducing AI traces), 迭代改写 (iterative rewriting), or 写作自然度 (writing naturalness)

Workflow

1. User provides essay text
2. MEASURE: Run skill/scripts/measure.py → get AI score, MDD, TTR, CW ratio
3. CHECK: If all metrics pass → output essay + report. Done.
4. REWRITE: Generate targeted revision using feedback from measurement
5. RE-MEASURE: Run measure.py on rewritten text
6. REPEAT: Loop steps 3-5 until pass or max iterations (default 3)
7. OUTPUT: Final essay + iteration report table + change summary
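
In Python terms, the loop looks roughly like the sketch below. The helper names (measure, rewrite, build_feedback) and the "passes" key are illustrative stand-ins, not the actual entry points of measure.py or iterate.py:

```python
from typing import Callable

MAX_ITERATIONS = 3

def humanize(
    essay: str,
    measure: Callable[[str], dict],      # wraps skill/scripts/measure.py
    rewrite: Callable[[str, str], str],  # LLM rewrite given feedback text
    build_feedback: Callable[[dict, int], str],
) -> tuple[str, list[dict]]:
    """Measure -> check -> rewrite loop; stops on pass or max iterations."""
    text, history = essay, []
    for iteration in range(MAX_ITERATIONS + 1):
        metrics = measure(text)          # AI score, MDD, TTR, CW ratio
        history.append({"iter": iteration, **metrics})
        if metrics.get("passes"):        # early exit: all thresholds met
            break
        if iteration == MAX_ITERATIONS:  # cap reached; return best effort
            break
        text = rewrite(text, build_feedback(metrics, iteration))
    return text, history
```

Injecting the measure/rewrite callables keeps the loop itself independent of how the orchestrating LLM is invoked.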

Measurement Axes

| Axis | Tool | Pass Criteria |
|------|------|---------------|
| AI Pattern Score | 24-regex weighted scan | ≤ 15 / 100 |
| MDD Mean | spaCy dependency parse | 2.15 – 2.55 |
| MDD Variance | per-sentence MDD spread | ≥ 0.016 |
| Lexical TTR | content-word type/token | ≥ 0.50 |
| Content-Word Ratio | content / all tokens | 0.52 – 0.65 |

See skill/references/metrics.md for formulas and baselines.
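
As a rough illustration of how the syntactic and lexical axes can be computed with spaCy (the shipped formulas live in metrics.md and measure.py; the function below is a plausible reconstruction, not the packaged code):

```python
import statistics
import spacy

# Assumes: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

CONTENT_POS = {"NOUN", "PROPN", "VERB", "ADJ", "ADV"}

def measure_sketch(text: str) -> dict:
    doc = nlp(text)
    # Mean dependency distance (MDD) per sentence: average absolute
    # index distance between each token and its syntactic head.
    mdds = []
    for sent in doc.sents:
        dists = [abs(tok.i - tok.head.i)
                 for tok in sent
                 if tok.head is not tok and not tok.is_punct]
        if dists:
            mdds.append(sum(dists) / len(dists))
    words = [tok for tok in doc if tok.is_alpha]
    content = [tok for tok in words if tok.pos_ in CONTENT_POS]
    types = {tok.lemma_.lower() for tok in content}
    return {
        "mdd_mean": statistics.mean(mdds),
        "mdd_var": statistics.pvariance(mdds),  # per-sentence spread
        "ttr": len(types) / len(content),       # content-word type/token
        "cw_ratio": len(content) / len(words),  # content words / all words
    }
```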

Iteration Strategy

  • Iter 1: Remove highest-weight AI patterns (em dashes, markdown, bolding, clichéd metaphors)
  • Iter 2: Fix remaining patterns + increase syntactic variety
  • Iter 3: Fine-tune semantic density + register naturalness

See skill/references/iteration_strategy.md for full escalation logic.
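
The pattern axis driving Iter 1 is a weighted regex sweep. A minimal sketch, assuming weights.json maps each of the 24 patterns to a regex and a weight (the actual schema may differ):

```python
import json
import re

def ai_pattern_score(text: str,
                     weights_path: str = "data/analysis/weights.json") -> float:
    """Weighted regex sweep over the AI patterns; 0-100, lower is better."""
    with open(weights_path, encoding="utf-8") as f:
        # Assumed shape: {name: {"regex": str, "weight": float}}
        patterns = json.load(f)
    score = 0.0
    for spec in patterns.values():
        hits = len(re.findall(spec["regex"], text))
        score += hits * spec["weight"]
    return min(score, 100.0)  # cap so the scale matches the ≤ 15/100 threshold
```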

Rewrite Engine

All rewriting is performed locally by the orchestrating LLM based on targeted feedback from measure.py. No external API calls are made.

Rules for rewriting:

  • Process the essay paragraph by paragraph
  • Follow the specific feedback instructions from build_iteration_feedback() (see the sketch after this list)
  • Preserve all citations, references, and factual claims
  • Do not add new sources or fabricate evidence
  • Output plain text only (no markdown formatting, no LaTeX delimiters)
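
A plausible shape for build_iteration_feedback(), reconstructed from the thresholds in the Measurement Axes table; the real implementation lives in skill/scripts/iterate.py and may differ:

```python
def build_iteration_feedback(metrics: dict, iteration: int) -> str:
    """Turn failing metrics into concrete rewrite instructions for the LLM."""
    notes = []
    if metrics["ai_score"] > 15:
        notes.append("Remove flagged AI patterns: em dashes, markdown "
                     "artifacts, bolding, cliched metaphors.")
    if not 2.15 <= metrics["mdd_mean"] <= 2.55:
        notes.append("Adjust clause embedding so mean dependency distance "
                     "lands between 2.15 and 2.55.")
    if metrics["mdd_var"] < 0.016:
        notes.append("Vary sentence structure: mix short direct sentences "
                     "with longer subordinated ones.")
    if metrics["ttr"] < 0.50:
        notes.append("Diversify content-word vocabulary; avoid repeating "
                     "the same lemmas.")
    if not 0.52 <= metrics["cw_ratio"] <= 0.65:
        notes.append("Rebalance function words vs. content words toward a "
                     "natural register.")
    header = (f"Iteration {iteration}: rewrite paragraph by paragraph, "
              "preserving citations and factual claims.")
    return header + "\n- " + "\n- ".join(notes) if notes else header
```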

Output Format

Final Essay

Plain text. Preserve the original heading structure if any. No markdown artifacts.

Iteration Report

| Iter | AI Score | MDD Mean | MDD Var  | TTR    | CW Ratio | Status |
|------|----------|----------|----------|--------|----------|--------|
|    0 |     45.2 |   2.4821 |   0.0098 | 0.4712 |   0.6280 |   FAIL |
|    1 |     18.6 |   2.3891 |   0.0142 | 0.4988 |   0.5932 |   FAIL |
|    2 |     11.3 |   2.3504 |   0.0178 | 0.5124 |   0.5801 |   PASS |
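
A small helper along these lines could render that table from the loop's history (illustrative only; the field names assume the metrics dict used in the earlier sketches):

```python
def render_report(history: list[dict]) -> str:
    """Format the iteration history as the report table above."""
    lines = [
        "| Iter | AI Score | MDD Mean | MDD Var  | TTR    | CW Ratio | Status |",
        "|------|----------|----------|----------|--------|----------|--------|",
    ]
    for h in history:
        status = "PASS" if h.get("passes") else "FAIL"
        lines.append(
            f"| {h['iter']:>4} | {h['ai_score']:>8.1f} | {h['mdd_mean']:>8.4f} "
            f"| {h['mdd_var']:>8.4f} | {h['ttr']:>6.4f} | {h['cw_ratio']:>8.4f} "
            f"| {status:>6} |"
        )
    return "\n".join(lines)
```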

Change Summary

After the table, provide a brief bullet list of what changed across iterations:

  • Which patterns were removed
  • How sentence structure was varied
  • What vocabulary changes were made

Rules

  1. Preserve argument: The author's thesis, evidence, and logical flow must remain intact
  2. Preserve citations: Never remove, alter, or fabricate citations/references
  3. Plain text output: No markdown headings (unless input had them), no bold, no em dashes
  4. No hallucination: Do not add claims, data, or sources not in the original
  5. Idempotent measurement: Always use measure.py for scoring — do not estimate scores
  6. Early exit: If the input essay already passes all thresholds, output it unchanged with a passing report
  7. Transparency: Always show the iteration table so the user sees the convergence trajectory

Supporting Files

| File | Purpose |
|------|---------|
| skill/scripts/measure.py | Quantitative scorer (AI patterns + MDD + semantic density) |
| skill/scripts/iterate.py | Iteration engine (measure + feedback generation) |
| skill/references/patterns.md | 24 AI pattern definitions and fix strategies |
| skill/references/metrics.md | Metric formulas, baselines, thresholds |
| skill/references/iteration_strategy.md | Per-iteration focus and escalation logic |
| data/analysis/weights.json | Corpus-derived pattern weights |
