Essay Humanizer

v1.0.2

Rewrite AI-drafted essays into more human-like academic prose. Fine-tuned LoRA over Qwen3-8B guided by 24 Wikipedia-style AI-writing pattern weights plus MDD...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kevin0818-lxd/essay-humanizer.

Install the skill "Essay Humanizer" (kevin0818-lxd/essay-humanizer) from ClawHub.
Skill page: https://clawhub.ai/kevin0818-lxd/essay-humanizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install essay-humanizer

ClawHub CLI


npx clawhub@latest install essay-humanizer
Security Scan

VirusTotal: Benign

latest: vk97bdrh3ckesdvgn9f6r246wyh838ypy
173 downloads · 0 stars · 3 versions
Updated 1 month ago
v1.0.2 · MIT-0

Essay Humanizer (corpus-informed)

Rewrites AI-generated argumentative/academic essays toward a human baseline style informed by CAWSE (M/D bands), LOCNESS, and a contrast with DeepSeek-generated counterparts. Ships with a fine-tuned LoRA adapter (9.3 MB) and an inference script.

Skill contract

| Component | Path | Notes |
| --- | --- | --- |
| Inference script | scripts/inference.py | Entry point: humanize() function or CLI |
| LoRA adapters | assets/adapters/adapters.safetensors.json | 12.3 MB base64 JSON; auto-decoded to binary on first run |
| Pattern weights | data/analysis/weights.json | Corpus-derived, loaded by inference at runtime |
| Decoder | scripts/decode_adapters.py | Reconstructs .safetensors binary from JSON (auto or manual) |
| Installer | scripts/install_deps.sh | One-time: pip install mlx mlx-lm transformers + decode |
| Base model | Qwen/Qwen3-8B-MLX-4bit | Downloaded from HuggingFace on first run (~4.5 GB, cached) |

Requirements: Apple Silicon macOS with Python 3.9+.
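The decode step in the skill contract (base64 JSON wrapper back to a binary .safetensors file) can be sketched as below. This is a minimal illustration, not the shipped scripts/decode_adapters.py; in particular, the "data" key is a hypothetical JSON field name, since the real schema is defined by that script.

```python
import base64
import json
from pathlib import Path

def decode_adapters(json_path, out_path):
    """Reconstruct the binary .safetensors adapter from its base64 JSON wrapper.

    Assumes the JSON stores the adapter bytes under a single "data" key
    (hypothetical; the real key is defined by scripts/decode_adapters.py).
    Returns the number of bytes written.
    """
    payload = json.loads(Path(json_path).read_text())
    raw = base64.b64decode(payload["data"])
    Path(out_path).write_bytes(raw)
    return len(raw)
```

In the shipped skill this runs automatically on first inference, so calling it manually is only needed if the auto-decode step fails.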

Quick Start

bash scripts/install_deps.sh          # one-time: installs deps + decodes adapter
python scripts/inference.py --file draft.txt   # adapter auto-decodes if not already done

Or from Python:

from scripts.inference import humanize
print(humanize("Your AI-drafted essay text here..."))

Weighted pattern table (descending priority)

When humanizing, address higher-weight rows first. Weights are data-driven from the corpus analysis (Mann–Whitney U test); zero-weight rows were not statistically significant.

| ID | Weight | Category | Pattern |
| --- | --- | --- | --- |
| P06_CLICHE_METAPHORS | 0.1358 | vocabulary | Cliche metaphors |
| P15_EM_DASH_OVERKILL | 0.1358 | punctuation | Em dash overkill |
| P21_MARKDOWN_ARTIFACTS | 0.1358 | formatting | Markdown artifacts |
| P23_TEXTBOOK_BOLDING | 0.1358 | formatting | Textbook bolding |
| P12_PRESENT_PARTICIPLE_TAIL | 0.1133 | rhetorical | Present participle tailing |
| P10_RULE_OF_THREES | 0.0806 | rhetorical | Rule of threes |
| P04_AI_VOCABULARY | 0.0621 | vocabulary | AI vocabulary |
| P14_COMPULSIVE_SUMMARIES | 0.0598 | rhetorical | Compulsive summaries |
| P05_EXCESSIVE_ADVERBS | 0.0540 | vocabulary | Excessive adverbs |
| P13_OVER_ATTRIBUTION | 0.0529 | rhetorical | Over-attribution |
| P11_FALSE_RANGES | 0.0341 | rhetorical | False ranges |
| P17_TRANSITION_OVERUSE | 0.0001 | punctuation | Overuse of transition words |
| P01_UNDUE_EMPHASIS | 0.0000 | content | Undue emphasis |
| P02_SUPERFICIAL_ANALYSIS | 0.0000 | content | Superficial analysis |
| P03_REGRESSION_TO_MEAN | 0.0000 | content | Regression to the mean |
| P07_REDUNDANT_MODIFIERS | 0.0000 | vocabulary | Redundant modifiers |
| P08_FILLER_HEDGING | 0.0000 | vocabulary | Filler hedging |
| P09_NEGATIVE_PARALLELISM | 0.0000 | rhetorical | Negative parallelisms |
| P16_EN_DASH_AVOIDANCE | 0.0000 | punctuation | En dash / hyphen misuse for ranges |
| P18_COLLABORATIVE_REGISTER | 0.0000 | register | Collaborative register |
| P19_LETTER_FORMALITY | 0.0000 | register | Letter-style formality |
| P20_INSTRUCTIONAL_CONDESCENSION | 0.0000 | register | Instructional condescension |
| P22_EXCESSIVE_LISTS | 0.0000 | formatting | Excessive bulleted/numbered lists |
| P24_EMOJI_SYMBOL | 0.0000 | formatting | Emoji/symbol injection |
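The "address higher-weight rows first" rule can be sketched as a small loader that ranks patterns by descending weight and drops the non-significant zero-weight rows. The flat {pattern_id: weight} JSON shape is an assumption here; the real schema is whatever data/analysis/weights.json actually contains.

```python
import json

def prioritized_patterns(weights_path, top_k=None):
    """Return (pattern_id, weight) pairs sorted by descending weight.

    Zero-weight rows are dropped (not statistically significant per the
    table above). Assumes weights.json is a flat {pattern_id: weight}
    mapping, which is an assumption about the file's schema.
    """
    with open(weights_path) as f:
        weights = json.load(f)
    ranked = sorted(
        ((pid, w) for pid, w in weights.items() if w > 0),
        key=lambda item: item[1],
        reverse=True,
    )
    return ranked[:top_k] if top_k else ranked
```

A rewriter loop could then make one editing pass per returned pattern, highest weight first.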

Syntactic complexity (MDD / ADD advisory)

Human Merit/Distinction-band writing in CAWSE often shows variable mean dependency distance (MDD), while AI prose tends to cluster more tightly. When humanizing:

  • Reference the MDD means from the analysis: human ≈ 2.334, AI ≈ 2.455.
  • The human/AI variance ratio is ≈ 1.715: prefer a natural mix of shorter and longer dependency links rather than uniformly smoothed sentences.
  • Avoid flattening every sentence to minimal dependency length; that can read as a different kind of machine polish.
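MDD itself is cheap to compute from a dependency parse. A minimal sketch, assuming head indices come from any parser (e.g. spaCy or a CoNLL-U file), with 0 marking the root:

```python
def mean_dependency_distance(heads):
    """Mean dependency distance for one sentence.

    `heads` maps each 1-indexed token position i (list index i-1) to the
    position of its syntactic head; 0 marks the root, which is skipped.
    MDD is the mean absolute distance between each token and its head.
    """
    distances = [abs(i - h) for i, h in enumerate(heads, start=1) if h != 0]
    return sum(distances) / len(distances)

# "She quickly left": She->left(3), quickly->left(3), left=root(0)
# distances |1-3| = 2 and |2-3| = 1, so MDD = 1.5
```

Comparing the variance of per-sentence MDDs across a draft, rather than the mean alone, is what the variance-ratio advisory above targets.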

Mandatory rule (orchestrator)

  1. Output continuous prose suitable for submission (no chat sign-offs, no "hope this helps").
  2. Plain text only for any math: no raw $$ LaTeX unless the user explicitly requests LaTeX.
  3. Preserve author stance and citations if present; do not fabricate references.

Hosted HTTP API (optional, for non-Mac or remote use)

For non-Apple-Silicon machines or multi-user deployments, run the optional FastAPI server on a Mac host and connect via HTTP/OpenAPI:

  1. Install: pip install fastapi uvicorn[standard]
  2. Run: uvicorn api.main:app --host 0.0.0.0 --port 8765 (set HUMANIZE_API_KEY env var for auth)
  3. Point MCP / OpenAPI tools at https://<your-host>/openapi.json
  4. Call POST /v1/humanize with JSON {"text":"..."} (+ Authorization: Bearer …)

See references/hosted_api.md for details.
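A client call following step 4 above can be sketched with the standard library. The host name is a placeholder, and the endpoint path and JSON body are taken from the steps listed; references/hosted_api.md remains the authoritative description.

```python
import json
import os
import urllib.request

def build_humanize_request(host, text):
    """Build (but do not send) a POST /v1/humanize request.

    The endpoint path and {"text": ...} body follow the hosted-API steps
    above; the HUMANIZE_API_KEY env var supplies the bearer token.
    """
    body = json.dumps({"text": text}).encode()
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('HUMANIZE_API_KEY', '')}",
    }
    return urllib.request.Request(
        f"https://{host}/v1/humanize", data=body, headers=headers, method="POST"
    )

# req = build_humanize_request("your-host", "Draft essay text...")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Separating request construction from sending, as here, also makes the auth header and payload easy to test without a running server.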
