Book summarizer

Use this skill when the user wants a long-form book summarized in the user's preferred language at an explicitly verified 20 percent compression ratio, with...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
by Daniel Lélis Baggio (@dannyxyz22)
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
Name and description match the included scripts and instructions. The three Python helper scripts implement counting, splitting, aggregating, and ratio verification, which is appropriate for a summarizer. Minor mismatch: the skill declares no required binaries, but SKILL.md and the scripts assume a Python runtime (commands like `python scripts/...`).
Instruction Scope
SKILL.md confines activity to local files and the provided scripts (count, split, aggregate, verify). It also instructs the agent to draft summary batches 'in chat' — which implies sending book text to the LLM provider during summarization. The skill itself does not add remote endpoints, but using it will typically transmit source text to whatever model/service the agent invokes.
Install Mechanism
No install spec (instruction-only). Scripts are bundled and rely only on the Python standard library; nothing is downloaded or executed from external URLs.
Credentials
No environment variables, credentials, or config paths are requested. The skill does not require unrelated secrets or system credentials.
Persistence & Privilege
`always` is false, and the skill does not request elevated or persistent system presence. It does not modify other skills or system-wide settings.
Assessment
This package is internally consistent and only operates on local text files via bundled Python scripts. Before installing: (1) ensure you have a Python runtime available (the skill's metadata omits this dependency), (2) be aware that summarization normally involves sending book text to the model/provider — do not use it for sensitive/private documents unless you trust your model endpoint, and (3) respect copyright for non-public works. The code is small and uses only the standard library; review it yourself if you want to confirm behavior locally.


Current version: v1.0.1 (latest: vk97c40fsg6r0p93089279g6fb1837j2r)


SKILL.md

Book Summarizer

Use this skill for requests like:

  • "Summarize this book at a 20% compression ratio"
  • "Generate a substantial summary and verify the ratio"
  • "Produce the summary in batches and validate the final total"

Rules

  • Default target ratio: 0.20
  • Default tolerance: 0.02
  • Accept only summaries between 18% and 22% of the original word count
  • Output must be in the user's language of choice (e.g., pt-BR)
  • If the ratio is outside the allowed range, do not accept the summary as final
  • In chat-only mode, if the source is very large, generate the summary in multiple batches and merge them before validation
  • All packaged helper scripts live in scripts/ inside this skill folder
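The acceptance rule above can be sketched in a few lines. This is an illustrative check, not the bundled `verify_summary_ratio.py` (whose source is not shown here); the function name `ratio_ok` is my own.

```python
# Minimal sketch of the acceptance rule: a summary passes only if its
# word count lands within target_ratio ± tolerance of the original.
def ratio_ok(original_words: int, summary_words: int,
             target_ratio: float = 0.20, tolerance: float = 0.02) -> bool:
    ratio = summary_words / original_words
    return abs(ratio - target_ratio) <= tolerance

# A 100,000-word book needs a summary of roughly 18,000-22,000 words.
print(ratio_ok(100_000, 19_500))  # 19.5% -> True
print(ratio_ok(100_000, 15_000))  # 15% -> False
```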

Workflow

  1. Start from a local plain-text book file or a downloaded Project Gutenberg text.
  2. Count the source words with `python scripts/book_tools.py count <original_file>`.
  3. Compute the target summary length with `python scripts/book_tools.py target <original_file> --ratio 0.20`.
  4. If the source is too large for one reply, split it with `python scripts/split_book.py <original_file> 3000`.
  5. Draft summary batches in order, preserving chronology and section fidelity.
  6. Merge the batches with `python scripts/book_tools.py aggregate <summary_file> <batch_files...>`.
  7. Validate the final ratio with `python scripts/verify_summary_ratio.py <original_file> <summary_file>`.

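The count and target steps of the workflow reduce to simple word arithmetic. The bundled `book_tools.py` is not reproduced on this page, so the helpers below are an assumption about what those subcommands compute, using plain whitespace tokenization.

```python
# Hedged sketch of the count/target computations; the real book_tools.py
# may tokenize differently.
def count_words(text: str) -> int:
    # Whitespace-delimited tokens: the simplest word-count convention.
    return len(text.split())

def target_length(text: str, ratio: float = 0.20) -> int:
    # Target summary word count at the given compression ratio.
    return round(count_words(text) * ratio)

book = "word " * 50_000  # stand-in for a real book file's contents
print(count_words(book))          # 50000
print(target_length(book, 0.20))  # 10000
```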
Key Files

  • scripts/book_tools.py
  • scripts/split_book.py
  • scripts/verify_summary_ratio.py
  • SKILL.md

Notes

  • The packaged scripts use only the Python standard library.
  • Run the commands from the skill folder, or use explicit paths if you call them from elsewhere.
  • For very large books, prefer the automated pipeline over single-turn chat drafting.
  • In chat-only mode, books above roughly 80k words should be summarized over multiple turns; do not pretend a single short draft satisfies the 20% rule.
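For the multi-turn case in the notes above, splitting a long book into fixed-size word batches looks roughly like this. It mirrors the `split_book.py <original_file> 3000` step, but the bundled script's actual implementation may differ; `split_into_batches` is an illustrative name.

```python
# Illustrative sketch of splitting a book into ~3000-word batches for
# multi-turn summarization (the bundled split_book.py may differ).
def split_into_batches(text: str, batch_words: int = 3000) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + batch_words])
            for i in range(0, len(words), batch_words)]

book = "lorem " * 80_000  # ~80k words, above the single-turn threshold
batches = split_into_batches(book)
print(len(batches))  # 27 batches of up to 3000 words each
```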

Files

4 total
