Install

```
openclaw skills install sequential-read
```

Read prose sequentially with structured reflections to simulate the reading experience.

Read prose (novels, non-fiction, articles) by ingesting content in semantic chunks and building structured reflections iteratively. The output captures how your perspective developed over the course of reading — predictions that were wrong, questions that got answered, opinions that shifted — not just a retroactive summary.
| Command | Description |
|---|---|
| `/sequential-read <path-to-file>` | Run a full reading session |
| `/sequential-read <path-to-file> --lens <persona>` | Read with a perspective (e.g., "skeptic", "literary critic", "student") |
| `/sequential-read list` | List all sessions |
| `/sequential-read show <session-id>` | Show the synthesis for a completed session |
The pipeline runs in spawned sub-agents. Novel-length reads are a two-phase process: a main reader handles the bulk of the chunks, then a finisher completes the remaining chunks and writes the synthesis. This is the normal flow, not an error.
When the user invokes `/sequential-read`:

1. Create the session: `python3 {baseDir}/scripts/session_manager.py create <source-file>`
2. Spawn the main reader: `sessions_spawn` with label `reader-{session-id}`
3. Tell the agent: "Session already exists at {session-id}. Do NOT create it again."
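The three steps above can be sketched as follows. This is a hedged illustration: the `spawn` callable stands in for `sessions_spawn` (whose real signature this README does not specify), and the only assumed output format is that `session_manager.py create` prints the session-id on its first line, as stated later in this document.

```python
# Hypothetical orchestration sketch of the invoke flow.
# `spawn` is a stand-in for sessions_spawn; its signature is an assumption.
import subprocess


def parse_session_id(create_output: str) -> str:
    """The session-id is the first line of the `create` command's output."""
    return create_output.splitlines()[0].strip()


def start_reading_session(base_dir: str, source_file: str, spawn) -> str:
    # 1. Pre-create the session before spawning any sub-agent.
    out = subprocess.run(
        ["python3", f"{base_dir}/scripts/session_manager.py", "create", source_file],
        capture_output=True, text=True, check=True,
    )
    session_id = parse_session_id(out.stdout)
    # 2 & 3. Spawn the reader and tell it the session already exists.
    spawn(
        label=f"reader-{session_id}",
        task=f"Session already exists at {session_id}. Do NOT create it again.",
    )
    return session_id
```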
Check the session state with:

```
python3 {baseDir}/scripts/session_manager.py get <session-id>
```

For novels (~20+ chunks), the main reader typically handles ~17-20 chunks before its context fills and the session ends. This is expected behavior, not failure. The finisher picks up the remaining 2-5 chunks and writes the synthesis with full context of all prior reflections.
Spawning the finisher:

`sessions_spawn` with label `finisher-{session-id}`, model `"opus"`. Task:

```
Resume reading session {session-id} at {baseDir path}.
Read reflections written so far to understand context.
Continue from chunk N (the next unwritten chunk).
Write remaining reflections, then run synthesis.
Session path: {session-path}
```
Do not wait or ask the user between the main reader and finisher. When the main reader returns without a synthesis, immediately spawn the finisher. The whole pipeline should be hands-off.
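The hands-off two-phase control flow described above can be sketched like this. The helper names (`run_main_reader`, `spawn_finisher`, `has_synthesis`) are hypothetical stand-ins for the actual sub-agent spawns; only the control flow itself comes from this README.

```python
# Minimal sketch of the hands-off two-phase pipeline.
# All three callables are hypothetical stand-ins, injected for clarity.
def run_pipeline(session_id: str, run_main_reader, spawn_finisher, has_synthesis) -> None:
    run_main_reader(session_id)
    # If the main reader returned without a synthesis, do not wait or ask
    # the user -- immediately spawn the finisher to complete the read.
    if not has_synthesis(session_id):
        spawn_finisher(session_id)
```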
All Python scripts are in `{baseDir}/scripts/`:

- `{baseDir}/scripts/session_manager.py`
- `{baseDir}/scripts/chunk_manager.py`
- `{baseDir}/scripts/state_manager.py`

Templates are in `{baseDir}/templates/`:

- `{baseDir}/templates/reflection_prompt.md`
- `{baseDir}/templates/synthesis_prompt.md`

`/sequential-read <path-to-file> [--lens <persona>]`

```
python3 {baseDir}/scripts/session_manager.py create <source-file> [--lens <persona>]
```
This command handles resume detection automatically:

1. Capture the session-id from the first line of output.
2. Run `python3 {baseDir}/scripts/session_manager.py get <session-id>`.
3. Check the `status` field to determine where to resume:
| Status | Action |
|---|---|
| `preread` | Run preread phase from the start |
| `chunked` | Run reading phase (resumes from `current_chunk`) |
| `read` | Run synthesis phase |
| `complete` | Display the existing synthesis |
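The dispatch in the table above can be expressed as a small lookup. The four status values come from this README; the action strings are illustrative labels for the phase runners, not a verified API.

```python
# Sketch of the resume dispatch. Status values are from this README;
# the action descriptions are illustrative, not a real API.
def resume_action(status: str) -> str:
    actions = {
        "preread": "run preread phase from the start",
        "chunked": "run reading phase from current_chunk",
        "read": "run synthesis phase",
        "complete": "display the existing synthesis",
    }
    if status not in actions:
        raise ValueError(f"unknown session status: {status}")
    return actions[status]
```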
For a new session or `preread` status:

Run the preread sub-skill (`{baseDir}/preread/SKILL.md`) with:

- `SESSION_ID` = the session-id
- `SOURCE_FILE` = path to the source text
- `BASE_DIR` = `{baseDir}`

For `chunked` status (or after preread completes):

Run the reading sub-skill (`{baseDir}/reading/SKILL.md`) with:

- `SESSION_ID` = the session-id
- `BASE_DIR` = `{baseDir}`
- `LENS` = the lens value (or null)

For `read` status (or after reading completes):

Run the synthesis sub-skill (`{baseDir}/synthesis/SKILL.md`) with:

- `SESSION_ID` = the session-id
- `BASE_DIR` = `{baseDir}`

After synthesis completes, send the user:
`memory/sequential_read/<session-id>/`

`/sequential-read list`

```
python3 {baseDir}/scripts/session_manager.py list
```
Print the output to the user.
`/sequential-read show <session-id>`

```
python3 {baseDir}/scripts/session_manager.py get <session-id>
```
If status is `complete`, read and display:

`memory/sequential_read/<session-id>/output/synthesis.md`
If not complete, show the session status and progress.
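The `show` behavior above can be sketched as follows, assuming the session record is a dict with `status` and `current_chunk` fields and the synthesis lives at `output/synthesis.md` under `memory/sequential_read/<session-id>/` (the path stated in this README). The helper name and progress format are hypothetical.

```python
# Sketch of the `show` command logic. The session-record shape and the
# progress string are assumptions; the paths come from this README.
from pathlib import Path


def show_session(session: dict, session_id: str,
                 root: str = "memory/sequential_read") -> str:
    if session.get("status") == "complete":
        # Completed session: return the synthesis itself.
        path = Path(root) / session_id / "output" / "synthesis.md"
        return path.read_text()
    # Not complete: report status and progress instead.
    return f"status={session.get('status')}, chunk={session.get('current_chunk')}"
```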
The reading phase is the most demanding — it runs for many iterations and must sustain quality throughout. Choose the model based on source length:
| Source Length | Recommended Model | Rationale |
|---|---|---|
| Novel (10k+ lines, 20+ chunks) | Opus | Sustained quality over many iterations; large context window handles accumulated state |
| Novella / long essay (3k-10k lines) | Opus or Sonnet | Either works; Sonnet is fine if chunks stay under 15 |
| Article / short work (<3k lines) | Sonnet | Few chunks, context stays manageable |
When spawning the sub-agent, set the model explicitly: `model: "opus"` for novels.
Why this matters: Lighter models degrade over long reading sessions — reflections become stubs as context accumulates. The first test run of this skill on Sonnet with a 35-chunk novel produced 4 genuine reflections and 31 placeholders. Opus is required for novel-length works.
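The selection rule in the table above reduces to a simple threshold check. The line-count cutoffs (3k and 10k) come from this README; the helper name is hypothetical, and for the novella band (where the table allows either model) this sketch defaults to Opus.

```python
# Sketch of the model-selection rule. Thresholds are from this README;
# the novella band allows opus or sonnet -- this sketch defaults to opus.
def pick_reader_model(line_count: int) -> str:
    if line_count >= 10_000:
        return "opus"    # novel: sustained quality over many iterations
    if line_count >= 3_000:
        return "opus"    # novella / long essay: either works
    return "sonnet"      # article / short work: few chunks
```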
Chunk sizing: The structural chunker targets ~550 lines per chunk (range 200-700). For a typical novel (~10-12k lines), this produces ~20 chunks. Longer texts (15k+ lines) may produce 35+ chunks and will need a finisher session (see below).
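The chunk-count arithmetic above (~550 lines per chunk gives ~20 chunks for a ~11k-line novel) can be checked with a one-liner. This is an estimate only; the real structural chunker splits on semantic boundaries within the 200-700 line range.

```python
# Rough chunk-count estimate implied by the ~550-line target.
# Illustration only; the real chunker splits on semantic boundaries.
def estimate_chunks(line_count: int, target: int = 550) -> int:
    return max(1, round(line_count / target))
```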
The two-phase pattern is standard. For novel-length works (20+ chunks), always expect to spawn a finisher after the main reader. The main reader handles ~80-90% of the chunks; the finisher handles the rest plus synthesis. For very long texts (35+ chunks), the main reader may only get through ~25 chunks. Plan accordingly; this is the normal pipeline, not error recovery.
Pre-create sessions: Always create the session with `session_manager.py create` BEFORE spawning the sub-agent. Tell the agent the session already exists and not to create it again. This avoids failures from duplicate creation attempts.
If you maintain reader-mind files (accumulated reading context — character knowledge, thematic threads, critical framework), load them into the sub-agent's task prompt as preamble. This gives the reader continuity across books in a series.
Include context in the spawn task:

```
Before you begin reading, here is your accumulated reader context:

=== READING CONTEXT ===
[contents of reader-mind file]

Now read [book title]...
```
After synthesis, update reader-mind files with new character knowledge, thematic thread updates, and cross-reference observations. Revise rather than append. Keep under ~4000 words per file.
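A trivial guard for the ~4000-word cap mentioned above. This helper is purely illustrative and not part of the skill's scripts.

```python
# Illustrative word-count guard for reader-mind files (not part of the skill).
def within_word_cap(text: str, cap: int = 4000) -> bool:
    return len(text.split()) <= cap
```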
After synthesis is complete, you can integrate the output into whatever workflow you prefer — blog posts, reading logs, knowledge graphs, series trackers, etc. The synthesis file at output/synthesis.md is self-contained and portable.