Appendix Table Writer

v1.0.0

Curate reader-facing survey tables for the Appendix (clean layout + high information density), using only in-scope evidence and existing citation keys.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for willoscar/appendix-table-writer.

Prompt preview: Install & Setup
Install the skill "Appendix Table Writer" (willoscar/appendix-table-writer) from ClawHub.
Skill page: https://clawhub.ai/willoscar/appendix-table-writer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install appendix-table-writer

ClawHub CLI


npx clawhub@latest install appendix-table-writer
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description ask for curated appendix tables, and the repo contains code and pipeline docs that implement table curation from local evidence packs, anchor sheets, and citations. The required binaries (python3/python) are proportional to the task.
Instruction Scope
SKILL.md explicitly limits inputs/outputs and guardrails (no invented facts; validate citation keys; no network). The runtime script (scripts/run.py) reads workspace artifacts listed in SKILL.md (subsection_briefs.jsonl, evidence_drafts.jsonl, anchor_sheet.jsonl, citations/ref.bib) and writes outline/tables_appendix.md and a report. It does not try to read unrelated system files or environment variables in the visible code.
Install Mechanism
No install spec (instruction-only), but the skill bundle includes executable Python scripts and supporting tooling modules; the runtime expects Python on PATH. The absence of an install spec is not itself unsafe, but note that code shipped with the skill will be executed by the agent (no third-party downloads were seen in the inspected files).
Credentials
The skill requests no environment variables, no credentials, and no config paths. Its inputs are repository/workspace files only. There are no unexpected secret requests or unrelated cloud credentials required.
Persistence & Privilege
The skill is not marked always:true and does not attempt to modify other skills or global agent settings in the inspected code. It writes output artifacts into the provided workspace paths only.
Assessment
This skill appears coherent and implements its stated purpose using Python scripts that process local evidence files and produce Markdown tables. Before installing or allowing autonomous runs:

  1. Confirm your agent will pass a workspace that contains only intended inputs (avoid running against a workspace with secrets).
  2. If you have low trust, run the included scripts manually in an isolated environment and inspect the outputs.
  3. No network access is requested, but the bundle contains non-trivial tooling files (e.g., a large quality_gate module); for higher assurance, skim or audit those modules for behaviors you disallow.

A sandboxed run, or review by a developer, is recommended if you plan to grant autonomous invocation on sensitive data.


Runtime requirements

Any bin: python3, python
latest: vk970jht0kchbtncy1aew69zhrh837bm4
149 downloads · 0 stars · 1 version · Updated 1mo ago
v1.0.0
MIT-0

Appendix Table Writer (publishable survey tables)

Why this exists

The pipeline can produce index tables that are useful for planning/debugging, but read like internal artifacts.

This skill writes publishable, reader-facing tables that can live in an Appendix:

  • cleaner layout
  • higher information density
  • survey-style organization (methods/benchmarks/risks), not intermediate state

Index tables remain in outline/tables_index.md and should not be copied verbatim into the paper.

Inputs

  • outline/table_schema.md (table intent + evidence mapping)
  • outline/tables_index.md (internal index; optional but recommended)
  • outline/subsection_briefs.jsonl
  • outline/evidence_drafts.jsonl
  • outline/anchor_sheet.jsonl
  • citations/ref.bib
  • Optional: GOAL.md (a loading sketch for the .jsonl inputs follows this list)
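The .jsonl inputs are JSON Lines files: one JSON object per line. Below is a minimal loading sketch; it assumes nothing about the per-record schema, and the workspace path is a placeholder.

```python
import json
from pathlib import Path

def load_jsonl(path: Path) -> list[dict]:
    """Read a JSON Lines file: one JSON object per non-empty line."""
    records = []
    with path.open(encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # tolerate blank lines
                records.append(json.loads(line))
    return records

ws = Path("workspaces/my-ws")  # placeholder workspace path
briefs = load_jsonl(ws / "outline" / "subsection_briefs.jsonl")
drafts = load_jsonl(ws / "outline" / "evidence_drafts.jsonl")
anchors = load_jsonl(ws / "outline" / "anchor_sheet.jsonl")
```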

Read as needed:

  • references/table_cell_hygiene.md when Appendix table cells still copy raw paper self-narration or generic result wrappers

Machine-readable assets:

  • assets/table_cell_hygiene.json
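The layout of assets/table_cell_hygiene.json is not documented on this page. Purely as an illustration, the sketch below assumes the file exposes a banned_phrases list; verify the actual schema before relying on any key name.

```python
import json
from pathlib import Path

# ASSUMPTION: a top-level "banned_phrases" list; the real asset may differ.
hygiene = json.loads(Path("assets/table_cell_hygiene.json").read_text(encoding="utf-8"))
banned = [p.lower() for p in hygiene.get("banned_phrases", [])]

def dirty_phrases(cell: str) -> list[str]:
    """Return any banned phrases that appear in one table cell."""
    cell_lower = cell.lower()
    return [p for p in banned if p in cell_lower]
```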

Output

  • outline/tables_appendix.md

Roles (use explicitly)

Survey Table Curator (reader lens)

Mission: choose tables a reader actually wants in a survey Appendix.

Do:

  • prefer 2-3 tables that answer big questions (methods, evaluation, risks)
  • make rows comparable (same row unit across the table)
  • make the table legible without reading the whole paper

Avoid:

  • one-row-per-H3 index dumps
  • columns named like internal axes ("axes", "blocking_missing", "evidence readiness")

Production Editor (layout)

Mission: make the table look publishable in LaTeX.

Do:

  • keep columns <= 4
  • keep cells short (phrases, not sentences)
  • use <br> sparingly (0-1 per cell; never a list dump; a lint sketch follows these lists)

Avoid:

  • 6-8 columns with tiny unreadable text
  • cells that look like notes (semicolon chains + slash lists + long parentheticals)
  • slash-separated axis markers (A/B/C) in captions/headers/cells (post-merge voice gate will flag them); use commas or 'and' instead
  • internal axis jargon that reads like an intermediate artifact once printed (e.g., calling table columns "tokens"); prefer "protocol details/metadata/assumptions"
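A minimal lint sketch for these layout rules, checking one Markdown table row at a time; it does not handle escaped pipes inside cells.

```python
import re

def lint_row(row: str) -> list[str]:
    """Check one Markdown table row against the layout rules above."""
    problems = []
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    if len(cells) > 4:
        problems.append(f"{len(cells)} columns (keep <= 4)")
    for cell in cells:
        if cell.count("<br>") > 1:
            problems.append(f"more than one <br> in cell: {cell!r}")
        # crude tell for slash-separated markers like "planning/memory/tools"
        if re.search(r"\w+/\w+/\w+", cell):
            problems.append(f"slash-separated list in cell: {cell!r}")
    return problems
```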

Evidence Steward (verifiability)

Mission: prevent hallucinations.

Do:

  • every row must include citations in a dedicated column (e.g., "Key refs")
  • only restate what appears in evidence packs / anchor sheet
  • when evidence is thin, prefer fewer rows with stronger grounding

Avoid:

  • "representative works" with no supporting claim in packs/anchors
  • adding benchmark/method details not present upstream

Table contract (publishable, Appendix-ready)

outline/tables_appendix.md must (a minimal validation sketch follows this list):

  • contain >=2 Markdown tables
  • use a caption line before each table, e.g. **Appendix Table A1. ...**
  • contain no headings (#, ##, ###) inside the file (the merger adds an Appendix heading)
  • contain no placeholders (TODO, TBD, FIXME, ..., unicode ellipsis)
  • contain citations in rows using [@BibKey] (keys must exist in citations/ref.bib)
  • avoid pipeline jargon and index-like column names
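The shipped scripts/run.py is the authoritative validator; the sketch below only illustrates how these contract checks could be automated, and it assumes captions are bold lines beginning with **Appendix Table.

```python
import re
from pathlib import Path

PLACEHOLDERS = ["TODO", "TBD", "FIXME", "...", "\u2026"]  # \u2026 = unicode ellipsis
SEPARATOR = re.compile(r"^\s*\|[\s:|-]+\|?\s*$")  # e.g. |---|---| or | :--- | ---: |

def check_contract(path: Path) -> list[str]:
    """Sketch of the table contract checks; not the shipped validator."""
    errors = []
    lines = path.read_text(encoding="utf-8").splitlines()

    # A table = a header row directly followed by a separator row.
    starts = [i for i in range(len(lines) - 1)
              if lines[i].lstrip().startswith("|") and SEPARATOR.match(lines[i + 1])]
    if len(starts) < 2:
        errors.append(f"expected >= 2 tables, found {len(starts)}")

    # The nearest non-empty line above each table must be its caption.
    for i in starts:
        prev = next((ln.strip() for ln in reversed(lines[:i]) if ln.strip()), "")
        if not prev.startswith("**Appendix Table"):
            errors.append(f"table at line {i + 1} has no caption line")

    for n, ln in enumerate(lines, start=1):
        if ln.lstrip().startswith("#"):
            errors.append(f"line {n}: headings are not allowed in this file")
        for p in PLACEHOLDERS:
            if p in ln:
                errors.append(f"line {n}: placeholder {p!r}")
    return errors
```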

Workflow (explicit inputs)

  • Start from GOAL.md (scope) and outline/table_schema.md (what each table must answer).
  • Use outline/tables_index.md as a shortlist source, but do not paste it verbatim.
  • Fill rows/cells using outline/subsection_briefs.jsonl, outline/evidence_drafts.jsonl, and outline/anchor_sheet.jsonl (no guessing).
  • Validate every cited key against citations/ref.bib (a sketch follows this list).
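A sketch for the citation-key check, assuming standard BibTeX entries (@article{Key2024, ...}) and the single-key [@BibKey] form used in the contract.

```python
import re
from pathlib import Path

def bib_keys(bib_path: Path) -> set[str]:
    """Collect entry keys from ref.bib, e.g. @article{Smith2024, ...} -> {"Smith2024"}."""
    text = bib_path.read_text(encoding="utf-8")
    return set(re.findall(r"@\w+\s*\{\s*([^,\s{}]+)\s*,", text))

def unknown_citations(tables_md: str, keys: set[str]) -> set[str]:
    """Return [@BibKey] citations that do not exist in ref.bib."""
    cited = set(re.findall(r"\[@([^\]\s]+)\]", tables_md))
    return cited - keys

missing = unknown_citations(
    Path("outline/tables_appendix.md").read_text(encoding="utf-8"),
    bib_keys(Path("citations/ref.bib")),
)
```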

Recommended Appendix tables (default set)

If you are unsure what to build, start with these two:

  1. Method/architecture map (representative works)
  • Row unit: work/system line (not H3 id)
  • Columns (example):
    • Work (short name)
    • Core idea (1 short phrase)
    • Loop + interface assumptions (1 short phrase; reader-facing)
    • Key refs (2-4 cite keys; see the table skeleton after this list)
  2. Evaluation protocol / benchmark map
  • Row unit: benchmark / evaluation setting (or a canonical protocol dimension if benchmarks are thin)
  • Columns (example):
    • Benchmark / setting
    • Task + metric (phrases, not definitions)
    • Key protocol constraints (budget/cost/latency/steps/tool access/threat model)
    • Key refs (2-4 cite keys)
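For concreteness, here is a caption-plus-table skeleton in the expected output shape; every work name and cite key below is a placeholder, not a real reference.

```markdown
**Appendix Table A1. Representative methods and their interaction assumptions.**

| Work | Core idea | Loop + interface assumptions | Key refs |
| --- | --- | --- | --- |
| ExampleAgent | plans before acting | multi-turn loop, text-only tools | [@PlaceholderKey1] |
| ExampleCritic | critiques its own drafts | single pass, code interpreter | [@PlaceholderKey2] |
```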

Optional third table (only if it stays clean): Risk / threat-surface map

  • Row unit: threat/failure mode category
  • Columns: surface; why it matters; mitigation pattern; key refs

Positive / negative examples (style)

Bad (index table / internal notes):

  • Column: "Axes"
  • Cell: planning / memory / tools / eval / safety (slash dump)
  • Rows: every H3 id with 5+ <br> lines

Good (survey table):

  • Column labels are reader-facing ("Core idea", "Task + metric", "Constraint")
  • Cells are short phrases (no narration)
  • A reader can scan and compare rows quickly

Also good (avoid intermediate-artifact tells):

  • Don't label columns as "token(s)". If you need the idea, rewrite as "protocol details/metadata/assumptions".
  • Avoid ASCII arrows like -> inside cells; prefer natural phrasing (e.g., "interleaves reasoning traces with tool actions").

When to stop / route upstream

If you cannot fill a row without guessing:

  • remove the row (prefer fewer, solid rows), and
  • route upstream: strengthen evidence-draft / anchor-sheet for that area.

Script (generator + validator)

Quick Start

  • python scripts/run.py --help
  • python scripts/run.py --workspace workspaces/<ws>

All Options

  • --workspace <workspace_dir> (required)
  • --unit-id <id> (optional; used only for runner bookkeeping)
  • --inputs <a;b;c> (optional; ignored by the validator; kept for runner compatibility)
  • --outputs <relpath> (optional; defaults to outline/tables_appendix.md)
  • --checkpoint <C#> (optional; ignored by the validator; an argparse sketch of this interface follows)
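The documented flags map directly onto argparse; a minimal sketch of that interface (not the shipped scripts/run.py) is below.

```python
import argparse
from pathlib import Path

parser = argparse.ArgumentParser(description="Generate and validate appendix tables.")
parser.add_argument("--workspace", type=Path, required=True, help="workspace directory")
parser.add_argument("--unit-id", help="optional; runner bookkeeping only")
parser.add_argument("--inputs", help="optional; semicolon-separated, ignored by the validator")
parser.add_argument("--outputs", default="outline/tables_appendix.md",
                    help="relative output path for the appendix tables file")
parser.add_argument("--checkpoint", help="optional; ignored by the validator")
args = parser.parse_args()

tables_path = args.workspace / args.outputs  # e.g. <ws>/outline/tables_appendix.md
```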

Examples

  • Validate the default appendix tables file:

    python scripts/run.py --workspace workspaces/e2e-agent-survey-latex-verify-YYYYMMDD-HHMMSS

  • Validate a workspace that writes appendix tables to a non-standard path:

    python scripts/run.py --workspace workspaces/<ws> --outputs outline/tables_appendix.md

Notes:

  • This script writes outline/tables_appendix.md from the existing evidence artifacts and then validates the result.
  • It always writes a short report to output/TABLES_APPENDIX_REPORT.md.
