Abstract Logic Writer

v0.1.1

write, critique, score, compare, and revise English academic abstracts for AI, systems, and computer science papers using computable symbolic rules and a lightweight ontology


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zhiweiwei-nami/abstract-logic-writer.

Prompt preview: Install & Setup
Install the skill "Abstract Logic Writer" (zhiweiwei-nami/abstract-logic-writer) from ClawHub.
Skill page: https://clawhub.ai/zhiweiwei-nami/abstract-logic-writer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install abstract-logic-writer

ClawHub CLI

Package manager switcher

npx clawhub@latest install abstract-logic-writer
Security Scan
VirusTotal
Benign
OpenClaw
Benign
medium confidence
Purpose & Capability
The name and description (abstract drafting, critique, scoring, ontology bootstrap) match the provided assets and scripts: linting, scoring, lexeme/type assets, negative examples, and an ontology bootstrap workflow are all present. No extraneous environment variables, binaries, or config paths are requested.
Instruction Scope
SKILL.md restricts behavior to building proposition sets, applying computable rules, running the included Python scripts, and optionally bootstrapping/downloading a domain ontology. The instructions reference only repository files and user-provided inputs. The one scope note: the ontology bootstrap flow explicitly supports an optional download URL (references/ontology-bootstrap.md), so running that step may fetch external files if invoked.
Install Mechanism
No install spec is provided (instruction-only/packed repo). That is lowest-risk for distribution. The skill executes local Python scripts; no third-party package installation or remote code downloads are mandated by SKILL.md by default.
Credentials
The skill declares no required environment variables, credentials, or config paths. All required data appear to be local files bundled in the repository or user-supplied abstracts/notes.
Persistence & Privilege
The always flag is false, and the skill does not request permanent presence or modify other skills. It runs scripts from its own bundle; autonomous invocation is allowed by platform default but is not unusually privileged here.
Assessment
This skill is internally coherent and appears to do what it says: it lints and scores abstracts using local rule files and can bootstrap or use a small ontology. Before installing or running: (1) Inspect scripts/ontology_bootstrap.py (omitted in the listing) if you plan to use the bootstrap, since it may fetch external ontology URLs; only allow downloads from trusted sources or run that step offline. (2) Remember that the skill runs bundled Python scripts on text you provide; run them in a sandbox or an environment you control if you are concerned about execution risk. (3) No credentials are required by the skill, which is appropriate; if you see prompts later requesting credentials, treat them as unexpected. If you want higher assurance, review the omitted ontology_bootstrap.py source or run the skill with network access disabled.


latest: vk977mq1aazenrqqq9xt2395v2s83qc8f
151 downloads
0 stars
2 versions
Updated 1mo ago
v0.1.1
MIT-0

Abstract Logic Writer

Overview

Use symbolic discourse constraints and a lightweight ontology to draft or critique English academic abstracts. Treat abstract writing as a constrained mapping from propositions to an ordered sentence sequence, not as free-form style imitation.

Core workflow

  1. Build a proposition set P = {background, status, motivation, challenge, idea, technique, evidence} from the user's notes.
  2. Choose the shortest valid role chain whose image still contains motivation, challenge, and idea. The default 4-5 sentence chain is M -> C -> I -> T -> E, with optional background or status prepended.
  3. For each sentence, write a micro-structure general -> specification -> consequence/purpose. Do not place a narrow detail before its governing concept.
  4. Load references/computable-rules.md as the primary specification. Load references/lexeme-typing.md and assets/lexeme_types.json when verb-noun fit is uncertain.
  5. If the domain terminology is sparse or unstable, load references/ontology-bootstrap.md and optionally run: python scripts/ontology_bootstrap.py --domain "..." --terms "term a,term b" --outdir ./ontology_out
  6. Before finalizing, run: python scripts/abstract_lint.py draft.txt for rule diagnostics, and run python scripts/abstract_score.py draft.txt or python scripts/abstract_score.py before.txt --compare after.txt when a formal score or pairwise comparison is needed.
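Steps 1 and 2 above can be sketched in plain Python. The role names come from the skill description; the function, the validation logic, and the sample notes are illustrative and are not part of the bundled scripts.

```python
# Sketch of steps 1-2: build a proposition set from notes, then choose
# the shortest valid role chain. Illustrative only; the skill's real
# logic lives in its bundled scripts and rule files.

REQUIRED = {"motivation", "challenge", "idea"}
DEFAULT_CHAIN = ["motivation", "challenge", "idea", "technique", "evidence"]
OPTIONAL_PREFIX = ["background", "status"]

def choose_role_chain(propositions: dict) -> list:
    """Return the role chain, requiring motivation, challenge, and idea."""
    missing = REQUIRED - propositions.keys()
    if missing:
        raise ValueError(f"notes lack required roles: {sorted(missing)}")
    chain = [r for r in DEFAULT_CHAIN if r in propositions]
    # Prepend optional background/status only when the notes supply them.
    prefix = [r for r in OPTIONAL_PREFIX if r in propositions]
    return prefix + chain

notes = {
    "motivation": "abstract quality varies with author experience",
    "challenge": "style advice is rarely computable",
    "idea": "encode discourse roles as symbolic constraints",
    "technique": "rule-based linting over typed lexemes",
    "evidence": "higher rubric scores on a held-out corpus",
}
print(choose_role_chain(notes))
# -> ['motivation', 'challenge', 'idea', 'technique', 'evidence']
```

With a background proposition added, the chain grows to six sentences, matching the "optional background or status prepended" rule.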

Drafting discipline

  • Assign each sentence exactly one primary discourse role.
  • Never output a sentence that only labels a condition without causal or purposive load. Reject patterns like "X is a challenge." unless the sentence continues with cause, consequence, or operational relevance.
  • When introducing a new concept x, attach motivation, purpose, or consequence within the same sentence or an adjacent sentence.
  • When explaining a mechanism, state what it enables, stabilizes, reduces, or preserves.
  • Prefer typed predicate selection over idiomatic guesswork. Example: traffic grows, demand increases, applications develop, systems evolve, accuracy improves, continuity is maintained.
  • Avoid common AI-sounding markers. Do not use the em dash or "Unlike" unless the user explicitly asks to preserve source wording.
  • Do not end with a generic recap sentence. The last sentence must carry evidence, operational implication, or measured outcome.

Output modes

1. Draft from notes

Return:

  1. an optional symbolic plan when the source notes are underspecified,
  2. the final abstract,
  3. concise lint notes only when there are nontrivial tradeoffs.

2. Critique or rewrite an existing abstract

Return:

  1. a violation list keyed to the symbolic predicates in references/computable-rules.md,
  2. a repaired abstract,
  3. the smallest possible set of lexical substitutions when the main issue is verb-noun mismatch.

3. Produce negative examples

Use references/negative-examples.md. Generate intentionally flawed rewrites that violate one or more named predicates such as summary_only, selection_mismatch, scope_inversion, or forbidden_marker. Label each negative example with the violated rules. Do not present it as recommended style.
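One way to represent a labeled negative example for this mode is sketched below. Only the predicate tags come from the named rules above; the flawed sentence, field names, and helper function are hypothetical, not the skill's real output format.

```python
# Hypothetical record for one labeled negative example. Field names and
# the flawed sentence are illustrative; only the predicate tags come
# from the rules named in references/negative-examples.md.
negative_example = {
    "violates": ["forbidden_marker", "summary_only"],
    "text": "Unlike prior systems, scalability is a challenge.",
    "note": "opens with a banned marker and labels a condition "
            "without causal or purposive load",
}

def label(example: dict) -> str:
    """Render the rule tags attached to a negative example."""
    return f"[{', '.join(example['violates'])}] {example['text']}"

print(label(negative_example))
# -> [forbidden_marker, summary_only] Unlike prior systems, scalability is a challenge.
```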

Resource map

  • README.md: GitHub-facing quick start and repository guide.
  • references/computable-rules.md: formal sentence and discourse constraints.
  • references/lexeme-typing.md: upper ontology for noun classes and verb selection.
  • references/ontology-bootstrap.md: domain ontology construction and download workflow.
  • references/negative-examples.md: contrastive negative examples and rule tags.
  • references/source-abstract-corpus.md: raw domain corpus supplied by the user.
  • scripts/abstract_lint.py: heuristic checker for role order, banned markers, and selection mismatches.
  • scripts/abstract_score.py: formulaic scorer and comparator for one or two abstract fragments.
  • scripts/ontology_bootstrap.py: generate a seed ontology or download a public ontology file.
  • assets/discourse_rules.json: machine-readable role order, forbidden patterns, and score weights.
  • assets/lexeme_types.json: machine-readable lexeme typing rules.
  • examples/: before-and-after fragments for quick scoring demos.
  • evals/: sample scoring outputs for repository documentation.

Working defaults

When the user does not provide all paper details, infer the missing low-risk connective tissue from the available propositions and state the assumptions briefly. Keep the prose compact, domain-accurate, and hierarchy-aware. Prioritize logical fit over rhetorical flourish.
