Narrative Focus

v1.2.0

Narrative Focus — detect and fix "narrative weight misalignment" in technical tutorials and interview prep articles. Trigger when users ask to review technic...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install co-kyo/narrative-focus.

Prompt Preview — Install & Setup
Install the skill "Narrative Focus" (co-kyo/narrative-focus) from ClawHub.
Skill page: https://clawhub.ai/co-kyo/narrative-focus
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install narrative-focus

ClawHub CLI

Package manager switcher

npx clawhub@latest install narrative-focus

Security Scan

  • VirusTotal: Pending
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description ("detect and fix narrative weight misalignment") align with the content of SKILL.md and the reference SOPs. The skill is instruction-only and does not request unrelated credentials, binaries, or config paths.
Instruction Scope
SKILL.md and references clearly define detection/correction workflows and an 'authoritative verification' step that requires consulting external authoritative sources (official docs, MDN, RFCs). That verification implies the agent will search or fetch external documentation and compare facts. This is coherent for the stated purpose, but it means the agent may need network access and might transmit modified paragraphs to external search services or documentation endpoints during verification—so users should be aware of any sensitive content they feed into the workflow.
Install Mechanism
No install spec, no code files to run, and no archive downloads. Instruction-only skills are lowest-risk from install point of view.
Credentials
The skill declares no required environment variables, credentials, or config paths. All steps described operate on document text and public authoritative sources; nothing asks for unrelated secrets or broad system access.
Persistence & Privilege
Flags show default privileges (always: false, autonomous invocation allowed). The skill does not request permanent presence or modification of other skills or system settings. Autonomous invocation is the platform default and not by itself a concern here.
Assessment
This skill is internally coherent and low-risk: it is a prose-only SOP for labeling and adjusting narrative weight, and it requests no installs or secrets. Before using it, consider:

  1. The "authoritative verification" step requires consulting external docs. If your drafts contain confidential content you may not want to send to web services or third-party search, restrict network access or run verification against an internal corpus.
  2. Always keep a copy of the original draft before applying automated corrections, and review suggested changes manually (the skill explicitly avoids auto-fixing factual mismatches).
  3. If you plan to let an agent run this skill autonomously, confirm the agent's web/data-access policies are acceptable (where it will search and whether it may post text to external endpoints).

Otherwise, the skill appears to do what it claims.


latest: vk97d13h435bkegtemfd8msxdfh85hreb
73 downloads · 0 stars · 2 versions · updated 3d ago
v1.2.0 · MIT-0

Narrative Focus (叙述重心规范)

Purpose

Prevent narrative weight misalignment in technical tutorials and interview prep articles — where a technical detail's narrative prominence doesn't match its actual role in the reader's mental model. Typical symptom: a transport-layer detail gets treated as a core architectural concept because it has a catchy or familiar name, causing readers to anchor their mental model on the wrong concept.

Target article types: Technical tutorials, deep-dive explainers, interview preparation articles, framework comparison articles — any technical content where concepts have clear causal hierarchies (architectural mechanisms vs transport details) and the reader is building a mental model.

Not applicable: API reference docs, opinion pieces, news/changelog, non-technical content.

This skill uses the AgentSkill-compatible SKILL.md format and works natively with OpenClaw and CodeBuddy. For other AI coding agents (Claude Code, Cursor, etc.), load SKILL.md and the appropriate reference file as context.

Core Concepts

Substitution Test (shared judgment rule)

The sole method for determining a detail's role: If the proposition conveyed by this detail were replaced with an alternative, would the user's observable behavior change?

  • Yes → Architectural (mechanism that determines system behavior)
  • No, only the delivery method changes → Transport (pipe that gets signals/data to the architectural mechanism)
  • Behavior unchanged, only choice/configuration differs → Configurable (switch/option on an existing mechanism)

Critical: Proposition identification before substitution. The same technical detail can convey different propositions depending on context. You must identify what proposition the detail is actually conveying in this article before applying the substitution test — do not substitute the literal term/implementation, substitute the proposition.

Example:

  • "JSX is React.createElement() syntax sugar" — the proposition is "JSX has no independent runtime semantics, it's just JS function calls." Substituting this proposition (e.g., with "JSX is a template with its own directive system") would fundamentally change user behavior → Architectural.
  • If the same sentence were read as the proposition "JSX compiles to the specific function createElement" — substituting this (e.g., with jsx()) would not change user behavior → Transport.
  • The correct reading depends on what the article is actually asserting, not what term appears in the sentence.

Proposition granularity. The same detail can be read at different granularities — e.g., "positional encoding provides location info" (conceptual) vs "sine/cosine formulas implement position encoding" (mathematical). The correct granularity depends on what the article actually elaborates. If the article spends a full section on the math, the proposition is at the math level. If it only mentions the math in passing, the proposition is at the conceptual level. See references/proposition-granularity-guide.md for detailed guidance and examples.

Three-Layer Role Labels

| Label | Definition | Narrative Weight |
| --- | --- | --- |
| Architectural | Mechanism that determines system behavior | High — core section, independent elaboration |
| Transport | Pipe that gets signals/data to the architectural mechanism | Low — one paragraph, labeled as means |
| Configurable | Switch/option on an existing mechanism | Medium — mention as needed, downgraded to supplement |
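The substitution test and the label table above amount to a small decision procedure. A minimal sketch of that mapping (illustrative only — the skill is prose-only and defines no code API, so every name here is an assumption):

```python
from enum import Enum

class Role(Enum):
    ARCHITECTURAL = "high"    # core section, independent elaboration
    TRANSPORT = "low"         # one paragraph, labeled as means
    CONFIGURABLE = "medium"   # mention as needed, downgraded to supplement

def substitution_test(behavior_changes: bool, only_delivery_changes: bool) -> Role:
    """Apply the substitution test to a detail's *proposition*, not its literal term.

    behavior_changes: would the user's observable behavior change if the
        proposition were replaced with an alternative?
    only_delivery_changes: if behavior is unchanged, does only the delivery
        method change (as opposed to only a choice/configuration)?
    """
    if behavior_changes:
        return Role.ARCHITECTURAL
    if only_delivery_changes:
        return Role.TRANSPORT
    return Role.CONFIGURABLE

# JSX reading 1: "JSX has no independent runtime semantics" — substituting
# this proposition changes user behavior, so it is architectural.
print(substitution_test(behavior_changes=True, only_delivery_changes=False).name)
# JSX reading 2: "JSX compiles to createElement specifically" — substituting
# it does not change behavior, only the delivery method, so it is transport.
print(substitution_test(behavior_changes=False, only_delivery_changes=True).name)
```

The key caveat from the text still applies: the boolean answers must be given for the proposition the article actually asserts, identified before the test is run.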

Two Modes

Mode 1: Pre-processing (collection phase)

Use when the user is doing deep research / knowledge collection and wants to label collected details by role to prevent misalignment.

Entry recognition: User mentions "按叙述重心规范收集", "角色标注", "前处理", "collect with narrative focus rules", "role labeling", etc.

Workflow: Load references/pre-processing.md and follow its SOP.

Mode 2: Post-processing (detection + correction)

Use when the user wants to detect and fix narrative weight misalignment in a completed article/document.

Entry recognition: User mentions "检测叙述重心", "叙述重心错位", "审稿重心", "后处理", "detect narrative focus", "narrative weight misalignment", etc.

Workflow: Load references/post-processing.md and follow its SOP.

Notes

  • Both modes share the substitution test and three-layer role labels, but have completely different workflows
  • Pre-processing aims to "label collected items to prevent misalignment later"; post-processing aims to "detect misalignment in existing articles and surgically fix it"
  • Post-processing correction only does local weight migration — it does not rewrite the entire article. It downgrades transport concepts and upgrades architectural concepts without altering correct facts
  • Post-processing includes an authoritative verification step after correction: modified sections are checked against authoritative sources (official docs, team blogs, MDN) to ensure weight migration did not introduce technical semantic errors. If errors are found, they are reported to the user rather than auto-corrected
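Since post-processing performs only local weight migrations and reports verification errors instead of auto-fixing them, its output can be pictured as a structured report. A hypothetical sketch of that shape (the skill specifies no data format — these class and field names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class WeightMigration:
    """One local correction: a concept's narrative weight moved up or down."""
    concept: str
    old_label: str   # e.g. "Architectural" (mis-assigned)
    new_label: str   # e.g. "Transport"
    section: str     # where in the article the migration applies

@dataclass
class VerificationFinding:
    """A mismatch found while checking a modified section against authoritative
    sources; reported to the user, never auto-corrected."""
    section: str
    claim: str
    source: str      # e.g. the official doc or MDN page consulted
    note: str

@dataclass
class PostProcessingReport:
    migrations: list[WeightMigration] = field(default_factory=list)
    findings: list[VerificationFinding] = field(default_factory=list)

# Example: one transport detail downgraded, no verification errors found.
report = PostProcessingReport(
    migrations=[WeightMigration(
        concept="WebSocket framing",
        old_label="Architectural",
        new_label="Transport",
        section="Realtime updates",
    )],
)
print(len(report.migrations), len(report.findings))
```

An empty `findings` list would mean the weight migration introduced no technical semantic errors; any non-empty list goes back to the user for manual review.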
