knowledge-health-checker

v1.0.1

Audit and improve Markdown knowledge-base health across Obsidian, Logseq, Notion exports, docs folders, and wiki repositories. Detect empty placeholder notes...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install xb19960921/knowledge-health-checker.

Prompt preview (Install & Setup):
Install the skill "knowledge-health-checker" (xb19960921/knowledge-health-checker) from ClawHub.
Skill page: https://clawhub.ai/xb19960921/knowledge-health-checker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install knowledge-health-checker

ClawHub CLI


npx clawhub@latest install knowledge-health-checker

Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)

Purpose & Capability
Name/description match the included artifacts: three Python scripts implement scanning, HTML report generation, and fix‑script generation for Markdown knowledge bases. No unrelated credentials, binaries, or install steps are requested.
Instruction Scope
SKILL.md limits destructive actions (default: report only, require explicit confirmation before applying fixes). It also asks the agent to produce full reports and file contents for review. This is coherent but increases the risk that users will accidentally expose repository contents when sharing outputs—inspect outputs carefully before publishing.
Install Mechanism
No install spec; this is instruction‑plus-code. All code is bundled in the skill (no network installs/downloads). That reduces supply‑chain risk compared with remote fetches.
Credentials
The skill requests no environment variables or external credentials and only manipulates local files under the provided scan path, which is proportionate to its purpose.
Persistence & Privilege
always:false and no cross‑skill/system config changes. However the skill can generate executable scripts (shell + Python) that include rm and sed commands; if the agent/platform is permitted to execute those scripts automatically, that increases potential damage. The SKILL.md default forbids auto‑applying fixes, which mitigates this risk.
Assessment
This skill appears to do what it says: scan Markdown files, produce a prioritized health report, and generate safe fix plans. Before installing or running:

1. Run it on a copy or backup of your knowledge base.
2. Inspect generated fix scripts before executing; they include rm/sed operations and are executable by default.
3. Do not grant the agent/platform permission to auto-execute generated scripts without manual review.
4. Note that the skill outputs full file contents in reports; avoid sharing those outputs if they contain sensitive data.
5. Test on a small dataset first to confirm behavior and platform compatibility (e.g., sed -i differences on macOS vs Linux).

If you want higher assurance, request a quick code review of the truncated/unfinished portions (there is a small coding bug/truncation in health_check.py as provided) before relying on it at scale.

Like a lobster shell, security has layers — review code before you run it.

50 downloads · 1 star · 1 version
Updated 3d ago
v1.0.1
MIT-0

Knowledge Health Checker

Knowledge Health Checker audits a Markdown-based knowledge base as a living system, not a folder full of files.

It detects whether the knowledge garden is:

  • connected or fragmented
  • dense or hollow
  • current or stale
  • navigable or full of dead links
  • safe to auto-fix or requiring human review

The goal is not only to find problems, but to produce a prioritized, safe, actionable health report.


When to Use

Use this skill for:

  • Obsidian vault cleanup
  • Logseq / Notion Markdown export review
  • documentation repository health checks
  • wiki linting before migration or publishing
  • broken link detection
  • empty placeholder / TODO note detection
  • orphan note and graph fragmentation analysis
  • content density and structure quality review
  • periodic knowledge-base maintenance

Do not use it for semantic fact-checking. This skill checks structure, links, density, freshness, and maintainability, not whether every claim is true.


Core Principle

A healthy knowledge base has four properties:

  1. Substance — notes contain enough content to be useful.
  2. Connectivity — important notes are linked into the graph.
  3. Navigability — links, headings, and structure help readers move through knowledge.
  4. Maintainability — stale, broken, duplicate, or low-value content is visible and repairable.

A knowledge base can be large and still unhealthy. Size is not health.


Default Workflow

Step 1: Confirm scope and safety

Before scanning, identify:

Target path:
Formats: markdown / wiki links / relative links
External URL check: yes/no
Generate fix script: yes/no
Auto-apply fixes: no by default
Exclude directories:
Estimated file count:

Safe default:

scan only → report only → generate fix plan → user reviews → user applies

Never delete, rename, rewrite, or auto-apply fixes without explicit confirmation.

Step 2: Build file and heading index

Index:

  • .md files
  • normalized filenames and aliases
  • headings / anchors
  • relative paths
  • wiki links such as [[note]] and [[note#heading]]
  • markdown links such as [text](path.md)

Exclude by default:

.git/
node_modules/
__pycache__/
.obsidian/
.trash/
dist/
build/
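
The indexing pass above can be sketched in a few lines. The regexes below are simplified approximations of wiki-link and Markdown-link syntax for illustration; they are not the bundled health_check.py implementation, which may handle more edge cases:

```python
import os
import re

# Directories skipped by default, matching the exclude list above.
EXCLUDE_DIRS = {".git", "node_modules", "__pycache__", ".obsidian", ".trash", "dist", "build"}

# Illustrative patterns for [[note]], [[note#heading]], [[note|alias]], and [text](path.md).
WIKI_LINK_RE = re.compile(r"\[\[([^\]|#]+)(?:#[^\]|]+)?(?:\|[^\]]+)?\]\]")
MD_LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+\.md)(?:#[^)]*)?\)")
HEADING_RE = re.compile(r"^#{1,6}\s+(.*)", re.MULTILINE)

def build_index(root):
    """Index every .md file's headings and outgoing links under `root`."""
    index = {}
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune excluded directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDE_DIRS]
        for name in filenames:
            if not name.endswith(".md"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                text = f.read()
            index[os.path.relpath(path, root)] = {
                "headings": [h.strip() for h in HEADING_RE.findall(text)],
                "wiki_links": [m.strip() for m in WIKI_LINK_RE.findall(text)],
                "md_links": MD_LINK_RE.findall(text),
            }
    return index
```

Pruning `dirnames` in place is what makes the exclude list effective: os.walk skips the pruned subtrees entirely instead of scanning and filtering afterwards.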

Step 3: Detect hollow or low-value notes

Flag likely hollow notes when they match one or more:

  • fewer than 200 characters
  • no heading
  • only TODO / placeholder text
  • image-heavy with very little explanation
  • template content not filled in
  • empty exported page from Notion/Logseq

Classify severity:

| Severity | Meaning | Typical action |
|---|---|---|
| P0 | Empty or pure placeholder | delete, archive, or fill immediately |
| P1 | Too thin to be useful | expand with definition, context, examples |
| P2 | Usable but weak | improve structure or add links |
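
A minimal classifier for these heuristics might look like the following. The thresholds and placeholder patterns are illustrative, not the exact logic in the bundled scripts:

```python
import re

def classify_hollow(text, min_chars=200):
    """Return P0/P1/P2 for a likely hollow note, or None if it looks fine.

    Heuristics mirror the checklist above; exact thresholds are illustrative.
    """
    stripped = text.strip()
    # Body with heading lines removed, to catch notes that are headings-only.
    body = re.sub(r"^#{1,6}\s.*$", "", stripped, flags=re.MULTILINE).strip()
    if not body or re.fullmatch(r"(?:TODO|TBD|WIP)[.!]*", body, re.IGNORECASE):
        return "P0"  # empty or pure placeholder
    if len(stripped) < min_chars:
        return "P1"  # too thin to be useful
    if not re.search(r"^#{1,6}\s", stripped, flags=re.MULTILINE):
        return "P2"  # has substance but no structure
    return None
```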

Step 4: Detect broken links

Check:

  • wiki file links: [[filename]]
  • wiki heading links: [[filename#heading]]
  • local markdown links: [text](../path/file.md)
  • image/embed paths
  • optional external URLs, only with user confirmation because it can be slow/noisy

For each broken link, report:

source file
link text
target
link type
probable fix if a similar file exists
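
The "probable fix" step can be sketched with difflib's fuzzy matching. The function and its index shape are hypothetical illustrations of the idea, not the bundled implementation:

```python
import difflib

def check_wiki_links(index):
    """Flag [[wiki links]] whose target note is missing and suggest a fix.

    `index` maps a note name (filename without .md) to its outgoing wiki links.
    """
    known = list(index)
    problems = []
    for source, links in index.items():
        for target in links:
            if target in index:
                continue
            # Suggest the closest existing note name, if any is similar enough.
            close = difflib.get_close_matches(target, known, n=1)
            problems.append({
                "source": source,
                "target": target,
                "type": "wiki",
                "probable_fix": close[0] if close else None,
            })
    return problems
```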

Step 5: Analyze content density and structure

Measure:

  • word/character count
  • heading depth and hierarchy
  • list/table/code-block usage
  • internal link count
  • external link count
  • last modified time
  • very long files that may need splitting
  • files with no inbound or outbound links

Suggested ranges:

| Signal | Healthy range | Warning |
|---|---|---|
| Short note | 300+ words or intentionally atomic | <200 characters |
| Long note | still navigable with headings | >3000 words without structure |
| Internal links | at least 1-3 for durable notes | zero links = possible orphan |
| Freshness | depends on domain | stale if >90 days and marked active |
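
As a sketch, the per-note density signals can be computed like this (regexes and thresholds are illustrative approximations of the ranges above):

```python
import re

def note_metrics(text):
    """Density signals for one note; thresholds follow the table above."""
    words = len(text.split())
    headings = len(re.findall(r"^#{1,6}\s", text, flags=re.MULTILINE))
    internal = len(re.findall(r"\[\[[^\]]+\]\]", text))
    external = len(re.findall(r"\[[^\]]*\]\(https?://[^)]+\)", text))
    return {
        "words": words,
        "headings": headings,
        "internal_links": internal,
        "external_links": external,
        "too_short": len(text.strip()) < 200,
        "needs_split": words > 3000 and headings == 0,
        "possible_orphan": internal == 0,
    }
```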

Step 6: Analyze knowledge graph health

Build a graph:

node = markdown file
edge = internal link

Report:

  • total nodes
  • total edges
  • orphan nodes
  • central nodes
  • weakly connected components
  • one-way links
  • fragmented topic clusters

A perfect graph is not required. The goal is to identify the highest-value repair points.
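
The graph signals above can be sketched with a plain dict-based adjacency structure; this is an illustrative standalone version, not the bundled scanner:

```python
from collections import defaultdict

def graph_stats(edges, nodes):
    """Summarize graph health: orphans, components, and one-way links.

    `edges` is a list of (source, target) note pairs; `nodes` is the full
    note set, so notes with no links still show up as orphans.
    """
    edge_set = set(edges)
    adj = defaultdict(set)
    linked = set()
    for src, dst in edge_set:
        adj[src].add(dst)
        adj[dst].add(src)  # undirected view for connectivity analysis
        linked.update((src, dst))
    orphans = sorted(set(nodes) - linked)
    # Count weakly connected components among linked notes via DFS.
    seen, components = set(), 0
    for start in linked:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node] - seen)
    one_way = sum(1 for s, d in edge_set if (d, s) not in edge_set)
    return {"nodes": len(set(nodes)), "edges": len(edge_set),
            "orphans": orphans, "components": components,
            "one_way_links": one_way}
```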

Step 7: Score health

Default scoring:

| Dimension | Weight | Good state |
|---|---:|---|
| Hollow note rate | 25% | few or no empty placeholders |
| Broken link rate | 30% | no broken internal links |
| Content density | 25% | most notes have useful substance and structure |
| Network connectivity | 20% | important notes are connected; few accidental orphans |

Health score:

health = weighted score from 0 to 100

Use labels:

| Score | Label |
|---|---|
| 90-100 | Excellent |
| 75-89 | Healthy |
| 60-74 | Needs maintenance |
| 40-59 | Fragile |
| 0-39 | Critical |
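
Combining the weights and labels is a straightforward weighted sum; this sketch assumes each dimension is already normalized to a 0-100 subscore:

```python
# Default weights and score labels from the tables above.
WEIGHTS = {"hollow": 0.25, "broken_links": 0.30, "density": 0.25, "connectivity": 0.20}
LABELS = [(90, "Excellent"), (75, "Healthy"), (60, "Needs maintenance"),
          (40, "Fragile"), (0, "Critical")]

def health_score(subscores):
    """Combine per-dimension subscores (each 0-100) into a score and label."""
    score = sum(WEIGHTS[dim] * subscores[dim] for dim in WEIGHTS)
    label = next(name for floor, name in LABELS if score >= floor)
    return round(score, 1), label
```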

Step 8: Generate report and fix plan

Return a concise summary first. For large scans, provide a full report path.

Fix plans must be safe:

  • generate proposed changes
  • group by risk
  • include reason for each fix
  • require user review before applying destructive changes

Never silently delete or rewrite knowledge files.


Output Format

Use this format:

## Knowledge Health Summary
- Target:
- Files scanned:
- Health score:
- Label:
- Top risks:

## Findings
| Category | Count | Severity | Notes |
|---|---:|---|---|
| Hollow notes |  |  |  |
| Broken links |  |  |  |
| Orphan notes |  |  |  |
| Overlong notes |  |  |  |
| Stale active notes |  |  |  |

## Highest-Impact Fixes
1. P0:
2. P1:
3. P2:

## Safe Fix Plan
- Auto-safe fixes:
- Needs human review:
- Do not auto-apply:

## Artifacts
- Report:
- Fix script:
- Raw JSON:

For small knowledge bases, include concrete file examples. For large ones, include top 10 examples per category and write full details to a report file.


Safe Fix Policy

Classify fixes by risk:

| Risk | Examples | Permission |
|---|---|---|
| Low | generate report, list broken links, suggest links | no extra confirmation |
| Medium | create fix script, add missing backlinks in draft output | ask before writing files |
| High | delete notes, rename files, rewrite links globally, split files | explicit confirmation required |

Default behavior: report and propose, do not mutate.


Bundled Scripts

Use these when available:

  • scripts/health_check.py — core scanner for hollow files, broken links, density, and graph stats.
  • scripts/report_generator.py — HTML report generation.
  • scripts/auto_fix.py — fix-plan or repair-script generation.

Run scripts from the skill directory or pass absolute paths. If a script lacks CLI ergonomics, inspect it and adapt safely rather than guessing destructive behavior.


Example Commands

Basic scan:

python3 scripts/health_check.py /path/to/knowledge-base

Generate a report from scan results if supported:

python3 scripts/report_generator.py results.json --output health-report.html

Generate a fix plan, not auto-apply:

python3 scripts/auto_fix.py results.json --dry-run

If the bundled script does not support these exact flags, read the script first and use its actual interface.


Test Prompts

Use test-prompts.json for Darwin-style regression evaluation. Good test coverage should include:

  • small Markdown folder with broken links
  • Obsidian-style wiki links and missing headings
  • placeholder-heavy exported notes
  • a large graph with orphan clusters
  • request for safe fix plan without auto-apply

Anti-Patterns

Avoid:

  • equating more notes with better knowledge
  • deleting or rewriting files without confirmation
  • checking external URLs by default on large vaults
  • treating all orphan notes as bad; some are intentionally private/draft
  • creating huge reports with no prioritized next action
  • producing a repair script without explaining risk
  • ignoring non-English filenames and encodings

Quality Bar

A good knowledge health check must be:

  • safe: no destructive changes without confirmation
  • specific: names files and link targets
  • prioritized: P0/P1/P2, not a flat dump
  • actionable: includes exact repair suggestions
  • scalable: summarizes large vaults without flooding context
  • portable: works for Obsidian, Logseq, Notion exports, and plain Markdown

If the output only says “you have broken links” without showing where, why it matters, and what to do next, it failed.
