Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Citation Anchoring

v1.0.0

Regression-check citation anchoring (citations stay in the same subsection) to prevent “polish drift” that breaks claim→evidence alignment. **Trigger**: cita...

0 stars · 194 downloads · 1 current · 1 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for willoscar/citation-anchoring.

Prompt preview: Install & Setup
Install the skill "Citation Anchoring" (willoscar/citation-anchoring) from ClawHub.
Skill page: https://clawhub.ai/willoscar/citation-anchoring
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install citation-anchoring

ClawHub CLI


npx clawhub@latest install citation-anchoring
Security Scan

VirusTotal: Benign (view report)
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's stated purpose is a narrow, analysis-only citation-anchoring check (read a baseline JSONL and the DRAFT.md, then write a report). However, the bundle contains many pipeline definitions and sizeable tooling modules (tooling/*.py, pipelines/*.md, a 275 kB quality_gate module, executor logic, etc.). That pipeline/tooling footprint is disproportionate for a small regression check and suggests the skill is a general pipeline component rather than a minimal, single-purpose checker.
Instruction Scope
SKILL.md itself is well-scoped: it says 'analysis-only', 'Network: none', and describes reading output/DRAFT.md and the baseline JSONL and writing output/CITATION_ANCHORING_REPORT.md. However, the included code (tooling/executor.py) can run subprocesses (it constructs and runs repo_root/scripts/run.py) and reads/writes many workspace files. The instructions do not explicitly tell the agent to execute arbitrary scripts, but the bundled executor enables that behavior if used, which expands runtime scope beyond the simple file comparison described in SKILL.md.
Install Mechanism
No external install/downloads are declared (no install spec). The skill requires only a Python binary (python3 or python) which is appropriate for included Python code. No third-party network downloads are present in the provided metadata.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. That aligns with the described purpose (local file analysis).
Persistence & Privilege
The skill is not marked always:true and uses the platform default (agent-invocable/autonomous allowed). It does not request to modify other skills or system-wide config in the provided files. Still, autonomous invocation combined with executor subprocess logic increases the potential blast radius if misused.
Scan Findings in Context
[subprocess.run] unexpected: tooling/executor.py uses subprocess.run to execute a script at repo_root/scripts/run.py and will capture stdout/stderr to logs. For a narrowly scoped citation-anchor check, executing repository scripts is not expected and expands the skill's runtime capabilities.
[writes_to_workspace_files] expected: The skill and bundled tooling perform file I/O (reading baseline JSONL and DRAFT.md, writing report files). File writes are expected for this purpose, but the toolkit includes wide-ranging helpers (atomic_write_text, update_status_log, backup_existing) that can modify many files in the workspace.
What to consider before installing
This skill's SKILL.md describes a safe, offline check (read baseline JSONL + DRAFT.md → produce an anchor report). However, the package includes a large pipeline/tooling codebase and an executor that can run repo scripts via subprocess.run. Before installing or enabling this skill:

  1. Inspect repo_root/scripts/run.py (or confirm it does not exist); that is the executable the bundle may call.
  2. Review tooling/executor.py and any entrypoint scripts to understand what will be executed and which files will be touched.
  3. If you only need the simple anchor check, consider extracting or running a minimal script that performs the JSONL vs DRAFT.md comparison rather than enabling the entire bundle.
  4. Run the skill in a sandbox workspace with non-sensitive files first.
  5. If you do enable autonomous invocation, prefer least-privilege workspaces and ensure no secrets or sensitive files are present, because the executor could execute repo-local scripts that perform broader actions.
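Step 2 can be partly automated with a quick static sweep before installing. A minimal sketch, assuming the bundle has been unpacked to a local directory (the path below is a placeholder, and the pattern list is a heuristic, not an exhaustive audit):

```python
import re
from pathlib import Path

# Heuristic patterns for execution-capable calls that exceed a
# read-files / write-report scope.
EXEC_PATTERNS = re.compile(
    r"\b(subprocess\.(run|Popen|call|check_output)|os\.system|eval\(|exec\()"
)

def audit_bundle(root):
    """Yield (file, line_no, line) for each execution-capable call found."""
    for path in Path(root).rglob("*.py"):
        for no, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if EXEC_PATTERNS.search(line):
                yield path, no, line.strip()

if __name__ == "__main__":
    # "citation-anchoring" is an assumed unpack directory; adjust as needed.
    for path, no, line in audit_bundle("citation-anchoring"):
        print(f"{path}:{no}: {line}")
```

A clean sweep does not prove the bundle is safe (code can be obfuscated), but any hit tells you exactly which file to read first.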

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Any bin: python3, python
latest: vk973b7yve01q90gc5fsk93sjc18360n8
194 downloads
0 stars
1 version
Updated 23h ago
v1.0.0
MIT-0

Citation Anchoring (regression)

Purpose: prevent a common failure mode in which polishing rewrites text and accidentally moves citation markers into a different ### subsection, breaking claim→evidence alignment.

Inputs

  • output/DRAFT.md
  • output/citation_anchors.prepolish.jsonl (baseline; created by draft-polisher on first run)

Outputs

  • output/CITATION_ANCHORING_REPORT.md (PASS/FAIL + drift examples)

Baseline policy

  • draft-polisher captures a baseline once per run: output/citation_anchors.prepolish.jsonl.
  • Subsequent polish runs should keep per-H3 citation sets stable.
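A minimal sketch of how such a baseline could be captured. Both the JSONL record shape ({"subsection": ..., "citations": [...]}) and the [@key] citation-marker syntax are assumptions for illustration, not draft-polisher's confirmed schema:

```python
import json
import re
from pathlib import Path

# Assumed marker format: [@key]-style citation anchors.
CITATION_RE = re.compile(r"\[@([A-Za-z0-9_:-]+)\]")

def capture_baseline(draft_path, baseline_path):
    """Record the citation keys found under each ### subsection of the draft."""
    subsection, anchors = None, {}
    for line in Path(draft_path).read_text().splitlines():
        if line.startswith("### "):
            subsection = line[4:].strip()
            anchors.setdefault(subsection, set())
        elif subsection:
            anchors[subsection].update(CITATION_RE.findall(line))
    # One JSON object per subsection, keys sorted for stable diffs.
    with open(baseline_path, "w") as f:
        for name, keys in anchors.items():
            f.write(json.dumps({"subsection": name, "citations": sorted(keys)}) + "\n")
```

Sorting the keys makes repeated captures byte-for-byte comparable, which keeps the regression diff readable.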

Workflow (analysis-only)

Role:

  • Auditor: only checks and reports; does not edit.

Steps:

  1. Load the baseline anchors.
  2. Parse the current output/DRAFT.md into ### subsections and extract citation keys per subsection.
  3. Compare current sets to baseline sets:
     • keys added/removed within a subsection
     • keys that migrated across subsections
  4. Write output/CITATION_ANCHORING_REPORT.md:
     • - Status: PASS only if no drift is detected
     • otherwise, - Status: FAIL with a short diff table and examples
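The comparison at the heart of these steps can be sketched as follows (report writing omitted). The JSONL record shape and the [@key] marker syntax are assumptions for illustration, not confirmed by the bundle:

```python
import json
import re
from pathlib import Path

CITATION_RE = re.compile(r"\[@([A-Za-z0-9_:-]+)\]")

def parse_draft(text):
    """Map each ### subsection title to the set of citation keys it contains."""
    current, sections = None, {}
    for line in text.splitlines():
        if line.startswith("### "):
            current = line[4:].strip()
            sections.setdefault(current, set())
        elif current:
            sections[current].update(CITATION_RE.findall(line))
    return sections

def check_drift(baseline_path, draft_path):
    """Return (subsection, added, removed) tuples; an empty list means PASS."""
    baseline = {}
    for line in Path(baseline_path).read_text().splitlines():
        rec = json.loads(line)
        baseline[rec["subsection"]] = set(rec["citations"])
    current = parse_draft(Path(draft_path).read_text())
    drift = []
    # Walk the union of subsections so both additions and removals surface.
    for name in sorted(set(baseline) | set(current)):
        added = current.get(name, set()) - baseline.get(name, set())
        removed = baseline.get(name, set()) - current.get(name, set())
        if added or removed:
            drift.append((name, sorted(added), sorted(removed)))
    return drift
```

A key that migrated across subsections shows up twice: as a removal in its old subsection and an addition in its new one, which matches the drift cases listed in step 3.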

Notes

If you intentionally restructure across subsections:

  • delete output/citation_anchors.prepolish.jsonl and regenerate a new baseline (then treat that as the new regression anchor).

Troubleshooting

Issue: baseline anchor file is missing

Fix:

  • Run draft-polisher once to generate output/citation_anchors.prepolish.jsonl, then rerun the anchoring check.

Issue: citations intentionally moved across subsections

Fix:

  • Delete output/citation_anchors.prepolish.jsonl and regenerate a new baseline (then treat that as the new regression anchor).
