Publish Guard

v1.0.4

Publish Guard is a public ClawHub pre-release audit skill. Use it when the user says "publish guard", "release audit", "pre-release check", or wants to review the public surface before a launch.

by Zakhar Pashkin (@zack-dev-cm)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zack-dev-cm/public-surface-review.

Prompt preview: Install & Setup
Install the skill "Publish Guard" (zack-dev-cm/public-surface-review) from ClawHub.
Skill page: https://clawhub.ai/zack-dev-cm/public-surface-review
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install public-surface-review

ClawHub CLI


npx clawhub@latest install public-surface-review
Security Scan
Capability signals
Crypto · Requires wallet · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match included artifacts: four Python scripts implement leak scanning, public-surface checks, README scoring, and audit rendering. The declared dependency (python3/python) is appropriate and proportional to the task.
Instruction Scope
SKILL.md instructs the agent to run the bundled Python scripts against a repo root. The scripts intentionally read repository files (README.md, SKILL.md, openai.yaml, .md docs, typical text files and tracked filenames) and search for secret-shaped strings and internal-language patterns. Reading repository files is expected for an audit, but the skill will scan any file in the provided repo tree (including .env-like files) so users should not point it at a workspace containing unrelated secrets.
Install Mechanism
No install spec; this is an instruction-only skill with bundled scripts. No external downloads or package installs are requested, so there is no high-risk installation behavior.
Credentials
The skill requests no environment variables or credentials. The scripts contain regexes that recognize secret patterns (OpenAI keys, AWS keys, etc.) for detection purposes — this is appropriate to a leak scanner and does not imply the skill needs or exfiltrates secrets.
Persistence & Privilege
always is false and the skill is user-invocable. It does not modify other skills or system-wide config. Autonomous invocation is allowed (platform default) but is not combined with broad credential requests here.
Assessment
Publish Guard appears to do what it says: it runs local Python scripts to detect leak patterns and audit public-facing docs. Before enabling or running it, consider:

1. It will read all scannable files under whatever repository path you give it (including README.md, SKILL.md, .env-like files, and other text files), so do not point it at a workspace that contains unrelated secrets or private repos.
2. It does not request credentials or perform remote calls in the bundled scripts, and the code redacts sensitive-looking snippets, but filenames and metadata may still appear in outputs.
3. Confirm Python is available (python3 or python), and review the included scripts yourself if you have concerns.
4. If you allow autonomous invocation, the agent could run these scans without an explicit prompt; restrict invocation or test in a sandboxed workspace if you want an extra safety margin.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Any bin: python3, python
Tags: clawhub, docs, github, latest, linting, openclaw, readme, release, release-engineering, safety, security
174 downloads
1 star
5 versions
Updated 2d ago
v1.0.4
MIT-0

Publish Guard

Search intent: publish guard, public surface review, release audit, pre-release check, clawhub publish audit

Goal

Audit the public surface before release:

  • README
  • SKILL.md
  • agent metadata
  • launch docs in the working tree
  • obvious leak patterns

The output should answer one question clearly: publish now, or fix specific items first.

Use This Skill When

  • the user wants to publish a GitHub repo or ClawHub skill
  • the user wants a public-surface audit before a launch
  • the repo may contain internal launch docs, secret-shaped strings, or operator-only wording
  • the README or SKILL.md feels too insider-heavy or too long
  • the first-run path may be broken, vague, or buried

Quick Start

  1. Scan for obvious leak patterns.
    • python3 {baseDir}/scripts/scan_leaks.py --root <repo> --out <json>
  2. Scan the public surface for audience-fit problems.
    • python3 {baseDir}/scripts/scan_public_surface.py --root <repo> --out <json>
  3. Score the README or primary landing page.
    • python3 {baseDir}/scripts/score_launch_copy.py --readme <repo>/README.md --out <json>
  4. Render one decision-ready audit.
    • python3 {baseDir}/scripts/render_public_audit.py --repo <repo> --leaks <json> --surface <json> --copy <json> --out <md>
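The four Quick Start invocations can be assembled programmatically. The sketch below builds the command lists in order; the script names and flags mirror the Quick Start above, while the output filenames (`leaks.json`, `surface.json`, `copy.json`, `audit.md`) are illustrative assumptions, not a requirement of the scripts.

```python
from pathlib import Path

def build_audit_commands(base_dir: str, repo: str, out_dir: str) -> list[list[str]]:
    """Build the four Quick Start commands in run order (a sketch;
    output filenames are assumptions, the flags follow the Quick Start)."""
    scripts = Path(base_dir) / "scripts"
    out = Path(out_dir)
    leaks, surface, copy = out / "leaks.json", out / "surface.json", out / "copy.json"
    return [
        ["python3", str(scripts / "scan_leaks.py"), "--root", repo, "--out", str(leaks)],
        ["python3", str(scripts / "scan_public_surface.py"), "--root", repo, "--out", str(surface)],
        ["python3", str(scripts / "score_launch_copy.py"), "--readme", f"{repo}/README.md", "--out", str(copy)],
        ["python3", str(scripts / "render_public_audit.py"), "--repo", repo,
         "--leaks", str(leaks), "--surface", str(surface), "--copy", str(copy),
         "--out", str(out / "audit.md")],
    ]
```

Each command list can be passed to `subprocess.run` once you have verified the scripts in a sandboxed workspace.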

Operating Rules

  • Treat README.md, SKILL.md, agents/openai.yaml, and launch docs in the working tree as public.
  • Public copy should describe the user job before it explains the internal theory.
  • A public quick start should appear near the top and should be runnable without hidden context.
  • Keep public default prompts short. Move deeper operating rules into the skill body or scripts.
  • Flag internal launch docs in the repo unless they are intentionally private and excluded from the public package.
  • Prefer a small set of concrete findings over a broad essay.

What To Flag

  • absolute filesystem paths
  • localhost URLs and websocket endpoints
  • token-shaped strings and credential-like URLs
  • missing or buried quick starts
  • giant inventory sections or excessively long SKILL.md files
  • public prompts that read like internal control instructions
  • README intros that compare the repo to five other projects before explaining what it does
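Several of the flags above are regex-detectable. The sketch below shows one way such checks could look; these patterns are illustrative assumptions, not the rules in the bundled scan_leaks.py, and a real scanner would carry many more. Matches longer than a few characters are redacted, as the audit report says the bundled scripts do.

```python
import re

# Illustrative patterns only (assumptions), not the bundled scan_leaks.py rules.
LEAK_PATTERNS = {
    "absolute_path": re.compile(r"/(?:home|Users)/[\w./-]+"),
    "localhost_url": re.compile(r"(?:https?|wss?)://(?:localhost|127\.0\.0\.1)(?::\d+)?\S*"),
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, redacted_match) pairs for each hit in text."""
    findings = []
    for name, pat in LEAK_PATTERNS.items():
        for m in pat.finditer(text):
            s = m.group(0)
            # Redact long matches so the finding itself is not a leak.
            findings.append((name, s[:6] + "…" if len(s) > 10 else s))
    return findings
```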

Bundled Scripts

  • scripts/scan_leaks.py
    • Search the repo for obvious leak patterns and secret-shaped strings.
  • scripts/scan_public_surface.py
    • Inspect README, SKILL.md, launch docs in the working tree, and public metadata for audience-fit issues.
  • scripts/score_launch_copy.py
    • Produce a simple launch-copy score for the primary README.
  • scripts/render_public_audit.py
    • Merge the JSON outputs into one concise markdown audit.
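The merge step can be sketched as a small function that folds the three JSON payloads into one decision-ready markdown note. The section keys used here (`findings`, `score`) are assumptions for illustration; check render_public_audit.py for its actual schema.

```python
def render_audit(leaks: dict, surface: dict, copy: dict) -> str:
    """Merge three JSON payloads into one short markdown audit (a sketch;
    the 'findings' and 'score' keys are assumed, not the scripts' schema)."""
    lines = ["# Public audit", ""]
    verdict = "publish now"
    for title, payload in (("Leaks", leaks), ("Public surface", surface)):
        findings = payload.get("findings", [])
        if findings:
            verdict = "fix specific items first"
        lines.append(f"## {title} ({len(findings)} finding(s))")
        lines.extend(f"- {f}" for f in findings)
        lines.append("")
    lines.append(f"## Launch copy score: {copy.get('score', 'n/a')}")
    lines.append(f"\n**Verdict:** {verdict}")
    return "\n".join(lines)
```

The verdict line answers the one question the Goal section asks: publish now, or fix specific items first.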
