Openclaw Security Toolkit
Review
Audited by ClawScan on May 1, 2026.
Overview
The skill matches its stated security-audit purpose, but it can inspect OpenClaw credential/access files and change authentication settings when fix commands are used.
This appears to be a purpose-aligned local security toolkit rather than a malicious skill. Before installing, be comfortable with it reading OpenClaw credential, config, and access files; review its outputs before sharing them; and use --fix or token rotation only when you are ready to change authentication settings.
Findings (3)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: Running scans or reports may expose sensitive OpenClaw credential locations and findings in the agent session or output files.
The scanner is intentionally configured to read OpenClaw config, environment, and credential locations that may contain API keys, tokens, or other sensitive account data.
"paths": ["~/.openclaw/openclaw.json", "~/.openclaw/.env", "~/.openclaw/credentials/"]
Use this only in a trusted local environment, review generated output before sharing it, and avoid pasting reports with secret findings into public or untrusted places.
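The review does not publish the scanner's internals, but the configured paths above give a sense of what a scan touches. The sketch below is a minimal, hypothetical illustration of scanning those locations for credential-like lines; the path list comes from the finding, while the helper names and the single regex rule are assumptions (a real scanner would use many detection rules):

```python
import re
from pathlib import Path

# Paths listed in the finding's configuration excerpt.
SCAN_PATHS = ["~/.openclaw/openclaw.json", "~/.openclaw/.env", "~/.openclaw/credentials/"]

# One simple credential-like pattern; real scanners apply many rules.
SECRET_PATTERN = re.compile(r"(api[_-]?key|token|secret)\s*[:=]\s*\S+", re.IGNORECASE)

def scan_file(path: Path) -> list[str]:
    """Return lines in `path` that look like they contain a credential."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return []
    return [line.strip() for line in text.splitlines() if SECRET_PATTERN.search(line)]

def scan(paths=SCAN_PATHS) -> dict[str, list[str]]:
    """Expand each configured path and collect suspected secret lines per file."""
    findings = {}
    for raw in paths:
        p = Path(raw).expanduser()
        files = p.rglob("*") if p.is_dir() else [p]
        for f in files:
            if f.is_file():
                hits = scan_file(f)
                if hits:
                    findings[str(f)] = hits
    return findings
```

Even this toy version shows why the finding matters: the returned dictionary maps file paths directly to suspected secret values, so any session log or saved output that includes it is itself sensitive.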
Finding 2: Using --fix may change how OpenClaw authentication works and may require a gateway restart; a mistaken run could disrupt existing access.
The hardening path can rotate authentication tokens and write changes to the OpenClaw configuration when fix mode is used.
result = rotate_token(length=32)
...
config["gateway"]["auth"]["mode"] = "token"
...
with open(CONFIG_FILE, 'w') as f:
    json.dump(config, f, indent=2)
Run fix or token-rotation commands only when you intend to change authentication, and consider backing up the OpenClaw config first.
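The backup step can be done in a few lines before invoking any fix command. This is a minimal sketch, assuming the config lives at the path shown in the first finding; the `backup_config` helper is not part of the skill:

```python
import shutil
import time
from pathlib import Path

# Assumed config location, taken from the scan-path finding above.
CONFIG_FILE = Path("~/.openclaw/openclaw.json").expanduser()

def backup_config(config_file: Path = CONFIG_FILE) -> Path:
    """Copy the config aside with a timestamp so a --fix run can be rolled back."""
    backup = config_file.with_name(f"{config_file.name}.bak-{int(time.time())}")
    shutil.copy2(config_file, backup)  # preserves permissions and mtime
    return backup
```

Restoring is the same copy in reverse; keeping the backup next to the original (rather than in a shared location) avoids creating another file that leaks the credential paths.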
Finding 3: Saved reports may contain security findings or file locations that reveal where secrets are stored.
Reports are generated with secret-scan information included and can be saved to a user-specified file, creating a persistent artifact that may contain sensitive findings.
report = generate_report(format=args.format, deep=args.deep, include_secrets=True)
...
Path(args.output).write_text(output)
Store reports securely, delete them when no longer needed, and redact sensitive details before sharing.
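Redaction before sharing can be automated rather than done by hand. The sketch below is a hypothetical helper, not part of the skill, that masks suspected secret values in a report while keeping the key names so the findings remain reviewable:

```python
import re

# Mask anything that looks like a key/token value; keep the key name visible.
REDACT = re.compile(r"((?:api[_-]?key|token|secret)\s*[:=]\s*)\S+", re.IGNORECASE)

def redact_report(text: str) -> str:
    """Replace suspected secret values with a placeholder before sharing."""
    return REDACT.sub(r"\1[REDACTED]", text)
```

For example, `redact_report("gateway token=abc123")` yields `"gateway token=[REDACTED]"`. Note that pattern-based redaction is best-effort: it cannot catch secrets in formats the regex does not anticipate, so a manual review before sharing is still warranted.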
