Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Design Review

v1.1.0

Core pack — always active for visual work. Quality gate for UI, components, pages, layouts, or frontend work. Triggers on any visual/design task automaticall...

0 · 207 · 3 current · 3 all-time
by ai-ron@aa-on-ai

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aa-on-ai/design-review.

Prompt Preview: Install & Setup
Install the skill "Design Review" (aa-on-ai/design-review) from ClawHub.
Skill page: https://clawhub.ai/aa-on-ai/design-review
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install design-review

ClawHub CLI

Package manager switcher

npx clawhub@latest install design-review
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name, description, reference docs, and three lint-like scripts (accessibility, anti-patterns, state checks) are coherent for a design-review quality-gate skill. The files and checks align with the stated purpose of UI/design QA. However, the code contains an optional telemetry ping using an environment variable (ADS_TELEMETRY_URL) that is not declared in the skill metadata, which is an unexplained capability beyond the stated purpose.
Instruction Scope
SKILL.md scope is mostly reasonable: it instructs reading project guidelines, reference files, and channel memory for prior decisions, and running the provided verification scripts. This is appropriate for a design QA skill. Concerns:

1. It explicitly tells agents/sub-agents to read memory/channels/{channel-name}.md, which may expose stored channel memory or sensitive contextual files depending on the agent's environment.
2. The verification scripts include a 'ping_telemetry' routine that performs an outbound HTTP GET if ADS_TELEMETRY_URL is set, but SKILL.md does not mention any telemetry or external endpoints.

The instruction to 'copy CI files into your project' is normal, but it does write files to disk and should be done with consent.
Install Mechanism
There is no install spec and no external download — the skill is instruction-first with local Python scripts bundled. That lowers install risk: nothing is pulled from arbitrary URLs and scripts run only if explicitly invoked.
Credentials
The skill metadata declares no required environment variables, but the bundled scripts reference ADS_TELEMETRY_URL to send a telemetry ping if present. An undeclared env var that points to an external server is a mismatch: either telemetry should be documented and optional env var declared, or the network call removed. While the ping appears to send only a simple 'skill-fired/<script>' GET (no file contents), it still creates outbound network activity that could leak that the skill was run or be used to fingerprint hosts if an attacker controls the endpoint.
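Based on the scan's description, the flagged routine likely resembles the sketch below. The name ping_telemetry and the "skill-fired/<script>" path come from the report; the exact implementation is an assumption, so treat this as an illustration of the behavior, not the actual code:

```python
import os
import urllib.request


def ping_telemetry(script_name: str) -> None:
    """Hypothetical reconstruction of the flagged routine: fire a GET
    to the ADS_TELEMETRY_URL endpoint only when that env var is set."""
    endpoint = os.environ.get("ADS_TELEMETRY_URL")
    if not endpoint:
        return  # no env var, no network activity
    try:
        # Sends only "skill-fired/<script>", no file contents, but it
        # still reveals that (and when) the skill ran on this host.
        urllib.request.urlopen(f"{endpoint}/skill-fired/{script_name}", timeout=2)
    except OSError:
        pass  # telemetry failures are silently ignored
```

The silent except is exactly why this pattern is hard to notice at runtime: when the variable is unset or the endpoint is down, the scripts behave identically.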
Persistence & Privilege
The skill is not marked always:true and does not request system-wide configuration changes or credentials. It does not appear to alter other skills' configs. Autonomous invocation is allowed by default (disable-model-invocation=false), which is normal; this alone is not a red flag, but weigh it together with the undocumented telemetry behavior and the reading of memory files.
What to consider before installing
This skill is largely coherent for design QA, but review the bundled scripts before running them. Specifically:

  • Open the scripts (scripts/*.py) and search for network calls (e.g., urllib.request.urlopen). The accessibility script includes a ping_telemetry() that calls ADS_TELEMETRY_URL if present.
  • If you plan to run these scripts in your environment or allow the agent to run them, ensure ADS_TELEMETRY_URL is not set or that it points to a trusted internal endpoint. Ideally the skill should document telemetry and require explicit opt-in.
  • Consider running the checks locally in an isolated environment (no network), or code-review the other scripts (anti-pattern and state checks) to confirm they do not exfiltrate content.
  • Be aware the SKILL asks agents to read project guidelines and channel memory files; verify that reading those files is safe for your project and does not expose secrets.
  • If you will use this in CI, review any CI files it suggests copying before committing them.

Ask the skill author to document telemetry behavior (declare ADS_TELEMETRY_URL as optional) or remove the automatic ping to make the skill's behavior explicit.
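The "search for network calls" step can be automated with a quick pre-install scan. The pattern list below is illustrative, not exhaustive (sockets, subprocess, and vendored HTTP clients would also need checking):

```python
from pathlib import Path

# Common Python entry points for outbound traffic. Illustrative only;
# extend this list for your own threat model.
NETWORK_PATTERNS = [
    "urllib.request",
    "urlopen",
    "http.client",
    "requests.",
    "socket.",
]


def find_network_calls(scripts_dir: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matched pattern) for every suspect line."""
    hits = []
    for path in Path(scripts_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for pattern in NETWORK_PATTERNS:
                if pattern in line:
                    hits.append((str(path), lineno, pattern))
    return hits
```

Running this against the skill's scripts/ directory before invoking anything gives you a concrete list of lines to review by hand.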

Like a lobster shell, security has layers — review code before you run it.

latest: vk970v22msez4tp35bdvr1m7eqx83fept
207 downloads
0 stars
2 versions
Updated 1mo ago
v1.1.0
MIT-0

Design Review Skill

Core Pack — Always Active

This is a core skill. Apply it on ALL visual and frontend work, no exceptions. You do not need permission or a specific trigger to use this.

When to Use

  • Before presenting ANY visual or UX work.
  • Treat this as a quality gate, not optional polish.
  • Sub-agents doing design/frontend work MUST run this before announcing completion.

Pre-Work: Read Before Building

1. Read the project's guidelines

  • Read guidelines.md or equivalent design system doc first if it exists.
  • Follow the project's existing components, tokens, and patterns before inventing anything.
  • If no formal guidelines exist, inspect the existing product and match its logic.

2. Research before designing

  • Check how similar tools solve the same problem before inventing a pattern.
  • Use proven references when they exist.
  • Quality bar references:
    • UX Tools — editorial restraint, typography, calm hierarchy
    • Inflight by Ridd — motion, depth, data viz polish
    • Linear — dense information, excellent hierarchy, no noise
    • Vercel dashboard — spacing, typography, dark mode discipline

3. Check design memory

  • Read memory/channels/{channel-name}.md for prior design decisions.
  • If memory says Aaron rejected a pattern, don't repeat it.
  • If a project brain file is linked from channel memory, read that too.

Aaron's Core Principles

  • Restraint IS the design.
  • Spacing is the #1 tell.
  • Typography hierarchy > color for information architecture.
  • Match references at pixel level before adding your own ideas.
  • Existing patterns > new patterns.
  • Interactive elements should feel polished, not dead.
  • If the foundation is wrong, no polish fixes it.
  • Good design is centripetal, not centrifugal.

Reference Files

Read only what the task needs. Keep this SKILL lean; load detail on demand:

  • references/typography.md — hierarchy, scale, pairing, measure
  • references/color.md — restrained palettes, tinted neutrals, contrast, OKLCH
  • references/spacing.md — spacing system, rhythm, grouping, layout density
  • references/motion.md — timing, easing, reduced motion, interactive feel
  • references/anti-patterns.md — patterns Aaron will clock instantly and reject

For sub-agents

  • Read the relevant reference files based on what you're building.
  • New layout or dashboard? Read spacing + anti-patterns.
  • Type-heavy screen? Read typography + spacing.
  • Color or theming work? Read color + anti-patterns.
  • Interactive polish? Read motion + anti-patterns.
  • If in doubt, at minimum read spacing + anti-patterns.

Pre-Flight Checklist

Run this EVERY TIME before presenting work to Aaron.

Step 1: Visual verification

  • Take a screenshot of the rendered result.
  • Compare side-by-side with the reference if one exists.
  • Check the target viewport, not an arbitrary devtools width.

Step 2: Design audit

  • Spacing check — enough breathing room? Default to more.
  • Color check — did you add color that wasn't necessary?
  • Typography check — is hierarchy clear without leaning on color?
  • Pattern check — are you using the project's existing components?
  • Interaction check — hover, focus, active states exist and feel intentional.
  • Integrity check — no placeholders, dead states, broken assets, or missing data handling.

Step 3: Honesty check

  • Is it actually done?
  • Does it meet the brief, not an adjacent brief?
  • Would you be proud to show this to Aaron cold?

Step 4: Run verification scripts

If you have access to the scripts directory, run these before presenting:

# check for common agent anti-patterns
python3 skills/design-review/scripts/anti-pattern-check.py <your-file.tsx>

# verify loading, empty, and error states exist
python3 skills/design-review/scripts/state-check.py <your-file.tsx>

# check semantic HTML, aria labels, alt text, heading hierarchy
python3 skills/design-review/scripts/accessibility-check.py <your-file.tsx>

Fix any warnings before presenting. These are the cheapest quality checks — they catch the obvious stuff so the human review can focus on judgment calls.
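For a sense of what a check at this level catches, here is a minimal sketch of one such lint. This is not the bundled accessibility-check.py, just an illustration of the kind of cheap, mechanical issue these scripts flag:

```python
import re


def check_img_alt(source: str) -> list[str]:
    """Flag <img> tags without an alt attribute: the kind of obvious,
    mechanical issue a pre-flight script can catch before human review."""
    warnings = []
    for match in re.finditer(r"<img\b[^>]*>", source):
        if "alt=" not in match.group(0):
            warnings.append(f"missing alt text: {match.group(0)}")
    return warnings
```

Checks like this are deliberately dumb: they produce a short, actionable warning list and leave anything requiring taste to the human reviewer.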

For CI integration, copy ci/design-eval.py and ci/design-eval.yml into your project to run all three checks on every PR.
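A combined CI gate could look like the following sketch: run all three bundled scripts over the changed files and fail the build if any check exits nonzero. The script paths and exit-code convention are assumptions; review the actual ci/design-eval.py before relying on it:

```python
import subprocess
import sys

# Assumed locations of the bundled checks; adjust to your repo layout.
CHECKS = [
    "skills/design-review/scripts/anti-pattern-check.py",
    "skills/design-review/scripts/state-check.py",
    "skills/design-review/scripts/accessibility-check.py",
]


def run_checks(files: list[str]) -> int:
    """Run every check on every file; return nonzero if any check failed."""
    status = 0
    for check in CHECKS:
        for path in files:
            result = subprocess.run([sys.executable, check, path])
            status |= result.returncode  # any failure makes status nonzero
    return status
```

A CI job would call run_checks with the list of changed .tsx files and use the return value as its exit code, so a single warning blocks the merge.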

Step 5: Present with evidence

  • Screenshot of the result
  • What you referenced
  • Known gaps or uncertainties
  • Link to live/deployed version if applicable

Updating This Skill

  • After Aaron gives design feedback, capture it.
  • Add redirects to references/anti-patterns.md or the relevant reference file.
  • Add project-specific decisions to channel memory.
  • Goal: don't get the same design feedback twice.
