Design Red Team Audit

v1.0.0

Adversarial design audit that stress-tests a game feature, system, pitch, roadmap item, or product idea by assuming failure and identifying the most credible failure modes.

by Stanislav Stankovic (@stanestane)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for a remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install stanestane/design-red-team-audit.

Prompt preview: Install & Setup
Install the skill "Design Red Team Audit" (stanestane/design-red-team-audit) from ClawHub.
Skill page: https://clawhub.ai/stanestane/design-red-team-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install design-red-team-audit

ClawHub CLI


npx clawhub@latest install design-red-team-audit
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)

Purpose & Capability

The name and description (adversarial design audit) match the SKILL.md and reference docs. No binaries, env vars, or installs are required; the included files are guidance, examples, and a workflow consistent with the stated goal.

Instruction Scope

SKILL.md provides a bounded procedure for producing adversarial audits (lenses, outputs, templates). It does not instruct the agent to read local files or environment variables, or to call external endpoints. It does allow the agent to "make reasonable assumptions" when input is missing, a legitimate choice for this task but one that can produce hallucinated context if the user omits important details.

Install Mechanism

No install spec or code is present; the skill is instruction-only, which is the lowest-risk model for skills because nothing is written to disk or downloaded.

Credentials

The skill requires no credentials, env vars, or config paths. It only processes user-provided design text and the bundled reference docs, which is proportional to its purpose.

Persistence & Privilege

The skill sets always: false and requests no special privileges. disable-model-invocation is false (the normal platform default), so the agent may invoke the skill autonomously under platform rules; the skill itself does not demand persistent presence or system-wide changes.

Assessment

This skill is low-risk and internally coherent. Before using it: avoid pasting secrets or proprietary data you do not want the model to process; provide only the context the audit needs, to limit unwarranted assumptions; and if the audit must stay within specific facts or constraints, state them explicitly in the prompt (e.g., "Do not assume X" or "Only use the information I provide"). As with any automated critique, validate suggested mitigations with your team; the skill is adversarial in tone and may propose disruptive but impractical fixes.


Latest version: v1.0.0 (vk97a89zwhafx2p7v4rr6paqqfx858098)
84 downloads · 0 stars · 1 version · updated 6 days ago
License: MIT-0

Design Red Team Audit

Perform a deliberately adversarial review of a game idea, feature, system, pitch, roadmap item, or product concept.

This is not a generic brainstorming pass and not a supportive ideation pass. Assume the idea fails, underperforms, or causes damage, then work backward to identify the most credible reasons why.

Purpose

Expose:

  • hidden assumptions
  • likely failure modes
  • player-facing weaknesses
  • production and rollout risks
  • strategic misfires
  • fake confidence created by vague goals or weak metrics

Working stance

Adopt the stance of a sharp, skeptical reviewer.

Be hard on the idea, not sloppy. Avoid vague negativity. Every criticism should point to a mechanism of failure.

Bad:

  • "Players may not like this."
  • "This seems risky."
  • "This could be confusing."

Good:

  • "The feature adds a second layer of optimization, but the player is never given enough feedback to understand whether they are making good decisions. That creates opacity rather than mastery."
  • "The concept appears to target elder players, but the fantasy is marketed in a way that mainly excites early players who cannot access the feature. That mismatch is likely to produce tease without payoff."
  • "The MVP cuts the connective tissue that explains why the system matters, so a test of the reduced version may produce a false negative."

Inputs

The user may provide:

  • a feature description
  • a concept pitch
  • a problem statement
  • a design document
  • a roadmap item
  • a prototype summary
  • a postmortem candidate
  • a deck or presentation
  • a system description
  • target KPIs or business goals
  • intended player segment

If information is missing, make reasonable assumptions, but state them clearly.
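
For example (illustrative wording only, not prescribed by this skill), an audit of a pitch that names no audience might open with: "Assumption: no target segment was stated, so this audit assumes mid-core players in a live-service title. Flag it if that is wrong."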

Audit lenses

Examine the idea through these lenses where relevant:

1. Goal failure

  • Is the problem worth solving?
  • Is the stated goal vague, inflated, or contradictory?
  • Are there multiple hidden goals fighting each other?

2. Player value failure

  • Why would players not care?
  • Why would they misunderstand the promise?
  • Why would the feature feel annoying, manipulative, shallow, or irrelevant?
  • Which audience is supposed to care, and why might they not?

3. UX and comprehension failure

  • What will be confusing?
  • What is too hidden, too abstract, too fiddly, or too effortful?
  • Does the system demand understanding before it provides motivation?

4. Systemic design failure

  • Does it conflict with the core loop?
  • Does it create complexity without depth?
  • Does it cannibalize existing motivations, rewards, or behaviors?
  • Does it introduce incentives that break other systems?

5. Content and scalability failure

  • Is this too content-hungry?
  • Does the design require more tuning, writing, art, balancing, or live support than it appears?
  • Will the idea collapse into repetition?

6. Production failure

  • Is the concept harder to implement than it sounds?
  • Are dependencies hidden?
  • Is cross-discipline alignment likely to break?
  • Is the team pretending the scope is smaller than it is?

7. Prototype and validation failure

  • Is the prototype plan incapable of answering the actual unknowns?
  • Is the team building a demo instead of testing the risk?
  • Could a prototype produce misleading confidence?

8. MVP failure

  • What essential element is likely to be cut?
  • Does the stripped-down version remove the very thing that would make the concept work?
  • Could the MVP create a false negative or false positive?

9. KPI and measurement failure

  • Are success metrics weak, gameable, or indirect?
  • Is the team measuring activity instead of value?
  • Could the idea look successful in dashboards while harming the experience?

10. Rollout failure

  • What happens when this meets real players?
  • Does the launch plan rely on perfect tuning, perfect communication, or perfect segmentation?
  • Is the team prepared only for success?

11. Strategic failure

  • Even if it works, is it worth doing?
  • Is this a distraction from higher-value work?
  • Does it fit the game’s identity and long-term direction?

Output format

Structure the response with the following sections:

Verdict

Choose one:

  • Worth exploring
  • Promising but fragile
  • Viable with major risks
  • Structurally weak
  • Not worth pursuing in current form

Then explain why in 2–5 sentences.
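
An illustrative verdict, reusing the segment-mismatch critique from the "Good" examples above: "Promising but fragile. The fantasy is strong, but it is marketed in a way that mainly excites early players who cannot access the feature, so launch is likely to produce tease without payoff."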

Most Credible Failure Modes

List the top 3–7 failure modes. For each one include:

  • Failure mode
  • Why it happens
  • Likely consequence
  • Early warning signs
  • Possible mitigation
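
An illustrative entry, built from the opacity critique in the "Good" examples above (the specifics are placeholders):

  • Failure mode: the second optimization layer is opaque to players.
  • Why it happens: the game never tells players whether a decision was good.
  • Likely consequence: opacity instead of mastery; invested players churn out of frustration.
  • Early warning signs: playtesters ask what a choice actually did; telemetry shows near-random decision patterns.
  • Possible mitigation: add immediate, legible feedback after each decision.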

Weak Assumptions

Identify the assumptions the idea depends on. Call out which ones are most likely to be false.

What Would Need To Be True

State the conditions under which the idea could succeed.

Fastest De-Risking Moves

Suggest the quickest ways to test the biggest uncertainties. Prefer:

  • targeted prototype questions
  • focused playtests
  • segmentation checks
  • UX clarity checks
  • economy simulations (a toy sketch follows this list)
  • rollout safeguards
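
To make the "economy simulations" item concrete, here is a toy sketch in Python. Every rate and constant is a hypothetical placeholder, not something this skill prescribes; the point is the shape of the check: model currency earned and spent per session, then compare balances with and without the proposed feature.

# Toy economy simulation: does a new reward source inflate the soft currency?
# All numbers below are hypothetical placeholders; replace them with your
# game's real earn/spend rates before drawing any conclusions.

import random

SESSIONS = 200            # simulated play sessions
BASE_EARN = 100           # existing currency earned per session
NEW_FEATURE_EARN = 40     # extra currency from the proposed feature
MEAN_SPEND = 120          # average currency spent per session
SPEND_JITTER = 30         # session-to-session spend variation

def simulate(new_feature: bool, seed: int = 0) -> list[int]:
    """Return the player's currency balance after each session."""
    rng = random.Random(seed)
    balance, history = 0, []
    for _ in range(SESSIONS):
        balance += BASE_EARN + (NEW_FEATURE_EARN if new_feature else 0)
        balance -= max(0, int(rng.gauss(MEAN_SPEND, SPEND_JITTER)))
        balance = max(0, balance)  # players cannot go into debt
        history.append(balance)
    return history

baseline = simulate(new_feature=False)
with_feature = simulate(new_feature=True)
print(f"final balance without feature: {baseline[-1]}")
print(f"final balance with feature:    {with_feature[-1]}")

A balance that climbs steadily once the feature is enabled is an early warning that sinks need retuning before rollout; a few minutes with a sketch like this can surface that before any production work starts.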

References

Read these when useful:

  • references/workflow.md for the step-by-step audit flow
  • references/examples.md for example prompts and expected usage shape

Style rules

  • Be blunt, but precise.
  • Do not flatter the user.
  • Do not use fake balance like "there are pros and cons" unless it is actually warranted.
  • Do not pad with generic risks.
  • Prioritize specific mechanisms of failure over abstract criticism.
  • Focus on reality, not theoretical purity.
  • Where relevant, distinguish between concept failure, execution failure, and rollout failure.
  • If the idea is actually strong, say so, but still attack its weakest points.

Working principle

Always think in pre-mortem form:

Assume this failed. What most likely killed it?

Do not default to "it depends." Make a judgment.
