Game Design Attribution Audit

v1.0.0

Audit a game, feature, combat scenario, progression step, failure state, onboarding beat, or reward outcome through the lens of attribution theory: how playe...

by Stanislav Stankovic (@stanestane)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for stanestane/game-design-attribution-audit.

Prompt Preview: Install & Setup
Install the skill "Game Design Attribution Audit" (stanestane/game-design-attribution-audit) from ClawHub.
Skill page: https://clawhub.ai/stanestane/game-design-attribution-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install game-design-attribution-audit

ClawHub CLI


npx clawhub@latest install game-design-attribution-audit
Security Scan

Capability signals

  • Crypto: can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

Scan results:

  • VirusTotal: Benign (view report →)
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the SKILL.md: it guides an agent to produce attribution-focused audits of game moments. It declares no binaries, env vars, or external services, which is proportionate for a documentation/instruction skill.
Instruction Scope
Runtime instructions are limited to reading the included reference docs and producing structured audit output. The process asks the agent to reconstruct events from the player's perspective and produce recommendations; it does not instruct the agent to read arbitrary system files, access networks, or exfiltrate data. It may require the user to provide the specific scenario being audited (the skill notes to infer cautiously when input is incomplete).
Install Mechanism
No install spec and no code files are present. This is lowest-risk: nothing will be written to disk or downloaded as part of installation.
Credentials
The skill requires no environment variables, credentials, or config paths. There are no unexpected secret requests or cross-service credentials.
Persistence & Privilege
`always` is false and `disable-model-invocation` is not set, which is normal for an agent-invokable skill. The skill does not request persistent system-level presence or modification of other skills' configurations.
Assessment
This skill is a documentation-driven audit template and appears safe to install from a technical-scope perspective. Before using it, provide clear, specific scenario input (what happened, player intent, visible feedback) so the agent doesn't have to guess. If you share sensitive design documents when running the audit, remember those inputs will be processed by the agent — the skill itself does not send data anywhere outside the agent, but platform policy or logs may retain conversation content depending on your environment.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97b1fvy9axfqq9jp49ayxg2rx85jfc2
42 downloads · 0 stars · 1 version
Updated 1d ago
v1.0.0 · MIT-0

Game Design Attribution Audit

Audit a design by asking how players will explain what just happened.

Use this skill to evaluate whether a success or failure is likely to be interpreted as deserved, learnable, and controllable, or as arbitrary, unfair, and outside the player's influence. Focus on player perception of causality, not designer intent or mechanical correctness.

Read references/family-conventions.md when you want the shared style, prioritization, and diagnosis rules for this game-design skill family. Read references/output-patterns.md when you want the preferred recommendation and minimal-fix structure.

Core principle

Players do not respond only to outcomes. They respond to the story they tell themselves about why the outcome happened.

Healthy failure attribution usually feels:

  • internal enough to preserve responsibility
  • controllable enough to support improvement
  • unstable enough to preserve hope

Toxic failure attribution usually feels:

  • external
  • uncontrollable
  • stable

That combination produces reactions like "the game screwed me" or "this always happens and I can do nothing about it."

Attribution lenses

1. Locus

Ask whether the player is likely to locate the cause internally or externally.

  • Internal: "I made the wrong choice" or "I misplayed"
  • External: "the game cheated" or "the system decided against me"

2. Stability

Ask whether the player sees the cause as recurring or one-off.

  • Stable: "this is just how this game always works"
  • Unstable: "that happened this time, but next run could go differently"

3. Controllability

Ask whether the player believes they can influence the outcome in future attempts.

  • Controllable: "I can improve this"
  • Uncontrollable: "nothing I do matters"

What to produce

Generate:

  1. Attribution profile - likely player interpretation across locus, stability, and controllability
  2. Perception summary - what the player is likely to think happened
  3. Fairness diagnosis - whether the outcome feels deserved, understandable, and learnable
  4. Risk assessment - frustration, learned helplessness, toxicity, or churn risk
  5. Design actions - specific changes to improve attribution quality

Process

1. Define the audit target

Clarify:

  • what exact scenario, feature, or failure state is being audited
  • what outcome triggered the audit
  • who the relevant player is

Write:

  • Audit target
  • Outcome type
  • Player context

2. Reconstruct the event from the player's point of view

Map:

  • what the player did
  • what the system did
  • what feedback the player received
  • what information was visible versus hidden

Ask:

  • What action did the player believe they were taking?
  • What result did they expect?
  • What actually happened?
  • What evidence did the game provide about cause and effect?

3. Classify the likely attribution profile

For the observed outcome, judge:

  • Locus - internal, mixed, or external
  • Stability - stable, mixed, or unstable
  • Controllability - high, partial, or low

Use this format:

| Dimension       | Likely player reading       | Why |
| --------------- | --------------------------- | --- |
| Locus           | Internal / Mixed / External | ... |
| Stability       | Stable / Mixed / Unstable   | ... |
| Controllability | High / Partial / Low        | ... |
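If you are building tooling around this classification step, the profile can be modeled as a small data structure. A minimal sketch in Python; the class, enum, and method names are illustrative assumptions, not part of the skill:

```python
from dataclasses import dataclass
from enum import Enum

class Locus(Enum):
    INTERNAL = "internal"
    MIXED = "mixed"
    EXTERNAL = "external"

class Stability(Enum):
    STABLE = "stable"
    MIXED = "mixed"
    UNSTABLE = "unstable"

class Controllability(Enum):
    HIGH = "high"
    PARTIAL = "partial"
    LOW = "low"

@dataclass(frozen=True)
class AttributionProfile:
    locus: Locus
    stability: Stability
    controllability: Controllability

    def is_toxic(self) -> bool:
        # The toxic combination named in the core principle:
        # external + stable + uncontrollable.
        return (
            self.locus is Locus.EXTERNAL
            and self.stability is Stability.STABLE
            and self.controllability is Controllability.LOW
        )

# "The game screwed me" reading: external, stable, low control.
profile = AttributionProfile(Locus.EXTERNAL, Stability.STABLE, Controllability.LOW)
assert profile.is_toxic()
```

A frozen dataclass keeps the profile a simple value: two audits of the same scenario can be compared directly.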

4. Infer the likely player interpretation

Translate the attribution profile into player-facing language.

Examples:

  • "I got greedy and deserved that"
  • "That was bad luck, but I could have mitigated it"
  • "The game hid the rule and punished me"
  • "This encounter is just broken"

Prefer the exact sentence a frustrated player might actually say.

5. Diagnose why the attribution landed there

Look for root causes such as:

  • hidden mechanics
  • weak telegraphing
  • delayed or ambiguous feedback
  • inconsistent rules
  • excessive randomness
  • low agency or missing mitigation tools
  • punishment that is too severe for the level of clarity provided

6. Check compounding risk patterns

Pay special attention to combinations like:

  • low clarity + high punishment
  • high randomness + low mitigation
  • repeated failure + stable external attribution
  • weak feedback + complex systems
  • low control + high stakes

These combinations tend to create helplessness, blame, and churn faster than any one issue alone.
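These combination checks are mechanical enough to sketch in code. A minimal sketch, assuming coarse hand-assigned ratings per scenario; the function name, keys, and rating values are illustrative, not defined by the skill:

```python
def compounding_risks(s: dict) -> list[str]:
    """Return the step-6 risk patterns a scenario triggers.

    `s` holds coarse, hand-assigned ratings; the keys and
    values used here are illustrative assumptions.
    """
    checks = [
        ("low clarity + high punishment",
         s["clarity"] == "low" and s["punishment"] == "high"),
        ("high randomness + low mitigation",
         s["randomness"] == "high" and s["mitigation"] == "low"),
        ("repeated failure + stable external attribution",
         s["repeated_failure"] and s["locus"] == "external"
         and s["stability"] == "stable"),
        ("weak feedback + complex systems",
         s["feedback"] == "weak" and s["complexity"] == "high"),
        ("low control + high stakes",
         s["control"] == "low" and s["stakes"] == "high"),
    ]
    return [name for name, triggered in checks if triggered]

scenario = {
    "clarity": "low", "punishment": "high",
    "randomness": "low", "mitigation": "high",
    "repeated_failure": False, "locus": "internal", "stability": "unstable",
    "feedback": "clear", "complexity": "low",
    "control": "high", "stakes": "low",
}
# Only the first pattern fires for this scenario.
```

Any non-empty result is worth escalating in the risk assessment, since these combinations compound rather than merely add up.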

7. Convert the diagnosis into design changes

For each issue, specify:

  • Problem
  • Why players read it that way
  • Suggested change
  • Expected perception shift

Examples:

  • improve telegraphing -> shifts blame from system to player decision
  • expose hidden rules -> increases controllability
  • add mitigation option -> turns fatalism into recoverable error
  • reduce punishment severity -> lowers hostility during learning

Response structure

Use this structure unless the user asks for something else:

Audit Target

  • ...

Event Reconstruction

  • ...

Attribution Profile

  • Locus: ...
  • Stability: ...
  • Controllability: ...

Likely Player Interpretation

  • ...

Fairness and Learning Diagnosis

  • ...

Risk Assessment

  • ...

Recommendations

  1. ...
  2. ...
  3. ...

Minimal Fix

  • ...

Fast mode

Use this quick pass when speed matters:

  • What does the player think caused the outcome?
  • Does it feel internal or external?
  • Does it feel controllable next time?
  • Does it feel like a one-off or a permanent rule?
  • What one change would most improve perceived control or clarity?

Usage notes

This audit is especially useful for:

  • combat deaths
  • boss fights
  • failure loops
  • loot outcomes
  • economy punishments
  • onboarding mistakes
  • puzzle failures
  • competitive losses
  • high-RNG systems that may be misread as rigged

Common patterns to watch for:

  • a system can be mechanically fair and still attract external blame
  • a hard loss can feel acceptable if the cause is clear and avoidable
  • severe punishment raises the attribution bar: clarity and control must rise with it
  • repeated confusion hardens unstable frustration into stable hostility

Working principle

A good failure says, "you can learn this." A bad failure says, "the game just does that."

Use this skill when you need to understand not only what happened, but what players will believe happened.
