Game Design KPI Coverage Audit

v1.0.0

Audit a game feature, roadmap candidate, UX improvement, support system, connective-tissue feature, or quality-of-life change for KPI coverage bias and measurement blind spots.

by Stanislav Stankovic (@stanestane)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for stanestane/game-design-kpi-coverage-audit.

Prompt Preview: Install & Setup
Install the skill "Game Design KPI Coverage Audit" (stanestane/game-design-kpi-coverage-audit) from ClawHub.
Skill page: https://clawhub.ai/stanestane/game-design-kpi-coverage-audit
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install game-design-kpi-coverage-audit

ClawHub CLI


npx clawhub@latest install game-design-kpi-coverage-audit
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name, description, and deliverables align with the required artifacts and the included reference docs; no unexpected environment variables, binaries, or external services are requested.
Instruction Scope
SKILL.md stays on-topic: it instructs the agent to read the bundled reference files and produce an audit (feature read, KPI framing, blind spots, recommendations). It does not request system-wide file access, credentials, or transmission to external endpoints.
Install Mechanism
No install spec or code files are included; this is an instruction-only skill, so nothing is written to disk or downloaded during install.
Credentials
The skill declares no required environment variables, credentials, or config paths; the actions described do not imply a need for secrets or unrelated service access.
Persistence & Privilege
The skill's always flag is false, and model invocation is allowed (the platform default). It does not request persistent system privileges or modifications to other skills' settings.
Assessment
This is an instruction-only audit tool that uses only the included reference files and requires no credentials or installs. It appears coherent and low-risk, but consider what data you feed it: sensitive internal metrics or proprietary roadmaps supplied to the agent are subject to your environment's data-handling rules and could be exposed. For stricter control, limit the agent's autonomous invocation or avoid pasting sensitive dashboards and raw credentials into prompts.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975y6txrfwzqe051198zaes4185arsm
75 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Game Design KPI Coverage Audit

Check whether the evaluation framework is seeing the whole value of the feature, or only the parts that are easy to measure.

Use this skill when a feature is being judged mainly through directly attributable KPIs and you suspect that measurement logic is biasing the team toward flashy, self-contained systems while undervaluing connective tissue, UX, quality-of-life changes, enabling systems, or long-term structural work.

Read references/value-types.md when identifying what kind of value the feature creates. Read references/blind-spot-patterns.md when diagnosing common KPI-coverage failures. Read references/recommendation-patterns.md when deciding how to justify or evaluate hard-to-measure work.

What to produce

Produce:

  1. Feature read - what the proposal is and what role it plays
  2. Current KPI framing - what the team is measuring or expecting to measure
  3. Coverage diagnosis - what value is covered versus ignored by those KPIs
  4. Blind spots - what may be neglected because it is hard to measure directly
  5. Risk of mis-prioritization - what bad decisions may result from the current framing
  6. Evaluation recommendation - how the feature should be justified, monitored, or compared more fairly

Process

1. Identify the role of the feature

Clarify whether the proposal is mainly:

  • a standalone engagement feature
  • a monetization feature
  • a progression layer
  • a UX improvement
  • connective tissue between systems
  • quality-of-life work
  • support infrastructure for future features
  • a clarity, pacing, or usability improvement

2. Identify the current KPI story

Ask:

  • what metric is the team using to justify this feature?
  • is it tied to revenue, retention, engagement, conversion, economy balance, sentiment, or something else?
  • is the KPI direct, indirect, speculative, or absent? (a sketch of this classification follows the list)
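
If it helps to pin down that last distinction, the classification can be made explicit. A minimal sketch in Python; the enum name and values are illustrative, not something the skill defines:

from enum import Enum

class KpiAttribution(Enum):
    """How directly the justifying metric can be attributed to the feature."""
    DIRECT = "direct"            # the feature alone is expected to move the metric
    INDIRECT = "indirect"        # the feature moves the metric through other systems
    SPECULATIVE = "speculative"  # a plausible story exists, but no attribution path yet
    ABSENT = "absent"            # no metric is being used to justify the work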

3. Audit KPI coverage

Check whether the metric framing captures the actual value of the work. Look for value types such as the following (a coverage-matrix sketch follows the list):

  • direct monetization
  • direct engagement lift
  • retention support
  • reduced friction
  • improved comprehension
  • stronger connective tissue between systems
  • future feature enablement
  • long-term sustainability
  • reduced support burden or balancing burden
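
One way to run this check systematically is a coverage matrix that maps each value type to the KPI, if any, that captures it. A minimal sketch, assuming Python 3.10+; the feature name and KPI strings below are hypothetical:

from dataclasses import dataclass, field

# Value types from the list above.
VALUE_TYPES = [
    "direct monetization",
    "direct engagement lift",
    "retention support",
    "reduced friction",
    "improved comprehension",
    "connective tissue between systems",
    "future feature enablement",
    "long-term sustainability",
    "reduced support or balancing burden",
]

@dataclass
class CoverageAudit:
    feature: str
    # Each value type maps to the KPI that covers it, or None if nothing does.
    coverage: dict[str, str | None] = field(
        default_factory=lambda: {v: None for v in VALUE_TYPES}
    )

    def blind_spots(self) -> list[str]:
        """Value types the current KPI framing does not capture."""
        return [v for v, kpi in self.coverage.items() if kpi is None]

audit = CoverageAudit(feature="cross-system inventory search")
audit.coverage["reduced friction"] = "time-to-complete common tasks"
print(audit.blind_spots())  # anything still unmapped is a candidate blind spot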

4. Identify blind spots

Common signs:

  • the feature is dismissed because it cannot move one headline KPI on its own
  • direct-revenue features are always favored over structural health
  • UX work is treated as optional because it lacks clean attribution
  • foundational work is postponed until crisis
  • enabling systems are undervalued because they mostly improve the performance of other features

5. Judge prioritization risk

Ask:

  • what happens if this feature is judged only by direct KPI lift?
  • is the team likely to underinvest in maintenance, UX, clarity, infrastructure, or connective tissue?
  • could the current framework systematically reward short-term visible wins over long-term health?

6. Recommend better evaluation

Possible moves:

  • use a mixed scorecard instead of one KPI (see the sketch after this list)
  • classify the feature as enabling or connective work rather than forcing a fake direct KPI
  • evaluate via downstream support of other systems
  • allocate protected capacity for high-value, hard-to-measure work
  • compare opportunity cost honestly rather than pretending everything must tie to one direct metric
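
For the first move above, a mixed scorecard can be as simple as a weighted set of signals. A sketch only; the weights and signal names are hypothetical, not a prescription:

# A hypothetical mixed scorecard: several weighted signals instead of a single KPI.
scorecard = {
    "direct KPI lift":    {"weight": 0.30, "signal": "conversion delta in an A/B test"},
    "downstream support": {"weight": 0.25, "signal": "engagement lift in the systems it feeds"},
    "friction reduction": {"weight": 0.20, "signal": "task completion time, support ticket volume"},
    "structural health":  {"weight": 0.25, "signal": "design and tech-debt review rating"},
}

# Sanity-check that the weights form a complete allocation.
assert abs(sum(row["weight"] for row in scorecard.values()) - 1.0) < 1e-9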

Response structure

Feature Read

  • ...

Current KPI Framing

  • ...

Coverage Diagnosis

  • ...

Blind Spots

  • ...

Risk of Mis-Prioritization

  • ...

Recommendation

  • ...

Fast mode

  • What is this feature actually for?
  • What KPI is being used to justify it?
  • What important value is not being captured by that KPI?
  • What bad prioritization decision could this cause?
  • How should the team evaluate it more fairly?

Style rules

  • Do not dismiss KPIs; diagnose their limits.
  • Do not invent fake measurable certainty for support work.
  • Distinguish direct value from enabling value.
  • Prefer fairer framing over anti-metrics rhetoric.
  • Be specific about how blind spots distort roadmap decisions.

Working principle

Teams often prioritize what they can measure cleanly, not what matters most. Use this skill to expose where KPI logic is too narrow for the actual design value on the table.
