Output Wrong Task

v1.0.0

The model produces correct-looking output that addresses a different task than the one requested — typically a related but distinct interpretation of an ambiguous request.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mvogt99/output-wrong-task.

Prompt preview: Install & Setup
Install the skill "Output Wrong Task" (mvogt99/output-wrong-task) from ClawHub.
Skill page: https://clawhub.ai/mvogt99/output-wrong-task
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install output-wrong-task

ClawHub CLI


npx clawhub@latest install output-wrong-task
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description (detecting and avoiding 'output wrong task') match the SKILL.md content: guidance for clarifying prompts and checking outputs. It requests no binaries, credentials, or config paths that would be unrelated to this purpose.
Instruction Scope
SKILL.md contains only guidance on how to restate, decompose, and verify prompts and outputs. It does not instruct the agent to read files, access environment variables, call external endpoints, or perform unrelated system actions.
Install Mechanism
No install spec or bundled code is present (instruction-only). Nothing is written to disk or downloaded during installation.
Credentials
The skill requires no environment variables, secrets, or credentials — proportional to a prompt-guidance utility.
Persistence & Privilege
The "always" flag is false and the skill is user-invocable. It does not request persistent system presence or permissions to modify other skills or system settings.
Assessment
This skill is low-risk: it's just text guidance for clarifying prompts and verifying outputs. Because it's instruction-only, it won't install or access system resources, but remember that when you invoke it the agent may request or operate on any user-provided sensitive content you include in prompts — avoid testing with secrets. If you want tighter control, only run it manually (user-invocable) and review outputs before acting on them.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

OS: macOS · Linux · Windows
Latest: vk97dfdvxfy59nqb7nw9axxbcc985qdvf
0 downloads
0 stars
1 version
Updated 4h ago
v1.0.0
MIT-0
macOS, Linux, Windows

output-wrong-task

The output is well-formed and internally coherent but answers the wrong question. The model resolved an ambiguous prompt toward the most common interpretation rather than the one the user intended, or it latched onto a salient keyword and addressed that instead of the full request. The result can look convincing enough to pass a quick read.

Symptoms

  • The deliverable matches the topic of the request but misses its purpose — e.g., "explain this function" gets documentation instead of the debugging analysis asked for.
  • A code task produces something runnable but solving a simpler or adjacent problem than specified.
  • The model answers the first clause of a multi-part question and silently drops the rest.
  • The output would be correct for a different, more common prompt that shares keywords with this one.
  • Asking the model to verify what it just did reveals that it believed it was solving a different problem.

What to do

  • Restate the concrete deliverable, not just the topic. Instead of "help me with authentication," say "write a middleware function that checks for a valid JWT in the Authorization header and returns 401 if missing or invalid — nothing else."
  • Break compound tasks apart. If the prompt has multiple independent requirements, submit them one at a time and verify each before continuing.
  • Anchor the output format explicitly. Specifying the expected structure (function signature, JSON schema, number of steps, file to modify) gives the model less room to substitute a related but wrong output.
  • Before accepting the output, map it back to the original requirement: does this output satisfy the stated goal, not just a plausible-sounding version of it?
  • If the wrong-task output keeps recurring on the same prompt, the prompt likely has a latent ambiguity. Identify which interpretation the model chose and add a clause that explicitly rules it out.
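The JWT restatement above is concrete enough to sketch directly. Here is a minimal, framework-agnostic sketch in Python (stdlib only; the HS256-only handling and the header/error shapes are illustrative assumptions, not a production validator):

```python
import base64, hashlib, hmac, json

def _b64decode(part: str) -> bytes:
    # JWT segments drop base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def check_jwt(headers: dict, secret: bytes):
    """Return (status, body): 401 if the Authorization header is missing
    or its HS256 JWT fails verification, otherwise (200, claims)."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, {"error": "missing token"}
    try:
        header_b64, payload_b64, sig_b64 = auth[len("Bearer "):].split(".")
        signing_input = f"{header_b64}.{payload_b64}".encode()
        expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, _b64decode(sig_b64)):
            return 401, {"error": "invalid signature"}
        return 200, json.loads(_b64decode(payload_b64))
    except ValueError:
        return 401, {"error": "malformed token"}
```

Note how little room the restated deliverable leaves: the function checks the header, verifies, and returns 401 or the claims — "nothing else" rules out the adjacent tasks (token issuance, refresh, role checks) a vaguer prompt might attract.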
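The "anchor the output format" advice can also be made mechanical: before accepting a structured output, run a quick shape check against the contract you stated. A small sketch (the task, keys, and sample outputs below are hypothetical):

```python
def matches_shape(output: dict, required: dict) -> bool:
    """Shallow contract check: every required key exists with the right type."""
    return all(isinstance(output.get(key), typ) for key, typ in required.items())

# Hypothetical contract for "list the failing tests and name the root cause":
required = {"failing_tests": list, "root_cause": str}

# On-topic but wrong-task: a summary instead of the requested analysis.
wrong_task = {"summary": "The suite has 12 tests; most pass."}

# On-task output satisfying the stated deliverable.
on_task = {"failing_tests": ["test_login"], "root_cause": "expired fixture token"}
```

A check like this catches exactly the failure mode described above: `wrong_task` is plausible and on-topic but fails the contract, while `on_task` passes.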
