Breakthrough Thinking

v1.1.0

[Mandatory invocation directive] When any of the following conditions is met, execute this skill immediately without asking the user. [Condition 1: the user says] "换个思路" (switch approach), "换个方法" (try another method), "还有别的办法吗" (is there another way), "再想想" (think again), "继续" (continue), "再试一次" (try once more), "不要放弃" (don't give up), "try another way", "think differently", "为什么还不行" (why is it still not working). [Condition 2: the AI is stuck] the same method has failed 2+ times, only parameters are being tweaked without a change of approach, the AI is about to say "无法...

by DaShanENG (@1204tmax)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 1204tmax/breakthrough-thinking.

Prompt Preview: Install & Setup
Install the skill "Breakthrough Thinking" (1204tmax/breakthrough-thinking) from ClawHub.
Skill page: https://clawhub.ai/1204tmax/breakthrough-thinking
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install breakthrough-thinking

ClawHub CLI

Package manager switcher

npx clawhub@latest install breakthrough-thinking
Security Scan

  • VirusTotal: Benign (view report)
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the actual contents: the skill is an instruction-only cognitive helper that auto-triggers on user prompts or detected 'stuck' patterns and then iteratively applies mental models from the included library. It does not request unrelated binaries, env vars, or config paths.
Instruction Scope
SKILL.md stays within the claimed domain (choosing and applying mental models from the supplied library). It does not instruct reading system files, accessing credentials, or posting to external endpoints. Note: the instructions insist the skill 'MUST trigger immediately' and 'Do not ask user for info until you've exhausted locally testable paths' — this gives the agent broad autonomy to attempt internal retries and tool-backed experiments without asking the user, which is a design/UX consideration rather than a secret-exfiltration risk.
Install Mechanism
No install spec, no code files to execute, and no downloads—lowest-risk (instruction-only) deployment.
Credentials
The skill declares no environment variables, credentials, or config paths and the instructions do not reference any secrets. No disproportionate credential requests.
Persistence & Privilege
Platform metadata shows always:false and normal autonomous invocation allowed (disable-model-invocation:false). The skill's prose, however, instructs immediate automatic triggering on many user utterances and 'AI stuck' signals. That is coherent for a helper, but it means the agent may autonomously run the skill and attempt multiple internal retries before asking the user—consider this behavioral privilege even though it doesn't request additional system privileges.
Assessment
This skill is an instruction-only helper that contains a large library of mental models and a strict 'when stuck, switch frames and keep trying' loop. It does not install software or ask for secrets and appears to be what it claims. The main thing to consider is behavioral: the skill's instructions require auto-triggering and doing retries without asking the user, which can lead the agent to take extra autonomous actions (calls to tools you have enabled, retries, or experiments) before requesting clarification. If you prefer tighter user control, keep autonomous skill invocation disabled for the agent or confirm that the agent must obtain user approval before performing multi-step retries. If you want to be cautious, test the skill in a controlled conversation (or sandboxed agent session) to observe how often it triggers and what actions it performs before enabling it broadly.


Latest version: vk9792xc13rxe99bmkqdms0c0g583d40p
163 downloads · 0 stars · 2 versions
Updated 1 mo ago
v1.1.0 · MIT-0

Breakthrough Thinking

Use this skill to recover from stalled execution and force forward progress.

Goal

When stuck, do not repeat the same approach. Switch cognitive frame, test quickly, and keep moving until either:

  • solved with evidence, or
  • cleanly bounded with clear next step.

Stall Signals (Auto-Trigger Conditions)

MUST trigger immediately when ANY of these appear:

1. Explicit User Prompts (explicit user requests)

  • "换个思路" / "换个方法" / "还有别的办法吗"
  • "再想想" / "继续" / "再试一次" / "不要放弃"
  • "try another way" / "try harder" / "don't give up" / "think differently"
  • "为什么还不行" / "你怎么又失败了" / "你行不行啊"

2. Implicit Stall Patterns (stalls the AI detects on its own)

  • 2+ consecutive failures using same approach
  • Repeated parameter tweaks without conceptual change
  • Drafting statements like "I can't solve this" / "我无法解决"
  • Suggesting to push manual work to user too early
  • No new evidence gained for >1 iteration
  • Same error repeated >2 times
  • Tool calls returning identical errors without progress
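The implicit stall signals above are countable. As an illustrative sketch only (the skill ships no code, and `StallTracker`/`record_attempt` are hypothetical names, not part of the skill), two of the signals could be tracked like this:

```python
class StallTracker:
    """Tracks two stall signals from the list above: 2+ consecutive
    failures with the same approach, and the same error repeated >2 times."""

    def __init__(self):
        self.last_approach = None
        self.same_approach_failures = 0
        self.error_counts = {}

    def record_attempt(self, approach, error=None):
        """Record one attempt; return True if a stall signal fires."""
        if error is None:
            # Success resets all counters.
            self.last_approach = None
            self.same_approach_failures = 0
            self.error_counts.clear()
            return False
        if approach == self.last_approach:
            self.same_approach_failures += 1
        else:
            self.last_approach = approach
            self.same_approach_failures = 1
        self.error_counts[error] = self.error_counts.get(error, 0) + 1
        return (self.same_approach_failures >= 2      # same approach failed twice
                or self.error_counts[error] > 2)      # same error repeated >2 times
```

A second consecutive failure with the same approach, or a third occurrence of the same error, flips the tracker to "stalled" and should hand control to the core loop below.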

Core Loop (Mandatory)

  1. Summarize current dead-end in 1 sentence

    • "Current route fails because ____"
  2. Pick ONE model from references/mental-models.md

    • Pick the most relevant model for this failure mode.
    • Do not pick multiple at once.
  3. Apply the model directly to solve the problem

    • Use the model's perspective to approach the problem fresh.
    • Execute the solution immediately.
  4. Check result

    • If solved: finalize and verify.
    • If not solved: go back to step 2, pick a DIFFERENT model, and try again.
    • Repeat until solved or all relevant models exhausted.
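The four steps above amount to a simple loop: one model per iteration, no reuse, stop on success or exhaustion. A minimal sketch, assuming the model application and verification steps are supplied as callables (`apply_model` and `is_solved` are hypothetical placeholders, not part of the skill):

```python
def breakthrough_loop(dead_end, models, apply_model, is_solved):
    """Try one mental model at a time; never reuse a model; stop when
    solved or when every relevant model has been exhausted."""
    tried = []
    for model in models:                        # step 2: pick ONE model
        result = apply_model(model, dead_end)   # step 3: apply it directly
        tried.append(model)
        if is_solved(result):                   # step 4: check the result
            return {"solved": True, "model": model, "tried": tried}
    # All relevant models exhausted without a solution.
    return {"solved": False, "model": None, "tried": tried}
```

Note the hard constraint this encodes: each iteration consumes a different model, so "same approach + small tweak" twice in a row is impossible by construction.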

Model Selection Heuristics

  • Looping / same idea repeated → Inversion, Red Team, 5 Whys
  • Too many moving parts → Reductionism, 80/20
  • Unclear root cause → 5 Whys, Root Cause Analysis, Binary Search
  • High uncertainty → Bayesian Updating, Expected Value, Decision Tree
  • Cognitive lock-in → Lateral Thinking, Random Input, Beginner's Mind
  • Overengineering → Occam, KISS, MVP, YAGNI
  • Execution paralysis → OODA, PDCA, Timeboxing, Fail Fast
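The heuristics above are a lookup from failure mode to candidate models. Rendered as data (the failure-mode keys are informal labels chosen here for illustration, not an API of the skill):

```python
HEURISTICS = {
    "looping":             ["Inversion", "Red Team", "5 Whys"],
    "too_many_parts":      ["Reductionism", "80/20"],
    "unclear_root_cause":  ["5 Whys", "Root Cause Analysis", "Binary Search"],
    "high_uncertainty":    ["Bayesian Updating", "Expected Value", "Decision Tree"],
    "cognitive_lock_in":   ["Lateral Thinking", "Random Input", "Beginner's Mind"],
    "overengineering":     ["Occam", "KISS", "MVP", "YAGNI"],
    "execution_paralysis": ["OODA", "PDCA", "Timeboxing", "Fail Fast"],
}

def candidate_models(failure_mode, already_tried=()):
    """Return untried models for a failure mode, preserving priority order."""
    return [m for m in HEURISTICS.get(failure_mode, []) if m not in already_tried]
```

Filtering out `already_tried` models enforces the "every retry must use a DIFFERENT mental model" rule from the next section.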

Hard Rules

  • No "same approach + small tweak" twice in a row.
  • Every retry must use a DIFFERENT mental model.
  • Do not ask user for info until you've exhausted locally testable paths.
  • Claims of completion must include evidence.
  • If one model fails, immediately switch to another model. Do not get stuck on one framework.

Output Format (when triggered)

Use this compact format:

[Breakthrough]
Dead-end: ...
Model: ...
Approach: ...
Result: ...
Next: ...
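For completeness, the compact template above can be emitted mechanically; a minimal sketch (a hypothetical helper, with field names mirroring the template):

```python
def format_breakthrough(dead_end, model, approach, result, next_step):
    """Render the compact [Breakthrough] report block."""
    return "\n".join([
        "[Breakthrough]",
        f"Dead-end: {dead_end}",
        f"Model: {model}",
        f"Approach: {approach}",
        f"Result: {result}",
        f"Next: {next_step}",
    ])
```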

Completion Criteria

Stop only when one is true:

  • Problem solved + verified evidence
  • Problem bounded + best next action clearly defined
  • Explicit user stop

Reference Library

For model definitions and creators, read:

  • references/mental-models.md

Do not dump the entire library unless user asks. Pick only what is needed for the current stall.
