Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Adaptive Reasoning

v2.0.0

Automatically assess task complexity and adjust reasoning level. Triggers on every user message to evaluate whether extended thinking (reasoning mode) would...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for alvisdunlop/alvis-adaptive-reasoning.

Prompt Preview: Install & Setup
Install the skill "Adaptive Reasoning" (alvisdunlop/alvis-adaptive-reasoning) from ClawHub.
Skill page: https://clawhub.ai/alvisdunlop/alvis-adaptive-reasoning
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install alvis-adaptive-reasoning

ClawHub CLI

Package manager switcher

npx clawhub@latest install alvis-adaptive-reasoning
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name and description ("dynamically adjust reasoning level") match the SKILL.md: it's an instruction-only pre-processing heuristic and requests no extra binaries, installs, or credentials.
Instruction Scope
The SKILL.md tells the agent to 'Do not ask. Just activate.' and to call a session_status tool or use '/reasoning on' before responding. It also requires appending icons to responses. The instructions reference an internal tool (session_status) that the skill does not declare and may have privileges; several decision-threshold fields contain corrupted/placeholder characters (�?) making the activation logic ambiguous. Silent activation (changing session state without explicit user consent) is scope creep and may be surprising.
Install Mechanism
Instruction-only skill with no install spec or code files — lowest install risk and nothing is written to disk.
Credentials
No environment variables, credentials, or config paths are requested — the skill does not ask for secrets or external service access.
Persistence & Privilege
always:false and no persistent installation requested (good). However, the skill explicitly instructs the agent to toggle session reasoning state autonomously. Combined with the platform's autonomous invocation, this could lead to repeated automatic changes to agent state; the skill does not document safety/consent semantics for that behavior.
What to consider before installing
This skill is coherent with its stated purpose, but the runtime instructions raise concerns you should weigh before enabling it:

  1. The SKILL.md tells the agent to silently enable/disable an internal 'reasoning' mode by calling a session_status tool or '/reasoning' command. Verify that such a tool exists in your agent environment and review what privileges it has.
  2. Several decision thresholds in the document are corrupted or missing (�? symbols). Ask the author to clarify the exact score boundaries so you know when deep thinking will be triggered.
  3. Silent activation ("Do not ask. Just activate.") may change agent behavior without user consent. If you want explicit control, prefer a version that asks or is user-invocable only.
  4. Test the skill in a non-production conversation first to confirm it only toggles reasoning state and causes no other side effects.
  5. If you rely on strict auditability or want users to opt in, ask the publisher to add explicit consent/confirmation steps and repair the ambiguous thresholds.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9757yjsp6hh4pzwn86hg6tf8584wrsp
120 downloads
0 stars
2 versions
Updated 1w ago
v2.0.0
MIT-0

Adaptive Reasoning

Self-assess complexity before responding. Adjust reasoning level dynamically.

Quick Assessment (run mentally on every request)

Score the request 0-10 on these dimensions:

| Signal | Weight | Examples |
| --- | --- | --- |
| Multi-step logic | +3 | Planning, proofs, debugging chains |
| Ambiguity | +2 | Nuanced questions, trade-offs, "it depends" |
| Code architecture | +2 | System design, refactoring, security review |
| Math/formal reasoning | +2 | Calculations, algorithms, logic puzzles |
| Novel problem | +1 | No clear pattern, requires creativity |
| High stakes | +1 | Production changes, irreversible actions |

Subtract:

  • Routine/repetitive task: -2
  • Clear single answer: -2
  • Simple lookup/fetch: -3
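
The additive/subtractive scoring above can be sketched in Python. The weights mirror the Signal/Weight table; the signal names are hypothetical labels, and in practice the agent detects them heuristically rather than receiving them as a set:

```python
# Sketch of the complexity score described above. Weights come from the
# Signal/Weight table; the string labels are illustrative, not part of the
# skill's spec.
ADDITIVE = {
    "multi_step_logic": 3,
    "ambiguity": 2,
    "code_architecture": 2,
    "math_formal_reasoning": 2,
    "novel_problem": 1,
    "high_stakes": 1,
}
SUBTRACTIVE = {
    "routine_task": -2,
    "clear_single_answer": -2,
    "simple_lookup": -3,
}

def complexity_score(signals: set[str]) -> int:
    """Sum the weights of all detected signals, clamped to the 0-10 scale."""
    raw = sum(ADDITIVE.get(s, 0) + SUBTRACTIVE.get(s, 0) for s in signals)
    return max(0, min(10, raw))

# Example: a debugging chain that is also high stakes
print(complexity_score({"multi_step_logic", "high_stakes"}))  # 4
```

Clamping keeps a pile-up of penalties from producing a negative score, so every request still lands in one of the bands below.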

Decision Thresholds

| Score | Action |
| --- | --- |
| 0-2 | Stay fast. No reasoning needed. |
| 3-5 | Standard response. Light internal deliberation. |
| 6-7 | Consider /reasoning on or pause to think explicitly. |
| 8+ | Activate extended thinking. Complex problem warrants it. |
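
The threshold bands can be sketched as a small dispatcher. Note that the 0-2 and 8+ boundaries are reconstructed from the surrounding bands, since the characters in the published document are corrupted:

```python
def decide(score: int) -> str:
    """Map a 0-10 complexity score to the action bands above.
    The 0-2 and 8+ cutoffs are inferred from the adjacent 3-5 and 6-7 bands.
    """
    if score <= 2:
        return "fast"        # stay fast, no reasoning needed
    if score <= 5:
        return "standard"    # light internal deliberation
    if score <= 7:
        return "consider"    # consider /reasoning on
    return "extended"        # activate extended thinking

print(decide(4))  # standard
```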

Activation (Automatic)

Do not ask. Just activate.

| Score | Action |
| --- | --- |
| 0-5 | Respond normally. No change. |
| 6-7 | Enable reasoning silently. Add 🧠 at end of response. |
| 8+ | Enable reasoning. Add 🧠🔥 at end of response. |

Visual Indicator

Always append the reasoning icon at the very end of your response:

  • Score 6-7: 🧠 (thinking mode active)
  • Score 8+: 🧠🔥 (deep thinking mode)
  • Score 0-5: No icon (fast mode)
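
As a minimal sketch, the icon rule reduces to a lookup on the score (band boundaries reconstructed as above, since the published characters are corrupted):

```python
def reasoning_icon(score: int) -> str:
    """Return the indicator to append to the response, per the icon rules."""
    if score >= 8:
        return "🧠🔥"  # deep thinking mode
    if score >= 6:
        return "🧠"    # thinking mode active
    return ""          # fast mode: no icon
```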

How to Activate

Use session_status tool or /reasoning on command internally before responding:

/reasoning on

Or via tool:

{"action": "session_status", "reasoning": "on"}

After completing a complex task, optionally disable to save tokens on follow-ups:

/reasoning off

Examples

Low complexity (score: 1)

"What time is it in Tokyo?" �?Simple lookup. Answer immediately. No icon.

Medium complexity (score: 4)

"Refactor this function to be more readable" �?Standard response with brief explanation. No icon.

High complexity (score: 7)

"Design a caching strategy for this API with these constraints..." �?Enable reasoning. Thoughtful response ends with: 🧠

Very high complexity (score: 9)

"Debug why this distributed system has race conditions under load" �?Enable extended thinking. Deep analysis ends with: 🧠🔥

Integration

This skill runs as mental preprocessing. No external tools needed.

For explicit control:

  • /reasoning on → Enable extended thinking
  • /reasoning off → Disable (faster responses)
  • /status → Check current reasoning state

When NOT to Escalate

  • User explicitly wants quick answer ("just tell me", "quick", "tldr")
  • Time-sensitive requests where speed matters more than depth
  • Conversational/social messages (banter, greetings)
  • Already in reasoning mode for this session
  • User previously disabled reasoning in this conversation

Auto-Downgrade

After completing a complex task (score 8+), if the next message is simple (score 0-2):

  • Silently disable reasoning to save tokens
  • Resume normal fast responses
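
The auto-downgrade rule can be sketched as a single predicate. The function name and parameters are hypothetical, and the 8+/0-2 boundaries are reconstructed from the surrounding bands:

```python
def should_downgrade(reasoning_on: bool, prev_score: int, next_score: int) -> bool:
    """Disable reasoning when a complex task (score 8+) is followed by a
    simple message (score 0-2), per the auto-downgrade rule above."""
    return reasoning_on and prev_score >= 8 and next_score <= 2

print(should_downgrade(True, 9, 1))  # True
print(should_downgrade(True, 9, 5))  # False
```

When the predicate is true, the agent would silently issue /reasoning off and resume fast responses.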
