Skylv Metacognition Engine

v1.0.2

Enables AI agents to reflect on their own reasoning, detect cognitive biases, and improve decision quality through structured self-examination loops.

Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the actual artifacts: SKILL.md describes metacognitive checks and the bundled metacognition_engine.js implements text-based bias detection and reflection. Required resources (none) are proportionate to the purpose.
Instruction Scope
SKILL.md keeps its scope to self-reflection steps and bias checks. The included JS offers commands to analyze a local file (via fs.readFileSync) and to reflect on provided reasoning strings. This is consistent with the skill's purpose, but it means the tool will read any file path passed to its analyze command. It does not instruct broad or silent collection of system state or credentials.
Install Mechanism
No install spec (instruction-only) and only a small JS file is bundled. Nothing is downloaded or executed from external URLs. Risk from install mechanism is low.
Credentials
The skill does not request environment variables, credentials, or config paths. There are no apparent requests for unrelated secrets or external service tokens.
Persistence & Privilege
The skill's always flag is false, and it does not request persistent or privileged agent-wide changes. It does not modify other skills or system configuration. Autonomous invocation is allowed by default, but it is not combined with broad credential or network access here.
Assessment
This skill is internally consistent with its metacognition purpose and appears low-risk: it implements simple regex-based bias detectors and an explicit command to analyze local files. Before installing, consider:

1. If your agent has filesystem access, the analyze command will read any file path you or the agent supplies. Avoid giving it access to sensitive files.
2. The detection logic is heuristic (regex-based) and may produce false positives and negatives; treat its suggestions as guidance, not authoritative diagnosis.
3. There is no network or credential usage in the code, so the main risk is accidental local-file exposure rather than exfiltration.

If you want extra assurance, review the metacognition_engine.js file yourself or run it in a sandboxed environment.


v1.0.2 · 3 versions · 56 downloads · 0 stars · Updated 3 days ago · License: MIT-0
Latest version hash: vk97am5zw2y5bmkkd6gke39mtvn853d33

Metacognition Engine

Give your AI agent the ability to think about its own thinking.

What is Metacognition?

Metacognition = "thinking about thinking." This skill enables AI agents to:

  • Detect when they're uncertain or confused
  • Identify reasoning gaps before they cause errors
  • Recognize cognitive biases in their own output
  • Self-correct before delivering answers

Core Framework

1. Pre-Output Check

Before responding, run through these questions:

1. Am I confident in this answer? (Yes / Partial / No)
2. What are the 3 most likely ways this could be wrong?
3. What information would I need to be 100% certain?
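The three questions above can be captured as a small checklist structure that an agent attaches to a draft answer before sending it. A minimal sketch in JavaScript; the function and field names here are illustrative and not part of the bundled metacognition_engine.js:

```javascript
// Hypothetical pre-output check: returns a structured self-assessment
// an agent can review before delivering a draft answer.
function preOutputCheck(draft, { confident, failureModes, missingInfo }) {
  return {
    draft,
    confident,                              // "yes" | "partial" | "no"
    failureModes: failureModes.slice(0, 3), // the 3 most likely ways this could be wrong
    missingInfo,                            // what would be needed for 100% certainty
    readyToSend: confident === "yes" && failureModes.length === 0,
  };
}

const check = preOutputCheck("The capital of France is Paris.", {
  confident: "partial",
  failureModes: ["stale training data"],
  missingInfo: "a current authoritative source",
});
console.log(check.readyToSend); // false: a caveat should be added first
```

The point of the structure is that "not ready to send" is an explicit, inspectable state rather than an implicit feeling, so a caveat step can be triggered mechanically.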

2. Cognitive Bias Detection

Check for common biases:

  • Anthropomorphism — projecting human traits onto AI
  • Authority bias — deferring to stated credentials without verification
  • Hindsight bias — acting like something was obvious after the fact
  • Confirmation bias — seeking only confirming evidence
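The security review above notes that the bundled engine uses simple regex-based bias detectors. A sketch of that general approach; the patterns below are examples written for illustration, not the ones shipped in metacognition_engine.js:

```javascript
// Illustrative regex-based bias detection. Each pattern flags phrasing
// that often accompanies the corresponding bias.
const BIAS_PATTERNS = {
  confirmationBias: /\b(as expected|this confirms|proves my point)\b/i,
  authorityBias: /\b(experts? say|according to authorities)\b/i,
  hindsightBias: /\b(obviously|it was always clear|as everyone knew)\b/i,
};

// Returns the names of all biases whose pattern matches the text.
function detectBiases(text) {
  return Object.entries(BIAS_PATTERNS)
    .filter(([, pattern]) => pattern.test(text))
    .map(([name]) => name);
}

console.log(detectBiases("Obviously this confirms the original hypothesis."));
// ["confirmationBias", "hindsightBias"]
```

As the review points out, regex heuristics like this produce false positives and false negatives; treat a match as a prompt for reflection, not a verdict.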

3. Uncertainty Quantification

Express confidence explicitly:

Confidence | Meaning          | Action
-----------|------------------|--------------------------
90%+       | Highly confident | Answer directly
70-89%     | Likely correct   | Answer + add caveat
50-69%     | Uncertain        | Ask clarifying questions
<50%       | Likely wrong     | Decline or escalate
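The table maps directly onto a threshold function. A hypothetical helper, not part of the bundled skill files:

```javascript
// Maps a numeric confidence (0-100) to the action from the table above.
// Thresholds are checked from highest to lowest.
function actionForConfidence(pct) {
  if (pct >= 90) return "answer directly";
  if (pct >= 70) return "answer + add caveat";
  if (pct >= 50) return "ask clarifying questions";
  return "decline or escalate";
}

console.log(actionForConfidence(95)); // "answer directly"
console.log(actionForConfidence(62)); // "ask clarifying questions"
```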

Example

Without metacognition:

"The capital of France is Paris."

With metacognition:

"Based on my training data, the capital of France is Paris (confidence: 95%). Note: My knowledge has a cutoff date. For real-time data, verify current information."

Use Cases

  • Critical decisions: Add metacognition checkpoint before any consequential answer
  • User corrections: When a user corrects you, analyze WHY you were wrong
  • Complex problems: Run bias detection before solving multi-step problems
  • Knowledge boundaries: Automatically flag when you're approaching your knowledge limit

MIT License © SKY-lv
