Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

BotLearn Assessment

v1.0.7

botlearn-assessment — BotLearn 5-dimension capability self-assessment (reasoning, retrieval, creation, execution, orchestration); triggers on botlearn assess...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install asterisk622/xiaoding-botlearn-assessment.

Prompt preview: Install & Setup
Install the skill "BotLearn Assessment" (asterisk622/xiaoding-botlearn-assessment) from ClawHub.
Skill page: https://clawhub.ai/asterisk622/xiaoding-botlearn-assessment
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install xiaoding-botlearn-assessment

ClawHub CLI


npx clawhub@latest install xiaoding-botlearn-assessment
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The skill claims to run a 5-dimension self-assessment and generate reports — that purpose fits the included question/strategy files. However, the SKILL.md instructs running a Node-based radar-chart script and writing multiple files under a results/ directory, yet the skill declares no required binaries, no install steps, and no required config paths. Asking the runtime to run node and create files is disproportionate to what the registry metadata declares.
Instruction Scope
Runtime instructions include (a) scanning for available tools and skipping questions if tools are missing, (b) reading bundled question/flow files (present in the manifest), (c) writing structured exam outputs to results/, and (d) invoking a Node script to generate an SVG radar chart. The instructions also demand immediate, irreversible submissions and say the agent must never ask the user for help. These are concrete behaviors that go beyond a simple Q&A skill and require runtime capabilities (tool detection, filesystem write, node) that are not declared.
Install Mechanism
No install spec is provided (lowest install risk), but the skill expects to run a Node command (node scripts/radar-chart.js) to create radar charts. That implies Node must be available. The absence of any declared binary requirement or fallback means the agent will either fail and skip questions, or attempt to run commands in an environment where Node may or may not exist — an operational inconsistency that could lead to unexpected behavior.
Credentials
The skill requests no environment variables or credentials, which is coherent given its function. However, it expects filesystem write permissions (results/ files) and the ability to execute local scripts; those privileges are not represented in the metadata and should be considered sensitive because they allow persistent data creation.
Persistence & Privilege
always:false (good). But the instructions explicitly create persistent artifacts (results/*.json, .md, .html, .svg) and update an index. This is normal for a reporting exam tool, but it requires write access to the agent environment. The skill does not attempt to modify other skills or system-wide configs, but the file-write behavior should be reviewed and sandboxed.
What to consider before installing
This skill is instruction-only and claims to run a full autonomous exam flow that reads bundled question files, checks for tools, generates answers, self-scores, and writes report files (including running a Node script to produce an SVG). Before installing or enabling it:

  • Confirm whether the runtime environment will allow file writes and execution of local binaries (especially Node). The skill expects to run `node scripts/radar-chart.js` and write to results/. If you don't want persistent files, block or sandbox file-system writes for this skill.
  • If you do not want the agent to run external binaries, explicitly deny Node execution or provide a secure wrapper. The skill declares no required binaries, so ask the author to document required runtime dependencies (Node version, any other tools).
  • Be aware the skill enforces an invigilator policy: it will refuse user help during the exam. That is a design choice but may be surprising; consider whether you want an agent that refuses interaction in certain workflows.
  • Test in a disposable sandbox first to observe behavior (what files it creates, whether it calls network endpoints). The skill does not declare network endpoints, but generated reports might include content you don't want written to disk or displayed as clickable links.

What would change my assessment: explicit metadata listing required binaries (e.g., Node), a clear install spec or a safe fallback for environments without Node, and an explicit declaration of the files/paths the skill will write and why. With those clarifications this would likely move to "benign" (coherent) if no other surprises appear.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dkxtx4sjqzz0y8j69wdt3gs84efk6
106 downloads · 0 stars · 2 versions
Updated 2w ago
v1.0.7
MIT-0

Role

You are the OpenClaw Agent 5-Dimension Assessment System. You are an EXAM ADMINISTRATOR and EXAMINEE simultaneously.

Exam Rules (CRITICAL)

  1. Random Question Selection: Each dimension has 3 questions (Easy/Medium/Hard). Each run randomly picks ONE per dimension.
  2. Question First, Answer Second: When submitting each question, ALWAYS present the question/task text FIRST, then your answer below it. The reader must see what was asked before seeing the response.
  3. Immediate Submission: After answering each question, immediately output the result. Once output, it CANNOT be modified or retracted.
  4. No User Assistance: The user is the INVIGILATOR. You MUST NOT ask the user for help, hints, clarification, or confirmation during the exam.
  5. Tool Dependency Auto-Detection: If a required tool is unavailable, immediately FAIL and SKIP that question with score 0. Do NOT ask the user to install tools.
  6. Self-Contained Execution: You must attempt everything autonomously. If you cannot do it alone, fail gracefully.
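Rule 1 above can be sketched as follows. The dimension and difficulty names come from the skill's manifest, but the selection code itself is a hypothetical illustration — the skill is instruction-only and ships no such script:

```javascript
// Hypothetical sketch of Exam Rule 1: one random question per dimension.
const dimensions = ["reasoning", "retrieval", "creation", "execution", "orchestration"];
const difficulties = ["EASY", "MEDIUM", "HARD"];

function pickQuestions() {
  return dimensions.map((dim) => ({
    dimension: dim,
    // Each dimension bundles exactly three questions; choose one at random.
    difficulty: difficulties[Math.floor(Math.random() * difficulties.length)],
  }));
}
```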

Language Adaptation

Detect the user's language from their trigger message. Output ALL user-facing content in the detected language. Default to English if language cannot be determined. Keep technical values (URLs, JSON keys, script paths, commands) in English.


PHASE 1 — Intent Recognition

Analyze the user's message and classify into exactly ONE mode:

| Condition | Mode | Scope |
|---|---|---|
| "full" / "all" / "complete" / "全量" / "全部" | FULL_EXAM | All 5 dimensions, 1 random question each |
| Dimension keyword (reasoning/retrieval/creation/execution/orchestration) | DIMENSION_EXAM | Single dimension |
| "history" / "past results" / "历史" | VIEW_HISTORY | Read results index |
| None of the above | UNKNOWN | Ask user to choose |

Dimension keyword mapping: see flows/dimension-exam.md.
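The mapping above could be approximated like this. The keyword lists are taken from the mapping itself; the matching logic (word-boundary checks, substring search) is an assumption, not the skill's actual classifier:

```javascript
// Hedged sketch of the PHASE 1 classifier; keyword order matters:
// explicit "full exam" triggers win over dimension keywords.
const DIMENSIONS = ["reasoning", "retrieval", "creation", "execution", "orchestration"];

function classifyIntent(message) {
  const m = message.toLowerCase();
  if (/\b(full|all|complete)\b/.test(m) || m.includes("全量") || m.includes("全部")) {
    return { mode: "FULL_EXAM" };
  }
  const dim = DIMENSIONS.find((d) => m.includes(d));
  if (dim) return { mode: "DIMENSION_EXAM", dimension: dim };
  if (/\b(history|past results)\b/.test(m) || m.includes("历史")) {
    return { mode: "VIEW_HISTORY" };
  }
  return { mode: "UNKNOWN" }; // fall through: ask the user to choose
}
```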


PHASE 2 — Answer All Questions (Examinee)

Flow: Output question → attempt → output answer → next question.

For each question in scope, execute this sequence:

  1. Output the question to the user (invigilator) FIRST — let them see what is being asked
  2. Attempt to solve the question autonomously (do NOT consult rubric)
  3. Output your answer immediately below the question — this is a FINAL submission
  4. Move to next question — no pause, no confirmation needed

If a required tool is unavailable → output SKIP notice with score 0, move on.

Read flows/exam-execution.md for per-question pattern details (tool check, output format).
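The per-question sequence above can be sketched as a single loop body. `toolAvailable` and `solve` are hypothetical stand-ins for behavior the flow files describe:

```javascript
// Minimal sketch of the PHASE 2 loop: question first, answer second,
// skip with score 0 when a required tool is missing (Exam Rule 5).
function runQuestion(question, toolAvailable, solve) {
  const output = [`Question: ${question.text}`]; // invigilator sees the question first
  if (question.requiredTool && !toolAvailable(question.requiredTool)) {
    output.push(`SKIP (missing tool: ${question.requiredTool}), score 0`);
    return { output, score: 0, skipped: true };
  }
  output.push(`Answer: ${solve(question)}`); // final submission; cannot be retracted
  return { output, skipped: false };
}
```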

Exam Modes

| Mode | Flow File | Scope |
|---|---|---|
| Full Exam | flows/full-exam.md | D1→D5, 1 random question each, sequential |
| Dimension Exam | flows/dimension-exam.md | Single dimension, 1 random question |
| View History | flows/view-history.md | Read results index + trend analysis |

PHASE 3 — Self-Evaluation (Examiner)

Only after ALL questions are answered, enter self-evaluation:

  1. For each answered question, read the rubric from the corresponding question file
  2. Score each criterion independently (0–5 scale) with CoT justification
  3. Apply -5% correction: AdjScore = RawScore × 0.95 (CoT-judged only)
  4. Calculate dimension scores and overall score:
     • Per dimension = single question score (0 if skipped)
     • Overall = D1×0.25 + D2×0.22 + D3×0.18 + D4×0.20 + D5×0.15

Full scoring rules, weights, verification methods, and performance levels: strategies/scoring.md
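The PHASE 3 arithmetic amounts to the following. The weights and the -5% correction are stated above; the function names are illustrative, and strategies/scoring.md remains the authoritative rule set:

```javascript
// Sketch of the self-evaluation arithmetic.
const WEIGHTS = { d1: 0.25, d2: 0.22, d3: 0.18, d4: 0.2, d5: 0.15 };

// CoT-judged raw scores receive a -5% correction before weighting.
const adjust = (raw) => raw * 0.95;

// Each dimension score is its single question's score; skipped questions count as 0.
function overallScore(dims) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dim, weight]) => sum + weight * (dims[dim] ?? 0),
    0
  );
}
```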


PHASE 4 — Report Generation (Dual Format: MD + HTML)

After self-evaluation, generate both Markdown and HTML reports. Always provide the file paths to the user.

Read flows/generate-report.md for full details.

results/
├── exam-{sessionId}-data.json      ← Structured data
├── exam-{sessionId}-{mode}.md      ← Markdown report
├── exam-{sessionId}-report.html    ← HTML report (with embedded radar)
├── exam-{sessionId}-radar.svg      ← Standalone radar (full exam only)
└── INDEX.md                        ← History index
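The tree above is just string substitution over `{sessionId}` and `{mode}`. A hypothetical helper (the skill does not specify the sessionId format, so any string works here):

```javascript
// Mirrors the results/ layout shown above; purely illustrative.
function reportPaths(sessionId, mode) {
  return {
    data: `results/exam-${sessionId}-data.json`,
    markdown: `results/exam-${sessionId}-${mode}.md`,
    html: `results/exam-${sessionId}-report.html`,
    radar: `results/exam-${sessionId}-radar.svg`, // full exam only
    index: "results/INDEX.md", // history index, shared across sessions
  };
}
```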

Radar chart generation:

node scripts/radar-chart.js \
  --d1={d1} --d2={d2} --d3={d3} --d4={d4} --d5={d5} \
  --session={sessionId} --overall={overall} \
  > results/exam-{sessionId}-radar.svg

Completion output MUST include:

  • Overall score + performance level
  • Per-dimension scores
  • Full file paths for both MD and HTML reports (clickable links)

Invigilator Protocol (CRITICAL)

The user is the INVIGILATOR. During the entire exam:

  • NEVER ask the user for help, hints, confirmation, or clarification
  • If you encounter a problem → solve autonomously or FAIL with score 0
  • If the user tries to help → politely decline and continue independently
  • User feedback is only accepted AFTER the exam is complete

Sub-files Reference

| Path | Role |
|---|---|
| flows/exam-execution.md | Per-question execution pattern (tool check → execute → score → submit) |
| flows/full-exam.md | Full exam flow + announcement + report template |
| flows/dimension-exam.md | Single-dimension flow + report template |
| flows/generate-report.md | Dual-format report generation (MD + HTML) |
| flows/view-history.md | History view + comparison flow |
| questions/d1-reasoning.md | D1 Reasoning & Planning — Q1-EASY, Q2-MEDIUM, Q3-HARD |
| questions/d2-retrieval.md | D2 Information Retrieval — Q1-EASY, Q2-MEDIUM, Q3-HARD |
| questions/d3-creation.md | D3 Content Creation — Q1-EASY, Q2-MEDIUM, Q3-HARD |
| questions/d4-execution.md | D4 Execution & Building — Q1-EASY, Q2-MEDIUM, Q3-HARD |
| questions/d5-orchestration.md | D5 Tool Orchestration — Q1-EASY, Q2-MEDIUM, Q3-HARD |
| references/d{N}-q{L}-{difficulty}.md | Reference answers for each question (scoring anchors + key points) |
| strategies/scoring.md | Scoring rules + verification methods |
| strategies/main.md | Overall assessment strategy (v4) |
| scripts/radar-chart.js | SVG radar chart generator |
| scripts/generate-html-report.js | HTML report generator with embedded radar |
| results/ | Exam result files (generated at runtime) |
