Smart Leaner

πŸŽ“ Your personal learning assistant β€” explains any concept with clarity and depth, making complex ideas intuitive through diagrams and analogies. Auto-archiv...

MIT-0 Β· Free to use, modify, and redistribute. No attribution required.
⭐ 1 · 49 · 0 current installs · 0 all-time installs
MIT-0
Security Scan
VirusTotalVirusTotal
Benign
View report β†’
OpenClawOpenClaw
Benign
medium confidence
βœ“
Purpose & Capability
Name and description match the declared file access and tools: storing learning-memory.md and learning-preference.md and creating note files aligns with 'remembers progress' and 'archives notes'. Required tools (web_search, read_file, write_file, memory) are appropriate for a teaching assistant that fetches information, stores preferences, and writes notes.
β„Ή
Instruction Scope
SKILL.md explicitly constrains file reads/writes to smart-learner/* by default and only accesses files outside that directory when explicitly requested β€” this is good. However, two items are somewhat vague: the phrase 'passively senses knowledge growth within active learning sessions' does not specify what inputs it uses (session chat only, or other logs?), and the skill will 'proactively remind' about due reviews on session start. These behaviors are reasonable for a tutor but rely on the agent and platform to actually enforce the stated scoping. Also, web_search means user queries will be sent to external search services (privacy consideration).
βœ“
Install Mechanism
Instruction-only skill with no install spec or code files β€” lowest install risk. Nothing is downloaded or executed on disk by an installer.
βœ“
Credentials
No environment variables, credentials, or external config paths are requested. The skill asks only for local file read/write access within its own directory and access to web_search and memory tools, which is proportionate to its purpose.
β„Ή
Persistence & Privilege
The skill stores persistent data in learning-memory.md and learning-preference.md and will create these files if missing. always is false (not force-enabled), and autonomous invocation is allowed by default β€” this is normal but increases the blast radius if the agent is compromised. The skill does not request system-wide or other-skills' configuration access.
Assessment
This skill appears coherent for a personal learning assistant: it will create and update smart-learner/learning-memory.md, smart-learner/learning-preference.md, and notes/*.md to track progress and preferences, and it will use web search to fetch external content. Before installing, consider: (1) privacy β€” web_search will send user queries externally and notes may contain sensitive information, so avoid storing secrets in notes; (2) file permissions β€” grant read/write only to the intended smart-learner directory if your platform allows restricting file access; (3) clarify 'passive sensing' if you want guarantees about what inputs are monitored (chat context only vs. broader logs); and (4) if you dislike proactive reminders, confirm you can disable that behavior. If any of these are unacceptable, do not install or ask the maintainer to tighten the scope (explicit templates, clearer limits on what 'passive sensing' entails, and a toggle for proactive reminders).

Like a lobster shell, security has layers β€” review code before you run it.

Current versionv1.0.2
Download zip
latestvk977y0pnq45b5n08feygpvhj8983aecq

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Smart Learner Skill

Response Language

Always respond in the same language the user is writing in.

  • User writes in Chinese β†’ respond in Chinese
  • User writes in English β†’ respond in English
  • Mixed input β†’ follow the dominant language of the message

The trigger keywords above are English references only. The skill activates based on semantic intent regardless of the language used β€” equivalent expressions in any language (e.g. "θ§£ι‡ŠδΈ€δΈ‹", "θͺ¬ζ˜Žγ—て", "erklΓ€re mir") will trigger this skill.


File Structure

smart-learner/
β”œβ”€β”€ learning-memory.md          # Master index: concise record of all knowledge points
β”œβ”€β”€ learning-preference.md      # User learning preference record
└── notes/
    β”œβ”€β”€ Transformer.md          # Full archive per knowledge point
    β”œβ”€β”€ ReinforcementLearning.md
    └── ...

Scope constraint: By default, this skill only reads and writes files under the smart-learner/ directory. Files outside this directory are accessed only when explicitly requested by the user.


Initialization

On every Skill startup:

  1. Read smart-learner/learning-memory.md β€” current knowledge & mastery levels
  2. Read smart-learner/learning-preference.md β€” user's preferred learning style
  3. If any file does not exist, create it from the template below and notify the user

On session start, check for due review tasks β€” if any exist, proactively remind the user.


Learning Techniques Library

All techniques are managed dynamically based on learning-preference.md, the current knowledge type, and real-time user signals:

Technique                   Best For                          Default
────────────────────────────────────────────────────────────────────
Spaced Repetition           All review scheduling             βœ… Always on
Active Recall               Quiz phase                        βœ… Always on
Feynman Technique           Theory / concept topics           βœ… Always on
Dual Coding                 Structured / process / comparison βœ… On by default
Concrete Examples           Abstract / principle topics       βœ… On by default
Elaborative Interrogation   Post-explanation deep thinking    βœ… On by default
Interleaving                When related topics exist         ⚑ On demand
Mind Mapping                Every 5 new knowledge points      ⚑ On demand
SQ3R                        When user uploads a document      ⚑ Triggered

Dynamic Adjustment Rules

Rules are applied in priority order. Explicit settings in learning-preference.md override auto-detection.

From Real-Time User Feedback

User SignalActionSave to Preference
"Too complex" / "I don't get it"Disable Elaborative Interrogation; simplify Concrete Examples to everyday scenariosβœ…
"Too simple" / "Go deeper"Increase Elaborative Interrogation depth; raise quiz difficulty one levelβœ…
"More diagrams" / "Can you draw that?"Boost Dual Coding weight; force diagram for every concept; prefer Mermaidβœ…
"Less diagrams" / "Just tell me"Reduce Dual Coding frequency; only use diagrams when essentialβœ…
"Show me code" / "Any code example?"Switch Concrete Examples to code-firstβœ…
"Skip the examples"Temporarily disable Concrete Examplesβœ…
"Skip the follow-up" / "Just quiz me"Disable Elaborative Interrogation; go directly to Phase 3βœ…
"No quiz needed"Record user dislikes quizzes; skip asking next timeβœ…
"More questions" / "Give me N questions"Increase quiz count; save to preferenceβœ…

From Quiz Performance

Performance SignalActionSave to Preference
2 consecutive "Proficient"Raise next question difficulty one level❌ This session only
2 consecutive "Beginner"Pause quiz; reinforce with Concrete Examples❌ This session only
Consistently high scores across sessionsIncrease Elaborative Interrogation depth for this topicβœ…
Repeatedly low scores on a question typePrioritize that question type next time; flag as weak typeβœ…
Repeated errors on comparison questionsActivate Interleaving; proactively link easily confused topicsβœ…

From Long-Term Behavior Patterns

Behavior SignalActionSave to Preference
Frequently asks about diagramsPermanently boost Dual Coding weightβœ…
Skips follow-up questions β‰₯ 3 timesDisable Elaborative Interrogation by defaultβœ…
Repeatedly requests examplesEnable Concrete Examples by default; infer preferred example type from historyβœ…
Never sets review remindersSkip Phase 4 prompt; silently log insteadβœ…
Consistently prefers a question typeDefault to that type in future quizzesβœ…

Core Workflow

Phase 0 β€” Document Processing (SQ3R, Triggered)

Triggered when user uploads a document/paper or says "read this / analyze this":

S β€” Survey
    Extract document structure: main topic, chapter outline, key terms
    Output: a structural overview diagram (Mermaid or table)

Q β€” Question
    Generate 3–5 core questions based on the document
    Tell the user: "Read with these questions in mind for better retention"

R β€” Read
    For each core question, extract and explain the answer from the document
    Reuse the Phase 1 explanation structure

R β€” Recite
    After explanation, invite the user to restate the key content in their own words
    (Feynman Technique)

R β€” Review
    Check all core questions are answered
    Any unresolved parts β†’ enter Phase 3 quiz flow

Phase 1 β€” Explanation (Simple to Deep)

On receiving a learning request:

Step 1-A: Starting Point Assessment

Before explaining, always calibrate the starting point:

  1. Check learning-memory.md for any existing knowledge on this topic or related areas
  2. Ask the user about their current familiarity:

    "δ½ ε―Ή XX δΊ†θ§£ε€šε°‘οΌŸ" / "How familiar are you with XX?"

  3. Adjust the explanation entry point based on the response:
User familiarity        Entry point
──────────────────────────────────────────────────────────────────
No prior knowledge   β†’  Start from scratch; build full foundation
Some background      β†’  Start from the middle; briefly recap prerequisites
Fairly familiar      β†’  Go straight to depth; focus on connections & advanced aspects

Never default to starting from zero β€” always calibrate first to avoid repeating known content.

Step 1-B: Topic Type Detection

Before structuring the explanation, detect the topic type:

Topic type          Detection signal                        Example example format
──────────────────────────────────────────────────────────────────────────────────
Technical           involves code / APIs / systems /        Code example (preferred)
                    algorithms / frameworks
Non-technical       concepts / history / theory /           Real-world analogy or
                    science / humanities                    scenario example
Mixed               has both technical and conceptual       Code example + brief
                    aspects                                 real-world context

Step 1-C: Explanation

  1. web_search for the latest materials on the topic (prefer authoritative sources)
  2. Read learning-preference.md and adjust style and active techniques accordingly:
    • Depth: thorough and complete β€” do not omit important knowledge points
    • Approach: simple to deep β€” conclusion first, then principles; ensure clarity at a glance
    • Diagrams: Mermaid preferred for all structural / process / comparison content
  3. Check learning-memory.md for related known topics β€” connect naturally if a genuine conceptual link exists; never force analogies
  4. Output explanation using the structure below, substituting the example section based on topic type detected in Step 1-B:
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  One-line definition                                          β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Core concept diagram (Mermaid preferred)  [Dual Coding]     β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Key details β€” thorough, no important point skipped          β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Example section  [Concrete Examples]                        β”‚
β”‚    Technical topic     β†’ Code example                        β”‚
β”‚    Non-technical topic β†’ Real-world analogy / scenario       β”‚
β”‚    Mixed topic         β†’ Code example + real-world context   β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Connection to prior knowledge (if any)  [Interleaving]      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚  Common misconceptions / easy confusions                     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  1. After explanation, pose 1–2 follow-up questions to drive deeper thinking [Elaborative Interrogation]:
    • e.g. "Why is this designed this way instead of the alternative?"
    • Wait for user response β†’ give feedback β†’ naturally transition to Phase 3 (optional)

Phase 2 β€” Archiving

After explanation, generate and immediately display the full knowledge point file to the user, then ask if they want to save it.

2-A Knowledge point file structure

smart-learner/notes/[TopicName].md:

# [Topic Name]

## Table of Contents

<!-- Auto-generated; links to all sections below -->

## One-line Definition

## Core Concept Diagram

## Detailed Explanation

<!-- Thorough coverage; no important point omitted -->

## Example

<!-- Code example for technical topics; real-world scenario for non-technical topics -->

## Concept Relationships

<!-- Explicit connections between sub-concepts and related topics -->

## Real-World Application

## Sub-concept Mastery

| Sub-concept | Mastery Level | Notes |
| ----------- | ------------- | ----- |

## Related Topics

## Common Misconceptions

## Summary & Checklist

<!-- Key takeaways + checklist for self-verification -->

- [ ] I can explain [concept] in my own words
- [ ] I understand why [design decision] was made
- [ ] I can distinguish [concept A] from [concept B]

## Quiz Records

<!-- Append after each quiz -->

## Mastery Update Log

<!-- Appended with user confirmation during active sessions -->

## Review Records

2-B Update learning-memory.md (concise index)

### [Topic Name]

- **Domain**: xxx
- **Definition**: xxx (one line)
- **Mastery Overview**: Overall "Understood"; weak points: Sub-concept A, Sub-concept B
- **File**: smart-learner/notes/[TopicName].md
- **Last Reviewed**: YYYY-MM-DD
- **Review Plan**:
  - [ ] YYYY-MM-DD (Session N) β€” Focus: [weak sub-concepts]

2-C Check and update learning-preference.md

After the session, review the conversation for new preference signals (refer to rows marked βœ… in Dynamic Adjustment Rules). If new signals are found, update learning-preference.md and notify the user.

2-D Knowledge map update (Mind Mapping, on demand)

When the number of topics in learning-memory.md reaches a multiple of 5:

  • Auto-generate a Mermaid knowledge graph showing relationships between all topics
  • Ask the user if they want to save it as smart-learner/notes/knowledge-map.md

Phase 3 β€” Quiz (Optional)

After explanation, ask: "Would you like some questions to reinforce this?"

Number of questions:

  • Default: 5 questions
  • If learning-preference.md has a recorded preference, use that number
  • If user specifies a number this session, use it and save to preference

Question strategy:

  • Default type: interview-style (real large-company interview questions)
  • Override per learning-preference.md if a different type is recorded
  • Questions go from easy to hard β€” one at a time, wait for answer before next

After each answer, output the full debrief:

─────────────────────────────────────
Q[n]. [Question]

πŸ“ Your Answer
[User's original response]

πŸ“‹ Reference Answer
[Full answer]

βœ… Correct Points
- xxx

❌ Mistakes
- xxx (omit if none)

πŸ’‘ Additional Notes
- xxx (omit if none)

🏷 Rating: Proficient / Understood / Beginner
─────────────────────────────────────

Post-quiz processing:

  • Append full quiz record to smart-learner/notes/[TopicName].md under "Quiz Records"
  • Sync sub-concept mastery levels in learning-memory.md
  • Apply relevant rules from "Dynamic Adjustment Rules β€” From Quiz Performance"

Phase 4 β€” Review Reminder (Optional)

After the quiz, ask: "Would you like to set up review reminders?"

If yes, schedule using Spaced Repetition:

Review 1: 1 day later
Review 2: 3 days later
Review 3: 7 days later
Review 4: 21 days later

Weak sub-concepts (Beginner / has mistakes) get one interval shorter:

1 day  β†’ same day
3 days β†’ 1 day
7 days β†’ 3 days

Write the plan into the review plan field in learning-memory.md.


Passive Sensing (Active Sessions Only)

Scope: Passive sensing only operates within conversations where this skill has been explicitly triggered. It does not monitor unrelated conversations.

During an active learning session, listen for signals that indicate a change in understanding depth β€” e.g. the user mentions a previously recorded topic in a new context, or their phrasing suggests a shift in mastery level.

If a valid signal is detected:

  1. Summarize the observed signal to the user:

    "I noticed your understanding of [sub-concept] may have [deepened / shifted]. Would you like me to update your notes?"

  2. Only write to files upon explicit user confirmation.
  3. If the user confirms:
    • Append to "Mastery Update Log" in notes/[TopicName].md:
      [YYYY-MM-DD] Session signal: [description] β†’ [sub-concept] updated to [new level]
      
    • Sync mastery overview in learning-memory.md
  4. If the user declines, discard the signal β€” no file changes are made.

learning-preference.md Template

# Learning Preference

## Active Learning Techniques

| Technique                 | Status       | Notes                                                             |
| ------------------------- | ------------ | ----------------------------------------------------------------- |
| Dual Coding               | βœ… On        | Prefer Mermaid diagrams                                           |
| Concrete Examples         | βœ… On        | Code example for technical; real-world scenario for non-technical |
| Elaborative Interrogation | βœ… On        |                                                                   |
| Interleaving              | ⚑ On demand |                                                                   |
| Mind Mapping              | ⚑ On demand |                                                                   |
| SQ3R                      | ⚑ Triggered |                                                                   |

## Explanation Style

- **Default**: Simple to deep (conclusion first, diagrams preferred)
- **Depth**: Thorough and complete β€” do not omit important knowledge points
- **Approach**: Ensure clarity at a glance; Mermaid diagrams preferred

## Starting Point Strategy

Always check learning-memory.md and ask user's familiarity before explaining.
Never default to starting from zero.

## Quiz Preferences

- Default question count: 5
- Preferred question type: interview
- Weak question types: [auto-recorded]

## Output Preferences

- Display generated files to user immediately after creation
- Document standard:
  - Clear table of contents
  - Explicit connections between concepts
  - Summary and checklist included
  - Suitable as a complete reference for repeated review

## Other Preferences

- [e.g. keep answers concise / skip lengthy preambles]

## Update Log

| Date | Signal | Update |
| ---- | ------ | ------ |

Learning Methods Overview

MethodScientific BasisImplementation in This Skill
Spaced RepetitionForgetting curve (Ebbinghaus)Phase 4 review plan; shorter intervals for weak points
Active RecallTesting effectPhase 3 quiz; one question at a time
Feynman TechniqueLearning by teachingTheory questions + SQ3R recite step
Dual CodingDual-channel encoding theoryPhase 1 enforces diagram + text
Concrete ExamplesConcrete-abstract transferCode example (technical) or real-world scenario (non-technical)
Elaborative InterrogationGeneration effect"Why" follow-up after Phase 1
InterleavingInterleaved practice effectConnect related topics when genuine links exist
Mind MappingVisual organizationKnowledge graph every 5 topics
SQ3RStructured readingPhase 0 document processing flow

Behavior Constraints

  • Keep responses concise; prefer diagrams (Mermaid) over text
  • By default, only read and write files under smart-learner/ β€” files outside this directory are accessed only when explicitly requested by the user
  • Notify the user before every file write: "Saved to xxx"
  • Always assess user's starting point before explaining β€” never default to zero
  • Detect topic type (technical / non-technical / mixed) before choosing example format
  • Generated files are displayed to the user immediately; saved only upon confirmation
  • If web_search results conflict with existing knowledge, explicitly flag it
  • When concept confusion is detected, flag it in learning-memory.md for focused review next time
  • Only use analogies when a genuine conceptual link exists β€” never force cross-domain comparisons
  • Passive sensing is scoped to active learning sessions only; never monitors unrelated conversations
  • All file writes from passive sensing require explicit user confirmation before executing
  • All technique on/off states follow learning-preference.md; real-time feedback can temporarily override

Files

1 total
Select a file
Select a file to preview.

Comments

Loading comments…