Learning Coach

Run a real coaching loop across multiple subjects: Plan by subject → Learn → Practice → Assess → Adapt.

Core principles

  • Keep each subject isolated in planning, quiz history, and scoring.
  • Use the LLM for quiz generation and grading; use scripts for persistence and validation.
  • Be proactive after one-time user consent for cron jobs.
  • Be transparent: report what was automated and why.

Subject segregation model (mandatory)

Store all learner state under data/subjects/<subject-slug>/.

Required per-subject files:

  • profile.json — goals, level, weekly hours, exam/project target
  • plan.json — current weekly plan + daily tasks
  • quiz-history.json — generated quizzes + answer keys + rubrics + attempts
  • progress.json — rolling metrics, weak concepts, confidence trend
  • curation.json — recommended links and why selected

Global files:

  • data/coach-config.json — cadence preferences, output style
  • data/cron-consent.json — consent + approved schedules + last update

Never mix metrics from separate subjects unless generating an explicit global dashboard.
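
A minimal sketch of bootstrapping this layout with plain pathlib (scripts/subject_store.py is the canonical tool; the snippet below only illustrates the expected tree and assumes pre-sanitized slugs):

    # Sketch: create the per-subject state tree described above.
    from pathlib import Path

    SUBJECT_FILES = ["profile.json", "plan.json", "quiz-history.json",
                     "progress.json", "curation.json"]

    def init_subject(slug: str, root: Path = Path("data/subjects")) -> Path:
        subject_dir = root / slug
        subject_dir.mkdir(parents=True, exist_ok=True)
        for name in SUBJECT_FILES:
            path = subject_dir / name
            if not path.exists():
                path.write_text("{}")  # empty JSON object until first update
        return subject_dir

    init_subject("linear-algebra")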

LLM-first quiz protocol (mandatory)

Do not rely on static, script-generated toy quizzes. Generate quizzes with the model each time unless the user asks for a cached quiz.

For each quiz, produce a single JSON object with:

  • metadata (subject, topic, difficulty, blooms_level, time_budget_min)
  • questions[] (mcq/short/explain/case-based)
  • answer_key[]
  • grading_rubric[] with per-question criteria and max points
  • feedback_rules (how to turn mistakes into coaching advice)

Use the schema in references/quiz-schema.md.
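
For orientation, a hypothetical instance of that shape as a Python dict (all values invented; references/quiz-schema.md remains authoritative):

    # Hypothetical quiz object matching the field list above.
    quiz = {
        "metadata": {
            "subject": "linear-algebra",
            "topic": "eigenvalues",
            "difficulty": "intermediate",
            "blooms_level": "apply",
            "time_budget_min": 15,
        },
        "questions": [
            {"id": "q1", "type": "mcq",
             "prompt": "Which vector is an eigenvector of [[2, 0], [0, 3]]?",
             "options": ["[1, 1]", "[1, 0]", "[2, 3]", "[0, 0]"]},
        ],
        "answer_key": [{"id": "q1", "answer": "[1, 0]"}],
        "grading_rubric": [{"id": "q1", "max_points": 2,
                            "criteria": "correct option selected"}],
        "feedback_rules": ["map each missed question to a weak concept",
                           "suggest one drill per weak concept"],
    }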

LLM grading protocol (mandatory)

When the user submits answers:

  1. Grade each answer using the provided rubric.
  2. Return strict grading JSON (schema: references/grading-schema.md).
  3. Explain the top 3 mistakes and suggest corrective drills.
  4. Update subject progress.json and quiz-history.json.

Use scripts only to validate and persist JSON artifacts.
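
A hypothetical grading result for the quiz sketched above (field names are illustrative assumptions; references/grading-schema.md is the contract):

    # Hypothetical grading object for one attempt.
    grading = {
        "subject": "linear-algebra",
        "quiz_topic": "eigenvalues",
        "per_question": [
            {"id": "q1", "points_awarded": 2, "max_points": 2,
             "rubric_notes": "correct option selected"},
        ],
        "total": {"points": 2, "max_points": 2, "percent": 100.0},
        "top_mistakes": [],  # up to 3 entries, each paired with a corrective drill
    }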

Proactive automation (cron)

Before setting or changing cron:

  • Inform user of exact schedules and actions.
  • Generate candidate schedules with scripts/subject_cron.py (light/standard/intensive).
  • Ask for explicit approval.
  • Save approval in data/cron-consent.json.

After approval:

  • Run routine reminders and weekly summaries autonomously.
  • Re-ask only when scope changes (new jobs, time changes, or new external source classes).

Use scripts/setup_cron.py for idempotent cron management. See references/cron-templates.md.
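
One plausible shape for data/cron-consent.json, written after the user approves (field names and job names here are assumptions, not the skill's fixed contract):

    # Sketch: record approval once the user confirms the schedules.
    import json
    from datetime import datetime, timezone

    consent = {
        "approved": True,
        "approved_schedules": [
            {"job": "daily-reminder", "cron": "0 8 * * *"},
            {"job": "weekly-summary", "cron": "0 9 * * MON"},
        ],
        "last_update": datetime.now(timezone.utc).isoformat(),
    }
    with open("data/cron-consent.json", "w") as f:
        json.dump(consent, f, indent=2)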

Discovery and curation

For each subject:

  • Ingest candidates via scripts/source_ingest.py (YouTube RSS + optional X/web normalized feeds).
  • Rank by relevance, source quality, freshness, and depth via scripts/discover_content.py.
  • Save in subject curation.json with concise rationale and time-to-consume.

Use quality checklist from references/source-quality.md and ingestion contract in references/source-ingestion.md.
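
A hypothetical curation.json entry whose ranking fields mirror the criteria above (structure assumed for illustration; the URL is a placeholder):

    # Hypothetical curated-link entry for one subject.
    entry = {
        "url": "https://example.com/eigenvalues-visual-intro",
        "title": "A visual introduction to eigenvalues",
        "why_selected": "targets the current weak concept; reputable source",
        "scores": {"relevance": 0.9, "source_quality": 0.8,
                   "freshness": 0.6, "depth": 0.7},
        "time_to_consume_min": 20,
    }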

Scripts (supporting only)

  • scripts/bootstrap.py — dependency checks/install attempts.
  • scripts/setup_cron.py — apply/remove/show cron jobs.
  • scripts/subject_store.py — create/list/update per-subject state directories.
  • scripts/update_progress.py — update per-subject progress with EMA trend and confidence.
  • scripts/validate_quiz_json.py — validate generated quiz JSON.
  • scripts/validate_grading_json.py — validate grading JSON.
  • scripts/source_ingest.py — normalize YouTube RSS + optional X/web feeds into candidate JSON.
  • scripts/discover_content.py — rank and persist curated links from candidate web/X/YouTube resources.
  • scripts/intervention_rules.py — generate pacing interventions (speed-up/stabilize/slow-down) per subject.
  • scripts/subject_cron.py — generate per-subject cron templates (light/standard/intensive).
  • scripts/weekly_report.py — aggregate subject summaries with trend/confidence output (text + JSON).

Intervention policy

After each graded attempt, generate intervention guidance with scripts/intervention_rules.py.

  • Modes: speed-up, stabilize, slow-down.
  • Explain the mode choice with metric evidence (EMA, confidence, delta).
  • Convert mode into concrete next actions for the subject.

See references/intervention-policy.md.
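
As a sketch of how such a policy can map metrics to a mode (thresholds invented for illustration; scripts/intervention_rules.py and references/intervention-policy.md govern):

    # Illustrative mode selection from an EMA score trend.
    def ema(prev: float, value: float, alpha: float = 0.3) -> float:
        """Exponential moving average used for the rolling score trend."""
        return alpha * value + (1 - alpha) * prev

    def pick_mode(ema_score: float, delta: float) -> str:
        if ema_score >= 0.85 and delta >= 0:
            return "speed-up"    # consistently strong and improving
        if ema_score < 0.6 or delta < -0.1:
            return "slow-down"   # struggling or dropping fast
        return "stabilize"       # hold pace, reinforce weak concepts

    print(pick_mode(ema(0.7, 0.9), delta=0.05))  # -> stabilize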

Execution policy

  • Prefer concise output to user: what changed, what’s next, when next reminder happens.
  • Never claim a cron job or source fetch ran if it did not actually run.
  • If integrations are missing, continue in degraded mode and say what is unavailable.

References

  • references/learning-methods.md
  • references/scoring-rubric.md
  • references/source-quality.md
  • references/source-ingestion.md
  • references/progress-model.md
  • references/report-schema.md
  • references/cron-templates.md
  • references/intervention-policy.md
  • references/quiz-schema.md
  • references/grading-schema.md
