Install

openclaw skills install notebooklm-skill-factory

Orchestrate NotebookLM research into SKILL.md generation and Claude Code validation in a single automated pipeline. Use when the user asks to create a new Claude Code skill from source materials (PDFs, articles, YouTube videos, URLs), batch-produce skills, or convert domain knowledge into reusable skills. Trigger phrases: "create a skill for X", "make me a skill that does X", "turn this into a skill", "generate a skill from these sources", "build a skill factory pipeline". NOT for manual skill editing (use skill-creator) or NotebookLM-only tasks (use notebooklm directly).

Orchestrate the full pipeline: NotebookLM source ingestion → structured SKILL.md extraction → write to skills directory → validate → test → iterate.
Before the pipeline, verify NotebookLM is authenticated:
notebooklm status || notebooklm login
If notebooklm login opens a browser, tell the user to complete Google OAuth and then press ENTER in their terminal.
Create a dedicated notebook for this skill:
notebooklm create "Skill: {skill-name}" --json
Parse the notebook id from JSON output.
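A minimal parsing sketch for the step above. The 'id' key is an assumption about the `--json` payload; adjust it to the real schema if it differs.

```python
import json

def parse_notebook_id(json_output: str) -> str:
    """Extract the notebook id from `notebooklm create ... --json` output.
    Assumes the payload carries an 'id' key (adjust to the actual schema)."""
    return json.loads(json_output)["id"]

# Hypothetical payload for illustration:
sample = '{"id": "nb_123", "title": "Skill: pdf-summarizer"}'
notebook_id = parse_notebook_id(sample)
```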
Collect sources from user. Ask:
"What sources should this skill be based on? Give me URLs, local file paths, YouTube links — anything high-quality and specific to this domain."
If the user doesn't have sources ready, suggest:
Add sources (use -n <notebook_id> for all subsequent commands):
notebooklm source add "https://..." -n <id> --json
notebooklm source add ./local-file.pdf -n <id> --json
Capture each source_id.
Wait for indexing — spawn a background agent to avoid blocking:
notebooklm source wait <source_id> -n <id> --timeout 600
For multiple sources, wait on them all. If any source fails indexing (exit code 1), log a warning but continue with the remaining sources.
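A sketch of the wait-all loop, with the CLI call injectable so the failure-tolerant logic can be tested without the CLI. The exit-code convention (1 = indexing failed) is taken from the note above.

```python
import subprocess

def wait_for_sources(source_ids, notebook_id, timeout=600, runner=subprocess.run):
    """Wait on each source's indexing; collect failures instead of aborting.
    `runner` defaults to subprocess.run but is injectable for tests."""
    ready, failed = [], []
    for sid in source_ids:
        proc = runner(
            ["notebooklm", "source", "wait", sid,
             "-n", notebook_id, "--timeout", str(timeout)]
        )
        if proc.returncode == 0:
            ready.append(sid)
        else:
            failed.append(sid)  # exit code 1: indexing failed; warn and continue
    return ready, failed
```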
Load the extraction prompt template:
Read references/skill-extraction-prompt.md. Replace {USER_INTENT} with the user's original request.
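The template substitution above can be sketched as follows (the path and the {USER_INTENT} placeholder name are as given in the step):

```python
from pathlib import Path

def build_extraction_prompt(template_path: str, user_intent: str) -> str:
    """Load the extraction template and splice in the user's original request."""
    template = Path(template_path).read_text()
    return template.replace("{USER_INTENT}", user_intent)
```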
Query NotebookLM against all indexed sources:
notebooklm ask "{extraction_prompt}" -n <id> --json
Parse the output into a clean SKILL.md:
echo '<json_output>' | python3 scripts/parse-skill-output.py > /tmp/skill-output.md
Or save the JSON to a temp file first, then pipe.
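scripts/parse-skill-output.py is the canonical parser for this step. Purely as an illustration of what such a parser might do, here is a fence-stripping extractor; the 'answer' key is an assumption about the `ask --json` response shape.

```python
import json
import re

def extract_skill_md(json_output: str) -> str:
    """Pull the SKILL.md body out of an `ask --json` response. Assumes the
    text sits under an 'answer' key and may be wrapped in a markdown fence."""
    answer = json.loads(json_output)["answer"]
    fence = "`" * 3  # built programmatically so this block nests cleanly in docs
    match = re.search(fence + r"(?:markdown)?\n(.*?)" + fence, answer, re.DOTALL)
    return match.group(1) if match else answer
```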
Validate basic structure:

- `---` YAML frontmatter delimiters
- name: and description: fields

Create the skill directory:
mkdir -p ~/.claude/skills/{skill-name}
Write the SKILL.md:
Move the parsed output to ~/.claude/skills/{skill-name}/SKILL.md
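The basic-structure check (frontmatter delimiters plus name: and description: fields) can be sketched as:

```python
def validate_skill_md(text: str):
    """Check for YAML frontmatter delimited by '---' lines containing
    name: and description: fields. Returns a list of problems (empty = pass)."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return ["missing opening '---' frontmatter delimiter"]
    try:
        end = lines[1:].index("---") + 1  # closing delimiter
    except ValueError:
        return ["missing closing '---' frontmatter delimiter"]
    front = lines[1:end]
    for field in ("name:", "description:"):
        if not any(line.strip().startswith(field) for line in front):
            problems.append(f"frontmatter lacks a {field} field")
    return problems
```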
Run skill-creator validation (via Skill tool):
Invoke skill-creator with: "Validate the skill at ~/.claude/skills/{skill-name}/ and fix any issues found. Run package_skill.py to check for errors."
Run skill-vetter security check (via Skill tool):
Invoke skill-vetter with: "Review the skill at ~/.claude/skills/{skill-name}/ for security issues."
Test the skill by invoking it with a realistic prompt matching its intended use case.
Collect failures:
If issues found, iterate:
notebooklm ask "The SKILL.md generated earlier has this issue: {failure}. Based on the sources, rewrite it to fix this. Output complete corrected SKILL.md in a markdown code block." -n <id> --json
Repeat until the skill passes a real usage test.
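The iterate-until-pass loop above can be sketched with injectable hooks (the `ask`/`validate` names and the round cap are illustrative, not part of the pipeline's actual interface):

```python
def iterate_until_valid(ask, validate, max_rounds=3):
    """Re-query NotebookLM with the failure description until validation passes.
    `ask(feedback)` returns a candidate SKILL.md (feedback=None on the first
    pass); `validate(text)` returns a list of problems (empty means pass)."""
    text = ask(None)
    for _ in range(max_rounds):
        problems = validate(text)
        if not problems:
            return text
        text = ask("; ".join(problems))
    raise RuntimeError("skill still failing after max_rounds iterations")
```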
| Situation | Action |
|---|---|
| NotebookLM auth expired | Run notebooklm login, retry |
| Source indexing failed | Skip that source, warn user, continue |
| Extraction prompt returned empty | Check if sources are indexed (status=ready), re-query with --new flag |
| Generated SKILL.md fails validation | Send failure description back to NB for rewrite (Phase 4 iteration) |
| NB rate limited | Wait 5-10 min, retry once |
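The rate-limit row above can be handled with a small backoff wrapper. The 5-minute default delay and single retry mirror the table; treating the failure as a RuntimeError is an assumption for illustration.

```python
import time

def with_retry(fn, delay_seconds=300, retries=1):
    """Call fn; on failure, wait and retry (once, per the rate-limit policy)."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except RuntimeError:
            if attempt == retries:
                raise
            time.sleep(delay_seconds)
```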
Remind users:
When the skill passes testing, report:
- Skill location (~/.claude/skills/{name}/)