Notebooklm Skill Factory

PassAudited by VirusTotal on May 3, 2026.

Overview

Type: OpenClaw Skill Name: notebooklm-skill-factory Version: 0.1.0 The skill provides an automated pipeline for generating new Claude Code skills by orchestrating NotebookLM research and validation. It uses standard CLI interactions with the notebooklm tool, a safe Python script (scripts/parse-skill-output.py) for text parsing, and includes explicit security checks by calling the 'skill-vetter' tool on generated content. No indicators of malicious intent, data exfiltration, or unauthorized execution were found.

Findings (0)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A flawed or source-manipulated generated skill could be installed and exercised in the user's Claude environment before the user has reviewed it.

Why it was flagged

The workflow writes and overwrites active Claude skill files and then invokes the generated skill, but does not require an explicit user approval or isolated staging step before those high-impact actions.

Skill content
Move the parsed output to `~/.claude/skills/{skill-name}/SKILL.md` ... Parse again and overwrite SKILL.md ... Test the skill by invoking it
Recommendation

Stage generated skills outside the active skills directory, show the full diff/content, require explicit approval before installation or testing, sanitize the skill name/path, and keep a rollback copy.

What this means

A malicious or prompt-injected source document could influence the generated skill's future instructions and behavior.

Why it was flagged

Content from external or local sources is used to generate persistent SKILL.md instructions. The artifacts do not explicitly tell the agent to treat instructions embedded in sources as untrusted before installing the resulting skill.

Skill content
Collect sources from user... URLs, local file paths, YouTube links ... `notebooklm ask "{extraction_prompt}" -n <id> --json` ... Move the parsed output to `~/.claude/skills/{skill-name}/SKILL.md`
Recommendation

Add source-injection defenses to the extraction prompt, require human review before installing, and limit generated skills to domain facts and procedures rather than instructions copied from sources.

What this means

The pipeline will operate using the user's authenticated NotebookLM account.

Why it was flagged

The skill uses the user's NotebookLM/Google authentication. This is expected for the NotebookLM integration, but it is sensitive account access.

Skill content
`notebooklm status || notebooklm login` ... tell user to complete Google OAuth
Recommendation

Use the intended Google/NotebookLM account, understand what that account can access, and revoke or refresh access if no longer needed.

What this means

Private documents supplied as sources may be uploaded to or processed by NotebookLM.

Why it was flagged

The skill sends user-selected URLs and local files to NotebookLM for indexing. This is purpose-aligned, but it crosses a provider boundary.

Skill content
`notebooklm source add "https://..." -n <id> --json` ... `notebooklm source add ./local-file.pdf -n <id> --json`
Recommendation

Only provide files and URLs that are appropriate to share with NotebookLM, and avoid confidential material unless the account and provider policy are acceptable.

What this means

Malformed or adversarial JSON could cause command parsing problems if handled unsafely.

Why it was flagged

The example pipes provider/model JSON through a shell command. If an agent literally interpolates untrusted JSON into this command, shell quoting could be fragile, though the artifact also suggests using a temp file.

Skill content
`echo '<json_output>' | python3 scripts/parse-skill-output.py > /tmp/skill-output.md`
Recommendation

Prefer saving the NotebookLM JSON to a temp file and passing the filename to the parser, rather than embedding raw provider output in a shell command.