Notebooklm Skill Factory
AdvisoryAudited by Static analysis on May 3, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
A flawed or source-manipulated generated skill could be installed and exercised in the user's Claude environment before the user has reviewed it.
The workflow writes and overwrites active Claude skill files and then invokes the generated skill, but does not require an explicit user approval or isolated staging step before those high-impact actions.
Move the parsed output to `~/.claude/skills/{skill-name}/SKILL.md` ... Parse again and overwrite SKILL.md ... Test the skill by invoking itStage generated skills outside the active skills directory, show the full diff/content, require explicit approval before installation or testing, sanitize the skill name/path, and keep a rollback copy.
A malicious or prompt-injected source document could influence the generated skill's future instructions and behavior.
Content from external or local sources is used to generate persistent SKILL.md instructions. The artifacts do not explicitly tell the agent to treat instructions embedded in sources as untrusted before installing the resulting skill.
Collect sources from user... URLs, local file paths, YouTube links ... `notebooklm ask "{extraction_prompt}" -n <id> --json` ... Move the parsed output to `~/.claude/skills/{skill-name}/SKILL.md`Add source-injection defenses to the extraction prompt, require human review before installing, and limit generated skills to domain facts and procedures rather than instructions copied from sources.
The pipeline will operate using the user's authenticated NotebookLM account.
The skill uses the user's NotebookLM/Google authentication. This is expected for the NotebookLM integration, but it is sensitive account access.
`notebooklm status || notebooklm login` ... tell user to complete Google OAuth
Use the intended Google/NotebookLM account, understand what that account can access, and revoke or refresh access if no longer needed.
Private documents supplied as sources may be uploaded to or processed by NotebookLM.
The skill sends user-selected URLs and local files to NotebookLM for indexing. This is purpose-aligned, but it crosses a provider boundary.
`notebooklm source add "https://..." -n <id> --json` ... `notebooklm source add ./local-file.pdf -n <id> --json`
Only provide files and URLs that are appropriate to share with NotebookLM, and avoid confidential material unless the account and provider policy are acceptable.
Malformed or adversarial JSON could cause command parsing problems if handled unsafely.
The example pipes provider/model JSON through a shell command. If an agent literally interpolates untrusted JSON into this command, shell quoting could be fragile, though the artifact also suggests using a temp file.
`echo '<json_output>' | python3 scripts/parse-skill-output.py > /tmp/skill-output.md`
Prefer saving the NotebookLM JSON to a temp file and passing the filename to the parser, rather than embedding raw provider output in a shell command.
