Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Deep Research → NotebookLM Orchestrator

v1.0.0

End-to-end orchestration: Deep Research → NotebookLM content generation. Chains gemini-deep-research and notebooklm-content-creation skills. Supports choosin...

0 stars · 87 downloads · 0 current · 0 all-time
by Skywalker326 (@skywalker-lili)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for skywalker-lili/jclaw-deep-research-to-notebooklm.

Prompt preview: Install & Setup
Install the skill "Deep Research → NotebookLM Orchestrator" (skywalker-lili/jclaw-deep-research-to-notebooklm) from ClawHub.
Skill page: https://clawhub.ai/skywalker-lili/jclaw-deep-research-to-notebooklm
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install jclaw-deep-research-to-notebooklm

ClawHub CLI


npx clawhub@latest install jclaw-deep-research-to-notebooklm
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (high confidence)

Purpose & Capability
The skill claims to orchestrate gemini-deep-research → notebooklm-content-creation, which is reasonable. However, the runtime instructions require additional tooling (the openclaw CLI, node, python3, and the gemini extension under $HOME/.gemini) and make assumptions about the filesystem user (e.g., /home/node/ObsidianVault). None of these binaries, paths, or credentials are declared in the skill metadata, so the required capabilities are under-specified and potentially incompatible with typical agent environments.
Instruction Scope
SKILL.md tells the agent to create /tmp task directories and write/execute shell scripts that poll services, notify via Discord, trigger the agent via 'openclaw agent', and save/download artifacts to ~/ObsidianVault. It also embeds a placeholder CHAT_ID to be injected. These instructions read/write local files, call external CLIs, and send messages — scope goes beyond a simple high-level orchestration note and prescribes concrete filesystem and network actions that are not declared or sandboxed.
Install Mechanism
No install spec (instruction-only), which is lower-risk in that nothing is pre-downloaded by an installer. However, the instructions explicitly create and execute scripts and expect installed components (node, gemini extension, openclaw CLI). Because the skill writes executable scripts to disk and expects to run them, the lack of an install manifest that documents prerequisites is a meaningful omission.
Credentials
The runtime expects access to Discord channel IDs/tokens via openclaw CLI, the gemini-deep-research extension files ($HOME/.gemini/extensions/gemini-deep-research), and ability to write into user home directories (e.g., /home/node/ObsidianVault). The skill declares no required environment variables, binaries, or config paths to justify these accesses. That mismatch means the skill may silently fail or — worse — attempt network/credentialed actions without the user explicitly consenting or knowing which secrets are used.
Persistence & Privilege
The instructions create persistent artifacts (task directories and polling scripts) and urge background polling that triggers agent invocations. While the skill is not marked always:true, it prescribes creating long-running or recurring processes that can persist on disk and re-trigger the agent; that increases blast radius and should be explicitly declared and consented to.
What to consider before installing
This skill is an orchestration recipe, but its SKILL.md assumes many local tools, credentials, and filesystem locations that are not declared in the metadata. Before installing or enabling it, verify the following:

  • Do you have the gemini-deep-research and notebooklm-content-creation skills/extensions installed, and where exactly? (The script expects $HOME/.gemini/extensions/....)
  • Is the openclaw CLI installed and configured to send Discord messages and trigger agents? If not, the scripts will fail, or may attempt to perform networked actions once provided credentials.
  • Are you comfortable with the skill writing executable scripts to /tmp and to your home directory (~/ObsidianVault), and running background polling for up to 40 minutes per artifact?
  • Confirm which user account the agent runs as (the doc references /home/node) and whether that account has access to the intended folders.
  • Ideally, ask the skill author to: a) declare required binaries and env vars (openclaw, node, python3, any Discord tokens); b) avoid hard-coded home paths, or make saves opt-in with explicit path confirmation; and c) remove or clearly document the background-polling scripts and agent triggers so you can decide whether to run them manually or in a sandbox.

If you cannot verify these points, treat the skill cautiously or run it in an isolated environment.


latest: vk979bry94xehasxyw440319yjh83xdg0
87 downloads · 0 stars · 1 version
Updated 4w ago
v1.0.0 · MIT-0

Deep Research → NotebookLM Orchestrator

End-to-end workflow: Run Deep Research, then automatically feed the report into NotebookLM to generate Audio/Video/Infographics/Slides.

Dependencies (must be installed):

  • gemini-deep-research skill — for running Deep Research via Gemini CLI
  • notebooklm-content-creation skill — for NotebookLM notebook/source/audio/video/infographic/slides management

Workflow Overview

User requests "research + generate content"
    ↓
Agent confirms parameters (one message)
    ↓
Start Deep Research (background polling, 5 min interval, max 20 min)
    ↓
DR completes → save report
    ↓
Notify user: "DR done, starting NotebookLM..."
    ↓
Create NotebookLM notebook + upload report
    ↓
Generate selected artifacts in parallel (Audio/Video/Infographics/Slides)
    ↓
Background polling for each artifact (5 min interval, max 40 min)
    ↓
Each artifact completes → notify user (with download if requested)
    ↓
All done → final summary notification

Step 1 — Pre-Flight Confirmation (One Message, All Parameters)

Confirm in the user's current session language. Example (original is Chinese; shown here in English):

Please confirm the Deep Research → NotebookLM parameters:

① Research topic:
   (sent verbatim to Gemini Deep Research)

② Report format:
   - Comprehensive Research Report (recommended)
   - Executive Brief (condensed)
   - Technical Deep Dive

③ NotebookLM artifacts (multi-select):
   - ☐ Audio Overview (podcast) ← selected by default
   - ☐ Video Overview
   - ☐ Infographics
   - ☐ Slides

④ Artifact parameters:
   - Audio format: deep_dive / brief / critique / debate (default: deep_dive)
   - Audio length: short / default / long (default: default)
   - Video format: explainer / brief / cinematic (default: explainer)
   - Slides format: detailed_deck / presenter_slides (default: detailed_deck)
   - Language: zh-CN / en / ... (default: zh-CN)

⑤ Download artifacts locally?
   - Yes → save to ~/ObsidianVault/Default/NotebookLM/<notebook-name>/
   - No ← default

⑥ Polling settings:
   - DR polling: every 5 minutes, max 4 polls = 20 minutes
   - NotebookLM polling: every 5 minutes, max 8 polls = 40 minutes

Reply with any changes, or "confirm" to start with the defaults.

Defaults: Audio only (deep_dive, default length, zh-CN), no download.


Step 2 — Start Deep Research

Use the gemini-deep-research skill to start the research.

2.1 Create Task Directory

mkdir -p /tmp/deep-research-to-notebooklm/<YYMMDD-HHmm>_<slug>/

Write task.json:

{
  "topic": "<user's research topic>",
  "dr_format": "Comprehensive Research Report",
  "dr_output_path": "/home/node/ObsidianVault/Default/DeepResearch/<YYYYMMDD>-<slug>.md",
  "artifacts": ["audio"],
  "artifact_params": {
    "audio": { "format": "deep_dive", "length": "default" },
    "video": { "format": "explainer" },
    "slides": { "format": "detailed_deck", "length": "default" }
  },
  "language": "zh-CN",
  "download": false,
  "dr_poll_interval": 300,
  "dr_max_polls": 4,
  "nlm_poll_interval": 300,
  "nlm_max_polls": 8,
  "created_at": "<ISO timestamp>"
}

2.2 Launch DR

Start Deep Research using the shell pipe method (see gemini-deep-research SKILL.md Step 3 Method B). Save the researchId to task.json.
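The shell-pipe launch presumably mirrors the JSON-RPC framing that `poll_status` uses below. A hypothetical sketch of the two messages: only the `research_status` tool is confirmed by this document, so the start tool's name and arguments (`research_start`, `topic`) are placeholders — check gemini-deep-research SKILL.md Step 3 Method B for the real ones.

```shell
# Hypothetical sketch of the JSON-RPC lines behind the shell-pipe launch.
# 'research_start' and its 'topic' argument are placeholders, not confirmed API.
TOPIC="example topic"
MSGS=$(printf '%s\n' \
  '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"launch","version":"1.0"}}}' \
  "{\"jsonrpc\":\"2.0\",\"id\":2,\"method\":\"tools/call\",\"params\":{\"name\":\"research_start\",\"arguments\":{\"topic\":\"$TOPIC\"}}}")
echo "$MSGS"
# In the real flow these lines are piped into:
#   node "$HOME/.gemini/extensions/gemini-deep-research/dist/index.js"
```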

2.3 Write DR Poll Script

Write <task-dir>/dr-poll.sh. This script:

  • Polls DR status every 5 minutes, max 4 polls (20 min)
  • On completion: saves report, notifies user, triggers agent to start NotebookLM
  • On timeout/failure: notifies user directly
#!/bin/bash
set -euo pipefail
TASK_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$TASK_DIR"

[[ -f dr-done.flag ]] && echo "Already complete." && exit 0

RESEARCH_ID=$(python3 -c "import json; print(json.load(open('task.json'))['researchId'])")
OUTPUT_PATH=$(python3 -c "import json; print(json.load(open('task.json'))['dr_output_path'])")
TOPIC=$(python3 -c "import json; print(json.load(open('task.json'))['topic'])")
CHAT_ID="INJECT_CHAT_ID"  # ← Agent: replace with current Discord channel ID
GEMINI_EXT="$HOME/.gemini/extensions/gemini-deep-research"

log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*" | tee -a dr-poll.log; }

notify_user() {
  local message="$1"
  openclaw message send --channel discord --target "$CHAT_ID" -m "$message" 2>/dev/null || log "WARNING: notification failed"
}

trigger_agent() {
  local message="$1"
  openclaw agent --channel discord --message "$message" --deliver --timeout 600 2>/dev/null || {
    log "WARNING: agent trigger failed, falling back to direct message"
    notify_user "$message"
  }
}

poll_status() {
  (echo '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"poll","version":"1.0"}}}'
   sleep 0.5
   echo "{\"jsonrpc\":\"2.0\",\"id\":2,\"method\":\"tools/call\",\"params\":{\"name\":\"research_status\",\"arguments\":{\"id\":\"$RESEARCH_ID\"}}}"
  ) | timeout 60 node "$GEMINI_EXT/dist/index.js" 2>/dev/null | grep -v "MCP server running" | tail -1
}

POLL_COUNT=0
MAX_POLLS=4
INTERVAL=300

while true; do
  POLL_COUNT=$((POLL_COUNT + 1))
  [[ $POLL_COUNT -gt $MAX_POLLS ]] && { log "TIMEOUT"; notify_user "❌ Deep Research timed out (20 minutes)."; exit 1; }

  log "[Poll $POLL_COUNT/$MAX_POLLS] Checking DR status..."
  RESULT=$(poll_status) || true
  echo "$RESULT" >> dr-poll.log

  STATUS=$(echo "$RESULT" | python3 -c "
import sys, json
try:
    d = json.loads(sys.stdin.read())
    text = d['result']['content'][0]['text']
    obj = json.loads(text)
    print(obj.get('status', 'unknown'))
except: print('parse_error')" 2>/dev/null)

  log "Status: $STATUS"

  if [[ "$STATUS" == "completed" ]]; then
    log "DR completed. Extracting report..."
    echo "$RESULT" | python3 -c "
import sys, json
d = json.loads(sys.stdin.read())
text = d['result']['content'][0]['text']
obj = json.loads(text)
report = obj.get('outputs', [{}])[0].get('text', '')
if report:
    with open('$OUTPUT_PATH', 'w') as f: f.write(report)
    print(f'Saved: {len(report)} chars')
else:
    print('No report found')
" >> dr-poll.log 2>&1

    if [[ -s "$OUTPUT_PATH" ]]; then
      SIZE=$(du -h "$OUTPUT_PATH" | cut -f1)
      log "Report saved: $OUTPUT_PATH ($SIZE)"
      touch dr-done.flag
      notify_user "✅ Deep Research complete! Report saved ($SIZE). Starting NotebookLM..."

      # Read artifacts config from task.json
      ARTIFACTS=$(python3 -c "import json; print(','.join(json.load(open('task.json'))['artifacts']))")
      ARTIFACT_PARAMS=$(python3 -c "
import json
t = json.load(open('task.json'))
params = t.get('artifact_params', {})
lang = t.get('language', 'zh-CN')
lines = []
for art in t['artifacts']:
    p = params.get(art, {})
    lines.append(f'- {art}: {json.dumps(p, ensure_ascii=False)}')
print('\n'.join(lines))
")
      trigger_agent "✅ Deep Research complete. Please start the NotebookLM workflow:
- Report path: $OUTPUT_PATH
- Notebook name: $TOPIC
- Artifact types: $ARTIFACTS
- Per-artifact parameters:
$ARTIFACT_PARAMS
- Language: $(python3 -c "import json; print(json.load(open('task.json'))['language'])")
- Download: $(python3 -c "import json; print('yes' if json.load(open('task.json'))['download'] else 'no')")
Please create the notebook, upload the report, and generate the selected artifacts."
    else
      notify_user "⚠️ Deep Research finished, but saving the report failed."
    fi
    exit 0
  fi

  [[ "$STATUS" == "failed" ]] && { log "Failed"; notify_user "❌ Deep Research 失败。"; exit 1; }

  log "Still in_progress. Sleeping ${INTERVAL}s..."
  sleep "$INTERVAL"
done

2.4 Launch DR Poll

cd /tmp/deep-research-to-notebooklm/<task-dir>/
nohup bash dr-poll.sh > /dev/null 2>&1 &
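Because `nohup ... &` detaches the poller silently, it is worth verifying that it actually started. A small check pattern (the `demo` directory stands in for the real `<task-dir>`):

```shell
# Sketch: confirm the detached poller is alive and inspect its log.
TASK_DIR=/tmp/deep-research-to-notebooklm/demo
mkdir -p "$TASK_DIR"
: >> "$TASK_DIR/dr-poll.log"   # the real script creates this on its first log() call
if pgrep -f dr-poll.sh > /dev/null; then
  echo "poller running"
else
  echo "poller not running"    # check dr-poll.log and relaunch if unexpected
fi
tail -n 20 "$TASK_DIR/dr-poll.log"
```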

Step 3 — NotebookLM Phase (Triggered by DR Poll)

When the agent receives the trigger message from DR poll, execute the NotebookLM workflow. This is triggered mode — skip all user confirmations.

3.1 Create Notebook

nlm notebook create "<TOPIC from trigger>"

Capture notebook ID.

3.2 Upload Report

nlm source add <notebook_id> --file "<report_path from trigger>" --wait

3.3 Generate Artifacts in Parallel

For each selected artifact type, create and capture artifact ID:

Audio:

nlm audio create <notebook_id> --format <format> --length <length> --language <lang> --confirm

Video:

nlm video create <notebook_id> --format <format> --language <lang> --confirm

Infographics:

nlm infographic create <notebook_id> --detail detailed --orientation landscape --language <lang> --confirm

Slides:

nlm slides create <notebook_id> --format <format> --length <length> --language <lang> --confirm
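The four create calls above can be derived mechanically from `task.json` rather than hand-typed. A sketch, assuming the task.json layout from Step 2.1; the commands are echoed rather than executed, and `<notebook_id>` is a placeholder:

```shell
# Sketch: build the nlm create commands from task.json (echoed, not executed).
cd "$(mktemp -d)"
cat > task.json <<'EOF'
{"artifacts": ["audio", "video"],
 "artifact_params": {"audio": {"format": "deep_dive", "length": "default"},
                     "video": {"format": "explainer"}},
 "language": "zh-CN"}
EOF
CMDS=$(python3 - <<'PYEOF'
import json
t = json.load(open('task.json'))
lang = t.get('language', 'zh-CN')
params = t.get('artifact_params', {})
for art in t['artifacts']:
    p = params.get(art, {})
    if art == 'audio':
        print(f"nlm audio create <notebook_id> --format {p['format']} --length {p['length']} --language {lang} --confirm")
    elif art == 'video':
        print(f"nlm video create <notebook_id> --format {p['format']} --language {lang} --confirm")
    elif art == 'infographic':
        print(f"nlm infographic create <notebook_id> --detail detailed --orientation landscape --language {lang} --confirm")
    elif art == 'slides':
        print(f"nlm slides create <notebook_id> --format {p['format']} --length {p.get('length', 'default')} --language {lang} --confirm")
PYEOF
)
echo "$CMDS"
```

Generating the command list once keeps the create step and the later `artifacts.json` bookkeeping in sync with what the user actually confirmed.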

3.4 Write NotebookLM Poll Script

Write <task-dir>/nlm-poll.sh. This script tracks multiple artifacts in parallel.

#!/bin/bash
set -euo pipefail
TASK_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$TASK_DIR"

[[ -f nlm-done.flag ]] && echo "Already complete." && exit 0

NOTEBOOK_ID=$(python3 -c "import json; print(json.load(open('task.json'))['notebook_id'])")
NOTEBOOK_NAME=$(python3 -c "import json; print(json.load(open('task.json'))['topic'])")
CHAT_ID="INJECT_CHAT_ID"  # ← Agent: replace with current Discord channel ID
DOWNLOAD=$(python3 -c "import json; print(json.load(open('task.json'))['download'])")
OUTPUT_DIR="$HOME/ObsidianVault/Default/NotebookLM/$(echo $NOTEBOOK_NAME | tr ' ' '-')"

log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*" | tee -a nlm-poll.log; }

notify_user() {
  local message="$1"
  openclaw message send --channel discord --target "$CHAT_ID" -m "$message" 2>/dev/null || log "WARNING: notification failed"
}

POLL_COUNT=0
MAX_POLLS=8
INTERVAL=300

# Read artifact IDs from artifacts.json (written by agent at setup time)
# Format: [{"type":"audio","id":"abc123","download_cmd":"nlm download audio"}, ...]
TOTAL=$(python3 -c "import json; print(len(json.load(open('artifacts.json'))))")

while true; do
  POLL_COUNT=$((POLL_COUNT + 1))
  [[ $POLL_COUNT -gt $MAX_POLLS ]] && {
    log "TIMEOUT after $MAX_POLLS polls"
    # Notify about remaining incomplete artifacts
    INCOMPLETE=$(python3 -c "
import json
arts = json.load(open('artifacts.json'))
done = set()
if __import__('os').path.exists('completed.json'):
    done = set(json.load(open('completed.json')))
remaining = [a['type'] for a in arts if a['type'] not in done]
print(', '.join(remaining) if remaining else 'none')
")
    notify_user "⏰ NotebookLM artifact generation timed out (40 minutes). Incomplete: $INCOMPLETE"
    exit 1
  }

  log "[Poll $POLL_COUNT/$MAX_POLLS] Checking status..."
  STATUS_OUTPUT=$(nlm studio status "$NOTEBOOK_ID" 2>&1) || true
  echo "$STATUS_OUTPUT" >> nlm-poll.log

  # Check each artifact: parse the captured status JSON and emit one
  # COMPLETED:<type>:<id> or FAILED:<type> line per newly finished artifact
  while IFS= read -r line; do
    if [[ "$line" == COMPLETED:* ]]; then
      ART_TYPE=$(echo "$line" | cut -d: -f2)
      ART_ID=$(echo "$line" | cut -d: -f3)

      # Download if requested
      if [[ "$DOWNLOAD" == "True" ]]; then
        mkdir -p "$OUTPUT_DIR"
        DOWNLOAD_CMD=$(python3 -c "import json; arts=json.load(open('artifacts.json')); print([a['download_cmd'] for a in arts if a['type']=='$ART_TYPE'][0])")
        EXT=$(python3 -c "import json; arts=json.load(open('artifacts.json')); print([a.get('download_ext', 'mp4') for a in arts if a['type']=='$ART_TYPE'][0])")
        OUTPUT_FILE="$OUTPUT_DIR/$ART_TYPE-$(date +%Y%m%d).$EXT"
        eval "$DOWNLOAD_CMD $NOTEBOOK_ID --id $ART_ID -o \"$OUTPUT_FILE\"" >> nlm-poll.log 2>&1 || true
        SIZE=$(du -h "$OUTPUT_FILE" 2>/dev/null | cut -f1 || echo "?")
        notify_user "✅ $ART_TYPE generated! Downloaded ($SIZE): $OUTPUT_FILE"
      else
        notify_user "✅ $ART_TYPE generated! (Not downloaded; view it in NotebookLM.)"
      fi
    elif [[ "$line" == FAILED:* ]]; then
      ART_TYPE=$(echo "$line" | cut -d: -f2)
      notify_user "❌ $ART_TYPE generation failed."
    else
      continue
    fi

    # Mark finished (failures too, so they are not re-polled)
    python3 -c "
import json, os
completed = set()
if os.path.exists('completed.json'):
    completed = set(json.load(open('completed.json')))
completed.add('$ART_TYPE')
json.dump(list(completed), open('completed.json', 'w'))
"
  done < <(echo "$STATUS_OUTPUT" | python3 -c "
import json, os, sys

arts = json.load(open('artifacts.json'))
completed = set()
if os.path.exists('completed.json'):
    completed = set(json.load(open('completed.json')))

# nlm studio status output is assumed to be JSON: either a list of
# artifact records or an object with an 'artifacts' list
try:
    status_data = json.loads(sys.stdin.read())
except Exception:
    status_data = None

for art in arts:
    if art['type'] in completed:
        continue
    art_status = 'unknown'
    if status_data:
        entries = status_data if isinstance(status_data, list) else status_data.get('artifacts', [])
        for s in entries:
            if s.get('id') == art['id']:
                art_status = s.get('status', 'unknown')
                break
    if art_status == 'completed':
        print('COMPLETED:' + art['type'] + ':' + art['id'])
    elif art_status == 'failed':
        print('FAILED:' + art['type'])
")

  # Check if all done
  ALL_DONE=$(python3 -c "
import json, os
arts = json.load(open('artifacts.json'))
completed = set()
if os.path.exists('completed.json'):
    completed = set(json.load(open('completed.json')))
remaining = [a['type'] for a in arts if a['type'] not in completed]
print('yes' if not remaining else 'no')
")

  if [[ "$ALL_DONE" == "yes" ]]; then
    log "All artifacts completed!"
    touch nlm-done.flag
    notify_user "🎉 All NotebookLM artifacts generated! $POLL_COUNT polling rounds total."
    exit 0
  fi

  log "Still in_progress. Sleeping ${INTERVAL}s..."
  sleep "$INTERVAL"
done

⚠️ Simplified Alternative: If the above multi-artifact poll is too complex, use a simpler approach — one poll script per artifact type, all launched in parallel. Each follows the single-artifact pattern from notebooklm-content-creation SKILL.md.

3.5 Write artifacts.json

Written by the agent at setup time:

[
  {
    "type": "audio",
    "id": "<artifact_id_from_create>",
    "download_cmd": "nlm download audio",
    "download_ext": "mp3"
  }
]

If multiple artifacts selected, each gets its own entry.
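One way to keep this file consistent is to append an entry right after each create call succeeds. A sketch; `art-abc123` is a placeholder for the ID captured from the corresponding `nlm ... create` output:

```shell
# Sketch: append one artifacts.json entry per created artifact.
cd "$(mktemp -d)"
AUDIO_ID=art-abc123   # placeholder for the captured artifact ID
python3 - "$AUDIO_ID" <<'PYEOF'
import json, os, sys
arts = json.load(open('artifacts.json')) if os.path.exists('artifacts.json') else []
arts.append({
    "type": "audio",
    "id": sys.argv[1],
    "download_cmd": "nlm download audio",
    "download_ext": "mp3",
})
with open('artifacts.json', 'w') as f:
    json.dump(arts, f, indent=2)
PYEOF
cat artifacts.json
```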

3.6 Launch NotebookLM Poll

cd /tmp/deep-research-to-notebooklm/<task-dir>/
nohup bash nlm-poll.sh > /dev/null 2>&1 &

Step 4 — Notifications Summary

Event                      Method          Message
DR complete                notify_user     "✅ Deep Research complete! Report saved (SIZE)."
DR complete                trigger_agent   Full parameters for NotebookLM
DR timeout                 notify_user     "❌ Deep Research timed out (20 minutes)."
DR failure                 notify_user     "❌ Deep Research failed."
Each artifact complete     notify_user     "✅ AUDIO generated!" (with path if downloaded)
All artifacts complete     notify_user     "🎉 All artifacts generated!"
NotebookLM timeout         notify_user     "⏰ Artifact generation timed out (40 minutes). Incomplete: X, Y"
Artifact failure           notify_user     "❌ X generation failed."

All notifications use openclaw message send (direct to Discord, no agent processing).


Temp Directory Structure

/tmp/deep-research-to-notebooklm/
  <YYMMDD-HHmm>_<slug>/
    task.json           ← full task config
    dr-poll.log         ← DR polling log
    dr-done.flag        ← DR completion marker
    dr-poll.sh          ← DR polling script
    artifacts.json      ← NotebookLM artifact IDs
    nlm-poll.log        ← NotebookLM polling log
    nlm-done.flag       ← NotebookLM completion marker
    nlm-poll.sh         ← NotebookLM polling script
    completed.json      ← list of completed artifact types
    <report>.md         ← saved DR report
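Nothing above removes these directories once a run finishes, so they accumulate in /tmp. A cleanup sketch using the two completion flags (dry run; swap the echo for `rm -rf -- "$d"` after reviewing the output):

```shell
# Sketch: list task directories whose pipelines fully finished.
BASE=/tmp/deep-research-to-notebooklm
mkdir -p "$BASE/finished-task"   # demo directory standing in for a real task
touch "$BASE/finished-task/dr-done.flag" "$BASE/finished-task/nlm-done.flag"
for d in "$BASE"/*/; do
  if [[ -f "$d/dr-done.flag" && -f "$d/nlm-done.flag" ]]; then
    echo "would remove: $d"
  fi
done
```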

Quick Reference

User says: "Do deep research on XX for me, then generate a podcast" → Agent confirms params → Start DR → Auto-chain to NotebookLM

User says: "Deep research XX, generate audio and video" → Agent confirms → Start DR → Auto-chain to NotebookLM (audio + video in parallel)

User says: "Research XX and download the podcast file" → Agent confirms with download=true → DR → NotebookLM → download on complete
