Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Loom Workflow

AI-native workflow analyzer for Loom recordings. Breaks down recorded business processes into structured, automatable workflows.

Use when:

  • Analyzing Loom videos to understand workflows
  • Extracting steps, tools, and decision points from screen recordings
  • Generating Lobster workflow files from video walkthroughs
  • Identifying ambiguities and human intervention points in processes

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.8k · 5 current installs · 5 all-time installs
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description match the code and instructions: the scripts download Loom videos (yt-dlp), extract frames (ffmpeg + smart extraction), transcribe (whisper), produce prompts for a vision-capable model, and generate Lobster workflows. The required binaries and described steps are consistent with the stated purpose.
Instruction Scope
Runtime instructions and scripts explicitly direct the agent to download videos and upload extracted frames/prompts to vision-capable LLMs (for example, piping a prompt to 'claude --images ...'; the analysis prompts reference frame image paths). This necessarily transmits potentially sensitive screen content off-host. The SKILL.md and scripts generate large prompts (including base64-encoded images), which is normal for multimodal analysis but is also the pattern flagged by the pre-scan. The instructions also reference a 'loom-workflow' CLI (Quick Start) while the bundle contains individual Python scripts rather than that wrapper; a minor inconsistency, but one that could confuse automated execution.
Install Mechanism
No install spec is present (instruction-only installation). The bundle includes Python scripts that will be present on disk, but there are no external download URLs or archive extraction steps in the install metadata, which lowers installation risk.
Credentials
The skill declares no required environment variables or credentials, yet its generated commands refer to other agent tools (openclaw.invoke, gog.gmail.list) and email/listing commands that would typically need service credentials. Those credentials are not declared in requires.env. The workflow will also send video frames to external vision models (e.g., Claude or other APIs); sensitive-data exposure is implied, but no endpoints or credentials are declared, which is both an inconsistency and a privacy concern.
Persistence & Privilege
The skill sets always:false, and no code attempts to persistently modify other skills or system-wide settings. The skill does create local artifacts (frames, manifests) during normal operation, which is expected for its function.
Scan Findings in Context
[base64-block] expected: The code contains an image base64 encoder (encode_image) and the workflow builds prompts that could embed or reference images; base64 content is expected for packaging images when calling vision APIs. However, base64 blocks are also commonly used in prompt-injection attempts, so this flagged pattern should be examined in context (here it appears to be used for normal multimodal API payloads).
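For context, an image encoder feeding a vision API payload usually follows the standard library pattern below. This is a minimal sketch; the bundle's actual encode_image may differ, so verify against the script itself.

```python
import base64
from pathlib import Path

def encode_image(path: str) -> str:
    """Return the image file's bytes as base64 text, the form vision API payloads expect."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")
```

Base64 output like this is benign when it wraps image bytes for an API call; the concern is only when such blocks smuggle instructions, which is why the scanner flags the pattern for manual review.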
What to consider before installing
This skill mostly does what it says, but review the following before installing or running it:

  1. Privacy - Extracted frames contain screen content, and the pipeline expects you to run a vision-capable model (e.g., Claude or another cloud API). Only send sensitive recordings to models/endpoints you trust; prefer local models if privacy is a concern.
  2. Undeclared credentials - Generated Lobster steps reference tools like openclaw.invoke and gmail commands that will need credentials or agent tool access, which the skill does not declare or request. Expect to supply or gate those separately, and review generated commands before execution.
  3. Prompt & data handling - The scripts create large prompts (and can base64-encode images); verify the prompt files before sending them to any external service to avoid accidental data leakage.
  4. Execution surface - The SKILL.md shows a 'loom-workflow' CLI but the bundle provides individual Python scripts; make sure you understand how the agent will invoke them and test locally first (review scripts/analyze-workflow.py, smart-extract.py, generate-lobster.py).

If you need higher assurance, run the pipeline in an isolated environment and avoid uploading recordings with credentials or PII to remote LLM services. If anything about required external services (which model endpoint, API keys) is unclear, ask the skill author for explicit details before use.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.1
latest: vk97cfw6rg2j3z5wyva99w3b1s180h9zj

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Loom Workflow Analyzer

Transforms Loom recordings into structured, automatable workflows.

Quick Start

# Full pipeline - download, extract, transcribe, analyze
{baseDir}/scripts/loom-workflow analyze https://loom.com/share/abc123

# Individual steps
{baseDir}/scripts/loom-workflow download https://loom.com/share/abc123
{baseDir}/scripts/loom-workflow extract ./video.mp4
{baseDir}/scripts/loom-workflow generate ./analysis.json

Pipeline

  1. Download - Fetches Loom video via yt-dlp
  2. Smart Extract - Captures frames at scene changes + transcript timing
  3. Transcribe - Whisper transcription with word-level timestamps
  4. Analyze - Multimodal AI analysis (requires vision model)
  5. Generate - Creates Lobster workflow with approval gates
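Since the bundle ships individual Python scripts rather than the loom-workflow wrapper, steps 1 and 3 reduce to plain CLI invocations. The sketch below shows typical yt-dlp and whisper usage; the flags are common defaults, not verified against the bundled scripts.

```python
import subprocess

def download_cmd(url: str, out: str = "video.mp4") -> list[str]:
    # Step 1: yt-dlp fetches the Loom recording to a local file.
    return ["yt-dlp", "-o", out, url]

def transcribe_cmd(video: str) -> list[str]:
    # Step 3: the whisper CLI with word-level timestamps, used later
    # to align extracted frames with narration segments.
    return ["whisper", video, "--word_timestamps", "True", "--output_format", "json"]

def run_step(cmd: list[str]) -> None:
    # Execute one pipeline step, raising on a non-zero exit code.
    subprocess.run(cmd, check=True)
```

Building the argument lists separately makes it easy to review exactly what will run before executing anything, which matters given the scan's advice to inspect generated commands.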

Smart Frame Extraction

Frames are captured when:

  • Scene changes - Significant visual change (ffmpeg scene detection)
  • Speech starts - New narration segment begins
  • Combined - Speech + visual change = high-value moment
  • Gap fill - Max 10s without a frame
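The scene-change capture maps onto ffmpeg's select filter, which scores how much each frame differs from the previous one. A hedged sketch of the command (the 0.3 threshold is a common starting point; smart-extract.py may use a different value and adds the speech-onset and gap-fill triggers on top):

```python
def scene_extract_cmd(video: str, out_dir: str = "output/frames", threshold: float = 0.3) -> list[str]:
    # select='gt(scene,T)' passes only frames whose scene-change score exceeds T;
    # -vsync vfr drops timestamps for discarded frames so output numbering is dense.
    return [
        "ffmpeg", "-i", video,
        "-vf", f"select='gt(scene,{threshold})'",
        "-vsync", "vfr",
        f"{out_dir}/%04d.jpg",
    ]
```

Raising the threshold yields fewer, higher-contrast frames; lowering it captures subtler UI changes at the cost of more images sent to the vision model.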

Analysis Output

The analyzer produces:

  • workflow-analysis.json - Structured workflow definition
  • workflow-summary.md - Human-readable summary
  • *.lobster - Executable Lobster workflow file
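The schema of workflow-analysis.json is not documented here. A plausible shape, inferred from the fields the analyzer is said to extract (steps, tools, decision points, ambiguities); every key name below is hypothetical and should be checked against actual output:

```json
{
  "steps": [
    {"t": "00:42", "action": "Open CRM and filter leads", "tool": "browser"}
  ],
  "decision_points": [
    {"t": "01:15", "note": "depending on lead score", "needs_human": true}
  ],
  "ambiguities": ["'the usual process' mentioned at 02:03 without detail"]
}
```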

Ambiguity Detection

The analyzer flags:

  • Unclear mouse movements
  • Implicit knowledge ("the usual process")
  • Decision points ("depending on...")
  • Missing credentials/context
  • Tool dependencies

Vision Analysis Step

After extraction, use the generated prompt with a vision model:

# The prompt is at: output/workflow-analysis-prompt.md
# Attach frames from: output/frames/

# Example with Claude:
cat output/workflow-analysis-prompt.md | claude --images output/frames/*.jpg

Save the JSON response to workflow-analysis.json, then:

{baseDir}/scripts/loom-workflow generate ./output/workflow-analysis.json

Lobster Integration

Generated workflows use:

  • approve gates for destructive/external actions
  • llm-task for classification/decision steps
  • Resume tokens for interrupted workflows
  • JSON piping between steps

Requirements

  • yt-dlp - Video download
  • ffmpeg - Frame extraction + scene detection
  • whisper - Audio transcription
  • Vision-capable LLM for analysis step

Multilingual Support

Works with any language - Whisper auto-detects and transcribes. Analysis should be prompted in the video's language for best results.

Files

6 total
