Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Recursive maths animator

v1.7.0

Recursive maths animator — Manim-based technical animations with optional voiceover (manim-voiceover), git scene versioning, pinned requirements, asset folde...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for science-prof-robot/recursive-maths-animator.

Prompt preview: Install & Setup
Install the skill "Recursive maths animator" (science-prof-robot/recursive-maths-animator) from ClawHub.
Skill page: https://clawhub.ai/science-prof-robot/recursive-maths-animator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install recursive-maths-animator

ClawHub CLI


npx clawhub@latest install recursive-maths-animator
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name/description (Manim animations, voiceover, versioning, verification) align with the included code: pattern library, design systems, voiceover adapters, manim_versioning utilities, and helper scripts. Optional Gemini/Google TTS support in references/gemini_tts_service.py matches documentation that it is optional.
Instruction Scope
SKILL.md instructs a 'vision verification loop' that slices frames and 'review[s] with the host model’s vision' (Cursor/Claude Code). That explicitly directs rendered frames/video to be reviewed by a hosted multimodal model—i.e., transmitted outside the local environment. The docs also recommend using gTTS and optionally the google-genai/Gemini adapter, which perform network calls. These behaviors are coherent with the feature set (voiceover and model-assisted verification) but have privacy/security implications and should be highlighted to users before use.
Install Mechanism
No install spec in the registry (instruction-only), and the skill expects consumers to copy/add the provided 'references/' directory to project sys.path or their host skills folder. No downloads from arbitrary URLs or extract operations are present in the metadata.
Credentials
The skill declares no required env vars. Documentation calls out an optional GEMINI_API_KEY if using references/gemini_tts_service.py or google-genai — this is proportional to optional Gemini TTS usage. There are no unrelated credential requests. Users should still be cautious about supplying any API keys (GEMINI_API_KEY) only when needed and after reviewing the corresponding adapter code.
Persistence & Privilege
always:false and no special system-wide modifications are requested. The workflow asks users to add the skill's references/ path to sys.path or copy files into a user skills directory — standard for deployable code, not an elevated privilege. The skill does not request permanent platform-level privileges in metadata.
What to consider before installing
This skill appears to do what it claims: Manim animation patterns, optional voiceover support, and a verification loop. However, before installing or running it:

  • Review any adapter that calls external TTS or GenAI services (references/gemini_tts_service.py and the use of manim-voiceover/gTTS). These modules perform network calls to third-party TTS/GenAI and will transmit your audio/text to external services.
  • The recommended 'vision verification loop' explicitly sends extracted frames for review in the host model/environment (Cursor/Claude Code). Treat rendered frames and any included images/fonts as potentially sensitive data and do not upload them to hosted models unless you accept that transmission.
  • Only set GEMINI_API_KEY or other credentials if you trust the adapter code. Inspect references/gemini_tts_service.py to confirm the endpoints and how the key is used.
  • Run the skill inside an isolated virtualenv or sandbox, and inspect scripts/run_pipeline.py and scripts/extract_verification_frames.py before execution to confirm they run only the commands you expect (no unexpected shell execs or uploads).
  • If you need to keep data local, avoid the verification loop that uploads frames, or perform verification locally (manual review) instead.

If you want a more precise assessment, provide the full contents of references/gemini_tts_service.py and scripts/run_pipeline.py so I can check for exact network endpoints, credential usage, or any obfuscated/executable behavior.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ba6tn830fn87y6bsbabze7585jzbc
146 downloads
0 stars
7 versions
Updated 1d ago
v1.7.0
MIT-0

Recursive maths animator (Manim + voiceover + verification)

This skill ships helper code under references/ (palette, optional Gemini TTS adapter, manim_versioning.ManimProject) and utilities under scripts/. Agents should point users at those paths when generating projects.

Brief-first workflow (do this before any scene code)

Many users want something cool, shareable, and minimal — not a wall of technical detail. Do not jump straight to project.init() + a full scene unless the user explicitly says “just build it.”

  1. Digestible pitch first — In chat, give a short animation brief: one-line takeaway, 3–5 beats (what appears, in order, rough seconds each), and why it fits Manim (motion, not static slides). Keep it skimmable; no long tables of API names unless they ask.

  2. Offer choices — Present 2–3 palette options from the built-in design systems (see below). Use letter labels the user can reply with:

    • A — Swiss grid (Inter, clinical, data-forward)
    • B — Bauhaus primary (Space Grotesk, geometric, educational)
    • C — Braun minimal (Work Sans, warm gray, product)
    • D — Editorial bold (Playfair + Inter, dramatic, storytelling)
    • E — Apple precision (DM Sans, cool, tech)
    • F — Soft enterprise (Roboto, warm cream — existing default)

    Also offer aspect ratio (16:9, 1:1, 9:16) and tone (calm / punchy). Let the user pick or mix.

  3. Wait for approval — Only after the user confirms (or says “use A + 1:1 + calm”) do you: write ANIMATION_BRIEF.md (filled) + DESIGN_THEME.md (locked), then implement the scene.

  4. Use the maths engine — Prefer Manim-native motion: MathTex / Tex, NumberPlane, ParametricFunction, Transform / ReplacementTransform, Indicate, ShowPassingFlash, LaggedStart, updaters. Avoid “generic UI explainer” unless that is what they asked for. See references/manim_guide.md for patterns.

  5. Shareable quality — MP4 at -qh / --quality h is the default deliverable for “looks good.” GIF is for layout checks only; aggressive ffmpeg re-encoding crushes gradients and dark minimal palettes. If they need a small GIF, render a short clip, limit colors in the scene, or share an MP4 / link instead.

ManimProject.init() seeds ANIMATION_BRIEF.md with a template; agents replace “DRAFT” content after approval.
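The pitch-and-choices flow above can be sketched as a tiny reply parser. This is illustrative only — the dict, helper, and parsing rules are hypothetical, not part of the skill's API; the letter-to-key mapping follows the design-system keys documented later in this file.

```python
# Hypothetical helper: map the reply letters offered in the pitch to the
# design-system keys shipped under references/design_systems/.
PALETTE_CHOICES = {
    "A": "swiss",      # Swiss grid (Inter)
    "B": "bauhaus",    # Bauhaus primary (Space Grotesk)
    "C": "braun",      # Braun minimal (Work Sans)
    "D": "editorial",  # Editorial bold (Playfair + Inter)
    "E": "apple",      # Apple precision (DM Sans)
    "F": "soft",       # Soft enterprise (Roboto, default)
}

ASPECT_RATIOS = {"16:9": (1920, 1080), "1:1": (1080, 1080), "9:16": (1080, 1920)}

def parse_reply(reply: str) -> dict:
    """Parse a user reply like 'A + 1:1 + calm' into render choices."""
    parts = [p.strip() for p in reply.split("+")]
    scheme = PALETTE_CHOICES.get(parts[0].upper(), "soft")   # default design system
    ratio = next((p for p in parts if p in ASPECT_RATIOS), "16:9")
    tone = "punchy" if "punchy" in parts else "calm"
    return {"scheme": scheme, "resolution": ASPECT_RATIOS[ratio], "tone": tone}
```

An agent could feed the parsed choices straight into DESIGN_THEME.md and the scene's config block.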

Operating principles (do these every time)

  1. Design theme + brief — After the user approves the pitch, record mood, light/dark, chosen design system (swiss / bauhaus / braun / editorial / apple / soft), typography, motion, deliverable size, and brand assets in DESIGN_THEME.md. Keep the approved story in ANIMATION_BRIEF.md.
  2. Pinned dependencies — Every project keeps a root requirements.txt (seeded on init() from this skill’s template). When you add imports or optional stacks (e.g. Gemini), update requirements.txt and tell the user to pip install -r requirements.txt. For reproducible CI, suggest pip freeze > requirements.lock.txt after upgrades.
  3. Assets live in assets/ — Put images, SVGs, and custom fonts under assets/images, assets/svgs, assets/fonts. Keep scenes/ for Python only so diffs stay readable.
  4. Optional GIF before final MP4 — When stakeholders need a quick motion check in chat, produce a low-quality GIF (ManimProject.render_approval_gif("scene_1") or render(..., output_format="gif", export_approval_copy=True)). If the user prefers to go straight to MP4 (e.g. silent cut with voiceover added later), skip the GIF and render MP4 directly. After any GIF sign-off, render output_format="movie" (MP4; see Rendering — Manim uses --format mp4).
  5. Verify with vision, then iterate — After each substantive render, run the verification loop below: slice frames, review with the host model’s vision, write VERIFICATION_FEEDBACK.md, fix Manim code, re-render. Prefer MP4 for final verification passes; GIF is acceptable for quick layout checks.

Requirements

  • Python 3.9+
  • manim — pip install manim (versions pinned in project requirements.txt)
  • manim-voiceover with a TTS backend — e.g. pip install "manim-voiceover[gtts]" (uses network for gTTS unless you switch engine)
  • ffmpeg and ffprobe — with libx264 and libass if you burn subtitles (see scripts/run_pipeline.py); ffprobe is required for extract_verification_frames.py
  • git — for ManimProject versioning commands

Optional:

  • google-genai — only if using references/gemini_tts_service.py (set GEMINI_API_KEY); uncomment in requirements.txt when used.
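The requirements above can be pre-flight checked before the first render. The skill ships scripts/check_environment.py for this; the snippet below is an illustrative stand-in, not that script, and only checks for binaries on PATH.

```python
import shutil

def missing_tools(tools: list[str]) -> list[str]:
    """Return the command-line tools from `tools` that are not on PATH."""
    return [t for t in tools if shutil.which(t) is None]

# The binaries this skill needs per the list above.
required = ["ffmpeg", "ffprobe", "git"]
problems = missing_tools(required)
if problems:
    print("Install before rendering:", ", ".join(problems))
```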

Using references/ from your project

The installable skill is the directory that contains SKILL.md (often .../recursive-maths-animator/ inside a Git clone), not the repository root above it. If the host says “unknown skill,” confirm that path ends with recursive-maths-animator/SKILL.md.

The Quick Start imports ManimProject from manim_versioning. Add this skill’s references directory to sys.path (or copy the files into your repo).

from pathlib import Path
import sys

# Path to the installed skill’s references/ folder (adjust if you symlink or copy the skill).
# Cursor (user-wide): ~/.cursor/skills/recursive-maths-animator/references
# Claude Code (user-wide): ~/.claude/skills/recursive-maths-animator/references
SKILL_REF = Path.home() / ".cursor/skills/recursive-maths-animator/references"
# SKILL_REF = Path("path/to/recursive-maths-animator/references")

sys.path.insert(0, str(SKILL_REF.resolve()))
from manim_versioning import ManimProject

Scene files should use the same pattern so soft_enterprise_palette and optional gemini_tts_service resolve:

import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).resolve().parent.parent / "references"))

(Adjust the relative path if your layout differs.)
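If you do not know in advance whether the skill lives under Cursor's or Claude Code's user-wide skills folder, a small probe keeps the import portable. A sketch — the function name is hypothetical, but the candidate paths are the ones documented above:

```python
from pathlib import Path

def find_skill_references(extra=None) -> Path:
    """Return the first existing references/ dir among known install locations.

    `extra` lets a project pass its own vendored copy, which is checked first.
    """
    candidates = [
        Path.home() / ".cursor/skills/recursive-maths-animator/references",
        Path.home() / ".claude/skills/recursive-maths-animator/references",
    ]
    if extra is not None:
        candidates.insert(0, Path(extra))
    for c in candidates:
        if c.is_dir():
            return c
    raise FileNotFoundError("references/ not found; see the paths in SKILL.md")

# Typical use before importing the helpers:
# sys.path.insert(0, str(find_skill_references()))
```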

Quick Start

from pathlib import Path
import sys

REF = Path("/path/to/recursive-maths-animator/references").resolve()
sys.path.insert(0, str(REF))
from manim_versioning import ManimProject

project = ManimProject("my_animation")
project.init()  # Creates git repo, scenes/ folder, structure

# Create Scene 1
project.create_scene("scene_1", """
class Scene1(Scene):
    def construct(self):
        # Your animation code
        pass
""")

# Render and commit
project.render("scene_1")  # Auto-commits as "scene_1 v1"

# Make changes, create new version
project.update_scene("scene_1", "# updated code...")
project.render("scene_1")  # Auto-commits as "scene_1 v2"

# Rollback if needed
project.rollback("scene_1", version=1)  # Restores v1

# Create provisional branch for review
project.branch("scene_1", "review-alice")  # Creates branch, doesn't affect main

Project structure

After ManimProject.init(), the layout includes dependency and theme files plus asset and approval folders:

my_animation/
├── .git/
├── requirements.txt       # Pinned Manim / voiceover; extend when you add packages
├── ANIMATION_BRIEF.md     # Short pitch + beats + approved choices (before / while coding)
├── DESIGN_THEME.md        # User’s theme answers — fill after approval, before heavy code
├── assets/
│   ├── README.md
│   ├── images/
│   ├── svgs/
│   └── fonts/
├── VERIFICATION_FEEDBACK.md   # Latest multimodal review output (agent-written; optional until first review)
├── exports/
│   ├── approvals/         # GIF (or other) previews for sign-off
│   └── verification/    # Frame slices + manifest.json per run (see extract script)
├── scenes/
│   ├── scene_1/
│   │   ├── scene_1.py
│   │   ├── versions/
│   │   │   ├── v1.py
│   │   │   └── v2.py
│   │   └── branches/
│   │       └── review-alice/
│   ├── scene_2/
│   └── shared/
│       ├── palette.py
│       └── utils.py
├── media/
│   ├── scene_1_v2.mp4
│   └── scene_1_v2.gif     # when you render GIF previews
└── project.json
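The layout above can be spot-checked after init(). A hypothetical sanity check — ManimProject.init() is responsible for creating these paths; this only verifies they exist:

```python
from pathlib import Path

# Paths expected from the project tree shown above (not exhaustive).
EXPECTED = [
    "requirements.txt",
    "ANIMATION_BRIEF.md",
    "DESIGN_THEME.md",
    "assets/images",
    "assets/svgs",
    "assets/fonts",
    "exports/approvals",
    "exports/verification",
    "scenes",
    "media",
]

def missing_paths(root: Path) -> list[str]:
    """Return expected project paths that do not exist under `root`."""
    return [p for p in EXPECTED if not (root / p).exists()]
```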

Versioning commands

| Action | Command | Result |
|---|---|---|
| Initialize project | project.init() | Git repo + folder structure |
| Create scene | project.create_scene(name, code) | Scene file + initial commit |
| Update scene | project.update_scene(name, code) | New version committed |
| Render | project.render(name) | Video + auto-commit |
| List versions | project.versions(name) | Shows v1, v2, v3... |
| Rollback | project.rollback(name, version) | Restores code to version |
| Create branch | project.branch(name, branch_name) | Provisional copy |
| Merge branch | project.merge(name, branch_name) | Merges into main |
| Compare | project.diff(name, v1, v2) | Shows code differences |
| Tag approved | project.tag(name, version, "approved") | Marks final version |

Provisional branch workflow

project.update_scene("scene_1", "# version 1 code")
project.render("scene_1")

project.branch("scene_1", "alt-animation")
project.update_scene("scene_1", "# alternative code", branch="alt-animation")
project.render("scene_1", branch="alt-animation")

# Review outputs, then merge or delete_branch as needed
project.merge("scene_1", "alt-animation")

Scene templates

Using a built-in design system (recommended)

"""
SCENE {N}: {TITLE}
{Description}
~{duration}s, {orientation}
Design system: {scheme}
"""

import sys
sys.path.insert(0, '{project_path}/references')

from manim import *
from manim_voiceover import VoiceoverScene
from manim_voiceover.services.gtts import GTTSService

# Import the chosen design system (example: swiss)
from design_systems.swiss_international import SwissScene, SwissColors, EASE_SWISS_SNAP


class Scene{N}_{Title}(SwissScene):
    """{Description}"""

    def __init__(self, **kwargs):
        config.pixel_width = {width}
        config.pixel_height = {height}
        config.frame_width = {frame_w}
        config.frame_height = {frame_h}
        config.frame_rate = 60

        super().__init__(**kwargs)

        self.set_speech_service(GTTSService(lang='en', slow=True))

    def construct(self):
        self.setup_swiss_background()

        section_title = self.make_heading("{SECTION_TITLE}")
        section_title.to_edge(UP, buff=0.5)
        self.add(section_title)

        with self.voiceover(
            text="{VOICEOVER_LINE_1}"
        ) as tracker:
            pass

        self.wait(0.5)


if __name__ == "__main__":
    config.quality = "high_quality"
    scene = Scene{N}_{Title}()
    scene.render()

Legacy: voiceover + soft palette (no design system)

"""
SCENE {N}: {TITLE}
{Description}
~{duration}s, {orientation}
"""

import sys
sys.path.insert(0, '{project_path}/references')

from manim import *
from manim_voiceover import VoiceoverScene
from manim_voiceover.services.gtts import GTTSService
from default_typography import DEFAULT_FONT
from soft_enterprise_palette import SoftColors, EASE_GAS_SPRING


class Scene{N}_{Title}(VoiceoverScene):
    """{Description}"""

    def __init__(self, **kwargs):
        config.pixel_width = {width}
        config.pixel_height = {height}
        config.frame_width = {frame_w}
        config.frame_height = {frame_h}
        config.frame_rate = 60

        super().__init__(**kwargs)

        self.set_speech_service(GTTSService(lang='en', slow=True))

    def construct(self):
        bg = Rectangle(
            width=config.frame_width,
            height=config.frame_height,
            fill_color=SoftColors.BACKGROUND,
            fill_opacity=1
        )
        self.add(bg)

        section_title = Text(
            "{SECTION_TITLE}",
            font=DEFAULT_FONT,
            font_size=14,
            color=SoftColors.TEXT_SECONDARY
        )
        section_title.to_edge(UP, buff=0.5)
        self.add(section_title)

        with self.voiceover(
            text="{VOICEOVER_LINE_1}"
        ) as tracker:
            pass

        self.wait(0.5)

    def create_token(self, text, is_active=False):
        token = Text(
            text,
            font=DEFAULT_FONT,
            font_size=24,
            color=SoftColors.TEXT_PRIMARY if is_active else SoftColors.TEXT_SECONDARY,
            weight=MEDIUM
        )

        if is_active:
            bg = RoundedRectangle(
                corner_radius=0.12,
                width=token.width + 0.35,
                height=token.height + 0.25,
                fill_color=SoftColors.CONTAINER,
                fill_opacity=0.85,
                stroke_color=SoftColors.BORDER,
                stroke_width=1
            )
            bg.move_to(token.get_center())
            token = VGroup(bg, token)

        return token


if __name__ == "__main__":
    config.quality = "high_quality"
    scene = Scene{N}_{Title}()
    scene.render()

Design systems

Five built-in designer-inspired aesthetic systems live under references/design_systems/. Each is a complete module (colors, typography, motion, containers, background, base scene) following the same API as soft_enterprise_palette.SoftEnterpriseScene.

| Key | Name | Designer / Movement | Primary Font | Mood |
|---|---|---|---|---|
| swiss | Swiss International | Josef Müller-Brockmann | Inter | Strict grid, clinical precision, black/white + restrained red |
| bauhaus | Bauhaus Modern | Herbert Bayer | Space Grotesk | Geometric, primary colors, functional art |
| braun | Braun Minimal | Dieter Rams | Work Sans | Warm light grays, systematic, "less but better" |
| editorial | Editorial Bold | Paula Scher / Pentagram | Playfair Display + Inter | Dramatic scale contrast, deep navy + warm cream |
| apple | Apple Precision | Jony Ive | DM Sans | Cool neutrals, generous whitespace, sleek motion |
| soft | Soft Enterprise | Skill default | Roboto | Warm cream, dot grid, gas-spring easing |

Import a system directly:

import sys
sys.path.insert(0, 'path/to/references')
from design_systems.swiss_international import SwissScene, SwissColors, EASE_SWISS_SNAP

Or use the registry:

from design_systems import get_scheme, get_scene_class
SceneClass = get_scene_class("swiss")   # -> SwissScene

Fonts are downloaded on demand:

from design_systems.font_catalog import install_fonts
install_fonts("swiss", target_dir="assets/fonts")

All fonts are SIL Open Font License (OFL) 1.1 and freely redistributable. ManimProject.init(scheme="swiss", install_fonts=True) can download fonts automatically at project creation.

Soft enterprise palette

Defined in references/soft_enterprise_palette.py — import SoftColors and EASE_GAS_SPRING after adding references to sys.path.

Default font: references/default_typography.py defines DEFAULT_FONT (Roboto) for all Text() unless the user overrides in DESIGN_THEME.md.

Rendering

Manim Community expects --format mp4 (or gif, webm, etc.), not movie. The word “movie” in docs means “video file”; ManimProject.render(..., output_format="movie") maps to --format mp4 internally.

For shareable, high-quality output, prefer --quality h (or -qh) MP4. Post-processing GIF with heavy palette reduction often looks worse than the source MP4 — especially dark or gradient minimal styles.

# Draft MP4
manim -ql scene.py SceneClass --format mp4 --disable_caching

# Stakeholder approval GIF (small, easy to share)
manim -ql scene.py SceneClass --format gif --disable_caching

# High quality final MP4
manim -qh scene.py SceneClass --format mp4 --disable_caching

# Versioning helper — final pass (still uses output_format="movie" in Python = MP4 on CLI)
project.render("scene_1", quality="high", output_format="movie")

# Versioning helper — approval GIF into exports/approvals/ (no auto-commit)
project.render_approval_gif("scene_1")

If your Manim build errors on --format, upgrade Manim (Community ≥ 0.18) or use a two-step pipeline: render draft MP4, then ffmpeg to GIF (document in project README if needed).
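The two-step MP4-to-GIF fallback usually looks best with ffmpeg's palettegen/paletteuse filters rather than a naive one-pass conversion. A sketch that builds (but does not run) the two commands, so you can inspect them or hand them to subprocess.run(); the helper name and defaults are illustrative:

```python
def mp4_to_gif_commands(mp4: str, gif: str, fps: int = 15, width: int = 720):
    """Two-pass ffmpeg GIF conversion: generate an optimal palette, then apply it.

    Avoids the muddy/banded look the troubleshooting table warns about.
    """
    palette = gif + ".palette.png"
    scale = f"fps={fps},scale={width}:-1:flags=lanczos"
    return [
        # Pass 1: derive a 256-color palette from the source video.
        ["ffmpeg", "-y", "-i", mp4, "-vf", f"{scale},palettegen", palette],
        # Pass 2: re-encode using that palette.
        ["ffmpeg", "-y", "-i", mp4, "-i", palette,
         "-lavfi", f"{scale} [x]; [x][1:v] paletteuse", gif],
    ]

# import subprocess
# for cmd in mp4_to_gif_commands("media/scene_1_v2.mp4", "exports/approvals/scene_1.gif"):
#     subprocess.run(cmd, check=True)
```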

Verification loop (required after substantive renders)

This skill does not call cloud LLM APIs from Python. Cursor or Claude Code performs multimodal review using extracted stills.

When to run

  • After any render that changes layout, copy, colors, or story beats (including a new GIF approval cut).
  • Final checks should use a full-quality MP4 when possible; GIFs are fine for early layout passes.

Step 1 — Extract frames

From the animation project root (or pass --cwd), run:

python3 path/to/recursive-maths-animator/scripts/extract_verification_frames.py path/to/render.mp4

Optional: --count 10, --format png, --output-dir exports/verification/my_run.

This writes a timestamped folder under exports/verification/ with JPEG/PNG frames and manifest.json (t_seconds, pct, filename per frame).
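Once a run folder exists, the manifest can drive the review step. A sketch that assumes manifest.json is a JSON array of objects with the t_seconds, pct, and filename keys described above — confirm the exact shape against the extract script's output before relying on it:

```python
import json
from pathlib import Path

def load_frames(run_dir: str) -> list:
    """Load frame records from a verification run folder, sorted by timestamp.

    Assumes a JSON array of {"t_seconds", "pct", "filename"} objects; also
    tolerates a {"frames": [...]} wrapper in case the script nests them.
    """
    manifest = json.loads((Path(run_dir) / "manifest.json").read_text())
    frames = manifest if isinstance(manifest, list) else manifest.get("frames", [])
    return sorted(frames, key=lambda f: f["t_seconds"])

# for frame in load_frames("exports/verification/<run folder>"):
#     print(f'{frame["filename"]}: t={frame["t_seconds"]}s ({frame["pct"]}%)')
```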

Step 2 — Multimodal review (host agent)

  1. Read manifest.json and open every extracted frame (vision).
  2. Read DESIGN_THEME.md and the agreed storyboard / scene plan (what each beat must prove).
  3. Apply references/video_verification_rubric.md: padding and safe margins, typography (including font vs DESIGN_THEME.md), text alignment and overlap, theme colors, logical progression vs plan, motion hints between samples, glitches.

Step 3 — Write VERIFICATION_FEEDBACK.md (project root)

Use this structure:

# Verification feedback

## Verdict
PASS | PASS_WITH_ISSUES | FAIL

## Summary
2–4 sentences. Must include at least one sentence on **text alignment** (e.g. columns, baselines, multi-line blocks) and one on **overlap / clutter** (text vs arrows/shapes, cramped `buff=`).

## Layout (alignment & overlap)
- Alignment: …
- Overlap / clutter: …

## Issues
### P0 — (title)
- Evidence: frame `frame_03_...jpg` — t=…s, pct=…%
- Expected: …
- Observed: …
- Suggested fix: … (Manim: e.g. `buff=`, `to_edge`, `shift`, color constant, reorder `play`)

### P1 — …

## Next iteration
Ordered list of edits to the scene file(s), then re-render and re-run extraction.
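An agent (or CI hook) can gate re-renders on the verdict recorded in this template. A small hypothetical parser for the Verdict section — the function is not part of the skill, just an illustration of consuming the file:

```python
import re

VERDICTS = {"PASS", "PASS_WITH_ISSUES", "FAIL"}

def read_verdict(feedback_md: str) -> str:
    """Extract the verdict from a VERIFICATION_FEEDBACK.md body using the template above."""
    match = re.search(r"^## Verdict\s*\n\s*(\S+)", feedback_md, flags=re.MULTILINE)
    if not match:
        raise ValueError("no '## Verdict' section found")
    verdict = match.group(1).strip()
    if verdict not in VERDICTS:
        raise ValueError(f"unexpected verdict: {verdict!r}")
    return verdict
```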

Step 4 — Iterate

  1. Implement P0 then P1 (then P2) in Manim source.
  2. Re-render the same deliverable type you are validating.
  3. Re-run extract_verification_frames.py on the new file (new output folder preserves history).
  4. Repeat until Verdict is PASS or PASS_WITH_ISSUES with only acceptable P2 items.

Round cap: default 3 full verify cycles unless the user explicitly asks for more.
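Steps 1–4 plus the round cap can be expressed as a bounded loop. A sketch with the render/extract/review stages stubbed out as callables — none of these names exist in the skill; they stand in for the real steps:

```python
def verification_loop(render, extract_frames, review, max_rounds: int = 3) -> str:
    """Run render -> extract -> review cycles until PASS or the round cap.

    `review` returns one of "PASS", "PASS_WITH_ISSUES", "FAIL"; a
    PASS_WITH_ISSUES exit assumes only acceptable P2 items remain.
    """
    verdict = "FAIL"
    for _ in range(max_rounds):       # default cap: 3 full cycles
        video = render()
        frames = extract_frames(video)
        verdict = review(frames)
        if verdict in ("PASS", "PASS_WITH_ISSUES"):
            break
    return verdict
```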

Pipeline helper (optional)

scripts/run_pipeline.py wraps render + optional subtitle burn-in. scripts/check_environment.py verifies common dependencies.

Output locations

  • Draft: media/videos/scene_1/480p15/Scene1.mp4
  • Final: media/videos/scene_1/1080p60/Scene1.mp4
  • Versioned: media/scene_1_v{N}.mp4 (when using ManimProject; see implementation)

Best practices

  1. Theme in writing — DESIGN_THEME.md should reflect what the user agreed to; link palette choices to the chosen design system (e.g. SwissColors, BauhausColors) or a project palette module under scenes/shared/.
  2. Requirements drift — Any new pip dependency must appear in requirements.txt in the same change set.
  3. Version deliberately: use commits per meaningful final render; GIF previews may skip auto-commit (see render_approval_gif).
  4. Use branches for experiments before merging to main line.
  5. Tag approvals when a cut is final (project.tag(...)).
  6. Keep scenes independently renderable.
  7. Shared utilities live under scenes/shared/; binaries only under assets/.
  8. Keep voiceover text TTS-friendly (plain punctuation, avoid noisy symbols).
  9. Target ~10–15s per scene for short-form vertical if that is the deliverable.
  10. Close the verification loop — Do not treat a render as done until frames are extracted and VERIFICATION_FEEDBACK.md records a PASS (or user accepts PASS_WITH_ISSUES).
  11. Pitch before pixels — For creative or “explainer” requests, use the Brief-first workflow so palette and story match what the user considers “cool” before you invest in a long scene file.

Automated sandbox reports (VirusTotal Zenbox, etc.)

If a dynamic scan of the skill zip shows subprocesses, python.exe, cmd.exe, non-standard ports, or URLs such as http://192.168.x.x:…/v1/…, treat the overall verdict and score first: this package is documentation + optional Manim helpers; it does not embed a C2 server or obfuscated payloads. Strings like /v1/chat/completions in memory usually come from the analyzer environment (local model proxy), not from files in this skill. Heuristic “injection” or “non-standard port” flags are common for any stack that runs subprocess + Python + optional HTTP clients (e.g. gTTS). Compare the zip to this repository when in doubt.

Troubleshooting

| Issue | Solution |
|---|---|
| Git not initialized | Run project.init() first |
| Import errors for helpers | Add this skill’s references/ to sys.path or copy files into your project |
| Branch merge conflict | Resolve in scene file, then commit via project helpers |
| Cache issues | Use --disable_caching |
| TTS / API limits | Fall back to gTTS or another SpeechService |
| ffprobe / frame extract fails | Install full ffmpeg package; ensure ffprobe is on PATH |
| Empty or black frames | Re-sample with higher --count or inspect source video; check -ss timing |
| GIF looks muddy / banded after ffmpeg | Deliver MP4 for final share; shorten the clip, simplify palette in Manim, or use gentler GIF settings — do not treat crushed GIF as the only artifact |

Optional follow-on

  • Remotion or other compositors for captions, UI chrome, or multi-track polish.
  • General video editing (FFmpeg, DaVinci, etc.) for final assembly.
