Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
ahu
v1.0.0 · End-to-end pipeline for creating faceless Islamic story TikTok videos. Orchestrates multiple specialized agents: story research, scriptwriting, image generation…
⭐ 0 · 342 · 1 current · 1 all-time
by Mohamed Zeidan (@mohamedzeidan2021)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Scanner: OpenClaw
Verdict: Suspicious (medium confidence)

Purpose & Capability
The skill's name/description (end-to-end video pipeline) matches the included instructions and code: it orchestrates story research, script writing, image generation, TTS, face-detection gating, FFmpeg assembly, and optional publishing. However, the SKILL.md and code expect external services (image-gen provider 'flux', ElevenLabs TTS, Google Vision / AWS Rekognition or similar) and local tools (FFmpeg) even though the registry metadata lists no required environment variables, binaries, or config paths. That mismatch is unexpected and should be resolved.
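The described flow (sequential agents with a hard face-detection gate before assembly) can be sketched as follows. This is a minimal illustration, not the skill's actual orchestrator.py: the stage callables, `PipelineState` fields, and `FaceDetectedError` are all assumed names, and each callable would wrap a real provider (flux for images, ElevenLabs for TTS, a face detector) in practice.

```python
from dataclasses import dataclass, field

class FaceDetectedError(Exception):
    """Raised when the quality gate finds a face in a generated image."""

@dataclass
class PipelineState:
    story: str = ""
    script: str = ""
    images: list = field(default_factory=list)
    audio: bytes = b""

def face_gate(images, detect_faces):
    # Hard quality gate: any detected face aborts the run (fail-closed).
    for img in images:
        if detect_faces(img):
            raise FaceDetectedError(f"face found in {img!r}")

def run_pipeline(research, write_script, generate_images, synthesize_tts, detect_faces):
    # Each argument is a callable standing in for one specialized agent.
    state = PipelineState()
    state.story = research()
    state.script = write_script(state.story)
    state.images = generate_images(state.script)
    face_gate(state.images, detect_faces)  # gate before TTS/assembly
    state.audio = synthesize_tts(state.script)
    return state
```

Because every stage is injected, the whole pipeline can be dry-run with stubs before any credential is supplied, which is exactly the "test first" posture this scan recommends.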
Instruction Scope
The SKILL.md provides detailed, bounded instructions for the pipeline: what each agent does, quality gates (e.g., fail on detected faces), and explicit external tools to attach. It does not instruct the agent to read unrelated system files, exfiltrate data, or contact unexpected endpoints beyond the named providers. It does, however, require web search/file-read for story verification and cloud APIs for image/TTS/face detection — reasonable for the stated purpose but worth noting because they require credentials and network access.
Install Mechanism
There is no install spec: the skill is instruction-only plus an included Python orchestrator. Nothing in the manifest downloads or executes arbitrary third-party archives. The single code file (orchestrator.py) is local and readable; install risk is low compared to a binary download, but running it will create logs and output files on disk.
Credentials
Although the runtime documentation references multiple external services (image gen 'flux', ElevenLabs/OpenAI TTS, optional cloud face detection like Google Vision or AWS Rekognition) and local binaries (FFmpeg), the registry metadata declares no required environment variables or primary credential. That is a discrepancy: to operate, the pipeline will require API keys and credentials which are not declared. Users should be aware the skill expects secrets and network access and should avoid blindly providing high-privilege credentials.
Persistence & Privilege
always:false (good). The orchestrator persists state to disk (output dirs, per-video state JSONs, pipeline.log) and creates output files; this is normal for a media pipeline. It does not request elevated system privileges or modify other skills. If installed, it will write files to the agent host and may invoke other agents/tools — run in an environment where file writes and external API calls are acceptable.
What to consider before installing
This skill appears to be what it claims (an end-to-end, multi-agent pipeline to produce faceless Islamic story videos) but there are practical mismatches you should address before installing or running it:
- Credentials & APIs: SKILL.md and config reference image-generation (flux/SDXL/Midjourney), TTS (ElevenLabs/OpenAI), and optional cloud face detection (Google Vision/AWS Rekognition). The registry metadata does not declare any required env vars — so the skill will expect you to supply API keys at runtime. Only provide minimal-scope keys (create limited-service keys) and never share long-lived, high-privilege credentials (e.g., root AWS keys).
- Local tools & files: The pipeline expects FFmpeg and will write logs (pipeline.log), per-video state files, and output directories. Run it in an isolated workspace or container so these files don't mix with sensitive data.
- Config path mismatch: orchestrator.py defaults to loading config from 'config/global_config.json', but the repo contains 'global_config.json' at root. Verify your config path before running to avoid using unsafe defaults.
- Privacy & content: The pipeline will perform web searches and may call cloud APIs that process your text/images. If you plan to use real user data or unpublished material, confirm the provider privacy policies and that you are comfortable with those services processing the content.
- Face-detection gating: The visual agent enforces strict 'no faces' rules and may call cloud face-detection services. Decide whether you want to use local detection (MTCNN/RetinaFace) vs cloud (Vision/AWS) based on privacy and credential scope.
- Test first: Run the orchestrator in a sandbox with dummy API keys or with local-only tools (e.g., local face detector, mocked image/TTS outputs) to confirm behaviour before giving real API credentials or enabling autonomous execution.
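The sandbox-first checks above can be automated with a small preflight script that inspects the environment without calling any provider. The environment-variable names below are guesses for illustration only; confirm the real ones in SKILL.md and global_config.json before relying on this.

```python
import os
import shutil
from pathlib import Path

# Hypothetical names -- verify against SKILL.md / global_config.json.
REQUIRED_ENV = ["ELEVENLABS_API_KEY", "IMAGE_GEN_API_KEY"]
CONFIG_CANDIDATES = [Path("config/global_config.json"), Path("global_config.json")]

def preflight(env=None) -> list:
    """Return a list of problems; an empty list means the sandbox looks runnable."""
    if env is None:
        env = os.environ
    problems = []
    if shutil.which("ffmpeg") is None:
        problems.append("ffmpeg not on PATH")
    for var in REQUIRED_ENV:
        if not env.get(var):
            problems.append(f"missing env var {var}")
    # Covers the config-path mismatch noted above (config/ vs repo root).
    if not any(p.is_file() for p in CONFIG_CANDIDATES):
        problems.append("no global_config.json found (checked config/ and repo root)")
    return problems
```

Running this before the orchestrator surfaces the undeclared-credentials and config-path issues up front, rather than mid-pipeline after files have already been written.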
If you want, I can:
- List the exact environment variables and tool binaries you should create/configure to run this pipeline safely (minimal-permission suggestions),
- Suggest a safe sandbox/docker run command to test the orchestrator without exposing host data,
- Or scan orchestrator.py and the other files for any strings or endpoints you might want to whitelist/inspect further.
latest: vk977qccjge5q53fj8kqxkdch1s81yx4q
