Content Claw

v0.0.1

Automated content generation engine. Transform source material (papers, podcasts, case studies) into platform-ready content using recipes and brand graphs. U...

Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description (content generation from URLs into recipes/brand graphs) matches the included code and declared requirements: scripts perform extraction, topic discovery (exa), image generation (fal), and publishing. Requesting 'uv' as the project runner and FAL_KEY/EXA_API_KEY is proportional to the stated functionality.
Instruction Scope
SKILL.md and scripts confine normal I/O to the skill directory and require .env in the skill root. Two caveats: (1) discovery/extraction scripts use Playwright with stealth settings to scrape Reddit/X and optionally accept cookies for authenticated scraping/publishing, so supplying cookies lets the skill act as your account; (2) SKILL.md's baseDir resolution runs readlink against ~/.agents or ~/.claude paths to locate the installed skill, which touches the user home and technically contradicts the 'never access the user's personal files' phrasing; this is intended only to find the skill files, but it is worth noting. Overall, the instructions are explicit (do not access outside BASE_DIR) but grant the agent the capability to fetch and post content when cookies are supplied.
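The 'do not access outside BASE_DIR' confinement described above can be sketched as a simple path-containment check. This is an illustrative pattern, not the skill's actual code; the BASE_DIR location and helper name are hypothetical:

```python
from pathlib import Path

# Hypothetical install location; the real BASE_DIR is resolved by SKILL.md.
BASE_DIR = Path("/tmp/content-claw")

def resolve_inside_base(relative: str) -> Path:
    """Resolve a path and refuse anything that escapes BASE_DIR."""
    candidate = (BASE_DIR / relative).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"path escapes BASE_DIR: {candidate}")
    return candidate
```

Resolving both sides before comparing defeats `..` traversal and symlinked parents, which is the property the skill's instructions ask for.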
Install Mechanism
This is instruction-only (no automated install spec). SKILL.md documents installing 'uv' via brew, pipx, or the project's official install script, and dependencies are installed via uv sync. The curl option fetches the documented upstream installer from astral.sh/uv; it is still a download-from-URL action, so review the script before running it. No other remote archive downloads or unknown hosts are used.
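A minimal pre-flight check before running `uv sync` could look like the sketch below. The helper name is hypothetical; the install commands in the docstring are the documented upstream options from SKILL.md:

```python
import shutil
import subprocess
from typing import Optional

def check_uv() -> Optional[str]:
    """Return uv's version string if it is on PATH, else None.

    Documented install options if missing (review the curl-fetched script
    before piping it to sh):
        brew install uv
        pipx install uv
        curl -LsSf https://astral.sh/uv/install.sh | sh
    After installing, dependencies are pulled with: uv sync
    """
    if shutil.which("uv") is None:
        return None
    result = subprocess.run(["uv", "--version"], capture_output=True, text=True)
    return result.stdout.strip() or None
```

Failing fast here is cheaper than having a script die mid-pipeline because the runner is absent.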
Credentials
Only two required env vars are declared (FAL_KEY for fal.ai image gen and EXA_API_KEY for Exa search), which aligns with the code. The skill optionally asks users to provide Reddit/X cookies for authenticated scraping/publishing; cookies are stored locally under BASE_DIR/creds per SKILL.md. That capability is expected for publishing features but elevates risk if you supply credentials — use scoped or throwaway creds and review publishing scripts before enabling.
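The two declared env vars can be verified before any network work; a fail-fast sketch, with the helper name being illustrative rather than part of the skill:

```python
import os
from typing import List, Mapping

# The only two env vars the skill declares (per its SKILL.md).
REQUIRED_KEYS = ("FAL_KEY", "EXA_API_KEY")

def missing_keys(env: Mapping[str, str] = os.environ) -> List[str]:
    """Return declared keys that are unset or empty, so a run can fail fast."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]
```

A runner could call missing_keys() at startup and abort with a clear message instead of failing partway through extraction or image generation.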
Persistence & Privilege
The 'always' flag is false, and the skill does not request permanent system-wide privileges. It stores run artifacts and optional cookies under its own BASE_DIR, which is normal, and it does not modify other skills' configs. Autonomous invocation (model invocation enabled) is the platform default and not a standalone concern here.
Assessment
This skill appears to do what it claims (extract, synthesize, generate images, optionally publish). Before installing:
(1) Review scripts/publish.py and discover_topics.py to confirm you are comfortable with local Playwright-based scraping and the publishing flow.
(2) Avoid supplying your main account cookies; prefer dry-run, sandbox, or throwaway/limited accounts.
(3) Use scoped/limited API keys for fal.ai and exa.ai and monitor usage.
(4) If you will run the curl-based installer for 'uv', inspect the script first.
(5) Be aware that stealth scraping can violate some sites' terms of service; use it responsibly and consider the legal/ToS implications.
For higher assurance, ask the maintainer for a signed release or run the skill inside an isolated environment/container.

Like a lobster shell, security has layers — review code before you run it.

latest · vk979p6dvs5svhyvbt4jympgnt9834n6t

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🎨 Clawdis
Bins: uv
Env: FAL_KEY, EXA_API_KEY
Primary env: FAL_KEY
