MoltBook Digest

v0.1.2

Collect Moltbook posts and comments, build an evidence pack, and interpret it through either the calling agent or LiteLLM.

by Zhiwei Li (@mtics)
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the code and instructions. The script queries Moltbook API endpoints (DEFAULT_BASE_URL), expands posts/comments, and writes evidence and report artifacts. The declared runtime binary 'uv' matches the SKILL.md examples. Declared project dependencies (litellm, pyyaml) align with the script's ability to call external LLM providers and parse YAML config.
Instruction Scope
SKILL.md limits actions to collection, expansion, and analysis steps and instructs the agent to read/write the generated files (digest.md, evidence.json, analysis_input.md, agent_handoff.md, analysis_report.md). It explicitly prefers the public API over scraping and warns not to reveal API keys. There are no instructions to read unrelated system files or to exfiltrate secrets.
Install Mechanism
No packaged install spec is included (instruction-only), but SKILL.md expects using the 'uv' tool to sync the project which will install dependencies from pyproject.toml (litellm, pyyaml). These are public PyPI packages — this is expected for LLM integration but does create a normal package-install footprint. No arbitrary download URLs or archive extraction are present.
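Based on the dependencies named above, the manifest that 'uv sync' resolves would look roughly like the fragment below. This is a sketch for review purposes; the project name, version pin, and metadata are assumptions, not the skill's actual file.

```toml
[project]
name = "moltbook-digest"   # assumed name, for illustration
version = "0.1.2"
dependencies = [
    "litellm",  # enables the optional LLM-provider analysis path
    "pyyaml",   # parses the YAML config
]
```

Inspecting this file before running 'uv sync' is the quickest way to confirm no unexpected packages have been added.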
Credentials
The skill requires no environment variables by default. config.example.yaml exposes optional provider API keys and lists common env var names (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) for providers the script can call. This is proportionate to the optional 'LiteLLM' analysis path, but users should understand that supplying provider credentials enables outbound calls to external LLM APIs.
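A config in the shape the review describes might look like the sketch below. The env var names come from the review above; the field names and model id are assumptions, not the skill's actual schema.

```yaml
# config.example.yaml (illustrative sketch, not the shipped file)
analysis:
  mode: litellm            # or "none" for a collection-only run
provider:
  model: gpt-4o-mini       # assumed field; any LiteLLM-supported model id
  # Prefer env vars over inline keys. Common names the script lists:
  # OPENAI_API_KEY, ANTHROPIC_API_KEY, ...
  api_key: ${OPENAI_API_KEY}
```

Leaving the provider section empty keeps the script from making any outbound LLM calls.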
Persistence & Privilege
The skill is user-invocable, not forced-always, and does not request elevated system privileges. It writes output files into the project/baseDir (expected behavior). It does not modify other skills or system-wide agent settings.
Assessment
This skill appears coherent for Moltbook research, but review a few things before installing:

1. If you plan to enable the LiteLLM/provider path, add only API keys you trust, and supply them via a secure config or environment variable; providing keys allows the script to make outbound calls to external LLM APIs.
2. 'uv sync' will install Python packages (litellm, pyyaml); consider using a virtualenv or isolated environment and pin package sources.
3. The script uses public Moltbook API endpoints by default (no auth required), but behavior may change if those endpoints are later restricted.
4. To avoid any external network calls, run with --analysis-mode none (collection only) and keep provider configs empty.
5. If you are not comfortable providing provider credentials, audit the code paths that call litellm/providers (search for provider/api_key usage) before adding secrets.

Overall, the skill is consistent with its stated purpose.
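The audit step in the assessment (searching for provider/api_key usage) can be sketched with plain grep. The file and its contents below are hypothetical stand-ins for the skill's source tree; only the search pattern matters.

```shell
# Stand-in for the skill's source tree (hypothetical file, for illustration).
mkdir -p /tmp/moltbook_audit
printf 'resp = litellm.completion(model=cfg.model, api_key=cfg.api_key)\n' \
  > /tmp/moltbook_audit/digest.py

# Before adding secrets, list every line that touches providers or keys.
grep -rnE "api_key|litellm|provider" /tmp/moltbook_audit
```

Any hit outside the expected LiteLLM call sites is worth a closer look before you supply credentials.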

Like a lobster shell, security has layers — review code before you run it.

latest: vk977xgyepdvzhypr15kkbd652s83d9a9

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

Bins: uv
