Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results below before using it.
Insight Engine
v1.0.4
Logs/metrics → Python statistics → LLM interpretation → Notion reports. Use when: generating daily/weekly/monthly operational insights from AI system logs, p...
by Nissan Dookeran (@nissan)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (logs → local Python stats → LLM → Notion) matches the included code: collectors, statistical aggregation, LLM calls, and Notion writer are present. Minor mismatch: registry metadata lists required binaries as python3 and ollama — ollama is used only for the dry-run local path (call_ollama), while live runs use Anthropic. The metadata's declared required env vars only list ANTHROPIC_API_KEY and NOTION_API_KEY, but the SKILL.md and code use many additional env vars (LANGFUSE_* keys, NOTION_ROOT_PAGE_ID, NOTION_DAILY_DB_ID, OPENCLAW_* dirs, GIT_REPOS, CP_API_URL, OLLAMA_BASE_URL, etc.). The functional capability is plausible, but the metadata under-declares environment requirements.
Instruction Scope
Runtime instructions and code read local gateway logs and daily memory markdown files, run git commands on repo paths (defaulting to the current directory), call Langfuse and Control Plane endpoints, and build a JSON data packet (including memory_context truncated to 6000 chars). That packet is sent to an LLM (Anthropic or local Ollama), and the LLM output is then written to the user's Notion workspace. This scope is consistent with the stated purpose but broad: the skill can access arbitrary repo histories and local memory files and include their content in requests to external services; verify that you want those sources transmitted to Anthropic and Notion.
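The review above notes that memory_context is truncated to 6000 chars before transmission. A minimal shell sketch for previewing roughly what a memory file would contribute to the outbound packet (the directory default and file name are assumptions; point them at your actual memory directory):

```shell
# Preview the first 6000 chars of a memory file -- approximately what the skill
# would embed as memory_context in the JSON packet it sends to the LLM.
# OPENCLAW_MEMORY_DIR default and the file name are assumptions; adjust to your setup.
memory_file="${OPENCLAW_MEMORY_DIR:-$HOME/.openclaw/memory}/daily.md"
if [ -f "$memory_file" ]; then
  head -c 6000 "$memory_file"
else
  echo "no memory file at $memory_file"
fi
```

If the preview contains secrets or private text, that content would leave the host in live mode.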
Install Mechanism
No installation spec is provided (instruction-only). All code is included in the skill bundle, and there are no remote downloads or extract steps. This is low-risk from an install vector perspective.
Credentials
Declared required env vars (metadata) are only ANTHROPIC_API_KEY and NOTION_API_KEY, but the SKILL.md and code rely on many additional env vars and secrets (LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, NOTION_ROOT_PAGE_ID, NOTION_DAILY_DB_ID, OPENCLAW_* directories, GIT_REPOS, CP_API_URL, OLLAMA_MODEL/BASE_URL). Some of these are sensitive (the Langfuse secret key, Notion integration token, and Anthropic API key). Because the metadata omits the additional variables, a user may supply sensitive secrets without realizing which ones the skill actually uses.
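Before supplying any secrets, it can help to audit which of these variables are already set in your shell. A small sketch that reports presence without printing values (the variable list is taken from this review and may not be exhaustive):

```shell
# Report which of the skill's known env vars are set, without leaking their values.
report=""
for v in ANTHROPIC_API_KEY NOTION_API_KEY LANGFUSE_PUBLIC_KEY LANGFUSE_SECRET_KEY \
         NOTION_ROOT_PAGE_ID NOTION_DAILY_DB_ID GIT_REPOS CP_API_URL OLLAMA_BASE_URL; do
  if printenv "$v" >/dev/null; then
    report="$report$v=SET\n"
  else
    report="$report$v=unset\n"
  fi
done
printf "%b" "$report"
```

Anything reported as SET is available to the skill at runtime, whether or not you intended it to be.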
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. It writes to the user's Notion workspace (expected) and can create Notion databases/pages there when needed. Autonomous invocation is allowed by default (not flagged here); combined with the broad data-access scope this is worth weighing, though it is not unusual for skills.
Scan Findings in Context
[system-prompt-override] expected: The skill ships detailed system prompts (daily/weekly/monthly prompt templates) to instruct the LLM. The static scanner flagged system-prompt-override patterns; presence of system prompt templates is expected for an LLM-driven reporting tool, but review these prompts manually because strong/instructional prompts combined with agent autonomy could be abused or cause unexpected LLM behavior.
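To act on the scanner's system-prompt flag, a quick keyword pass over the shipped prompt templates can surface anything that instructs the model to transmit data. This is only a heuristic sketch: the scripts/prompts/*.md path comes from the review's recommendations, and the keyword list is my own assumption, not a complete exfiltration signature.

```shell
# Heuristic scan of the bundled prompt templates for verbs/URLs that could
# indicate instructions to move data off-host. Keywords are an assumption.
if ls scripts/prompts/*.md >/dev/null 2>&1; then
  grep -nEi 'send|upload|post|forward|curl|http' scripts/prompts/*.md \
    || echo "no obvious transmission keywords found"
else
  echo "prompt templates not found; run from the unpacked skill root"
fi
```

A hit is not proof of abuse; read the matching lines in context before deciding.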
What to consider before installing
This skill does what it says (collect local logs, compute stats, ask an LLM to interpret, and write a Notion report), but several practical cautions:
- Metadata is incomplete: although only ANTHROPIC_API_KEY and NOTION_API_KEY are declared as required, the tool uses many other env vars (LANGFUSE_PUBLIC_KEY / SECRET_KEY, NOTION_ROOT_PAGE_ID / NOTION_DAILY_DB_ID, OPENCLAW log/memory dirs, GIT_REPOS, CP_API_URL, OLLAMA_*). Do not supply extra secrets unless you intend the skill to access those services.
- Inspect what will be sent to the LLM/Notion: run the tool in data-only mode (python3 scripts/src/engine.py --mode daily --data-only) or dry-run to print the JSON data packet and the system prompt so you can confirm no sensitive raw logs, secrets, or private repo contents are included. The code truncates memory_context to 6000 chars but can still contain sensitive text.
- Limit scope of sources: set GIT_REPOS to explicit repository paths you trust (avoid defaulting to current dir), and point OPENCLAW_LOG_DIR and OPENCLAW_MEMORY_DIR to directories that don't contain secrets. If you don't use Langfuse, avoid setting LANGFUSE_* secrets — the code will attempt calls if configured.
- Notion token scope: create a Notion integration with minimal page/database access (not full workspace admin) so pages created/modified are limited to a specific area.
- Secrets and network: Anthropic and Notion API keys are used to send data off-host. If you can't risk any outbound transmission of certain content, don't run live mode. Dry-run with a local Ollama is safer for previews.
- Prompt review: because the skill includes assertive system prompts (the scanner flagged a system-prompt pattern), read scripts/prompts/*.md to ensure no prompt instructs the model to exfiltrate data beyond producing the report.
If after these checks the data sources and token scopes look appropriate, the skill appears functionally coherent. If you are unsure which env vars to set or what data will be included, treat this as a red flag and test in an isolated environment first.
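The cautions above can be combined into a single on-host preview run. A sketch, assuming you run it from the unpacked skill root; the --data-only flag and script path are taken from this review, while the repo and directory paths are placeholders to replace with your own:

```shell
# Constrain sources, withhold unneeded secrets, and print the data packet
# instead of sending it. Repo and directory paths below are placeholders.
export GIT_REPOS="$HOME/work/trusted-repo"          # explicit repos only; avoid the cwd default
export OPENCLAW_LOG_DIR="$HOME/.openclaw/logs"      # a log dir known to be free of secrets
export OPENCLAW_MEMORY_DIR="$HOME/.openclaw/memory"
unset LANGFUSE_PUBLIC_KEY LANGFUSE_SECRET_KEY       # don't hand over secrets you don't use
if [ -f scripts/src/engine.py ]; then
  python3 scripts/src/engine.py --mode daily --data-only
else
  echo "engine not found; run from the unpacked skill root"
fi
```

Only after the printed packet looks acceptable would you add the Anthropic and Notion keys for a live run.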
Runtime requirements
🔬 Clawdis
Bins: python3, ollama
Env: ANTHROPIC_API_KEY, NOTION_API_KEY
Primary env: ANTHROPIC_API_KEY
