idea-inbox

v1.1.1

Collects "idea:" / "灵感:" prefixed messages into a Feishu Bitable (by default auto-creating a new table), uses an LLM to generate an AI summary, category, and tags (with support for auto-adding new tags), and pushes a daily digest at a configurable time (default 10:02; nothing is sent on days with 0 new items).

by xiaoxiaoxi@hanxiaolin
MIT-0
Security Scan
VirusTotal: Pending
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match code and SKILL.md: triggers on DM prefixes, classifies items (rules or LLM), creates/updates a Feishu Bitable, and writes local config. No unrelated credentials or binaries are required.
Instruction Scope
SKILL.md and the scripts explicitly read and write local configs (e.g., ~/.openclaw/idea-inbox/config.json, ~/.codex/config.toml, ~/.codex/auth.json, and the OpenClaw config). The skill sends idea text to the configured model provider (base_url + apiKey) for LLM classification. These behaviors are consistent with the stated purpose, but they are material privacy-relevant actions (sending content to an external model and persisting tokens/configs).
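As a concrete illustration of what gets persisted, a hypothetical ~/.openclaw/idea-inbox/config.json might look like the sketch below. Only app_token, table_id, field ids, and ai.enabled are named in this report; the exact key names, nesting, and placeholder values are assumptions:

```json
{
  "app_token": "bascnXXXXXXXX",
  "table_id": "tblXXXXXXXX",
  "field_ids": {
    "summary": "fldXXXX1",
    "category": "fldXXXX2",
    "tags": "fldXXXX3"
  },
  "ai": {
    "enabled": true,
    "base_url": "https://example-provider/v1",
    "apiKey": "sk-..."
  }
}
```

Because this file can hold both Bitable tokens and a model API key, treat it like a credentials file rather than ordinary configuration.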
Install Mechanism
Instruction-only skill (no install spec). Code files are present but there is no remote download or install step. Nothing is fetched from arbitrary URLs during setup.
Credentials
The skill declares no required env vars, but the code intentionally reads local config files (~/.codex/* and OpenClaw config) and allows env overrides (OPENCLAW_CONFIG, CODEX_CONFIG, CODEX_AUTH, IDEA_INBOX_*). This is proportionate to needing model/provider credentials, but users should note the skill will use any model API keys present in those files and may write Bitable tokens into ~/.openclaw/idea-inbox/config.json.
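The env-override behavior described above typically follows a simple precedence rule: an environment variable, when set, wins over the default home-relative path. A minimal sketch, assuming that convention (the helper name `resolve_config_path` is hypothetical, not from the skill's code):

```python
import os
from pathlib import Path

def resolve_config_path(env_var: str, default: str) -> Path:
    """Return the override path from the environment if set,
    otherwise the expanded default under the user's home."""
    override = os.environ.get(env_var)
    return Path(override) if override else Path(default).expanduser()

# With no override set, the default home-relative path is used.
os.environ.pop("CODEX_CONFIG", None)
print(resolve_config_path("CODEX_CONFIG", "~/.codex/config.toml"))

# An explicit override (e.g. CODEX_CONFIG) takes precedence.
os.environ["CODEX_CONFIG"] = "/tmp/alt-config.toml"
print(resolve_config_path("CODEX_CONFIG", "~/.codex/config.toml"))
```

Setting OPENCLAW_CONFIG, CODEX_CONFIG, CODEX_AUTH, or the IDEA_INBOX_* variables to paths you control is therefore a practical way to sandbox which credentials the skill can see.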
Persistence & Privilege
always: false, and the skill only writes its own config under the user's home directory (~/.openclaw/idea-inbox). It does not request to modify other skills or global agent settings.
Assessment
This skill appears internally consistent and does what it says. Important things to consider before installing:

- The skill will read local model/provider config files (~/.codex/config.toml, ~/.codex/auth.json, and your OpenClaw config) and use any API keys it finds to call the model endpoint; it will send the idea text to that model provider. Make sure you trust the provider configured in those files.
- On first run it will create a Feishu (Lark) Bitable app and save app_token, table_id, and field ids to ~/.openclaw/idea-inbox/config.json. That file may contain sensitive tokens; review it and restrict its file permissions if needed.
- If you do not want idea text sent to an LLM, disable AI in the config (ai.enabled=false) so the skill falls back to local rules.
- Review ~/.codex/auth.json and your OpenClaw provider entries to confirm where data will be sent. If you have confidential ideas, ensure the configured model provider is appropriate or disable AI classification.

Confidence is high for this assessment because the code, SKILL.md, and file manifest are coherent, and no installer or unexpected external URLs are present. If you want a lower-privilege mode or proof of exactly which provider will be used at runtime, provide a sample ~/.codex/config.toml and ~/.openclaw/openclaw.json (or confirm they are absent) so the assessment can be re-run against real provider entries.
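The two hardening steps above (opting out of LLM classification and locking down the saved config) can be sketched as follows. The config schema here is an assumption based on the fields this report mentions (app_token, table_id, ai.enabled); the demo writes to a temp directory rather than the real ~/.openclaw path:

```python
import json
import os
import stat
import tempfile
from pathlib import Path

# Hypothetical config mirroring the fields named in the scan report.
cfg_path = Path(tempfile.mkdtemp()) / "config.json"
cfg = {"app_token": "bascnXXXX", "table_id": "tblXXXX", "ai": {"enabled": True}}

# Opt out of LLM classification so idea text never leaves the machine;
# the skill then falls back to its local rule-based tagging.
cfg["ai"]["enabled"] = False
cfg_path.write_text(json.dumps(cfg, indent=2))

# Restrict the file to the owner only, since it may hold Bitable tokens.
os.chmod(cfg_path, 0o600)
print(oct(stat.S_IMODE(cfg_path.stat().st_mode)))  # 0o600
```

On a real install the equivalent is editing ~/.openclaw/idea-inbox/config.json and running chmod 600 on it.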

Like a lobster shell, security has layers — review code before you run it.

Tags: bitable (vk977whbp613v3ev83tqyy2c27s83wzyp) · feishu (vk977whbp613v3ev83tqyy2c27s83wzyp) · latest (vk97281yfqyd0c61rsfexjnjv5h83yzg8) · stable (vk977whbp613v3ev83tqyy2c27s83wzyp)

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
