Work Helper

v1.0.0

Personal work assistant for sysops consultants and freelancers. Use when: logging an activity, taking a note, creating a reminder, generating a recap or CRA (compte rendu d'activité, an activity report)...

by Romain (@romain-grosos)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (personal work helper for consultants) align with the actual code and runtime behavior: local JSON stores for journals, notes, and reminders; LLM-powered recaps and CRA generation; PDF ingestion via a Vision API; and optional mail-client/nextcloud integration. No unrelated credentials, cloud providers, or binaries are required beyond an (optional) LLM API key and the other optional skills.
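The local JSON store pattern described above can be sketched minimally as follows; the data path comes from the review, but the file name and entry schema are assumptions, not taken from the skill's code:

```python
import json
from datetime import date
from pathlib import Path

# Data path taken from the review; "journal.json" and the entry fields are illustrative.
DATA_DIR = Path.home() / ".openclaw" / "data" / "work-helper"

def log_activity(text: str, store: Path = DATA_DIR / "journal.json") -> None:
    """Append one dated entry to a local JSON journal file."""
    store.parent.mkdir(parents=True, exist_ok=True)
    entries = json.loads(store.read_text()) if store.exists() else []
    entries.append({"date": date.today().isoformat(), "text": text})
    store.write_text(json.dumps(entries, indent=2))
```

Everything stays on local disk in this model; data only leaves the machine when the LLM or export integrations are enabled.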
Instruction Scope
SKILL.md and the scripts instruct the agent to read and write local OpenClaw config/data paths and to call other optional skills (mail-client, nextcloud-files) to fetch attachments or push exports. This is appropriate for the stated purpose, but it means the skill will access email attachments (via mail-client) and local PDFs, and may send their contents to an external LLM service when llm.enabled=true: a privacy/data-exfiltration consideration that is expected but important to review.
Install Mechanism
No install spec; the skill is instruction- and script-based, using only the Python standard library. Nothing is downloaded from arbitrary URLs or fetched at install time, minimizing supply-chain risk.
Credentials
The skill does not declare required environment variables, but it will read an API key from ~/.openclaw/secrets/openai_api_key (falling back to the OPENAI_API_KEY environment variable). This is proportional to the LLM features. Two notes: (1) the README indicates the API key file may be 'shared with veille'; a shared key increases the blast radius across skills. (2) Enabling LLM features causes potentially sensitive local data (journal entries, transcribed PDFs) to be sent to the configured base_url, so users should use a dedicated key and a trusted LLM endpoint.
Persistence & Privilege
always:false and default autonomous invocation are standard. The skill writes only to its own config and data paths under ~/.openclaw and emits cron payloads for reminders; it does not modify other skills' configs or system-wide settings. It calls other skills' scripts via subprocess, but validates that their paths resolve under the skills directory.
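The containment check the review describes amounts to resolving the target path and refusing anything outside the skills directory. A minimal sketch, assuming the directory location and function name (both illustrative, not from the skill's code):

```python
import subprocess
from pathlib import Path

# Assumed location of installed skills; the real path may differ.
SKILLS_DIR = Path.home() / ".openclaw" / "skills"

def run_skill_script(script: str, *args: str, skills_dir: Path = SKILLS_DIR):
    """Resolve symlinks and '..' first, then refuse to execute anything
    that lands outside the skills directory."""
    resolved = Path(script).resolve()
    if not resolved.is_relative_to(skills_dir.resolve()):
        raise ValueError(f"refusing to run script outside skills dir: {resolved}")
    return subprocess.run([str(resolved), *args], capture_output=True, text=True)
```

Resolving before comparing matters: a naive string-prefix check would accept paths like `~/.openclaw/skills/../../evil.sh`.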
Assessment
This skill is coherent with its description. Before installing or enabling LLM features:

1. Recognize that setting llm.enabled=true will send journal text and transcribed PDFs to the configured LLM API; use a dedicated API key and a trusted endpoint.
2. The API key is read from ~/.openclaw/secrets/openai_api_key or OPENAI_API_KEY; avoid sharing that file across unrelated tooling.
3. The ingest feature uses the mail-client skill to fetch email attachments; review and trust the mail-client skill, because it can expose email content to this skill.
4. Data and downloaded PDFs persist under ~/.openclaw/data/work-helper; remove them manually if you uninstall.

If you need stronger isolation, keep LLM disabled, use a separate API key with minimal privileges, and review the mail-client/nextcloud integration before enabling exports.
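A privacy-conservative starting configuration might look like the fragment below. Only the llm.enabled flag and base_url are mentioned in the review above; the surrounding structure and the endpoint value are illustrative, so check the skill's actual config schema before relying on them:

```json
{
  "llm": {
    "enabled": false,
    "base_url": "https://api.openai.com/v1"
  }
}
```

With enabled set to false, journal entries and transcribed PDFs stay local; flipping it to true is the point at which local data starts flowing to base_url.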

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cw4sz199r76bj26n5xp5z5n82tkay


Runtime requirements

📋 Clawdis
