meeting-autopilot
Review
Audited by ClawScan on May 1, 2026.
Overview
Meeting Autopilot appears purpose-aligned, but it processes sensitive transcripts through external LLM APIs and saves extracted meeting history locally by default.
Install only if you are comfortable sending meeting transcripts to Anthropic/OpenAI and storing extracted meeting items locally. For confidential meetings, check your organization’s AI/data policy, consider `--no-history`, and review generated email/ticket drafts before sending or filing them.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The skill can run its bundled shell pipeline and read/write transcript-related files while producing the report.
The skill explicitly asks the agent to run local scripts, read transcript files, write outputs, and make network API calls. This is central to its purpose, but users should be aware that installing it grants local execution together with file and network access.
    permissions:
      exec: true      # Run extraction scripts
      read: true      # Read transcript files
      write: true     # Save history and reports
      network: true   # LLM API calls
Use it only on intended transcript files, keep output directories scoped, and review the scripts/options if your meetings contain sensitive content.
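Before the first run, it can help to look at what the bundled scripts actually do. A minimal pre-install inspection sketch follows; the install path is an assumption (skills may live elsewhere on your system), not something stated in this review:

```shell
# Assumed install location -- adjust to wherever the skill is unpacked.
SKILL_DIR="${SKILL_DIR:-$HOME/.claude/skills/meeting-autopilot}"

if [ -d "$SKILL_DIR" ]; then
  # List the shell scripts the skill would ask the agent to execute.
  find "$SKILL_DIR" -name '*.sh' -print
  # Surface any direct network calls in the bundle (curl/wget).
  grep -rn 'curl\|wget' "$SKILL_DIR" || true
else
  echo "skill not installed at $SKILL_DIR"
fi
```

This only surfaces the obvious network calls; it is a quick sanity check, not a substitute for reading the scripts.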
Your Anthropic/OpenAI account credentials are used for transcript analysis requests.
The skill requires a provider API key to call Anthropic or OpenAI. This is expected for the LLM-processing purpose, but it uses the user's provider account and may incur cost or expose submitted data to that provider.
- **ANTHROPIC_API_KEY** or **OPENAI_API_KEY** environment variable
Use an appropriate API key, prefer HTTPS provider endpoints, and confirm your provider's retention/privacy settings before processing sensitive meetings.
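Since the skill needs exactly these environment variables, a small pre-flight check can confirm a key is present without ever printing its value. The `has_provider_key` helper below is hypothetical, not part of the skill:

```shell
# Hypothetical helper: true if either disclosed provider key is set.
# Uses ${VAR:-} so it is safe under `set -u`, and never echoes the key.
has_provider_key() {
  [ -n "${ANTHROPIC_API_KEY:-}" ] || [ -n "${OPENAI_API_KEY:-}" ]
}

if has_provider_key; then
  echo "provider key present"
else
  echo "no provider key set; export ANTHROPIC_API_KEY or OPENAI_API_KEY" >&2
fi
```

Keeping the key out of shell history and logs (export it from a secrets manager or a non-versioned file) matters more than the check itself.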
The skill may appear to require no dependencies or credentials in registry metadata even though it needs local tools and an LLM API key to run.
The registry metadata under-declares requirements that are disclosed in the skill files, including bash/jq/python3/curl and Anthropic/OpenAI API keys. This is not hidden behavior, but automated install surfaces may not warn users.
Required binaries (all must exist): none
Required env vars: none
Primary credential: none
Read SKILL.md/README before installing and ensure the required tools and API key setup are acceptable.
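Because the registry metadata omits them, the disclosed tool requirements are worth verifying by hand before installing. A minimal check for the binaries named in the skill files (bash, jq, python3, curl):

```shell
# Verify the tools SKILL.md discloses but the registry metadata omits.
missing=""
for tool in bash jq python3 curl; do
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done

if [ -n "$missing" ]; then
  echo "missing:$missing"
else
  echo "all disclosed tools present"
fi
```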
Sensitive meeting content may leave your machine and be processed under the selected LLM provider's policies.
The skill sends transcript content to an external LLM provider. This is clearly disclosed and purpose-aligned, but meeting transcripts can contain confidential business, personnel, or financial information.
- Transcript content IS sent to the configured LLM API (Anthropic or OpenAI) for processing
- The LLM provider's data-handling policies apply
Do not process meetings that your organization prohibits sending to external AI services; review provider data-handling settings first.
Extracted action items, decisions, and related meeting details may remain on disk after the report is generated.
The skill persists extracted meeting items locally for future cross-meeting tracking. This is disclosed and scoped, but it can retain sensitive commitments or decisions beyond the current session.
Items are automatically saved to `~/.meeting-autopilot/history/` as a preview of the v1.1 feature that tracks commitments across meetings.
Use `--no-history` for sensitive meetings or delete `~/.meeting-autopilot/history/` when you do not want retained meeting records.
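For records already on disk, cleanup is a single directory removal at the path disclosed above. A sketch (the `rm -rf` is destructive, so the path is checked first):

```shell
# Purge retained meeting records; `--no-history` avoids writing them
# in the first place, this removes anything already saved.
HISTORY_DIR="$HOME/.meeting-autopilot/history"

if [ -d "$HISTORY_DIR" ]; then
  rm -rf "$HISTORY_DIR"
  echo "removed $HISTORY_DIR"
else
  echo "no retained history at $HISTORY_DIR"
fi
```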
