meeting-autopilot

Review

Audited by ClawScan on May 1, 2026.

Overview

Meeting Autopilot appears purpose-aligned, but it processes sensitive transcripts through external LLM APIs and saves extracted meeting history locally by default.

Install only if you are comfortable sending meeting transcripts to Anthropic/OpenAI and storing extracted meeting items locally. For confidential meetings, check your organization’s AI/data policy, consider `--no-history`, and review generated email/ticket drafts before sending or filing them.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Local script execution with file and network access

What this means

The skill can run its bundled shell pipeline and read/write transcript-related files while producing the report.

Why it was flagged

The skill explicitly asks the agent to run local scripts, read transcript files, write outputs, and make network API calls. This is central to its purpose, but users should notice the local execution and file/network access.

Skill content
permissions:
  exec: true          # Run extraction scripts
  read: true          # Read transcript files
  write: true         # Save history and reports
  network: true       # LLM API calls

Recommendation

Use it only on intended transcript files, keep output directories scoped, and review the scripts/options if your meetings contain sensitive content.
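One way to keep the skill's read/write access scoped, sketched below, is to run it from a dedicated working directory that contains only the intended transcript and a fresh output folder. The final invocation line is commented out and illustrative; check the skill's documentation for the real entry point.

```shell
# Run the skill from a throwaway directory so its file access stays scoped
# to one transcript and one output folder.
workdir="$(mktemp -d)"
printf 'Alice: ship the fix by Friday.\n' > "$workdir/transcript.txt"  # the one intended file
mkdir "$workdir/report"                                                # scoped output directory
cd "$workdir"
# meeting-autopilot transcript.txt --output ./report    # illustrative invocation
echo "working dir: $workdir"
```

This limits what the skill's `read`/`write` permissions can touch even if a script misbehaves.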

Finding 2: Uses your Anthropic/OpenAI API key

What this means

Your Anthropic/OpenAI account credentials are used for transcript analysis requests.

Why it was flagged

The skill requires a provider API key to call Anthropic or OpenAI. This is expected for the LLM-processing purpose, but it uses the user's provider account and may incur cost or expose submitted data to that provider.

Skill content
- **ANTHROPIC_API_KEY** or **OPENAI_API_KEY** environment variable

Recommendation

Use a dedicated API key rather than one shared with other workloads, prefer HTTPS provider endpoints, and confirm your provider's retention/privacy settings before processing sensitive meetings.
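A quick pre-run sanity check along these lines can confirm a key is present and the endpoint uses HTTPS. `ANTHROPIC_API_KEY`/`OPENAI_API_KEY` are documented by the skill; the `LLM_API_URL` override variable is hypothetical.

```shell
# Check that a provider key is set without printing its value.
if [ -n "${ANTHROPIC_API_KEY:-}" ] || [ -n "${OPENAI_API_KEY:-}" ]; then
  echo "provider key: set"
else
  echo "provider key: missing"
fi

# Warn if a configured endpoint is not HTTPS (variable name is an assumption;
# the Anthropic default URL is used only as a placeholder).
endpoint="${LLM_API_URL:-https://api.anthropic.com}"
case "$endpoint" in
  https://*) echo "endpoint uses HTTPS: $endpoint" ;;
  *)         echo "warning: non-HTTPS endpoint: $endpoint" ;;
esac
```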

Finding 3: Registry metadata under-declares requirements

What this means

The skill may appear to require no dependencies or credentials in registry metadata even though it needs local tools and an LLM API key to run.

Why it was flagged

The registry metadata under-declares requirements that are disclosed in the skill files, including bash/jq/python3/curl and Anthropic/OpenAI API keys. This is not hidden behavior, but automated install surfaces may not warn users.

Skill content
Required binaries (all must exist): none ... Required env vars: none ... Primary credential: none

Recommendation

Read SKILL.md/README before installing and ensure the required tools and API key setup are acceptable.
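Since the registry metadata declares no requirements, a preflight check like the sketch below can verify what the skill files actually disclose: the bash/jq/python3/curl binaries and a provider API key.

```shell
# Preflight: confirm the tools and credential the skill needs but the
# registry metadata omits. Tool list comes from the skill files.
missing=0
for tool in bash jq python3 curl; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing tool: $tool"; missing=1; }
done
if [ -z "${ANTHROPIC_API_KEY:-}" ] && [ -z "${OPENAI_API_KEY:-}" ]; then
  echo "no provider API key set (ANTHROPIC_API_KEY or OPENAI_API_KEY)"
  missing=1
fi
if [ "$missing" -eq 0 ]; then
  echo "preflight ok"
else
  echo "preflight incomplete"
fi
```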

Finding 4: Transcript content sent to an external LLM provider

What this means

Sensitive meeting content may leave your machine and be processed under the selected LLM provider's policies.

Why it was flagged

The skill sends transcript content to an external LLM provider. This is clearly disclosed and purpose-aligned, but meeting transcripts can contain confidential business, personnel, or financial information.

Skill content
Transcript content IS sent to the configured LLM API (Anthropic or OpenAI) for processing
- The LLM provider's data handling policies apply

Recommendation

Do not process meetings that your organization prohibits sending to external AI services; review provider data-handling settings first.
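A cheap local pre-scan, sketched below, can catch obviously marked-sensitive transcripts before anything leaves the machine. The sample transcript and keyword list are illustrative; substitute your organization's actual policy terms.

```shell
# Scan a transcript for policy markers before external LLM processing.
transcript="$(mktemp)"
printf 'Q3 revenue figures are confidential until the board call.\n' > "$transcript"  # sample

if grep -Eiq 'confidential|proprietary|do not distribute' "$transcript"; then
  echo "sensitive markers found: review before external processing"
else
  echo "no obvious markers: still apply your data policy"
fi
```

A clean scan is not a green light; it only filters the obvious cases.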

Finding 5: Extracted meeting history retained on disk

What this means

Extracted action items, decisions, and related meeting details may remain on disk after the report is generated.

Why it was flagged

The skill persists extracted meeting items locally for future cross-meeting tracking. This is disclosed and scoped, but it can retain sensitive commitments or decisions beyond the current session.

Skill content
Items are automatically saved to `~/.meeting-autopilot/history/`. Mention this — it's a preview of the v1.1 feature that tracks commitments across meetings.

Recommendation

Use `--no-history` for sensitive meetings or delete `~/.meeting-autopilot/history/` when you do not want retained meeting records.
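In practice that looks like the sketch below: skip retention for the sensitive meeting, then clear anything saved earlier. The `--no-history` flag and the history path come from the skill docs; the invocation name itself is an assumption, so that line is commented out.

```shell
# meeting-autopilot sensitive-sync.txt --no-history    # illustrative invocation

# Remove previously retained meeting records, if any.
history_dir="$HOME/.meeting-autopilot/history"
if [ -d "$history_dir" ]; then
  rm -rf "$history_dir"
  echo "history cleared: $history_dir"
else
  echo "no retained history at $history_dir"
fi
```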