Airpoint
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: airpoint
Version: 1.3.16

The skill bundle acts as an interface to the `airpoint` macOS application, which is designed for AI-driven computer control. The `SKILL.md` provides instructions for the OpenClaw agent to use the `airpoint` CLI for tasks like opening apps, clicking UI elements, and reading the screen. The `airpoint` application itself requires extensive macOS permissions (Accessibility, Screen Recording) and therefore holds powerful capabilities that could be misused, but the skill bundle's content contains no evidence of intentional malicious behavior, data exfiltration, persistence mechanisms, or prompt injection attempts that would push the OpenClaw agent outside its stated purpose. The instructions are clear, aligned with the described functionality, and do not direct the agent to perform harmful actions.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes. The notes below are informational observations, not security findings.
**Risk:** A mistaken, overbroad, or poorly reviewed instruction could cause the agent to read private content, type into the wrong app, change settings, send messages, or take other actions on the Mac.

**Assessment:** This exposes broad computer-control authority through a natural-language agent. The capability is disclosed and purpose-aligned, but the skill does not bound which apps, data, settings, messages, or transactions the agent may interact with.

**Evidence:**
> Airpoint gives you an AI agent that can **see and control a Mac** — open apps, click UI elements, read on-screen text, type, scroll, drag, and manage windows... planning actions, executing them, and verifying the result.

**Mitigation:** Use only for narrowly described tasks, watch the agent while it runs, avoid sensitive screens, and require explicit user confirmation before account changes, purchases, sending messages, deleting data, or changing system/security settings.
**Risk:** The agent may continue a multi-step workflow across applications without intermediate review, increasing the chance that one bad interpretation affects private data or account state.

**Assessment:** The recommended default workflow is to delegate broad, multi-step desktop actions to the external agent. The artifact documents cancellation with `airpoint stop`, but not approval checkpoints or action limits for sensitive operations.

**Evidence:**
> Use `airpoint ask` for almost everything. The agent can read the screen, interact with any app, and chain multi-step workflows autonomously.

**Mitigation:** Prefer short, reversible tasks. Ask Airpoint to pause before sensitive steps, and be ready to run `airpoint stop` if it begins an unexpected action.
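The `airpoint ask` and `airpoint stop` commands are the ones the skill documents. A minimal sketch of scoping a single task; the task text is an illustrative example, and the guard makes the snippet a no-op on machines without the CLI installed:

```shell
#!/bin/sh
# Guarded so the example does nothing where the Airpoint CLI is absent.
if command -v airpoint >/dev/null 2>&1; then
  # A narrowly scoped, reversible task (illustrative wording only):
  airpoint ask "Open Safari and read the title of the front tab"
fi

# If the agent begins an unexpected action, cancel it from another
# terminal with the documented stop command:
#   airpoint stop
```

Keeping each `ask` to one small, observable step is what makes `airpoint stop` a usable safety valve.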
**Risk:** Once granted these permissions, Airpoint can observe sensitive on-screen information and control mouse/keyboard input across the Mac.

**Assessment:** These macOS permissions are necessary for the stated computer-use function and are disclosed, but they are powerful privileges that allow observation and input control beyond a single app.

**Evidence:**
> macOS permissions... **Accessibility** — required for mouse/keyboard control. **Screen Recording** — required for screenshots and screen perception.

**Mitigation:** Grant these permissions only if you trust Airpoint, review macOS Privacy & Security settings, and revoke permissions when you no longer need the tool.
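Both permissions can also be revoked from the command line. A sketch using macOS's built-in `tccutil`; the bundle identifier is inferred from the session-storage path quoted elsewhere in this report and may differ in practice:

```shell
#!/bin/sh
# Revoke Airpoint's privacy grants when the tool is no longer needed.
# tccutil ships with macOS; elsewhere this snippet falls through.
if command -v tccutil >/dev/null 2>&1; then
  tccutil reset Accessibility com.medhuelabs.airpoint
  tccutil reset ScreenCapture com.medhuelabs.airpoint
else
  echo "tccutil not found (not macOS?); use System Settings > Privacy & Security instead" >&2
fi
```

`tccutil reset` clears the grant entirely, so Airpoint will re-prompt if it is launched again.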
**Risk:** Misconfiguration or misuse could consume paid API quota, and task data may be handled by the selected model provider.

**Assessment:** The skill requires model-provider credentials configured in the Airpoint app. This is expected for the advertised AI agent, but users should understand that provider keys may incur cost and grant access to model APIs.

**Evidence:**
> **AI model API key (required).** Set an API key for the chosen provider... OpenAI... Anthropic and Google Gemini are also supported.

**Mitigation:** Use dedicated API keys where possible, monitor usage and billing, and rotate or revoke keys if Airpoint is no longer used.
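One way to keep a dedicated key out of plaintext files and shell history is the macOS Keychain. A sketch using the built-in `security` CLI; the service name `airpoint-openai` and the placeholder key value are examples, not anything Airpoint defines:

```shell
#!/bin/sh
# Store the provider key in the Keychain under a dedicated service
# name, then read it back when configuring the Airpoint app.
# Guarded: `security` only exists on macOS.
if command -v security >/dev/null 2>&1; then
  # -U updates the item in place if it already exists.
  security add-generic-password -U -s airpoint-openai -a "$USER" -w "sk-example-placeholder"

  # Print the stored key (e.g. to paste into the app's settings).
  security find-generic-password -s airpoint-openai -w
fi
```

A dedicated, Keychain-stored key is also easy to revoke on the provider's dashboard if Airpoint is retired.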
**Risk:** Sensitive information visible on the screen could be included in screenshots processed for UI perception.

**Assessment:** The artifact indicates screenshots may be analyzed by a model-provider component to locate UI targets. This is aligned with the purpose, but visible screen contents can include private data and the artifact does not describe provider retention or filtering boundaries.

**Evidence:**
> a Google Gemini API key enables the visual locator — a secondary model (`gemini-3-flash-preview`) that finds UI targets on screen by analyzing screenshots.

**Mitigation:** Close or hide sensitive windows before use, review Airpoint and provider privacy policies, and avoid using the agent on highly confidential screens unless you accept that data flow.
**Risk:** Local session screenshots could expose private information to anyone or any process with access to that user profile.

**Assessment:** The skill documents local screenshot artifacts saved in Airpoint session storage. This is useful for verification but may retain sensitive screen context after a task.

**Evidence:**
> One or more **screenshot file paths** showing the screen state after the task... `/Users/you/Library/Application Support/com.medhuelabs.airpoint/sessions/.../screenshots/step_3.png`

**Mitigation:** Check whether Airpoint provides session cleanup, delete unneeded screenshots, and avoid leaving sensitive information visible during tasks.
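If Airpoint offers no built-in cleanup, the session store can be pruned manually. A sketch, assuming the sessions root implied by the path documented above; the 7-day retention window is an arbitrary example, not an Airpoint default:

```shell
#!/bin/sh
# Prune Airpoint session screenshots older than 7 days. The directory
# layout is inferred from the documented screenshot path; adjust if
# the app stores sessions elsewhere.
SESSIONS_DIR="$HOME/Library/Application Support/com.medhuelabs.airpoint/sessions"

if [ -d "$SESSIONS_DIR" ]; then
  # -print shows what is removed; drop it for silent cleanup.
  find "$SESSIONS_DIR" -type f -name '*.png' -mtime +7 -print -delete
fi
```

Running something like this from a periodic job limits how long sensitive screen captures linger on disk.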
**Risk:** Trust in the actual Mac-control behavior depends on the external Airpoint app and CLI, not just this skill file.

**Assessment:** The reviewed skill is instruction-only and depends on a separately installed external app/CLI. That is normal for this integration, but the executable behavior is outside the provided artifact set.

**Evidence:**
> **Airpoint app** — must be running. Download from [airpoint.app](https://airpoint.app). **Airpoint CLI** — the `airpoint` command must be on PATH. Install it from the Airpoint app.

**Mitigation:** Install only from the official vendor source, keep the app updated, and verify the vendor before granting Accessibility or Screen Recording permissions.
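A quick way to confirm the CLI prerequisite before relying on the skill; `command -v` is standard POSIX, and the snippet assumes nothing about the CLI's flags:

```shell
#!/bin/sh
# Check that the airpoint command is reachable on PATH and report
# where it resolves, so the binary's origin can be inspected.
if command -v airpoint >/dev/null 2>&1; then
  echo "airpoint CLI found at: $(command -v airpoint)"
else
  echo "airpoint CLI not on PATH; install it from the Airpoint app" >&2
fi
```

Checking the resolved path also helps verify the binary came from the official install location rather than an unexpected directory.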
