Install
openclaw skills install airpoint
Control a Mac through natural language — open apps, click buttons, read the screen, type text, manage windows, and automate multi-step tasks via Airpoint's AI computer-use agent.
Airpoint gives you an AI agent that can see and control a Mac — open apps, click UI elements, read on-screen text, type, scroll, drag, and manage windows. You give it a natural-language instruction and it carries out the task autonomously by perceiving the screen (accessibility tree + screenshots + visual locator), planning actions, executing them, and verifying the result.
Everything runs through the airpoint CLI.
Before using Airpoint's AI agent, the user must install and configure it in the Airpoint app:
1. The airpoint command must be on PATH. Install it from the Airpoint app: Settings → Plugins → Install CLI.
2. Configure the agent model (Settings → Assistant). gpt-5.1 with reasoning effort low gives the best balance of cost, speed, and quality.
3. Optionally, enable a visual locator model (e.g. gemini-3-flash-preview) that finds UI targets on screen by analyzing screenshots. Without it, the agent relies on the accessibility tree only.
If the user reports that airpoint ask fails or the agent can't see the screen, ask them to verify steps 1–3 above.
Use airpoint ask "<your instruction>" to send a task to the on-device agent, and airpoint stop to cancel it.
Example flow:
> airpoint ask "open Safari and search for 'OpenClaw'"
Opened Safari, typed 'OpenClaw' into the address bar, and pressed Enter.
The search results page is now displayed.
1 screenshot(s) saved to session abc123
└ screenshots/step_3.png (/Users/you/Library/Application Support/com.medhuelabs.airpoint/sessions/abc123/screenshots/step_3.png)
After receiving this, show the screenshot to the user so they can see what happened.
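The screenshot line follows a recognizable shape, so a wrapper script can pull out the absolute path for display. A minimal sketch using a sample of that output (the sed pattern simply extracts the parenthesized absolute path; the output text here mirrors the example above and is not a guaranteed format):

```shell
# Hypothetical sample of `airpoint ask` tail output, used to show how the
# absolute screenshot path can be extracted for display.
output='1 screenshot(s) saved to session abc123
└ screenshots/step_3.png (/Users/you/Library/Application Support/com.medhuelabs.airpoint/sessions/abc123/screenshots/step_3.png)'

# The absolute path is the parenthesized part of the └ line.
path=$(printf '%s\n' "$output" | sed -n 's/.*(\(\/[^)]*\)).*/\1/p')
echo "$path"
```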
This is the most important command. It sends a natural-language task to Airpoint's built-in computer-use agent, which can see the screen, move the mouse, click, type, scroll, open apps via Spotlight, manage windows, and verify its own actions.
# Synchronous — waits for the agent to finish (up to 5 min) and returns output
airpoint ask "open Safari and go to github.com"
airpoint ask "what's on my screen right now?"
airpoint ask "find the Slack notification and read it"
airpoint ask "open System Settings and enable Dark Mode"
airpoint ask "open Mail, find the latest email from John, and summarize it"
# Fire-and-forget — returns immediately
airpoint ask "open Spotify and play my liked songs" --no-wait
# Show the assistant panel on screen while running
airpoint ask "open System Settings and enable Dark Mode" --show-panel
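When scripting, the choice between blocking and fire-and-forget can be made programmatically. A small sketch that assembles the invocation (printed rather than executed here so it runs anywhere; long_running is an illustrative stand-in for whatever heuristic the caller uses, not part of the CLI):

```shell
# Build an `airpoint ask` invocation, appending --no-wait for tasks the
# caller does not want to block on.
task="open Spotify and play my liked songs"
long_running=true   # illustrative flag, not part of the CLI

set -- airpoint ask "$task"
[ "$long_running" = true ] && set -- "$@" --no-wait
cmd="$*"
printf '%s\n' "$cmd"
```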
airpoint stop
Cancels the currently running assistant task. Use this if a task is stuck or taking too long.
airpoint see
Returns a screenshot of the current display. Useful for verifying state before
or after issuing an ask command.
airpoint status
airpoint status --json
Returns app version and current state (tracking active, etc.).
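Scripts can gate on status before sending a task. A sketch, with the caveat that the JSON field names used here (version, tracking) are assumptions for illustration, not the documented schema:

```shell
# Stand-in JSON for `airpoint status --json`; the field names are assumed.
status='{"version":"1.4.0","tracking":false}'

# Crude substring check; use a real JSON parser (jq, python) in practice.
case "$status" in
  *'"tracking":true'*) tracking_state="on" ;;
  *)                   tracking_state="off" ;;
esac
echo "tracking is $tracking_state"
```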
Airpoint also supports hands-free cursor control via camera-based hand tracking. These commands start/stop that feature:
airpoint tracking on
airpoint tracking off
airpoint tracking # show current state
airpoint settings list # all current settings
airpoint settings list --json # machine-readable
airpoint settings get cursor.sensitivity
airpoint settings set cursor.sensitivity 1.5
Common settings: cursor.sensitivity (default 1.0), cursor.acceleration
(default true), scroll.sensitivity (default 1.0), scroll.inertia
(default true).
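Because settings values are plain numbers and booleans, scripted adjustments are straightforward. A minimal sketch that computes a bumped cursor.sensitivity with clamping (the 0.1–3.0 valid range is an assumption, not documented here):

```shell
# Compute a new cursor sensitivity, clamped to an assumed 0.1–3.0 range.
current=1.0   # stand-in for: airpoint settings get cursor.sensitivity
next=$(awk -v c="$current" 'BEGIN { n = c + 0.5; if (n > 3.0) n = 3.0; if (n < 0.1) n = 0.1; printf "%.1f", n }')
echo "$next"  # value to pass to: airpoint settings set cursor.sensitivity "$next"
```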
airpoint vitals # CPU, RAM, temperature
airpoint vitals --json
airpoint open # opens/focuses the Airpoint macOS app
Use airpoint ask for almost everything. The agent can read the screen, interact with any app, and chain multi-step workflows autonomously. Use --json when you need to parse output programmatically.