Install
openclaw skills install midscene-ios-automation

Vision-driven iOS device automation using Midscene CLI. Operates entirely from screenshots — no DOM or accessibility labels required. It can interact with all visible elements on screen regardless of technology stack, letting you control iOS devices with natural language commands via WebDriverAgent.

Triggers: ios, iphone, ipad, ios app, tap on iphone, swipe, mobile app ios, ios device, ios testing, iphone automation, ipad automation, ios screen, ios navigate, test ios app, verify on iphone, QA on ipad, check the app on ios, test on ios device, see if the app works on iphone, end-to-end test on ios, visual verification on ios

Powered by Midscene.js (https://midscenejs.com)

CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:
- Never run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
- Run only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
- Allow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex act commands may need even longer.
- Always report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.
Automate iOS devices using npx -y @midscene/ios@1. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots.
What act Can Do
Inside a single act call on iOS, Midscene can tap, double-tap, long-press, type, clear text, scroll, drag items, zoom with two fingers, press keys, and use system navigation such as Home or the app switcher, all while working from the current visible screen.
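For example, a single act call can chain several of these gestures. The prompt below is purely illustrative and not tied to any specific app:
npx -y @midscene/ios@1 act --prompt "long-press the first photo, drag it onto the Favorites album, then zoom out with two fingers"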
Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a .env file in the current working directory (Midscene loads .env automatically):
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
Example: Gemini (Gemini-3-Flash)
MIDSCENE_MODEL_API_KEY="your-google-api-key"
MIDSCENE_MODEL_NAME="gemini-3-flash"
MIDSCENE_MODEL_BASE_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
MIDSCENE_MODEL_FAMILY="gemini"
Example: Qwen 3.5
MIDSCENE_MODEL_API_KEY="your-aliyun-api-key"
MIDSCENE_MODEL_NAME="qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
MIDSCENE_MODEL_FAMILY="qwen3.5"
MIDSCENE_MODEL_REASONING_ENABLED="false"
# If using OpenRouter, set:
# MIDSCENE_MODEL_API_KEY="your-openrouter-api-key"
# MIDSCENE_MODEL_NAME="qwen/qwen3.5-plus"
# MIDSCENE_MODEL_BASE_URL="https://openrouter.ai/api/v1"
Example: Doubao Seed 2.0 Lite
MIDSCENE_MODEL_API_KEY="your-doubao-api-key"
MIDSCENE_MODEL_NAME="doubao-seed-2-0-lite"
MIDSCENE_MODEL_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
MIDSCENE_MODEL_FAMILY="doubao-seed"
Commonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.
If the model is not configured, ask the user to set it up. See Model Configuration for supported providers.
npx -y @midscene/ios@1 connect
Use the built-in launch capability when you want to start from a known app or route before the rest of the task. Give it the most specific target you have, such as a bundle ID, web URL, deep link, or phone/mail link. Typical targets include com.apple.Preferences, https://www.apple.com, myapp://profile/user/123, and tel:+1234567890.
Use this when the task needs lower-level device control instead of a normal visible UI interaction:
npx -y @midscene/ios@1 runwdarequest --method GET --endpoint /wda/screen
This does not run an ADB command. On iOS, the underlying operation is an HTTP request to WebDriverAgent, typically GET http://<wdaHost>:<wdaPort>/session/<sessionId>/wda/screen.
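Other WebDriverAgent endpoints can be reached the same way. A sketch using the standard WebDriver status endpoint (confirm which endpoints your WDA build actually exposes):
npx -y @midscene/ios@1 runwdarequest --method GET --endpoint /status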
npx -y @midscene/ios@1 take_screenshot
After taking a screenshot, read the saved image file to understand the current screen state before deciding the next action.
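A minimal screenshot-analyze-act loop therefore looks like this (the act prompt is illustrative; read the saved image between steps):
npx -y @midscene/ios@1 take_screenshot
# read the saved image, decide the next action, then:
npx -y @midscene/ios@1 act --prompt "tap the Wi-Fi row"
npx -y @midscene/ios@1 take_screenshot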
Use act to interact with the device and get the result. It autonomously handles all UI interactions internally — tapping, typing, scrolling, swiping, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe what you want to do and the desired effect in natural language:
# specific instructions
npx -y @midscene/ios@1 act --prompt "type hello world in the search field and press Enter"
npx -y @midscene/ios@1 act --prompt "tap Delete, then confirm in the alert dialog"
# or target-driven instructions
npx -y @midscene/ios@1 act --prompt "open Settings and navigate to Wi-Fi, tell me the connected network name"
Use assert to verify that the current screen satisfies a natural language condition. It does not perform UI actions; it checks the visible screen state and passes only when the assertion is true. Use this for validation, QA checks, and final state verification after act.
npx -y @midscene/ios@1 assert --prompt "there is a login button visible"
npx -y @midscene/ios@1 assert --prompt "the settings screen shows Wi-Fi and Bluetooth options"
When the user provides a screenshot, icon, logo, or reference image and wants an exact visual match, prefer tap --locate instead of a generic act --prompt. Pass --locate as JSON. The prompt describes the target, images supplies named reference images, and convertHttpImage2Base64: true is useful when the image URL may not be directly accessible to the model.
npx -y @midscene/ios@1 tap --locate '{
"prompt": "tap the area contains the image",
"images": [
{
"name": "target image",
"url": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png"
}
],
"convertHttpImage2Base64": true
}'
The same locate JSON shape also works for other commands that accept a locate parameter.
npx -y @midscene/ios@1 disconnect
The generated HTML report is recommended for human reading first. It includes step-by-step execution details and replay videos for each operation, which makes it much easier to understand what happened and troubleshoot problems.
If another skill or tool needs to consume the report, first convert it with report-tool from the same platform CLI package. Prefer Markdown for LLM-based workflows. Use JSON when the report needs to be processed programmatically.
npx -y @midscene/ios@1 report-tool --action to-markdown --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-markdown
npx -y @midscene/ios@1 report-tool --action split --htmlPath ./midscene_run/report/.../index.html --outputDir ./output-data
Since CLI commands are stateless between invocations, follow this pattern:
- Use act to perform the desired action or target-driven instructions, and use assert when you need to verify the resulting screen state.
- Describe targets precisely: say "the Settings icon in the top-right corner" instead of "the icon".
- Prefer visual and positional descriptions (e.g. "the search icon at the top right", "the third item in the list").
- Prefer a single act command: when performing consecutive operations within the same app, combine them into one act prompt instead of splitting them into separate commands. For example, "open Settings, tap Wi-Fi, and check the connected network" should be a single act call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.
- Use assert for verification: when the goal is to confirm that a screen state is true, use assert --prompt "..." instead of an act prompt. Keep assertions observable and specific, such as "the permission dialog is visible" or "the Save button is disabled".
- Use tap --locate when a reference image is provided: if the user shares a screenshot, icon, or logo and wants that exact visual target, use tap --locate with a multimodal locate JSON object such as { "prompt": "...", "images": [...] } instead of relying only on act --prompt.

Example — Alert dialog interaction:
npx -y @midscene/ios@1 act --prompt "tap the Delete button and confirm in the alert dialog"
npx -y @midscene/ios@1 take_screenshot
Example — Form interaction:
npx -y @midscene/ios@1 act --prompt "fill in the username field with 'testuser' and the password field with 'pass123', then tap the Login button"
npx -y @midscene/ios@1 take_screenshot
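A full session, sketched end to end using only the commands documented above (prompts are illustrative):
npx -y @midscene/ios@1 connect
npx -y @midscene/ios@1 act --prompt "open Settings and navigate to Wi-Fi"
npx -y @midscene/ios@1 assert --prompt "the Wi-Fi screen is visible and lists available networks"
npx -y @midscene/ios@1 take_screenshot
npx -y @midscene/ios@1 disconnect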
Symptom: Connection refused or timeout errors. Solution:
Symptom: No device detected or connection errors. Solution:
Symptom: Authentication or model errors. Solution:
- Confirm the .env file (or environment) contains MIDSCENE_MODEL_API_KEY=<your-key>.

@midscene/* Dependency Version Outdated
Symptom: Unexpected behavior, missing features, or version mismatch errors. Solution:
- Check installed versions: npm ls @midscene/ios @midscene/core @midscene/shared (or pnpm why @midscene/ios).
- Check the latest published versions: npm view @midscene/ios version, npm view @midscene/core version, npm view @midscene/shared version.
- Update to the latest: npm i @midscene/ios@latest @midscene/core@latest @midscene/shared@latest.