Control
v1.0.0 · Advanced desktop automation with mouse, keyboard, and screen control
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan (OpenClaw)
Verdict: Suspicious · medium confidence

Purpose & Capability
The name, description, SKILL.md, and code all implement desktop automation (mouse, keyboard, screenshots, window management, clipboard), so the functionality is coherent with the stated purpose. However, there are metadata inconsistencies: the registry metadata (ownerId/slug) differs from _meta.json, which lists a different ownerId and the slug 'desktop-control', while the published skill uses the slug 'control'. The skill ships Python code but declares no install spec in the registry metadata; the SKILL.md instructs a pip install of several dependencies (pyautogui, opencv, pygetwindow, pyperclip) that the registry does not declare. These mismatches could be harmless packaging issues but warrant scrutiny.
Instruction Scope
Runtime instructions and the code stay within desktop-automation scope: move the mouse, type text, take screenshots, manipulate the clipboard, list windows, and run demo flows. The ai_agent component can plan and autonomously execute multi-step workflows (open apps, type, click, take screenshots). This is expected for this skill but grants broad local control, including the ability to type into any focused window, open apps, read or modify the clipboard, and capture screen contents. The SKILL.md demos explicitly ask the user to run sequences that will control the desktop; that is expected, but high-impact.
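As a rough sketch, the kind of multi-step plan the ai_agent component executes might look like the following. The step names, the actions mapping, and the dry_run flag are illustrative assumptions, not the skill's actual API:

```python
# Illustrative multi-step desktop-automation plan with a dry-run mode.
# Step names and the actions mapping are assumptions, not the skill's API.
PLAN = [
    ("open_app", "TextEdit"),
    ("type_text", "hello"),
    ("screenshot", "out.png"),
]

def execute(plan, actions, dry_run=True):
    """Run each (step, argument) pair, or only log it when dry_run is set."""
    log = []
    for name, arg in plan:
        if dry_run:
            log.append(f"would run {name}({arg!r})")
        else:
            actions[name](arg)  # e.g. a pyautogui-backed callable
            log.append(f"ran {name}({arg!r})")
    return log
```

Reviewing a plan in dry-run mode first is one way to keep the broad local control described above visible before anything touches the desktop.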
Install Mechanism
No install spec is provided in the registry (lowest install risk), but the package includes Python code that imports pyautogui and the documentation instructs pip install of pyautogui, pillow, opencv-python, pygetwindow, pyperclip. Because the registry doesn't declare or bundle dependencies, users must install them manually. opencv-python and pyautogui may require native dependencies/permissions on some OSes. No remote downloads or URL-based installers were found.
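Because nothing is declared in the registry, a quick preflight check can confirm that the SKILL.md dependencies are actually importable before the skill runs. A minimal sketch, with import names inferred from the documented packages (pillow imports as PIL, opencv-python as cv2):

```python
import importlib.util

# Import names for the dependencies the SKILL.md asks you to pip-install.
REQUIRED = ["pyautogui", "PIL", "cv2", "pygetwindow", "pyperclip"]

def missing_dependencies(modules=REQUIRED):
    """Return the modules that cannot be found in the current environment."""
    return [m for m in modules if importlib.util.find_spec(m) is None]
```

Running this inside the virtualenv you created for the skill makes it obvious which packages still need installing.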
Credentials
The skill declares no required environment variables or credentials, which is consistent with a purely local desktop-control tool. One uncertainty: ai_agent.py accepts an optional llm_client and comments that it will 'try to auto-detect' an LLM client — the visible code sets llm_client from an argument but truncated portions could attempt to discover/configure a model client or use system credentials. No explicit environment-variable reads or network endpoints are present in the visible code, but the potential LLM integration is a place to check for undeclared credential usage.
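Once the truncated portions of ai_agent.py are available, a crude scan of the source for credential-style reads is a quick first check. The patterns below are illustrative, not exhaustive:

```python
import re

# Rough patterns for environment-variable or credential access; extend as needed.
CREDENTIAL_PATTERNS = [r"os\.environ", r"\bgetenv\(", r"api[_-]?key", r"\btoken\b"]

def audit_source(text, patterns=CREDENTIAL_PATTERNS):
    """Return the patterns that match anywhere in the given source text."""
    return [p for p in patterns if re.search(p, text, re.IGNORECASE)]
```

An empty result does not prove the code is clean, but any hit points directly at a line worth reading.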
Persistence & Privilege
The skill does not request always:true, and does not declare any persistent system-wide modifications. It can be invoked by the model autonomously (disable-model-invocation is false), which is the platform default; combined with its ability to act on the user's desktop this increases potential impact but is expected for an automation skill. The code does not appear to modify other skills' configs.
What to consider before installing
This package appears to be a legitimate desktop automation skill that implements the features described, but there are a few red flags to consider before installing and running it:
- Metadata mismatch: the registry owner/slug differ from the _meta.json inside the package. Confirm the author's identity and source before trusting the package.
- Manual dependency installation: the skill expects pyautogui, opencv-python, pygetwindow, pyperclip, etc. Install these in a contained environment (virtualenv or VM) and inspect for platform-specific permission prompts (accessibility/access to control input on macOS, for example).
- Powerful local capabilities: the skill can move your mouse, type into any active window, take screenshots, and read or modify clipboard contents. Run it only when you trust the code, in a non-sensitive session (or a VM), and with failsafe/require_approval enabled.
- Autonomy & LLM integration: the ai_agent component can execute multi-step tasks autonomously. Avoid providing an LLM client or credentials until you've reviewed any code paths that might send screenshots or other data to external services.
- Safety tips: enable failsafe (DesktopController(failsafe=True)), prefer require_approval=True during testing, run demos step-by-step rather than 'run all', and test inside a disposable VM. If you plan to allow autonomous execution, review the full ai_agent.py to ensure it does not attempt to auto-detect or transmit credentials/data.
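The require_approval pattern recommended above can be sketched as a thin gate in front of every action. DesktopController's real signature may differ, so the names here are assumptions:

```python
def run_with_approval(action, *args, require_approval=True, ask=input):
    """Invoke an automation action only after explicit user approval.

    `action` is any callable (e.g. a DesktopController method); `ask` is
    injected so the prompt can be replaced in tests or batch runs.
    """
    if require_approval:
        answer = ask(f"Run {action.__name__}{args}? [y/N] ")
        if answer.strip().lower() != "y":
            return None  # declined: nothing touches the desktop
    return action(*args)
```

Wrapping each demo step this way forces a visible confirmation before the mouse, keyboard, or clipboard is touched.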
If you want higher confidence: ask the publisher to reconcile the metadata, provide a provenance/source URL (repo or homepage), or supply a signed release. If you provide the truncated parts of ai_agent.py and __init__.py for review, I can re-evaluate any hidden behaviors.
Version: latest (vk973fe1vxthk8r7wrmcqs1zrmd82c9fq)
