Testing Skill for Agency Agents
v1.0.0
Provides automated testing and integration validation for OpenClaw's skill management in agency-agents projects, with error handling and logging.
MIT-0
Security Scan
OpenClaw
Benign
medium confidence
Purpose & Capability
The name and description (automated testing and integration validation for agency-agents) match the included assets: multiple testing-agent personas, test-run examples, Playwright/axe/lighthouse/k6 examples, and shell commands to capture evidence. Required binaries/env listed in registry are empty, which is plausible for an instruction-only testing skill that expects the host environment to provide test tooling (or for the user to install them).
Instruction Scope
SKILL.md instructs cloning and running clawhub commands; the included persona files contain explicit runtime commands (npx @axe-core/cli, npx lighthouse, ./qa-playwright-capture.sh, k6 scripts, grep/ls/cat) and example test code that will read files, run local HTTP checks against localhost or a provided BASE_URL, and write screenshots/logs. This behavior aligns with a testing tool, but it does mean the skill expects to run shell commands and access the filesystem and local network — review any scripts (e.g., qa-playwright-capture.sh) and ensure you run them in an appropriate environment.
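Before invoking any of the persona commands, it can help to confirm what tooling the host actually provides. The sketch below is a minimal pre-flight check; the tool names are taken from the commands listed above, and everything else (the reporting format, the informational exit code) is an assumption, not part of the skill itself.

```shell
#!/bin/sh
# Pre-flight check for the tooling the persona files reference.
# Reports which binaries are available on the host; always exits 0
# so it can run as an informational step in CI.
missing=0
for tool in npx k6 grep ls cat; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```

Note that `npx` being present only means Playwright, axe, and Lighthouse can be fetched on demand; it does not tell you which versions will run.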
Install Mechanism
No install spec provided (instruction-only). That lowers supply-chain risk because nothing is automatically downloaded or written to disk by the registry installation itself. The SKILL.md expects the user/agent to run clawhub and host tools; the included docs reference common public tools (Playwright, axe, Lighthouse, k6) rather than unknown download URLs.
Credentials
Registry declares no required environment variables, but multiple included example/test files reference environment variables (e.g., process.env.API_BASE_URL, __ENV.BASE_URL) and perform authentication flows in examples. This is reasonable for a testing skill (tests need target URLs and sometimes credentials), but the skill does not declare these variables up front — you should provide/inspect the expected env values and avoid passing production secrets to test runs.
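Since the registry does not declare these variables up front, one hedged approach is to set them explicitly with safe defaults and refuse to run against anything that looks like production. The variable names `BASE_URL` and `API_BASE_URL` come from the example files; the "prod" host patterns and default localhost values below are purely illustrative.

```shell
#!/bin/sh
# Declare the env vars the test examples expect, defaulting to a
# local target. Refuse to run if the target looks like production;
# the patterns here are illustrative -- adjust to your own naming.
BASE_URL="${BASE_URL:-http://localhost:3000}"
API_BASE_URL="${API_BASE_URL:-http://localhost:3000/api}"
case "$BASE_URL" in
  *prod*|*production*)
    echo "refusing to run tests against $BASE_URL" >&2
    exit 1
    ;;
esac
echo "target: $BASE_URL"
echo "api:    $API_BASE_URL"
```

Sourcing a guard like this before `clawhub run` keeps the declared/undeclared env-var gap visible instead of letting tests silently pick up whatever is in the shell environment.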
Persistence & Privilege
The skill is not always-included and does not request persistent privileges. There is no install script that writes into agent config in the registry metadata. The skill's instructions will run commands when invoked, but there is no evidence it modifies other skills or global agent settings.
Assessment
This package appears to be a coherent testing toolkit: it contains extensive, expected test-run instructions and examples that call Playwright, axe, Lighthouse, k6, and simple shell checks. Before running it or granting an agent permission to execute it:
1. Verify the skill origin (source/homepage are unknown).
2. Inspect any executable scripts referenced (e.g., ./qa-playwright-capture.sh) so you know what will run on your machine.
3. Run tests in an isolated environment or CI runner rather than on sensitive production hosts.
4. Avoid injecting real production credentials: the examples expect BASE_URL/API_BASE_URL and sometimes auth tokens, so prefer synthetic/test accounts and declare env vars explicitly.
5. Ensure required tooling (Playwright, k6, Lighthouse, axe) is installed from trusted sources.
If you need higher assurance, ask the publisher for a provenance link or a signed release, and review any referenced scripts before execution.
Like a lobster shell, security has layers: review code before you run it.
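For step 2 (inspecting referenced scripts), a quick review aid like the following can surface the lines most worth reading before execution. This is a sketch, not part of the skill: the `flag_scripts` helper name, the `SKILL_DIR` variable, and the pattern list are all assumptions you should adapt.

```shell
#!/bin/sh
# Review aid: list every shell script under a directory and flag
# lines that reach the network or escalate privileges, so you can
# read them before granting an agent permission to run them.
flag_scripts() {
  dir="${1:-.}"
  find "$dir" -name '*.sh' -type f | while read -r script; do
    echo "== $script"
    grep -nE 'curl|wget|nc |ssh |sudo ' "$script" || echo "   (no flagged lines)"
  done
}

flag_scripts "${SKILL_DIR:-.}"
```

A clean result from a grep like this is not a safety guarantee; it only prioritizes which files (e.g., qa-playwright-capture.sh) deserve a full read.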
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Testing Skill
This is a comprehensive testing skill designed for validating OpenClaw's skill management system.
Description
This skill provides a robust framework for testing various OpenClaw functionalities including installation, updates, and integration with other systems. It's specifically designed for developers working on agency-agents projects.
Features
- Automated testing capabilities
- Integration with OpenClaw's core systems
- Support for both local and remote skill deployment
- Comprehensive error handling and logging
Installation
- Clone the repository
- Run `clawhub install testing`
- Verify installation with `clawhub list`
Usage Examples
```shell
# Basic usage
clawhub run testing --command validate

# Advanced usage with parameters
clawhub run testing --command test-all --verbose --output results.json
```
Troubleshooting
If you encounter issues during installation:
- Ensure you have the latest version of ClawHub CLI
- Verify your internet connection
- Check that you're logged in with `clawhub whoami`
- Contact support if problems persist
Version History
- 1.0.0: Initial release with core testing functionality
- 1.0.1: Added verbose logging option
- 1.1.0: Added JSON output support
Files
9 total
