Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Automation Testing Protocol

v1.0.0

A comprehensive framework for testing and validating automation projects to ensure stability, security, and scalability.

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name, README, and SKILL.md consistently describe a QA/testing framework, and that purpose matches the instructions to discover and run tests. However, the protocol explicitly calls for verifying connectors (Meta API, Google Sheets, SMTP) using real or sandbox credentials, yet the skill declares no required environment variables, credentials, or configuration paths. The required timezone (example: Africa/Cairo) is also prescriptive and not justified anywhere in the metadata.
Instruction Scope
Runtime instructions direct the agent to search project roots for run_tests.py or tests/, execute python3 path/to/project/run_tests.py, and if none exist, to create a run_tests.py implementing six layers (including integration with external APIs). That grants the agent broad discretion to modify project files, run arbitrary test code, and interact with external services — actions beyond a simple read-only QA helper. The instructions also mandate 100% pass criteria and timezone enforcement, which could cause repeated modifications or blocking behavior.
Install Mechanism
No install spec or code files are present; this is instruction-only and does not add binaries or download code. This lowers surface risk because nothing is written to disk by an installer step.
Credentials
The protocol expects real or sandbox credentials for external connectors but declares zero environment variables or credentials in its manifest. It also instructs that secrets be stored in .env or config files, which implies the agent may read or write such files, yet no credential access is declared. This mismatch means agents following the skill may attempt to access credentials that were never signaled as required, creating a surprise access vector.
Persistence & Privilege
always:false (no forced inclusion) is appropriate. The skill is allowed to be invoked autonomously by default (disable-model-invocation:false) — that is normal for skills but worth noting: combined with the instruction to modify projects and run code that may access external services, autonomous invocation increases blast radius if the skill runs without human review. The skill does not request persistent privileges or modify other skills' configs.
What to consider before installing
This skill appears to be a coherent QA framework, but there are meaningful mismatches you should resolve before trusting it:

  1. It says tests should exercise external connectors with real/sandbox credentials but declares no required environment variables or credentials. Verify where API keys should come from, and never let an agent fetch secrets automatically.
  2. The runtime instructions mandate creating and executing run_tests.py if it is missing. Require manual code review of any generated tests before execution.
  3. The skill prescribes a timezone and a 100% pass rule, which could cause repeated or blocking operations; confirm that behavior is acceptable.

Recommendations: run the skill only with human-in-the-loop approval (disable autonomous invocation), inspect any run_tests.py the agent proposes to add, restrict network and credential access during testing (use isolated sandboxes), and ask the author for an explicit list of required environment variables and a clear safety policy before enabling it broadly.

Like a lobster shell, security has layers — review code before you run it.

21 downloads
1 star
1 version
Updated 5h ago
v1.0.0
MIT-0

Automation Quality Assurance & Testing Protocol

This skill is the primary authority for testing any automation project within the OpenClaw environment. It ensures operational stability, prevents regressions, and maintains high-quality standards across all automated workflows.


Agent: How to Use This Skill

Read this protocol fully before modifying or deploying any automation script. Follow the steps sequentially.

1. Comprehensive Automation Testing Strategy

To ensure robust automation, every project must pass through these 6 critical testing layers:

  • Layer 1: Unit Testing (Logic) - Test individual functions, mathematical calculations, and internal logic branches in isolation.
  • Layer 2: Integration Testing (Connectors) - Verify successful communication with external services (Meta API, Google Sheets, SMTP, etc.) using real or sandbox credentials.
  • Layer 3: End-to-End (E2E) Flow - Simulate a complete lifecycle of the automation (e.g., Budget Breach -> Pause -> Notify) to ensure the entire chain works.
  • Layer 4: Idempotency & Recovery - Ensure that if a script fails and restarts, it does not produce side effects (e.g., no duplicate emails or redundant API calls). The script must be "Safe to Restart."
  • Layer 5: Regression Testing - Always run the full run_tests.py suite after any change to confirm that existing features remain functional.
  • Layer 6: Observability & Logging Verification - Confirm that the script produces clear, actionable logs for every step, especially during failures, so that "blind spots" are eliminated.
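As a sketch of what such a layered suite might look like, Layers 1 and 4 can be expressed with the standard unittest module. The function names here (compute_budget_utilization, notify) are illustrative stand-ins, not part of the skill:

```python
import unittest

# Hypothetical logic under test; a real project would import its own functions.
def compute_budget_utilization(spend, budget):
    if budget <= 0:
        raise ValueError("budget must be positive")
    return spend / budget

class TestUnitLogic(unittest.TestCase):
    """Layer 1: pure logic, tested in isolation with no I/O."""
    def test_utilization(self):
        self.assertAlmostEqual(compute_budget_utilization(50, 200), 0.25)

    def test_rejects_zero_budget(self):
        with self.assertRaises(ValueError):
            compute_budget_utilization(50, 0)

class TestIdempotency(unittest.TestCase):
    """Layer 4: restarting the script must not repeat side effects."""
    def test_safe_to_restart(self):
        sent = set()
        def notify(key):
            if key in sent:          # dedupe guard makes the call idempotent
                return False
            sent.add(key)
            return True
        self.assertTrue(notify("budget-breach-001"))   # first run sends
        self.assertFalse(notify("budget-breach-001"))  # restart is a no-op

def run_all():
    """Aggregate the layers into one suite, as a run_tests.py entry point would."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite([
        loader.loadTestsFromTestCase(TestUnitLogic),
        loader.loadTestsFromTestCase(TestIdempotency),
    ])
    return unittest.TextTestRunner(verbosity=0).run(suite).wasSuccessful()
```

Layers 2 and 3 would follow the same pattern but hit sandboxed services, which is exactly where the credential concerns flagged above apply.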

2. Execution Protocol

Find and execute tests before and after every modification:

  1. Discover: Always look for run_tests.py or a tests/ directory within the project root.
  2. Execute:
    python3 path/to/project/run_tests.py
    
  3. Initialize: If the project lacks tests, you are mandated to create a run_tests.py file implementing the 6 layers above.
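Steps 1 and 2 above could be sketched as a small helper; the fallback to unittest discovery for a bare tests/ directory is an assumption, since the protocol only names run_tests.py explicitly:

```python
import subprocess
import sys
from pathlib import Path

def discover_and_run(project_root):
    """Look for run_tests.py or tests/ in the project root and execute them."""
    root = Path(project_root)
    runner = root / "run_tests.py"
    if runner.is_file():
        # Step 2: execute the project's own test entry point.
        return subprocess.run([sys.executable, str(runner)]).returncode
    if (root / "tests").is_dir():
        # Assumed fallback: standard unittest discovery over tests/.
        return subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", str(root / "tests")]
        ).returncode
    # Step 3 applies: no tests exist, so a run_tests.py must be created.
    return None
```

A return code of 0 signals a passing suite; None signals that step 3 (initialization) applies.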

3. Standard Exit Criteria (Definition of Done)

A task is considered "Complete" only when:

  • 100% Pass Rate: All 6 testing layers pass without errors.
  • Timezone Uniformity: All timestamps and scheduling are synchronized to the environment's local time (e.g., Africa/Cairo).
  • Security Compliance: Zero hardcoded secrets. All tokens and passwords must be isolated in .env or config files.
  • Failure Resilience: The script handles API timeouts and connection drops gracefully without crashing.
  • Documentation: The code is clean, commented, and includes a brief explanation of any new test cases added.
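Two of these criteria (timezone uniformity and zero hardcoded secrets) can be enforced with small helpers. This is a minimal sketch: the helper names are illustrative, and Africa/Cairo is taken from the protocol's own example:

```python
import os
from datetime import datetime
from zoneinfo import ZoneInfo

# Pin one timezone for all timestamps and scheduling (the protocol's example).
LOCAL_TZ = ZoneInfo("Africa/Cairo")

def now_local():
    """Every timestamp flows through this one timezone-aware helper."""
    return datetime.now(tz=LOCAL_TZ)

def get_secret(name):
    """Secrets come from the environment (populated via .env), never source code."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value
```

Centralizing both concerns in single helpers also makes them easy to assert on in Layer 6 (observability) checks.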

Maintenance & Scalability

  • The Test Suite must grow with the project. Every new feature requires a corresponding test case.
  • Any project without a functional run_tests.py is considered "Substandard" and must be fixed immediately.
