Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

drift-testing

v1.0.0

Verifies API implementations against OpenAPI specifications using the Drift CLI, catching spec drift and supporting Bi-Directional Contract Testing (BDCT). U...

by Kevin Rohan Vaz (@kevinrvaz)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kevinrvaz/drift-testing.

Prompt preview (Install & Setup):
Install the skill "drift-testing" (kevinrvaz/drift-testing) from ClawHub.
Skill page: https://clawhub.ai/kevinrvaz/drift-testing
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line


Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install drift-testing

ClawHub CLI


npx clawhub@latest install drift-testing
Security Scan

Capability signals: Crypto · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign (View report →)

OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description match the included code and docs: this is a Drift CLI-based API contract testing skill. However, the registry metadata claims no required binaries or env vars, while the README and scripts clearly assume the Drift binary, npm/node, Python+PyYAML, Prism, and a tool called 'uv' may be available. That mismatch is likely an oversight, but it contradicts the declared 'no requirements'.
Instruction Scope
SKILL.md and included Lua scripts/lifecycle hooks explicitly instruct the agent to run Drift commands, start mock servers, call HTTP endpoints, seed and delete resources, and optionally 'keep running until everything passes' via run_loop.sh. Those are expected for provider verification but permit destructive actions against real APIs and indefinite retry loops if mispointed at production. The instructions also reference fetching remote docs from pactflow.github.io and calling local auth endpoints — network I/O is normal here but potentially impactful.
Install Mechanism
There is no formal install spec (instruction-only), but included scripts will install global npm packages (e.g., @stoplight/prism-cli) and the docs show downloading the Drift binary from pactflow.github.io. These are typical for CLI-based workflows but mean the skill expects to write and install tools on the host. The shebangs and script headers indicate a 'uv' runner and Python deps (pyyaml), which are not declared elsewhere.
Credentials
The skill declares no required environment variables, yet the docs and example YAMLs reference several env vars (API_TOKEN, PACTFLOW_TOKEN, SERVER_URL, TEST_PASSWORD, READONLY_TOKEN, DRIFT_* vars). These are relevant to the tool's function, but their absence from the declared requirements is an incoherence. The skill does not ask for unrelated credentials, but it does rely on secrets being set in the environment to run tests and publish results.
Persistence & Privilege
The skill is not always-enabled and does not request persistent system-level privileges or modify other skills' configs. It can run autonomously by default (platform default), which combined with the above concerns (destructive tests, loops) increases blast radius — but autonomy alone is expected for skills.
What to consider before installing
This skill implements Drift-based API contract testing and mostly behaves as advertised, but there are a few things to check before installing or letting an agent run it autonomously:

- Verify prerequisites yourself: the README and scripts expect the Drift CLI, node/npm, Python (PyYAML), Prism, and a 'uv' runner; the registry metadata incorrectly lists no required binaries. Ensure those tools are present and from trusted sources.
- Secrets and env vars: the docs use environment variables (API_TOKEN, PACTFLOW_TOKEN, SERVER_URL, TEST_PASSWORD, etc.). Provide least-privilege test tokens (system CI tokens where suggested), not personal credentials, and confirm the skill author does not need broad access.
- Destructive operations and loops: scripts and Lua hooks routinely perform POST/DELETE to seed and clean state, and run_loop.sh retries until all tests pass. Do NOT point this at a production API. Run first in an isolated test environment and review or run scripts manually before giving the agent permission to invoke them.
- Installation side effects: some scripts auto-install npm packages globally. If you want to avoid global changes, modify scripts to use local installs or containerized execution.
- Review code: scan extract_endpoints.py, run_loop.sh, and the Lua examples for any network calls or hardcoded URLs you don't expect. Ensure CI publishing steps (PactFlow) use CI-scoped tokens.

If you want to proceed: run the tool locally in a container or throwaway VM, confirm behavior, then add it to your agents with restricted permissions and test-only credentials. If you need, I can list the exact files/lines that reference each external tool, env var, or potentially destructive action.


latest: vk970h3gsc5b7zfr8gvbqh5r2s584tt3y
68 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Drift Skill

Never modify the OpenAPI spec that you are testing.

Reference Files

Scripts

  • scripts/extract_endpoints.py — Reads the spec and outputs all operations + response codes. Summary mode flags parameters with no spec example. Scaffold mode (--scaffold) emits a ready-to-fill operations: YAML block with correct auth patterns, nil UUIDs for 404s, ignore.schema for 4xx, and FILL_IN markers. Use --only-missing <drift.yaml> to generate only the gaps not yet covered by an existing test file. Requires pyyaml.
  • scripts/check_coverage.py — Coverage checker: diffs an OpenAPI spec against Drift test files and reports which operations and response codes are missing tests. Requires pyyaml.
  • scripts/run_loop.sh / scripts/run_loop.ps1 — Feedback loop runner: retries drift verify --failed until all tests pass, then runs check_coverage.py. Both gates must pass for exit 0. Dependencies are installed automatically via uv. Use the .ps1 version on Windows.
  • scripts/start_mock.sh / scripts/start_mock.ps1 — Starts a Prism mock server from an OpenAPI spec. Installs Prism if needed. Supports --port and --dynamic flags. Use the .ps1 version on Windows.

Full docs: https://pactflow.github.io/drift-docs/

For anything not covered here, fetch: https://pactflow.github.io/drift-docs/docs/<section>/<page>.md

To discover all available pages, fetch the sitemap: https://pactflow.github.io/drift-docs/sitemap.xml

For an LLM-optimised index of all docs, fetch: https://pactflow.github.io/drift-docs/llms.txt


Installation

# Quickest — no install needed
npx @pactflow/drift --help

# Project-level (recommended for teams)
npm install --save-dev @pactflow/drift

# Global
npm install -g @pactflow/drift

# Verify
drift --version

Project Setup

drift init   # interactive wizard — scaffolds all files below

drift init is interactive — ask the user to run it.

drift/
├── drift.yaml              # Main config — sources, plugins, global settings
├── drift.lua               # Lifecycle hooks and helper functions
├── my-api.dataset.yaml     # Test data
└── my-api.tests.yaml       # Test cases

Minimal drift.yaml:

# yaml-language-server: $schema=https://download.pactflow.io/drift/schemas/drift.testcases.v1.schema.json
drift-testcase-file: v1
title: "My API Tests"

sources:
  - name: source-oas # referenced in test targets
    path: ./openapi.yaml # or uri: https://... for remote specs
  - name: product-data
    path: ./product.dataset.yaml
  - name: functions
    path: ./product.lua

plugins:
  - name: oas # spec-first verification
  - name: json
  - name: data

global:
  auth:
    apply: true
    parameters:
      authentication:
        scheme: bearer # bearer | basic | api-key
        token: ${env:API_TOKEN}

operations:
  # test cases here — see references/test-cases.md

Running Tests

# Basic run
drift verify --test-files drift.yaml --server-url https://api.example.com/v1

# Single operation (fast iteration)
drift verify --test-files drift.yaml --server-url https://api.example.com/v1 --operation getProductByID

# Re-run only failures
drift verify --test-files drift.yaml --server-url https://api.example.com/v1 --failed

# Filter by tags
drift verify --test-files drift.yaml --server-url https://api.example.com/v1 --tags smoke
drift verify --test-files drift.yaml --server-url https://api.example.com/v1 --tags '!destructive'

See references/cli-reference.md for all flags, parallel execution, JUnit output, and exit codes.


Full Coverage Feedback Loop

When the goal is full endpoint coverage:

Caution — destructive tests on production: If --server-url points at a live production API, DELETE and POST tests are permanent. Always use a dedicated test account and confirm any resource used in a DELETE test is disposable.

Copy this checklist and track your progress:

Coverage Loop Progress:
- [ ] Step 0: Check current coverage (check_coverage.py)
- [ ] Step 1: Parse spec and collect operation list (openapi-parser skill or extract_endpoints.py)
- [ ] Step 2: Assemble initial test file
- [ ] Step 3: Run tests (run_loop.sh / run_loop.ps1)
- [ ] Step 4: Diagnose and fix each failure
- [ ] Step 5: Apply common fixes (hooks for state, data seeding)
- [ ] Step 6: Verify exit code 0 + full coverage

Step 0 — Check current coverage

Run before writing tests or when resuming an existing test suite:

# Run against your spec and test file(s)
uv run path/to/scripts/check_coverage.py \
  --spec openapi.yaml \
  --test-files drift.yaml

# Multiple files / globs
uv run path/to/scripts/check_coverage.py \
  --spec openapi.yaml \
  --test-files "tests/*.yaml"

# Machine-readable output (for CI or scripting)
uv run path/to/scripts/check_coverage.py \
  --spec openapi.yaml \
  --test-files drift.yaml --json

Output shows: operations with no tests at all, operations missing specific response codes, and overall operation/code percentages. Exit code 0 = full coverage, 1 = gaps remain.

The script excludes 500/501/502/503 by default (same rule as Step 1 below). Pass --exclude-codes to customise.
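In CI, the script's exit code can gate the pipeline directly. A minimal step sketch, using the same paths and file names as the examples above (adjust for your project):

```shell
# Fail the CI step when coverage gaps remain (check_coverage.py exits 1).
uv run path/to/scripts/check_coverage.py \
  --spec openapi.yaml \
  --test-files drift.yaml \
  --json > coverage.json \
  || { echo "Coverage gaps remain; see coverage.json"; exit 1; }
```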

Step 1 — Parse the spec

Use extract_endpoints.py to collect the complete operation list, all documented response codes per operation, and ready-to-use operations: YAML stubs. If the openapi-parser skill is available in your environment, you can use that instead.

# See all operations + response codes, flagging params with no spec example
uv run scripts/extract_endpoints.py --spec openapi.yaml

# Generate skeleton stubs for every operation
uv run scripts/extract_endpoints.py --spec openapi.yaml \
  --scaffold --source my-oas > operations.yaml

# Generate ONLY the gaps not already in an existing test file
uv run scripts/extract_endpoints.py --spec openapi.yaml \
  --scaffold --only-missing drift.yaml --source my-oas >> drift.yaml
Example summary output:

GET /products          → 200, 401, 404
POST /products         → 201, 400, 401
DELETE /products/{id}  → 204, 401, 403, 404

Critical: Any parameter without a spec-level example causes the error "Value for query parameter X is missing". Supply an explicit value in parameters.query/path/headers for each.
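For instance, an optional query parameter with no spec example can be pinned explicitly. A sketch — the operation and parameter names here are hypothetical:

```yaml
listProducts_Success:
  target: source-oas:listProducts   # hypothetical operationId
  parameters:
    query:
      limit: 10   # explicit value: the spec declares no example for this param
  expected:
    response:
      statusCode: 200
```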

Globally-required query parameters (e.g. ?version=YYYY-MM-DD on every endpoint) can be injected once via the http:request hook rather than repeated in every test case:

["http:request"] = function(event, data)
  if data.query == nil then data.query = {} end
  data.query["version"] = "2024-01-04"
  return data   -- MUST return modified data
end

Duplicate operationId values — some specs reuse the same operationId for two different paths. Use method:path targeting for the affected operation:

target: source-oas:post:/orgs/{org_id}/apps/installs/{install_id}/secrets
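A full test case using that target form might look like the following sketch; the test name, path values, and expected code are placeholders:

```yaml
createInstallSecret_Success:
  target: source-oas:post:/orgs/{org_id}/apps/installs/{install_id}/secrets
  parameters:
    path:
      org_id: "my-org"          # placeholder values
      install_id: "install-123"
  expected:
    response:
      statusCode: 201
```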

500 responses are excluded from the coverage requirement — a 500 requires a server bug and can't be deterministically triggered.

Step 2 — Assemble the initial test file

Wire the stubs from Step 1 into drift.yaml. Don't aim for perfection — the loop surfaces what's missing. Start each test as simple as possible:

getProduct_Success:
  target: source-oas:getProductByID
  parameters:
    path:
      id: 10
  expected:
    response:
      statusCode: 200

Add tags to every operation — they enable --tags filtering and make suites easier to manage:

getProduct_Success:
  target: source-oas:getProductByID
  tags: [smoke, read-only, products]
  ...

getProduct_Unauthorized:
  tags: [security, auth]
  ...

deleteProduct_Success:
  tags: [destructive, products]
  ...

Common tags: smoke, read-only, write, destructive, security, auth, regression. See references/test-cases.md for the full tags section.

For error paths, see references/test-cases.md for 401, 403, 404, and 400 patterns. For mock server setup, see references/mock-server.md.

Step 3 — Run and capture failures

The run_loop.sh script automates everything from this step through Step 6:

# Runs drift --failed in a loop, then checks coverage. Exits 0 only when both pass.
path/to/scripts/run_loop.sh \
  --spec openapi.yaml \
  --test-files drift.yaml \
  --server-url https://api.example.com/v1

Or run drift manually and iterate:

drift verify --test-files drift.yaml --server-url https://api.example.com/v1

# Re-run only failures to keep the loop fast
drift verify --test-files drift.yaml --server-url https://api.example.com/v1 --failed

For local testing with a mock server, start Prism first:

path/to/scripts/start_mock.sh --spec openapi.yaml --port 4010
# then in another terminal:
path/to/scripts/run_loop.sh --spec openapi.yaml --test-files drift.yaml --server-url http://localhost:4010

Step 4 — Diagnose and fix each failure

| Symptom | Likely cause | Fix |
|---|---|---|
| Got 404, expected 200 | Test data doesn't exist | Add operation:started hook to seed the resource |
| Got 200, expected 404 | ID happens to exist | Use ${notIn(...)} or nil UUID 00000000-0000-0000-0000-000000000000 |
| Got 401, expected 200 | Auth not configured | Add global.auth or check token env var |
| Got 200, expected 401 | Auth not stripped | Add exclude: [auth] + bad token |
| Got 403, expected 200 | Token lacks required scope | Use a token with sufficient permissions |
| Got 200, expected 403 | Need valid auth + forbidden resource | Point at a resource the token can't access; see references/auth.md |
| Schema validation error on response | API drifted from spec, OR spec has invalid examples | Check whether spec examples are valid — Drift may be correctly reporting a spec bug |
| Value for query parameter X is missing | Optional param has no spec example | Supply an explicit value for every param without a spec example |
| Got 400 on a 200 test | Missing globally-required query param | Inject it via http:request hook or add to every test case |
| Got 500 | Test data triggered a server bug | Fix the data |

ignore: { schema: true } suppresses request schema validation only. Use it on any 4xx scenario — especially when testing against a mock server, where Prism doesn't enforce auth and may return an inaccurate error body. Response schema validation has no bypass; spec example bugs surface as failures (see references/mock-server.md).
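As a concrete case, a 400 test against a mock can skip request schema validation like this sketch; the operationId is hypothetical and the deliberately invalid request payload is elided:

```yaml
createProduct_BadRequest:
  target: source-oas:createProduct   # hypothetical operationId
  ignore: { schema: true }   # request is deliberately invalid; skip request schema validation
  expected:
    response:
      statusCode: 400
```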

Multiple 2xx codes: Write one test per documented code — statusCode: [200, 204] array syntax is not supported.
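So for an operation documenting both 200 and 202 (a hypothetical example — the inputs that distinguish the two outcomes are elided), write two cases rather than an array:

```yaml
startExport_Accepted:
  target: source-oas:startExport   # hypothetical operationId
  expected:
    response:
      statusCode: 202

startExport_Completed:
  target: source-oas:startExport
  expected:
    response:
      statusCode: 200
```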

Dynamic IDs and hook timing: Dataset expressions resolve before operation:started. Use pre-seeded static IDs, or rewrite the URL via http:request.
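One way to rewrite the URL via http:request, sketched here with a hypothetical test name and a pre-seeded ID; this assumes the http:request data object carries operation and url fields, as the other hooks do — check references/lua-api.md for the exact shape:

```lua
-- Rewrite the request URL at request time (sketch; "getProduct_Dynamic"
-- and the seeded ID 10 are hypothetical).
["http:request"] = function(event, data)
  if data.operation == "getProduct_Dynamic" then
    data.url = string.gsub(data.url, "/products/0$", "/products/10")
  end
  return data   -- MUST return modified data
end
```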

Step 5 — Common fixes

Data must exist before the test (DELETE, PUT, PATCH):

["operation:started"] = function(event, data)
  if data.operation == "deleteProduct_Success" then
    http({ url = server_url .. "/products", method = "POST",
           body = { id = 10, name = "test", price = 9.99 } })
  end
end,
["operation:finished"] = function(event, data)
  http({ url = server_url .. "/products/10", method = "DELETE" })
end,

See references/lua-api.md for the full Lua API and the data object shape.

Step 6 — Verify exit code 0 + full coverage

drift verify --test-files drift.yaml --server-url https://api.example.com/v1
echo "Exit code: $?"

Before declaring done, verify coverage is complete:

uv run path/to/scripts/check_coverage.py \
  --spec openapi.yaml --test-files drift.yaml
echo "Coverage exit: $?"

Done when both commands exit 0.


Quick Reference

| Scenario | Approach |
|---|---|
| Stateless read-only endpoint | Declarative test, no hooks |
| Stable test data | Dataset expressions |
| Create data before test | operation:started hook |
| Clean up after test | operation:finished hook |
| Dynamic values (UUIDs, timestamps) | exported_functions in Lua |
| Guaranteed 404 | ${notIn(...)} or nil UUID |
| Force error code on mock server | Prefer: code=X header |
| Test without live backend | Prism mock — see references/mock-server.md |
| Non-standard auth prefix | http:request hook — see references/auth.md |
| Re-run only broken tests | --failed flag |
| Publish to PactFlow | --generate-result flag |
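As a concrete instance of forcing an error code on the mock server, the Prefer header can be set per test. A sketch reusing getProductByID from the earlier examples; the test name and tag are hypothetical, and Prism's Prefer: code=N header selects which documented response the mock returns:

```yaml
getProduct_NotFound_Mock:
  target: source-oas:getProductByID
  tags: [mock-only]
  parameters:
    path:
      id: 10
    headers:
      Prefer: "code=404"   # ask the Prism mock to return the documented 404
  expected:
    response:
      statusCode: 404
```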
