Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Database Tester

v1.0.0

Database testing for data integrity, SQL validation, migration verification, and performance. Test CRUD operations, constraints, transactions, stored procedu...

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md content aligns with a database-testing purpose (CRUD, constraints, migrations, performance). However, the description explicitly says NOT for 'database administration' while the instructions include admin-level actions (e.g., SET GLOBAL slow_query_log, enabling server-level settings) that are typically DB-admin responsibilities.
Instruction Scope
Runtime instructions directly show commands that connect to databases and alter server globals, and they reference environment variables (DB_HOST, DB_USER, DB_PASS, DB_NAME, DB_FILE) and local files. Running the skill will access databases and may modify server settings; the instructions also assume the agent will run shell commands that could read local files.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowers risk because nothing is written or downloaded by the skill itself.
Credentials
The SKILL.md uses multiple environment variables and credentials (DB_HOST, DB_USER, DB_PASS, DB_NAME, DB_FILE) but the skill metadata declares no required env vars or primary credential. That mismatch means the skill expects sensitive credentials but does not advertise or constrain them, increasing the risk of accidental exposure or misconfiguration.
Persistence & Privilege
The skill is not always-included and does not request persistent or elevated platform privileges. It does not modify other skills' configs per the provided metadata.
What to consider before installing
This skill appears to be a legitimate database-testing playbook, but it assumes access to database credentials and runs some administrative commands without declaring those needs. Before installing or running it:

  1. Supply only least-privilege test/staging credentials (never production credentials).
  2. Review and confirm which environment variables the agent will actually receive (DB_HOST, DB_USER, DB_PASS, DB_NAME, DB_FILE are referenced in the docs).
  3. Avoid enabling global server settings (like slow_query_log) on production systems; run tests in a sandbox or replica.
  4. Ask the publisher to update the metadata to list required env vars and document privilege needs.
  5. If you lack a separate test DB and DBA support, treat this as suspicious until you can run it in an isolated environment.


latest: vk975m1d1gan6x7az1vaw0mf3kn84x419
54 downloads · 0 stars · 1 version
Updated 4d ago
v1.0.0 · MIT-0

Database Tester

Validate database operations, data integrity, and migrations.

Test Categories

| Category | Focus | When |
|---|---|---|
| CRUD | Insert/Select/Update/Delete correctness | Every release |
| Constraints | PK, FK, UNIQUE, NOT NULL, CHECK | Schema changes |
| Transactions | ACID compliance, isolation levels | Concurrent features |
| Migration | Schema + data migration correctness | Version upgrades |
| Performance | Slow queries, index effectiveness | Performance issues |
| Security | SQL injection, permissions, encryption | Security reviews |

Quick Database Validation

Connect & Inspect

# MySQL
mysql -h $DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME -e "SHOW TABLES;"

# PostgreSQL
PGPASSWORD=$DB_PASS psql -h $DB_HOST -U $DB_USER -d $DB_NAME -c "\dt"

# SQLite
sqlite3 $DB_FILE ".tables"

Schema Comparison (Migration Verification)

-- MySQL: Get table structure
SHOW CREATE TABLE users;
DESCRIBE users;

-- PostgreSQL: Get table structure
\d+ users
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'users'
ORDER BY ordinal_position;

-- Compare expected vs actual columns
-- After migration, verify:
-- 1. New columns exist with correct type/default
-- 2. Dropped columns are gone
-- 3. Modified columns have new type/constraints
-- 4. Indexes are created/dropped as expected
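The column checks above can be automated. A minimal sketch using Python's stdlib sqlite3, with PRAGMA table_info as a stand-in for information_schema (the users table and the expected-schema map are illustrative):

```python
import sqlite3

def table_columns(conn, table):
    """Map column name -> declared type via SQLite's PRAGMA table_info."""
    return {row[1]: row[2] for row in conn.execute(f"PRAGMA table_info({table})")}

def schema_diff(expected, actual):
    """Return (missing, unexpected, changed-type) column name lists."""
    missing = sorted(set(expected) - set(actual))
    unexpected = sorted(set(actual) - set(expected))
    changed = sorted(c for c in set(expected) & set(actual) if expected[c] != actual[c])
    return missing, unexpected, changed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

# Expected post-migration schema: a created_at column should have been added
expected = {"id": "INTEGER", "name": "TEXT", "email": "TEXT", "created_at": "TEXT"}
missing, unexpected, changed = schema_diff(expected, table_columns(conn, "users"))
assert missing == ["created_at"]  # the migration adding created_at has not run yet
```

The same diff works against MySQL or PostgreSQL by swapping the PRAGMA query for the information_schema.columns query shown above.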

Constraint Testing

For each table, verify constraints are enforced:

-- NOT NULL: Insert null into required field → should fail
INSERT INTO users (name, email) VALUES (NULL, 'test@example.com');
-- Expected: ERROR (NOT NULL violation)

-- UNIQUE: Insert duplicate value → should fail
INSERT INTO users (name, email) VALUES ('Test', 'existing@example.com');
-- Expected: ERROR (UNIQUE violation)

-- FOREIGN KEY: Insert invalid reference → should fail
INSERT INTO orders (user_id, total) VALUES (99999, 100.00);
-- Expected: ERROR (FK violation)

-- CHECK constraint
INSERT INTO products (name, price) VALUES ('Test', -10);
-- Expected: ERROR (CHECK violation, price must be >= 0)

-- CASCADE: Delete parent → verify child behavior
DELETE FROM users WHERE id = 1;
-- Verify: orders for user_id=1 are CASCADE deleted/SET NULL per FK rule
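The expect-failure pattern above can be exercised end to end with Python's stdlib sqlite3 (the schema is illustrative; note SQLite only enforces foreign keys after PRAGMA foreign_keys = ON):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default
conn.executescript("""
CREATE TABLE users (
  id INTEGER PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT UNIQUE
);
CREATE TABLE orders (
  id INTEGER PRIMARY KEY,
  user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
  total REAL CHECK (total >= 0)
);
INSERT INTO users (id, name, email) VALUES (1, 'Alice', 'alice@example.com');
INSERT INTO orders (user_id, total) VALUES (1, 100.0);
""")

def violates(sql):
    """True if the statement raises IntegrityError, as a constraint test expects."""
    try:
        conn.execute(sql)
        return False
    except sqlite3.IntegrityError:
        return True

assert violates("INSERT INTO users (name, email) VALUES (NULL, 'x@example.com')")      # NOT NULL
assert violates("INSERT INTO users (name, email) VALUES ('Bob', 'alice@example.com')") # UNIQUE
assert violates("INSERT INTO orders (user_id, total) VALUES (999, 10)")                # FK
assert violates("INSERT INTO orders (user_id, total) VALUES (1, -5)")                  # CHECK
conn.execute("DELETE FROM users WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 0                  # CASCADE
```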

Data Migration Testing

Pre-Migration Checklist

-- 1. Record baseline counts
SELECT 'users' AS tbl, COUNT(*) AS cnt FROM users
UNION ALL SELECT 'orders', COUNT(*) FROM orders
UNION ALL SELECT 'products', COUNT(*) FROM products;

-- 2. Record sample checksums (MySQL syntax; in PostgreSQL use MD5(STRING_AGG(...)))
SELECT MD5(GROUP_CONCAT(id, name, email ORDER BY id)) AS checksum
FROM users WHERE id BETWEEN 1 AND 100;

-- 3. Record key aggregates
SELECT SUM(total) AS total_revenue FROM orders;
SELECT COUNT(DISTINCT user_id) AS active_users FROM orders;
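A sketch of capturing that baseline programmatically, using stdlib sqlite3 and hashlib (table names and the checksum scheme are illustrative; any stable serialization of the sample rows works):

```python
import hashlib
import sqlite3

def baseline(conn):
    """Capture row counts, a sample checksum, and key aggregates."""
    counts = {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
              for t in ("users", "orders")}
    rows = conn.execute(
        "SELECT id, name, email FROM users ORDER BY id LIMIT 100").fetchall()
    checksum = hashlib.md5(repr(rows).encode()).hexdigest()
    revenue = conn.execute("SELECT COALESCE(SUM(total), 0) FROM orders").fetchone()[0]
    return {"counts": counts, "users_checksum": checksum, "total_revenue": revenue}

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
INSERT INTO users VALUES (1, 'Alice', 'a@example.com');
INSERT INTO orders VALUES (1, 1, 100.0);
""")
before = baseline(conn)
# ... run the migration here ...
after = baseline(conn)
assert before == after  # no migration ran, so the baseline must be unchanged
```

Comparing `before` and `after` dicts after a real migration flags count, checksum, or aggregate drift in one place.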

Post-Migration Verification

-- 1. Row counts match (or differ by expected amount)
-- 2. Checksums match for unchanged data
-- 3. Aggregates match
-- 4. New columns have correct defaults
-- 5. Transformed data is correct

-- Verify data transformation (EXPECTED_TRANSFORM is a placeholder for your
-- transform expression; a column alias such as `status` cannot be referenced
-- in WHERE, so filter on the expression itself)
SELECT id, old_column, new_column
FROM migrated_table
WHERE new_column <> EXPECTED_TRANSFORM(old_column);

Migration Rollback Test

  1. Take snapshot/backup before migration
  2. Run migration forward
  3. Verify data integrity
  4. Run migration rollback
  5. Verify data matches pre-migration snapshot

Transaction & ACID Testing

Atomicity

-- Start transaction, perform multiple operations, simulate failure
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
-- Simulate error before commit
ROLLBACK;
-- Verify: both balances unchanged
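The same atomicity check, sketched with Python's stdlib sqlite3 (account IDs and amounts are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500.0), (2, 500.0)])
conn.commit()

def transfer(conn, amount, fail=False):
    """Move `amount` from account 1 to account 2; roll back everything on failure."""
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        if fail:
            raise RuntimeError("simulated failure before commit")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
        conn.commit()
    except RuntimeError:
        conn.rollback()

transfer(conn, 100.0, fail=True)
balances = [r[0] for r in conn.execute("SELECT balance FROM accounts ORDER BY id")]
assert balances == [500.0, 500.0]  # atomicity: the partial debit was rolled back
```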

Isolation Levels

| Level | Dirty Read | Non-Repeatable Read | Phantom Read |
|---|---|---|---|
| READ UNCOMMITTED | ✅ possible | ✅ possible | ✅ possible |
| READ COMMITTED | ❌ prevented | ✅ possible | ✅ possible |
| REPEATABLE READ | ❌ prevented | ❌ prevented | ✅ possible |
| SERIALIZABLE | ❌ prevented | ❌ prevented | ❌ prevented |

Reference: SQL:2016 standard, ISO/IEC 9075

Test procedure: Open two concurrent sessions, verify isolation behavior.

Performance: Slow Query Analysis

-- MySQL: Enable slow query log (server-global setting; use a test instance, not production)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;  -- seconds

-- MySQL: Find slow queries
SELECT query, exec_count, avg_latency, rows_examined_avg
FROM sys.statements_with_runtimes_in_95th_percentile
ORDER BY avg_latency DESC LIMIT 10;

-- PostgreSQL: Find slow queries (requires the pg_stat_statements extension)
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC LIMIT 10;

-- Check missing indexes
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
-- Look for: Seq Scan (bad) vs Index Scan (good)
-- Look for: high rows examined vs rows returned ratio
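The scan-vs-index check can be scripted. A minimal sketch using stdlib sqlite3's EXPLAIN QUERY PLAN (the orders table and index name are illustrative; plan wording varies across SQLite versions but contains SCAN or USING INDEX):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")

def plan(sql):
    """Return SQLite's query-plan description (the 4th column of each plan row)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM orders WHERE user_id = 123")
conn.execute("CREATE INDEX idx_orders_user ON orders(user_id)")
after = plan("SELECT * FROM orders WHERE user_id = 123")

assert "SCAN" in before        # full table scan without the index
assert "USING INDEX" in after  # index lookup once idx_orders_user exists
```

The MySQL/PostgreSQL equivalent is to assert on EXPLAIN ANALYZE output the same way (e.g. fail the test if "Seq Scan" appears for a query that should be indexed).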

Index Effectiveness

-- MySQL: Check index usage
SELECT table_name, index_name, seq_in_index, column_name
FROM information_schema.statistics
WHERE table_schema = DATABASE()
ORDER BY table_name, index_name, seq_in_index;

-- Unused indexes (MySQL 8.0+)
SELECT * FROM sys.schema_unused_indexes;

-- PostgreSQL: Unused indexes
SELECT indexrelname, idx_scan, pg_size_pretty(pg_relation_size(indexrelid))
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;

Python Test Script Pattern

"""Database test suite using pytest + direct DB connection.
Reference: PEP 249 (DB-API 2.0)
"""
import pytest
import os

# Use appropriate driver: mysql-connector-python, psycopg2, sqlite3
import mysql.connector  # or psycopg2 for PostgreSQL

@pytest.fixture
def db():
    conn = mysql.connector.connect(
        host=os.getenv("DB_HOST", "localhost"),
        user=os.getenv("DB_USER", "test"),
        password=os.getenv("DB_PASS", "test"),
        database=os.getenv("DB_NAME", "testdb"),
    )
    yield conn
    conn.rollback()  # always rollback test changes
    conn.close()

class TestUserTable:
    def test_insert_valid_user(self, db):
        cur = db.cursor()
        cur.execute(
            "INSERT INTO users (name, email) VALUES (%s, %s)",
            ("Test User", "test@example.com"))
        assert cur.rowcount == 1

    def test_insert_duplicate_email_fails(self, db):
        cur = db.cursor()
        cur.execute(
            "INSERT INTO users (name, email) VALUES (%s, %s)",
            ("User1", "dup@example.com"))
        with pytest.raises(mysql.connector.IntegrityError):
            cur.execute(
                "INSERT INTO users (name, email) VALUES (%s, %s)",
                ("User2", "dup@example.com"))

    def test_not_null_constraint(self, db):
        cur = db.cursor()
        with pytest.raises(mysql.connector.IntegrityError):
            cur.execute(
                "INSERT INTO users (name, email) VALUES (%s, %s)",
                (None, "test@example.com"))

    def test_cascade_delete(self, db):
        cur = db.cursor()
        cur.execute("DELETE FROM users WHERE id = %s", (1,))
        cur.execute("SELECT COUNT(*) FROM orders WHERE user_id = %s", (1,))
        assert cur.fetchone()[0] == 0  # orders cascade deleted

Data Consistency Verification

After API operations, verify database state:

# Pattern: API call → DB check
# 1. Call API to create order
curl -X POST "$URL/api/orders" -H "Content-Type: application/json" -d '{"item_id":1,"qty":2}'

# 2. Verify in database
mysql -h $DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME -e "SELECT * FROM orders ORDER BY id DESC LIMIT 1;"
mysql -h $DB_HOST -u $DB_USER -p$DB_PASS $DB_NAME -e "SELECT stock FROM products WHERE id = 1;"
# Verify: stock decreased by 2

References

For database-specific testing details:

  • MySQL specific tests: See references/mysql-tests.md
  • PostgreSQL specific tests: See references/postgresql-tests.md
