Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

'AI自动化测试平台' (AI Automated Testing Platform)

v1.0.0

This skill should be used when developing or using an AI-powered automated testing platform based on LangChain+DeepSeek. It implements intelligent test case...

Security Scan
Capability signals
Crypto · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code and SKILL.md implement an AI-driven test platform (LangChain/DeepSeek, FastAPI, Pytest, Playwright), consistent with the skill name and description. However, the registry metadata claims no required env vars and no install spec, while the SKILL.md and deploy docs clearly require DATABASE_URL, DEEPSEEK_API_KEY, and SECRET_KEY and include many source files; that metadata/code mismatch is noteworthy.
Instruction Scope
Runtime instructions and code parse/upload documents, save uploaded files to disk (data/uploads, data/scripts), spawn background tasks, call external DeepSeek API, and write generated scripts/reports to the filesystem. The SKILL.md also describes an application-level AuthInterceptor but several admin endpoints (e.g., /admin/create_auth) are implemented without explicit protection and are described as 'not subject to authorization interception' — this could allow unauthorized creation of auth codes if deployed without additional protections. The SKILL.md references environment variables (DATABASE_URL, DEEPSEEK_API_KEY, SECRET_KEY) that are not declared in the skill metadata.
Install Mechanism
No install spec is provided (instruction-only in registry), yet the bundle includes 70+ source files and deployment docs. Because code files are present but there's no install mechanism, it's unclear how the consumer is expected to run or trust these files. The lack of an install mechanism reduces automatic risk (nothing forced to download/extract), but means manual review and careful deployment are required.
Credentials
The platform legitimately needs DB credentials and an LLM API key, but those env vars are omitted from the registry metadata while being required by the SKILL.md and deploy docs; this is inconsistent. The SKILL.md also documents a weak AES key generation pattern ("'yanghua' + timestamp + '360sb'"), and the deploy docs recommend an insecure default MySQL root password (root123) in examples; both are security red flags. The number and sensitivity of required secrets (DB URL, API key, secret key) are proportional to the platform's function, but they must be declared and handled securely.
Persistence & Privilege
The skill does not declare always:true and is user-invocable/autonomous invocation is allowed by default (normal). It does write files to its own data directories and persists generated scripts/reports which is expected for this application. There is no evidence the skill attempts to modify other skills or system-wide agent settings.
What to consider before installing
This package appears to implement the described AI testing platform, but there are mismatches and several security concerns you should address before deploying:

  1. Metadata vs code mismatch: the registry lists no required env vars or install steps, but the project requires DATABASE_URL, DEEPSEEK_API_KEY and SECRET_KEY and includes full source and deployment docs. Ask the publisher to correct the metadata and provide provenance.
  2. Secrets handling: do not install or run this in production until you verify how DEEPSEEK_API_KEY and SECRET_KEY are stored and rotated; ensure they are provided via a secure mechanism (secret manager, environment injected at deploy), not checked into code.
  3. Admin endpoints & auth: the /admin/create_auth endpoint is described in the docs as 'not subject to authorization interception' and appears unprotected; confirm that administrative endpoints are access-controlled (e.g., protected by a separate admin auth) before exposing the service.
  4. Weak crypto / defaults: the AES key derivation described in SKILL.md is nonstandard and weak; replace it with a secure, secret-random key and a vetted crypto library. Also do not use the example DB password (root123).
  5. File handling & sandboxing: the service writes uploaded documents and generated scripts to disk and executes test scripts (pytest/playwright). Run it in an isolated environment (dedicated VM/container, network restrictions) and review the code that executes generated scripts to avoid arbitrary code execution on your host.
  6. Source provenance: this bundle contains many files; if you do not trust the source, perform a manual code review (or run it in an isolated test environment) and scan for hard-coded credentials, backdoors, or unexpected network calls.
  7. Recommended next steps: request a signed/official release or origin, run static analysis and dependency checks, confirm the metadata is corrected, verify admin auth is enabled, secure secrets, and deploy initially in an isolated environment for testing. If you want, I can list the specific files and code locations to inspect first (e.g., auth code creation/verification, AIGeneratorService usage of DEEPSEEK_API_KEY, places where uploaded files are executed).

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bz2nsc34k9t48kk61pn32w1856wtv
0 downloads · 0 stars · 1 version
Updated 5h ago
v1.0.0
MIT-0

AI Test Platform Developer Guide

Purpose

Build an AI-powered automated testing platform for internal company use, leveraging DeepSeek LLM and LangChain framework to achieve intelligent test case generation, automated script creation, and test execution management. The platform focuses on API testing (Pytest+Requests) and UI testing (Playwright) with Docker-based isolation and authorization control.

Technology Stack

| Layer | Technology | Version | Purpose |
| --- | --- | --- | --- |
| Frontend | Vue3 | ^3.3.0 | Reactive UI framework |
| Frontend UI | Element Plus | ^2.4.0 | UI component library |
| State Management | Pinia | ^2.1.0 | Global state management |
| Backend | FastAPI | ^0.104.0 | Async web framework |
| AI Framework | LangChain | ^0.1.0 | LLM orchestration |
| LLM | DeepSeek | API | Core AI capabilities |
| API Testing | Pytest | ^7.4.0 | Test framework |
| API Reporting | pytest-json-report | ^1.5.0 | JSON report generation |
| UI Testing | Playwright | ^1.40.0 | Browser automation |
| Database | MySQL | 8.0 | Relational database |
| Vector DB | Chroma | ^0.4.0 | Vector retrieval |
| Deployment | Docker | Latest | Containerization |

Architecture Overview

System Layers

【Frontend Layer】Vue3 + Element Plus + Pinia
      ↓ (Polling for progress)
【API Layer】FastAPI + Auth Interceptor
      ↓
【Business Layer】Auth Management, Test Design, Execution, Reporting
      ↓
【AI Layer】LangChain + DeepSeek + RAG (Chroma)
      ↓
【Test Engine Layer】Pytest (API) + Playwright (UI)
      ↓
【Data Layer】MySQL (Business) + Chroma (Vectors)

Core Modules

1. Authorization Management Module

Permission Types:

  • all - Full functionality
  • generate - Case/script generation only
  • execute - Script execution only

Encryption:

  • Algorithm: AES
  • Key generation: "yanghua" + timestamp + "360sb"

Authorization Flow:

Request with auth_code → AuthInterceptor → Verify validity/expire/count/permission
→ Pass: Allow access + increment usage count
→ Fail: Return 401/403
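The flow above can be sketched as a plain verification function; the names `AuthCode` and `verify_auth` are illustrative, not the actual service API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuthCode:
    code: str
    permission: str       # "all" | "generate" | "execute"
    expire_time: datetime
    use_count: int
    max_count: int
    is_active: bool

def verify_auth(auth: AuthCode, required: str, now: datetime) -> int:
    """Return an HTTP-style status: 200 pass, 401 invalid, 403 no permission."""
    if not auth.is_active or now >= auth.expire_time:
        return 401                      # inactive or expired code
    if auth.use_count >= auth.max_count:
        return 401                      # usage quota exhausted
    if auth.permission not in ("all", required):
        return 403                      # valid code, wrong permission
    auth.use_count += 1                 # pass: increment usage count
    return 200
```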

Key Services:

  • AuthCodeService: CRUD, validation, count updates
  • AuthService: Global request interceptor

2. AI Generation Module

Supported Document Formats:

  • Word (.docx) - python-docx
  • Excel (.xlsx) - openpyxl
  • PDF (.pdf) - PyPDF2/pdfplumber
  • Markdown (.md) - markdown
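A minimal dispatch by file extension, mirroring the format list above (the function name and error handling are illustrative):

```python
from pathlib import Path

# Parser library per supported extension, as listed above.
PARSERS = {
    ".docx": "python-docx",
    ".xlsx": "openpyxl",
    ".pdf": "pdfplumber",
    ".md": "markdown",
}

def pick_parser(filename: str) -> str:
    """Return the parser library name for an uploaded document."""
    ext = Path(filename).suffix.lower()
    if ext not in PARSERS:
        raise ValueError(f"unsupported document type: {ext!r}")
    return PARSERS[ext]
```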

Core Chains:

  • TestCaseChain: Generate test cases from documents
  • ApiScriptChain: Generate Pytest+Requests scripts
  • UiScriptChain: Generate Playwright scripts

AI Configuration:

  • No QPS limit
  • Retry: 2 times on failure
  • Timeout: 30 seconds
  • Expected usage: ≤20 calls/day
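The retry and timeout policy above could be wrapped like this; a sketch only, assuming the real service passes the timeout through to the LangChain/DeepSeek client:

```python
import time

def call_with_retry(fn, retries=2, timeout=30, backoff=0.5):
    """Call fn(timeout=...) and retry up to `retries` times on failure,
    matching the configuration above (2 retries, 30 s timeout)."""
    last_exc = None
    for attempt in range(retries + 1):
        try:
            return fn(timeout=timeout)
        except Exception as exc:
            last_exc = exc
            if attempt < retries:
                time.sleep(backoff * (2 ** attempt))  # brief backoff between attempts
    raise last_exc
```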

3. API Automation Module

Integration:

  • Execute via pytest.main()
  • Parse results using pytest-json-report
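After `pytest.main()` runs with the json-report plugin enabled, the report file can be summarized roughly as below. The `summary`/`tests` field names follow pytest-json-report's documented output, but treat them as an assumption to verify against your plugin version:

```python
import json

def summarize_report(path: str) -> dict:
    """Reduce a pytest-json-report file to pass/fail counts and
    the node ids of failing tests."""
    with open(path, encoding="utf-8") as fh:
        report = json.load(fh)
    summary = report.get("summary", {})
    failed = [t["nodeid"] for t in report.get("tests", [])
              if t.get("outcome") == "failed"]
    return {
        "passed": summary.get("passed", 0),
        "failed": summary.get("failed", 0),
        "failed_tests": failed,
    }
```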

Environment:

  • Docker container isolation
  • Dependencies managed via requirements.txt

Script Features:

  • Save, edit, categorize, batch manage
  • Configure test environment, headers, global params
  • Real-time debugging with request/response

4. UI Automation Module

Playwright Configuration:

  • Headless mode (no UI)
  • Screenshot on every test case (success + failure)
  • Trace on failure (HTML trace viewer)
  • Support Chrome and Edge
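With pytest-playwright, the policy above maps to command-line options roughly as follows. The option names are pytest-playwright's; the `chrome`/`msedge` channel values are an assumption to confirm against the browsers installed in your container:

```python
def playwright_pytest_args(browser_channel: str = "chrome") -> list:
    """Build pytest-playwright args for the configuration above:
    headless (the default), screenshots on every case, trace on failure."""
    if browser_channel not in ("chrome", "msedge"):
        raise ValueError("only Chrome and Edge are supported")
    return [
        "--browser", "chromium",
        "--browser-channel", browser_channel,
        "--screenshot", "on",              # success + failure screenshots
        "--tracing", "retain-on-failure",  # HTML trace viewer on failures
    ]
```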

Capabilities:

  • AI-generated element locators (ID/XPath)
  • Flow script generation (login, click, input, assert, screenshot)
  • Multi-browser compatibility

5. Test Execution Module

Features:

  • Single or batch script execution
  • Real-time log capture
  • Execution timeout control
  • Failure retry mechanism
  • Historical record management

6. Test Report Module

Auto-generation:

  • HTML reports after execution
  • Export to HTML/PDF
  • AI analysis of failures (simple log analysis)

AI Analysis Prompt:

Analyze the following test execution log and identify the failure reason:
{execution_log}

Provide:
1. Main failure cause
2. Possible problem location
3. Suggested solution

Keep it concise and highlight key information.
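The prompt is filled with the captured log before being sent to DeepSeek. Truncating to the log tail is an assumption added here so long logs stay within the model's context window:

```python
ANALYSIS_PROMPT = """Analyze the following test execution log and identify the failure reason:
{execution_log}

Provide:
1. Main failure cause
2. Possible problem location
3. Suggested solution

Keep it concise and highlight key information."""

def build_analysis_prompt(execution_log: str, max_chars: int = 4000) -> str:
    """Fill the analysis template with the (tail of the) execution log."""
    return ANALYSIS_PROMPT.format(execution_log=execution_log.strip()[-max_chars:])
```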

7. System Configuration Module

Configuration Management:

  • DeepSeek API settings
  • Environment URLs
  • System initialization

Features:

  • Operation logs
  • Authorization usage logs
  • AI call logs
  • Automatic data backup

Database Schema

Core Tables

  1. auth_codes - Authorization codes

    • Fields: id, code(encrypted), permission, expire_time, use_count, max_count, is_active
  2. test_cases - Test cases

    • Fields: id, title, content, type(api/ui), created_by, create_time
  3. auto_scripts - Automated scripts

    • Fields: id, name, content, type(api/ui), status, created_by, create_time, update_time
  4. execute_records - Execution records

    • Fields: id, script_id, auth_code, result(success/fail), log, execute_time, duration
  5. test_reports - Test reports

    • Fields: id, record_id, report_content, file_path, ai_analysis, create_time
  6. task_progress - Task progress tracking

    • Fields: id, task_id, task_type(generate/execute), status, progress, message, result_data

See references/architecture.md for complete SQL definitions.

API Design

Unified Response Format

{
  "code": 200,
  "msg": "success",
  "data": {}
}
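A small helper keeps every endpoint on this envelope (function names are illustrative):

```python
def api_response(data=None, code=200, msg="success") -> dict:
    """Wrap a payload in the platform's unified {code, msg, data} envelope."""
    return {"code": code, "msg": msg, "data": {} if data is None else data}

def api_error(code: int, msg: str) -> dict:
    """Error envelope: data stays an empty object."""
    return api_response(code=code, msg=msg)
```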

Common Parameters

  • auth_code (required) - Authorization code for all core endpoints

Core Endpoints

Authorization:

  • POST /admin/add_auth - Create authorization code
  • POST /auth/verify - Verify authorization
  • GET /auth/list - List authorization codes

AI Generation:

  • POST /generate/case - Generate test cases
  • POST /generate/api - Generate API scripts
  • POST /generate/ui - Generate UI scripts
  • GET /progress/{task_id} - Get task progress

Automation Management:

  • POST /script/save - Save script
  • GET /script/list - List scripts
  • GET /script/{id} - Get script details

Test Execution:

  • POST /execute/run - Run test script
  • GET /execute/record - Get execution records

Report Management:

  • GET /report/generate - Generate report
  • GET /report/export - Export report

System Configuration:

  • GET /system/config - Get system config
  • POST /system/config - Update system config

Frontend Progress Polling

Implementation:

  • State management: Pinia
  • Polling interval: Every 2 seconds
  • Progress types:
    • AI generation (parsing → vectorizing → generating → complete)
    • Script execution (running → collecting → reporting)

Polling Endpoint:

GET /api/progress/{task_id}
# Response
{
  "status": "processing|completed|failed",
  "progress": 50,
  "message": "Generating test cases..."
}
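The polling contract can be exercised with any HTTP client; here the fetch call is injected so the loop itself is plain Python. This is a sketch of the frontend's 2-second poller, not its actual Pinia implementation:

```python
import time

def poll_progress(fetch, task_id: str, interval: float = 2.0,
                  max_polls: int = 150) -> dict:
    """Poll /api/progress/{task_id} via `fetch` until the task completes
    or fails; `fetch(task_id)` returns the JSON body shown above."""
    for _ in range(max_polls):
        status = fetch(task_id)
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(interval)          # 2 s between polls by default
    raise TimeoutError(f"task {task_id} did not finish in time")
```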

Deployment

Docker Compose Setup

version: '3.8'
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: root123
      MYSQL_DATABASE: ai_test_platform
    volumes:
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"

  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - mysql
    volumes:
      - ./data:/app/data
    environment:
      - DATABASE_URL=mysql+pymysql://root:root123@mysql:3306/ai_test_platform
      - DEEPSEEK_API_KEY=your_api_key

volumes:
  mysql_data:

Data Persistence

  • MySQL: Docker Volume
  • Scripts: ./data/scripts/
  • Reports: ./data/reports/
  • Screenshots: ./data/screenshots/
  • Vectors: ./data/chroma/

When to Use This Skill

Use this skill when:

  • Developing the AI test platform backend or frontend
  • Implementing test case generation features
  • Creating API or UI automated test scripts
  • Integrating AI models (DeepSeek) with LangChain
  • Setting up authorization and security mechanisms
  • Configuring Docker deployment environment
  • Debugging test execution issues
  • Designing test report generation

Usage Guidelines

Backend Development

  1. Follow FastAPI best practices
  2. Implement async/await for I/O operations
  3. Use Pydantic models for request/response validation
  4. Implement proper exception handling
  5. Add logging for all operations

Frontend Development

  1. Use Vue3 Composition API
  2. Implement proper state management with Pinia
  3. Handle loading and error states
  4. Implement progress polling for async operations
  5. Follow Element Plus component guidelines

AI Integration

  1. Use LangChain for chain orchestration
  2. Implement retry logic for API calls
  3. Handle timeouts gracefully
  4. Cache common responses
  5. Monitor API usage

Security Considerations

  • All core endpoints must verify auth_code
  • Encrypt authorization codes using AES
  • Sanitize all user inputs
  • Implement rate limiting (optional)
  • Never expose sensitive data in logs
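On the AES point: the key pattern documented earlier ("yanghua" + timestamp + "360sb") is predictable and should be replaced with a randomly generated key held in a secret store. A stdlib sketch of the safer alternative:

```python
import secrets

def generate_aes_key() -> bytes:
    """Generate a random 256-bit AES key. Unlike string-concatenation
    schemes, this is not guessable from public values; persist it in a
    secret manager or an injected env var, never in source code."""
    return secrets.token_bytes(32)
```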

Project Structure

ai-test-platform/
├── backend/
│   ├── app/
│   │   ├── api/              # API endpoints
│   │   ├── core/             # Core security and config
│   │   ├── models/           # Database models
│   │   ├── schemas/          # Pydantic schemas
│   │   ├── services/         # Business logic
│   │   │   ├── auth.py       # Authorization service
│   │   │   ├── ai.py         # AI generation service
│   │   │   ├── execute.py    # Execution service
│   │   │   └── report.py     # Report service
│   │   └── main.py           # FastAPI app
│   ├── tests/                # Test files
│   ├── Dockerfile
│   └── requirements.txt
├── frontend/
│   ├── src/
│   │   ├── components/       # Vue components
│   │   ├── pages/            # Page components
│   │   ├── stores/           # Pinia stores
│   │   ├── api/              # API calls
│   │   └── main.js           # Entry point
│   └── package.json
├── data/                     # Persistent data
│   ├── scripts/
│   ├── reports/
│   ├── screenshots/
│   └── chroma/
├── docs/                     # Documentation
├── scripts/                  # Utility scripts
│   ├── generate_auth.py      # Auth code generator
│   ├── init_db.py            # Database initializer
│   └── setup.sh              # Setup script
├── docker-compose.yml
└── README.md

Development Workflow

1. Setup Development Environment

# Clone and setup
git clone <repo>
cd ai-test-platform

# Backend setup
cd backend
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows
pip install -r requirements.txt

# Frontend setup
cd ../frontend
npm install

2. Initialize Database

python scripts/init_db.py

3. Generate Authorization Codes

python scripts/generate_auth.py

4. Run Development Server

# Backend
cd backend
uvicorn app.main:app --reload

# Frontend
cd frontend
npm run dev

5. Docker Deployment

docker-compose up -d

Reference Documents

  • references/architecture.md - Complete system architecture and database design
  • references/AI 自动化测试平台 需求规格说明书.docx - Original requirements
  • references/AI 自动化测试平台 系统设计说明书.docx - Original system design

Next Steps

  1. Review architecture documentation in references/architecture.md
  2. Set up development environment
  3. Initialize database schema
  4. Generate authorization codes using provided script
  5. Start with core backend services (Auth, AI Generation)
  6. Implement frontend components
  7. Test end-to-end workflows
  8. Deploy using Docker Compose

Notes

  • This platform is designed for internal company use only
  • All data must be stored internally, no external uploads
  • DeepSeek API is the only allowed external model integration
  • Authorization codes are mandatory for all core features
  • Docker-based isolation ensures security and consistency
