Sui Auto Test

v1.0.0

Analyze Sui Move test coverage, identify untested code, write missing tests, and perform security audits. Includes Python tools for parsing coverage output and generating reports.

0 · 1.2k · 0 current · 0 all-time
by Eason Chen (@easonc13) · duplicate of @easonc13/sui-coverage
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's stated purpose is coverage analysis and automatic test improvement. The bundled Python scripts (parse/analyze) align with analyzing coverage and producing suggestions, but the SKILL.md language ('write missing tests' / 'auto-improve') overstates the automation: the code parses coverage output and generates suggestions and reports, but never generates or applies test files itself. The runtime also expects the 'sui' CLI to be present and executes it, yet the registry metadata declares no required binaries; that mismatch is unexplained.
Instruction Scope
Instructions direct the agent (or user) to run `sui move test --coverage --trace` and to run the bundled Python scripts on coverage output and the repository source. That scope is appropriate for a coverage tool, but the SKILL.md also instructs the agent to commit changes (git add/commit). The scripts read repository source files and may execute an external binary ('sui') via fork/exec; they do not access unrelated system paths or external endpoints. Be aware that the agent will read, and may modify, local repo files if you follow the commit step.
Install Mechanism
No install spec is provided (instruction-only with bundled scripts), so nothing is downloaded at install time. This minimizes supply-chain risk. However, running the skill executes the user's local 'sui' binary (via PTY/fork/exec), which is an external dependency not declared in metadata.
Credentials
The skill requests no environment variables or credentials. The scripts operate on local files and stdin/stdout. That is proportionate to the stated purpose. Note: running git commit may require the user's git identity/credentials but the skill does not request or handle secrets itself.
Persistence & Privilege
The manifest's `always` flag is false, and the skill does not request persistent or elevated platform privileges. It does suggest committing changes to the repository, but it does not modify other skills or system-wide configuration.
What to consider before installing
This skill appears to be a helpful analyzer for Sui Move coverage, but several things don't add up:

  • The package metadata lists no required binaries, yet the runtime clearly expects and execs the 'sui' CLI. Make sure a trusted 'sui' binary is on PATH before running; a malicious or symlinked executable named 'sui' would be executed by the script.
  • The README/description implies automatic test creation, but the included Python tools only parse colored coverage output and generate suggestions/reports. No code auto-generates or inserts Move test files. Expect to write tests yourself (or have the agent author them as content, then apply and commit them manually after review).
  • One of the included files (analyze.py) appears truncated in the packaged content (a line 'source_lines = read' and truncated sections). That indicates either a packaging bug or an incomplete file, and the tool may fail at runtime. Review the scripts locally before using them.
  • The scripts use os.fork/os.execvp/pty to run the 'sui' CLI and will read source files in the package path. Run the skill in a sandboxed or non-sensitive repository first, and inspect the scripts yourself. If you intend to let an agent run this autonomously, restrict its repository access and ensure the 'sui' binary is the expected official build.

Recommended actions before installing or using:

  • Verify the source of this skill (an owner is listed, but the homepage/source is unknown).
  • Inspect the full content of the included Python scripts locally; fix or obtain the complete analyze.py if needed.
  • Ensure the 'sui' on your PATH is the legitimate CLI, and run the workflow manually once to confirm its behavior.
  • If you allow the agent to commit changes, review diffs before pushing to any remote.
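The first recommendation above can be checked mechanically. A minimal sketch (the helper name is hypothetical) that resolves which `sui` executable a PATH lookup would actually hand to the skill:

```python
import shutil
from typing import Optional

def resolve_binary(name: str) -> Optional[str]:
    """Return the absolute path a PATH lookup yields for `name`, or None."""
    return shutil.which(name)

path = resolve_binary("sui")
if path is None:
    print("no `sui` on PATH; install the official CLI before running the skill")
else:
    # Confirm this is the official build before letting the skill exec it,
    # e.g. by running `sui --version` and comparing against a known release.
    print(f"would execute: {path}")
```

This only shows which file would be executed; verifying that the binary is the official release still requires checking its version or checksum yourself.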

Like a lobster shell, security has layers — review code before you run it.

latest: vk970h6254s7252p28117xjrjj980rypn
1.2k downloads
0 stars
1 version
Updated 1mo ago
v1.0.0
MIT-0

Sui Coverage Skill

Analyze and automatically improve Sui Move test coverage with security analysis.

Quick Reference

# Location of tools
SKILL_DIR=~/clawd/skills/sui-coverage

# Full workflow
cd /path/to/move/package
sui move test --coverage --trace
python3 $SKILL_DIR/analyze_source.py -m <module> -o coverage.md

Workflow: Auto-Improve Test Coverage

Step 1: Run Coverage Analysis

cd <package_path>
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module_name> -o coverage.md

Step 2: Read the Coverage Report

Read the generated coverage.md to identify:

  • 🔴 Uncalled functions - Functions never executed
  • 🔴 Uncovered assertions - assert!() failure paths not tested
  • 🔴 Uncovered branches - if/else paths not taken

Step 3: Write Missing Tests

For each uncovered item, write a test:

A. Uncalled Function

#[test]
fun test_<function_name>() {
    // Setup
    let mut ctx = tx_context::dummy();
    // Call the uncovered function
    <function_name>(...);
    // Assert expected behavior
}

B. Assertion Failure Path (expect_failure)

#[test]
#[expected_failure(abort_code = <ERROR_CODE>)]
fun test_<function>_fails_when_<condition>() {
    let mut ctx = tx_context::dummy();
    // Setup state that triggers the assertion failure
    <function_call_that_should_fail>();
}

C. Branch Coverage (if/else)

#[test]
fun test_<function>_when_<condition_true>() { ... }

#[test]  
fun test_<function>_when_<condition_false>() { ... }

Step 4: Verify Coverage Improved

sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module_name>

Tools

1. analyze_source.py (Primary Tool)

python3 ~/clawd/skills/sui-coverage/analyze_source.py --module <name> [options]

Options:
  -m, --module    Module name (required)
  -p, --path      Package path (default: .)
  -o, --output    Output file (e.g., coverage.md)
  --json          JSON output
  --markdown      Markdown to stdout

2. analyze.py (LCOV Statistics)

sui move coverage lcov
python3 ~/clawd/skills/sui-coverage/analyze.py lcov.info -f "<package>" -s sources/

Options:
  -f, --filter       Filter by path pattern
  -s, --source-dir   Source directory for context
  -i, --issues-only  Only show files with issues
  -j, --json         JSON output
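The lcov.info file emitted by `sui move coverage lcov` uses the standard LCOV trace format (`SF:` opens a source-file section, `DA:<line>,<hits>` records per-line hit counts, `end_of_record` closes it). A self-contained parser sketch, an illustrative stand-in rather than the bundled analyze.py:

```python
def parse_lcov(text):
    """Map each source file to its line-coverage ratio and missed line numbers."""
    files, current, hits = {}, None, {}
    for raw in text.splitlines():
        rec = raw.strip()
        if rec.startswith("SF:"):                        # start of a file section
            current, hits = rec[3:], {}
        elif rec.startswith("DA:") and current is not None:
            lineno, count = rec[3:].split(",")[:2]       # "DA:<line>,<hits>[,...]"
            hits[int(lineno)] = int(count)
        elif rec == "end_of_record" and current is not None:
            covered = sum(1 for h in hits.values() if h > 0)
            files[current] = {
                "coverage": covered / len(hits) if hits else 1.0,
                "missed": sorted(n for n, h in hits.items() if h == 0),
            }
            current = None
    return files

report = parse_lcov("SF:sources/my_module.move\nDA:1,5\nDA:2,0\nend_of_record")
# → {"sources/my_module.move": {"coverage": 0.5, "missed": [2]}}
```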

3. parse_bytecode.py (Low-level)

sui move coverage bytecode --module <name> | python3 ~/clawd/skills/sui-coverage/parse_bytecode.py

Common Patterns

Testing Assertion Failures

// Source code:
public fun withdraw(balance: &mut u64, amount: u64) {
    assert!(*balance >= amount, EInsufficientBalance);  // ← This failure path
    *balance = *balance - amount;
}

// Test for the failure path:
#[test]
#[expected_failure(abort_code = EInsufficientBalance)]
fun test_withdraw_insufficient_balance() {
    let mut balance = 50;
    withdraw(&mut balance, 100);  // Should fail: 50 < 100
}

Testing All Branches

// Source code:
public fun classify(value: u64): u8 {
    if (value == 0) {
        0
    } else if (value < 100) {
        1
    } else {
        2
    }
}

// Tests for all branches:
#[test]
fun test_classify_zero() {
    assert!(classify(0) == 0, 0);
}

#[test]
fun test_classify_small() {
    assert!(classify(50) == 1, 0);
}

#[test]
fun test_classify_large() {
    assert!(classify(100) == 2, 0);
}

Testing Object Lifecycle

#[test]
fun test_full_lifecycle() {
    let mut ctx = tx_context::dummy();
    
    // Create
    let mut obj = create(&mut ctx);
    assert!(get_value(&obj) == 0, 0);
    
    // Modify
    increment(&mut obj);
    assert!(get_value(&obj) == 1, 0);
    
    // Destroy
    destroy(obj);
}

Error Code Reference

When writing #[expected_failure] tests, use the error constant name:

// If the module defines:
const EInvalidInput: u64 = 1;
const ENotAuthorized: u64 = 2;

// Use in a test:
#[test]
#[expected_failure(abort_code = EInvalidInput)]
fun test_invalid_input() { ... }

// Or use the module-qualified name:
#[test]
#[expected_failure(abort_code = my_module::EInvalidInput)]
fun test_invalid_input_qualified() { ... }

Example: Full Auto-Coverage Session

# 1. Analyze current coverage
cd ~/project/my_package
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m my_module -o coverage.md

# 2. Review what's missing
cat coverage.md
# Shows:
# - decrement() not called
# - assert!(value > 0, EValueZero) failure not tested

# 3. Add tests to sources/my_module.move or tests/my_module_tests.move
# (write the missing tests)

# 4. Verify improvement
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m my_module

# 5. Repeat until 100% coverage

Integration with Agent Workflow

When asked to improve test coverage:

  1. Run analysis - Get current coverage state
  2. Read source - Understand the module's logic
  3. Identify gaps - List uncovered functions/branches/assertions
  4. Security review - Analyze for vulnerabilities while writing tests
  5. Write tests - Create tests for each gap + security edge cases
  6. Report findings - Document any security concerns discovered
  7. Verify - Re-run coverage to confirm improvement

Always commit test improvements:

git add sources/ tests/
git commit -m "Improve test coverage for <module>"

Security Analysis During Testing

Writing tests = Understanding the contract = Finding vulnerabilities

When writing tests, actively look for these issues:

1. Access Control

Questions to ask:
- Who can call this function?
- Should there be owner/admin checks?
- Can unauthorized users manipulate state?

Red flags:
- Public functions that modify critical state without checks
- Missing capability/witness patterns

2. Integer Overflow/Underflow

Questions to ask:
- What happens at u64::MAX?
- What happens when subtracting from 0?
- Are arithmetic operations checked?

Test pattern:
#[test]
fun test_overflow_boundary() {
    // Test with max values
}

3. State Manipulation

Questions to ask:
- Can the contract be left in an inconsistent state?
- Are all state changes atomic?
- Can partial failures corrupt data?

Red flags:
- Multiple state changes without rollback
- Shared objects without proper locking

4. Economic Exploits

Questions to ask:
- Can someone extract more value than deposited?
- Are there rounding errors that can be exploited?
- Flash loan attack vectors?

Red flags:
- Price calculations without slippage protection
- Unbounded loops over user-controlled data

5. Denial of Service

Questions to ask:
- Can someone block legitimate users?
- Are there unbounded operations?
- Can storage be filled maliciously?

Red flags:
- Vectors that grow unbounded
- Loops over external data

Security Report Template

When analyzing a module, generate a security report:

## Security Analysis: <module_name>

### Summary
- Risk Level: [Low/Medium/High/Critical]
- Issues Found: X

### Findings

#### [SEVERITY] Issue Title
- **Location:** Line XX
- **Description:** What the issue is
- **Impact:** What could happen
- **Recommendation:** How to fix

### Tested Edge Cases
- [ ] Overflow at max values
- [ ] Underflow at zero
- [ ] Unauthorized access attempts
- [ ] Empty/null inputs
- [ ] Reentrancy scenarios

Example: Security-Aware Test

// SECURITY: Testing that non-owner cannot withdraw
#[test]
#[expected_failure(abort_code = ENotOwner)]
fun test_unauthorized_withdraw() {
    // Setup: Create vault owned by ALICE
    // Action: BOB tries to withdraw
    // Expected: Should fail with ENotOwner
}

// SECURITY: Testing overflow protection
#[test]
fun test_deposit_overflow_protection() {
    // Deposit near u64::MAX
    // Verify no overflow occurs
}

// SECURITY: Testing economic invariant
#[test]
fun test_total_supply_invariant() {
    // After any operations:
    // sum(all_balances) == total_supply
}

Full Workflow with Security

# 1. Coverage analysis
sui move test --coverage --trace
python3 ~/clawd/skills/sui-coverage/analyze_source.py -m <module> -o coverage.md

# 2. While writing tests, document security findings
# Create SECURITY.md alongside coverage.md

# 3. After tests pass, summarize:
# - Coverage: X% → 100%
# - Security issues found: N
# - Recommendations: ...
