Fix Llm Artifacts

v1.1.4

Applies fixes from a prior review-llm-artifacts run, with safe/risky classification

by Kevin Anderson (@anderskev)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for anderskev/fix-llm-artifacts.

Prompt Preview: Install & Setup
Install the skill "Fix Llm Artifacts" (anderskev/fix-llm-artifacts) from ClawHub.
Skill page: https://clawhub.ai/anderskev/fix-llm-artifacts
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install fix-llm-artifacts

ClawHub CLI


npx clawhub@latest install fix-llm-artifacts
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
The skill claims to apply fixes from a prior review, and its steps (reading .beagle/llm-artifacts-review.json, running git operations, applying code edits, running linters/tests) align with that purpose. However, the SKILL.md assumes the availability of developer tools (git, jq, ruff, mypy, npx/npm/yarn, pytest, go, etc.) while the registry metadata declares no required binaries; this mismatch should be noted before running.
Instruction Scope
Instructions operate directly on the repository (git stash, apply fixes, run linters/tests, potentially remove .beagle/llm-artifacts-review.json). That is within scope for a fixer. The doc instructs spawning parallel agents using a 'Task' tool to apply fixes — this grants the skill the ability to perform many parallel edit operations and should be used with caution. Risky fixes are interactively prompted (y/n/s), which reduces silent destructive changes.
Install Mechanism
Instruction-only skill with no install spec or code files; nothing is written to disk by an installer. This is the lowest-risk install model.
Credentials
The skill requests no credentials or environment variables. It reads repository state and a local review JSON file (.beagle/llm-artifacts-review.json), which is appropriate for its purpose. No unrelated secrets or external endpoints are referenced.
Persistence & Privilege
The skill is not forced always-on and has disable-model-invocation set to true (reducing autonomous model use). It modifies only repository files and its own review artifact file; it does not request system-wide configuration or other skills' settings.
Assessment
This skill appears to do what it says, but it will modify your repository and expects common developer tools to be present. Before running: (1) run with --dry-run first to preview changes, (2) make sure you have a clean backup or commit (the skill will stash dirty work and may delete the .beagle review file on success), (3) ensure required tools are installed (git, jq, ruff/mypy for Python, npm/npx or yarn for JS/TS, pytest, the go toolchain, as relevant), and (4) be prepared to respond to prompts for risky fixes. If you manage a shared or production repo, test on a branch, a clone, or a CI run before applying automatic fixes.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fantt1fbnn2n6cdpk90q54s83w6eb
107 downloads
0 stars
1 version
Updated 4w ago
v1.1.4
MIT-0

Fix LLM Artifacts

Apply fixes from a previous review-llm-artifacts run with automatic safe/risky classification.

Usage

/beagle-core:fix-llm-artifacts [--dry-run] [--all] [--category <name>]

Flags:

  • --dry-run - Show what would be fixed without changing files
  • --all - Fix entire codebase (runs review with --all first)
  • --category <name> - Only fix specific category: tests|dead-code|abstraction|style

Instructions

1. Parse Arguments

Extract flags from $ARGUMENTS:

  • --dry-run - Preview mode only
  • --all - Full codebase scan
  • --category <name> - Filter to specific category
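The argument handling above could be sketched in shell roughly as follows. The function and variable names (`parse_args`, `dry_run`, `fix_all`, `category`) are illustrative, not part of the skill:

```shell
# Hypothetical sketch of flag parsing for $ARGUMENTS; names are illustrative.
dry_run=false
fix_all=false
category=""

parse_args() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --dry-run) dry_run=true ;;
      --all)     fix_all=true ;;
      --category)
        category="$2"   # take the next argument as the category name
        shift
        ;;
      *) echo "Unknown flag: $1" >&2 ;;
    esac
    shift
  done
}

parse_args --dry-run --category dead-code
echo "$dry_run $fix_all $category"   # → true false dead-code
```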

2. Pre-flight Safety Checks

# Check for uncommitted changes
git status --porcelain

If working directory is dirty, warn:

Warning: You have uncommitted changes. Creating a git stash before proceeding.
Run `git stash pop` to restore if needed.

Create stash if dirty:

git stash push -m "beagle-core: pre-fix-llm-artifacts backup"

3. Load Review Results

Check for existing review file:

cat .beagle/llm-artifacts-review.json 2>/dev/null

If file missing:

  • If --all flag: Run review-llm-artifacts --all --json first
  • Otherwise: Fail with: "No review results found. Run /beagle-core:review-llm-artifacts first."
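The fallback logic reduces to a three-way decision. A minimal sketch, where `decide_review_action` is a hypothetical helper rather than anything the skill defines:

```shell
# Decide how to proceed based on review-file presence and the --all flag.
decide_review_action() {
  # $1 = review file path, $2 = "true" if --all was passed
  if [ -f "$1" ]; then
    echo "use-existing"      # proceed to freshness validation
  elif [ "$2" = "true" ]; then
    echo "run-review-all"    # run review-llm-artifacts --all --json first
  else
    echo "fail-no-results"   # tell the user to run the review command
  fi
}
```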

If file exists, validate freshness:

# Get stored git HEAD from JSON
stored_head=$(jq -r '.git_head' .beagle/llm-artifacts-review.json)
current_head=$(git rev-parse HEAD)

if [ "$stored_head" != "$current_head" ]; then
  echo "Warning: Review was run at commit $stored_head, but HEAD is now $current_head"
fi

If stale, prompt: "Review results are stale. Re-run review? (y/n)"

4. Partition Findings by Safety

Parse findings from JSON and classify by fix_safety field:

Safe Fixes (auto-apply):

  • unused_import - Unused imports
  • todo_comment - Stale TODO/FIXME comments
  • dead_code_obvious - Obviously unreachable code
  • verbose_comment - Overly verbose LLM-style comments
  • redundant_type - Redundant type annotations

Risky Fixes (require confirmation):

  • test_refactor - Test structure changes
  • abstraction_change - Class/function extraction
  • code_removal - Removing functional code
  • mock_boundary - Test mock scope changes
  • logic_change - Any behavioral modifications
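One way to encode this safe/risky split is a simple lookup from finding type to safety class, matching the two lists above. `classify_fix` is a hypothetical helper, and in practice the review JSON's `fix_safety` field is authoritative:

```shell
# Map a finding type to "safe" or "risky", mirroring the lists above.
classify_fix() {
  case "$1" in
    unused_import|todo_comment|dead_code_obvious|verbose_comment|redundant_type)
      echo "safe" ;;
    test_refactor|abstraction_change|code_removal|mock_boundary|logic_change)
      echo "risky" ;;
    *)
      echo "unknown" ;;   # unrecognized types should not be auto-applied
  esac
}
```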

5. Apply Safe Fixes

If --dry-run:

## Safe Fixes (would apply automatically)

| File | Line | Type | Description |
|------|------|------|-------------|
| src/api.py | 15 | unused_import | Remove `from typing import List` |
| src/models.py | 42 | verbose_comment | Remove 23-line docstring |
...

Otherwise, spawn parallel agents per category with Task tool:

Task: Apply safe fixes for category "{category}"
Files: [list of files with findings in this category]
Instructions: Apply each fix, preserving surrounding code. Report success/failure per fix.

Categories to parallelize:

  • style - Comments, formatting
  • dead-code - Imports, unreachable code
  • tests - Test-related safe fixes
  • abstraction - Safe refactors

6. Handle Risky Fixes

For each risky fix, prompt interactively:

[src/services/auth.py:156] Remove seemingly unused authenticate_legacy() method?
This method has no callers in the codebase but may be used externally.
(y)es / (n)o / (s)kip all risky:

Track user choices:

  • y - Apply this fix
  • n - Skip this fix
  • s - Skip all remaining risky fixes
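The y/n/s handling could be sketched as a loop that reads one decision per risky fix, honoring "skip all" for the remainder. `prompt_risky_fixes` and the `applied:`/`skipped:` output format are stand-ins for the skill's real machinery:

```shell
# Read one y/n/s choice per risky fix from stdin; after "s", skip the rest.
prompt_risky_fixes() {
  skip_all=false
  for fix in "$@"; do
    if [ "$skip_all" = "true" ]; then
      echo "skipped:$fix"
      continue
    fi
    printf '%s (y)es / (n)o / (s)kip all risky: ' "$fix" >&2
    read -r choice
    case "$choice" in
      y) echo "applied:$fix" ;;
      s) skip_all=true; echo "skipped:$fix" ;;
      *) echo "skipped:$fix" ;;   # "n" or anything else: skip this fix only
    esac
  done
}
```

Piping `printf 'y\ns\n'` into the function with three fixes would apply the first and skip the remaining two.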

7. Post-Fix Verification

Detect project type and run appropriate linters:

Python:

# Check if ruff config exists
if [ -f "pyproject.toml" ] || [ -f "ruff.toml" ]; then
    ruff check --fix .
    ruff format .
fi

# Check if mypy config exists
if [ -f "pyproject.toml" ] || [ -f "mypy.ini" ]; then
    mypy .
fi

TypeScript/JavaScript:

# Check for eslint
if [ -f "eslint.config.js" ] || [ -f ".eslintrc.json" ]; then
    npx eslint --fix .
fi

# Check for TypeScript
if [ -f "tsconfig.json" ]; then
    npx tsc --noEmit
fi

Go:

if [ -f "go.mod" ]; then
    go vet ./...
    go build ./...
fi

8. Run Tests

# Python
if [ -f "pyproject.toml" ] || [ -f "pytest.ini" ]; then
    pytest
fi

# JavaScript/TypeScript
if [ -f "package.json" ]; then
    npm test 2>/dev/null || yarn test 2>/dev/null || true
fi

# Go
if [ -f "go.mod" ]; then
    go test ./...
fi

9. Report Results

## Fix Summary

### Applied Fixes
- [x] src/api.py:15 - Removed unused import `List`
- [x] src/models.py:42-64 - Removed verbose docstring
- [x] src/auth.py:156-189 - Removed dead method (user confirmed)

### Skipped Fixes
- [ ] src/services/cache.py:23 - User declined risky fix
- [ ] tests/test_api.py:45 - Test refactor skipped

### Verification Results
- Linter: PASSED
- Type check: PASSED
- Tests: PASSED (42 passed, 0 failed)

### Diff Summary
```bash
git diff --stat
```

Cleanup

On successful completion (all verifications pass):

rm .beagle/llm-artifacts-review.json

If any verification fails, keep the file and report:

Review file preserved at .beagle/llm-artifacts-review.json
Fix issues and re-run, or restore with: git stash pop
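The cleanup rule amounts to a single conditional. A minimal sketch, with `cleanup_review` as a hypothetical helper and the pass/fail flag supplied by the verification steps above:

```shell
# Remove the review file only when every verification step passed.
cleanup_review() {
  # $1 = review file path, $2 = "pass" when linter/type-check/tests all passed
  if [ "$2" = "pass" ]; then
    rm -f "$1"
    echo "removed"
  else
    echo "Review file preserved at $1" >&2
    echo "kept"
  fi
}
```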

Example

# Preview all fixes without applying
/beagle-core:fix-llm-artifacts --dry-run

# Fix only dead code issues
/beagle-core:fix-llm-artifacts --category dead-code

# Full codebase scan and fix
/beagle-core:fix-llm-artifacts --all

# Fix style issues only, preview first
/beagle-core:fix-llm-artifacts --category style --dry-run
