Mixlab Fill Content Review
Audited by ClawScan on May 10, 2026.
Overview
The skill mostly matches its Mixdao content-filling purpose, but its update flow can write raw or stale scraped content and uses an under-disclosed AI-provider credential path.
Review the temp/list output and exact IDs before allowing any update. Use scoped Mixdao and AI-provider credentials, confirm which AI endpoint receives content, and avoid automatic bulk updates until the raw-content fallback, URL validation, and stale-temp scoping issues are fixed.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Mixdao could be updated with full scraped article text instead of the promised short Chinese case description.
If summarization fails, the original scraped content remains in the item and is still submitted in the later batch update. This conflicts with SKILL.md’s claim that Mixdao receives the AI-organized summary rather than the raw long text.
```js
catch (err) {
  console.error(`[SKIP] 梳理失败 ${items[i].cachedStoryId}: ${err.message},保留原文`);
}
...
const data = await batchUpdateContent(chunk);
```

(The log message translates to "[SKIP] organizing failed {id}: {error}, keeping original text".)

Recommendation: change summarization failures to skip the affected item or require explicit user confirmation before uploading raw content, and document any fallback behavior clearly.
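The recommended skip-on-failure behavior could look like the following sketch. The `summarize` callback, the item shape, and the helper name are assumptions based on the review's description, not the skill's actual API:

```js
// Sketch: skip items whose summarization failed instead of falling back to raw text.
// `summarize` and the item fields are hypothetical, modeled on the reviewed flow.
async function summarizeAll(items, summarize) {
  const ready = [];
  const skipped = [];
  for (const item of items) {
    try {
      item.content = await summarize(item.rawText);
      ready.push(item);
    } catch (err) {
      // Do NOT keep item.rawText as the upload payload; record the failure instead.
      skipped.push({ id: item.cachedStoryId, reason: err.message });
    }
  }
  // Only `ready` should ever reach batchUpdateContent; report `skipped` to the user.
  return { ready, skipped };
}
```

This keeps raw scraped text out of the upload path entirely, matching the SKILL.md claim that only AI-organized summaries reach Mixdao.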
The agent could accidentally update old, unrelated, or wrong Mixdao records from stale temp files.
The list mode enumerates all temp text files, including files not in the provided JSON, and update mode accepts any supplied ID with a matching temp file without re-checking the current JSON or hasContent status.
```js
const files = fs.readdirSync(tempDir).filter(f => f.endsWith('.txt'));
...
const ids = args.filter(a => a.trim().length > 0);
const items = collectItems(ids);
```

Recommendation: require the JSON file during update, validate IDs against that file and the latest hasContent=false state, and ask the user to approve the final ID list before PATCHing.
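The recommended ID validation could be sketched as below. The field names `cachedStoryId` and `hasContent` follow the review; whether they match the skill's JSON exactly is an assumption:

```js
// Sketch: validate requested update IDs against the current JSON export
// before any PATCH, instead of trusting whatever temp files exist on disk.
function validateUpdateIds(requestedIds, jsonItems) {
  // Only items still marked as lacking content are eligible for an update.
  const eligible = new Set(
    jsonItems.filter(it => it.hasContent === false).map(it => it.cachedStoryId)
  );
  const approved = [];
  const rejected = [];
  for (const id of requestedIds) {
    (eligible.has(id) ? approved : rejected).push(id);
  }
  // Show `approved` to the user for confirmation; never PATCH `rejected` IDs.
  return { approved, rejected };
}
```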
A malicious or bad URL in Mixdao data could cause the local environment to fetch unintended local/private resources, which may later be stored, previewed, sent to the AI provider, or uploaded.
The script automatically fetches URLs taken from Mixdao API data with curl and does not validate scheme, host, localhost, or private-network targets.
```js
const curl = spawn('curl', ['-sL', '--max-time', '30', url]);
```

Recommendation: restrict fetching to http/https, block localhost/private-IP/file schemes, and show URLs for review before bulk fetching.
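A minimal pre-fetch check along the recommended lines might look like this. Note the hostname checks are string and literal-IP heuristics only; they do not resolve DNS, so they are a first filter rather than a complete SSRF defense:

```js
// Sketch: allow only http/https URLs and reject obvious localhost/private targets
// before handing anything to curl.
function isFetchableUrl(raw) {
  let url;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== 'http:' && url.protocol !== 'https:') return false;
  const host = url.hostname.toLowerCase();
  if (host === 'localhost' || host === '0.0.0.0' || host === '[::1]') return false;
  // Block loopback, RFC 1918, and link-local IPv4 literals.
  const m = host.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (m) {
    const a = Number(m[1]);
    const b = Number(m[2]);
    if (a === 10 || a === 127 || (a === 169 && b === 254) ||
        (a === 172 && b >= 16 && b <= 31) || (a === 192 && b === 168)) return false;
  }
  return true;
}
```

URLs that pass this filter should still be listed for user review before any bulk fetch.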
A user may misunderstand which provider receives the key and scraped content.
The code uses an environment variable named ANTHROPIC_API_KEY but sends requests to a MiniMax-compatible endpoint by default; the registry metadata declares no env vars and SKILL.md does not clearly name that provider destination.
```js
const apiKey = process.env.ANTHROPIC_API_KEY;
...
baseURL: baseURL || 'https://api.minimaxi.com/anthropic'
```
Recommendation: declare required env vars in metadata, clearly document the default AI provider endpoint, and consider using a provider-specific variable name or requiring an explicit base URL.
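The recommendation could be implemented by failing fast unless both values are set explicitly. `MINIMAX_API_KEY` and `AI_BASE_URL` are hypothetical variable names chosen for illustration, not variables the skill currently defines:

```js
// Sketch: require an explicit provider-specific key and base URL; no silent default endpoint.
function resolveProviderConfig(env) {
  const apiKey = env.MINIMAX_API_KEY;  // hypothetical provider-specific name
  const baseURL = env.AI_BASE_URL;     // hypothetical explicit endpoint variable
  if (!apiKey || !baseURL) {
    throw new Error('Set MINIMAX_API_KEY and AI_BASE_URL explicitly; no default endpoint is used.');
  }
  return { apiKey, baseURL };
}
```

With no fallback endpoint, the user always sees and chooses which provider receives the key and the scraped content.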
Future installs could resolve different dependency versions than the reviewed artifact expects.
The skill includes dependencies with semver ranges but no install spec or lockfile in the supplied artifacts. This is common but means exact installed code is not fully pinned in the review context.
"dependencies": { "@anthropic-ai/sdk": "^0.32.1", "jsonrepair": "^3.13.2" }Provide an install spec and lockfile or pin exact dependency versions for reproducible installs.
Old or incorrect scraped content can remain on disk and later influence updates if not cleaned up.
Fetched article bodies are stored persistently under temp/{cachedStoryId}.txt and can be reused by later list/update runs.
```js
fs.writeFileSync(tempFile, text, 'utf8');
```

Recommendation: add cleanup guidance, timestamp/run scoping, and safeguards that prevent stale temp files from being reused unintentionally.
