Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Translate TXT

v1.0.0

Translate text files using OpenAI-compatible APIs (e.g. SiliconFlow, DeepSeek, OpenAI). Use when the user wants to: translate a txt file, translate text to C...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for litousteven/smart-translate-txt.

Prompt Preview: Install & Setup
Install the skill "Translate TXT" (litousteven/smart-translate-txt) from ClawHub.
Skill page: https://clawhub.ai/litousteven/smart-translate-txt
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install smart-translate-txt

ClawHub CLI


npx clawhub@latest install smart-translate-txt
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md, setup.sh, and scripts/translate.py all implement a file-translation tool using OpenAI-compatible APIs (SiliconFlow, DeepSeek, OpenAI). Functionality (chunking, glossary, sliding window, concurrent calls) aligns with the description.
Instruction Scope
Runtime instructions are scoped to configuring an API key (via .env or env vars), running the translation script, and reporting an OUTPUT:<path>. They instruct the agent to run the included setup.sh non-interactively (which writes TRANSLATE_API_KEY into a .env in the skill directory). There is nothing in SKILL.md that requests unrelated system files or secrets, but the non-interactive setup path means an agent could be asked to persistently write keys to disk.
Install Mechanism
No external install/downloads are performed. This is an instruction+code bundle shipped in the skill. No remote archives, package installs, or URL downloads are used.
Credentials
The skill requires an API key (TRANSLATE_API_KEY) and other translation-related env vars (base URL, model, etc.), which are proportionate to its purpose. However, the registry metadata provided to the evaluator lists no required env vars or primary credential, while SKILL.md marks TRANSLATE_API_KEY as required. This mismatch increases risk because automated tooling or permission checks may not surface the need to supply a secret. Additionally, the base URL is user-configurable, so the API key is sent to whatever endpoint is configured; a malicious or misconfigured base_url could exfiltrate the key.
Persistence & Privilege
always is false and model invocation is allowed (default). The skill writes a .env in its own skill directory via setup.sh, which is normal for credentials storage. The skill does not attempt to modify other skills or system-wide configs.
What to consider before installing
This skill appears to implement a legitimate translator, but take these precautions before installing or running it:

  • Expect to provide a translation API key (TRANSLATE_API_KEY). The registry metadata omitted this; confirm you or your org are comfortable storing that key in the skill's .env file (~/.comate/skills/translate-txt/.env) before running setup.sh. Treat it like any other API secret.
  • Verify the TRANSLATE_BASE_URL you configure. The script will POST your API key and text chunks to whatever base_url you set; only use official provider endpoints you trust (e.g., api.openai.com, api.siliconflow.cn, api.deepseek.com). Do not point it to unknown or personal endpoints.
  • If you will run the non-interactive setup via an agent, avoid passing high-privilege or multi-service keys. Prefer a scoped or expendable key for testing.
  • Review scripts/translate.py and setup.sh locally (they are included) before running. They are small and readable; no obfuscated code was found, but manual inspection is still recommended.
  • Note: test.txt includes copyrighted text; ensure you have rights to translate the files you submit.

If you want me to, I can: (a) highlight the exact lines where the API key is written/read and where network calls are made, (b) produce a short safe checklist for sandboxed testing, or (c) rewrite the setup invocation guidance to avoid persisting secrets to disk.

Like a lobster shell, security has layers — review code before you run it.

latest: vk971z37kb2sfcskzqb9458279585p53h
40 downloads
0 stars
1 version
Updated 7h ago
v1.0.0
MIT-0

translate-txt Skill

Translate .txt files using any OpenAI-compatible API. Defaults to SiliconFlow with Qwen model, translating foreign languages to Chinese.

Features

  • Supports any OpenAI-compatible API (SiliconFlow, DeepSeek, OpenAI, etc.)
  • Auto-detects source language, defaults to translating into Chinese
  • Handles large files by chunking at paragraph/sentence boundaries
  • Concurrent translation — multiple chunks translated in parallel
  • Sliding-window context — each chunk gets glossary + background from nearby chunks; new terms auto-propagate, stale context naturally fades as the window slides
  • Automatic retry with exponential backoff on timeout and transient errors
  • Preserves original formatting and structure
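The "automatic retry with exponential backoff" behavior listed above can be sketched as follows. This is an illustrative helper, not the actual code in scripts/translate.py; the exception types, attempt count, and delay formula are assumptions.

```python
import random
import time

def with_retry(call, attempts=4, base_delay=1.0):
    """Retry `call` on transient errors, doubling the delay each attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            # Exponential backoff with jitter so concurrent workers
            # do not all retry at the same moment.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

With `base_delay=1.0` this waits roughly 1-2 s, 2-4 s, then 4-8 s before the final attempt.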

File Structure

translate-txt/
├── SKILL.md              # Skill definition
├── .env                  # User configuration (created by setup)
├── setup.sh              # Setup script (interactive & non-interactive)
└── scripts/
    └── translate.py      # Translation script

Configuration

The script reads config from the .env file in the skill root directory, falling back to environment variables.

| Variable | Default | Description |
| --- | --- | --- |
| TRANSLATE_API_KEY | (none, required) | API key for the translation service |
| TRANSLATE_BASE_URL | https://api.siliconflow.cn/v1 | Base URL for OpenAI-compatible API |
| TRANSLATE_MODEL | Qwen/Qwen2.5-7B-Instruct | Model name to use |
| TRANSLATE_THINKING | auto | Thinking mode: auto/disabled (recommended) or enabled |
| TRANSLATE_MAX_TOKENS | 4096 | Max output tokens per chunk |
| TRANSLATE_TEMPERATURE | 1 | Model temperature |
| TRANSLATE_TIMEOUT | 300 | API request timeout in seconds |

Priority: environment variables > .env file > defaults.
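The precedence above (environment variables over the .env file over built-in defaults) can be sketched as a minimal lookup. Function and variable names here are illustrative, not the actual internals of translate.py; only three defaults are shown.

```python
import os

DEFAULTS = {
    "TRANSLATE_BASE_URL": "https://api.siliconflow.cn/v1",
    "TRANSLATE_MODEL": "Qwen/Qwen2.5-7B-Instruct",
    "TRANSLATE_MAX_TOKENS": "4096",
}

def load_dotenv(path):
    """Parse simple KEY=VALUE lines from a .env file; ignore comments."""
    values = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, val = line.partition("=")
                    values[key.strip()] = val.strip()
    except FileNotFoundError:
        pass  # no .env yet: env vars and defaults still apply
    return values

def get_config(key, dotenv):
    # Priority: environment variable > .env file > built-in default
    return os.environ.get(key) or dotenv.get(key) or DEFAULTS.get(key)
```

So an exported TRANSLATE_MODEL always wins over the value stored in .env.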

How to Use

Step 1: Check & Setup Configuration

Before first use, check if the API key is configured. The script loads config from the .env file in the skill directory, falling back to environment variables.

If .env does not exist or TRANSLATE_API_KEY is empty, ask the user for their API key and preferred provider, then run:

# Non-interactive setup (for AI agent):
bash ~/.comate/skills/translate-txt/setup.sh --api-key <KEY> --provider <PROVIDER>

# Providers: siliconflow (default), deepseek, openai
# Or specify full config:
bash ~/.comate/skills/translate-txt/setup.sh --api-key <KEY> --base-url <URL> --model <MODEL>

Examples:

# SiliconFlow (default)
bash ~/.comate/skills/translate-txt/setup.sh --api-key sk-xxx --provider siliconflow

# DeepSeek
bash ~/.comate/skills/translate-txt/setup.sh --api-key sk-xxx --provider deepseek

# OpenAI
bash ~/.comate/skills/translate-txt/setup.sh --api-key sk-xxx --provider openai

# Custom endpoint
bash ~/.comate/skills/translate-txt/setup.sh --api-key sk-xxx --base-url https://my-api.example.com/v1 --model my-model

The user can also run the interactive setup manually:

bash ~/.comate/skills/translate-txt/setup.sh

On success, the script outputs CONFIG_SAVED:<path>. If the API key is already configured, skip to Step 2.

Step 2: Run Translation

python3 ~/.comate/skills/translate-txt/scripts/translate.py <input_file> [options]

Options:

  • --output <path> - Output file path (default: <input>_translated.txt)
  • --target-lang <lang> - Target language (default: Chinese)
  • --source-lang <lang> - Source language hint (default: auto for auto-detect)
  • --chunk-size <int> - Max characters per chunk (default: 3000)
  • --concurrency <int> - Max concurrent API calls (default: 3)
  • --context-window <int> - Number of preceding chunks for sliding context (default: 3)

Examples:

# Translate a file to Chinese (default)
python3 ~/.comate/skills/translate-txt/scripts/translate.py document.txt

# Translate to Japanese
python3 ~/.comate/skills/translate-txt/scripts/translate.py document.txt --target-lang Japanese

# Specify output path
python3 ~/.comate/skills/translate-txt/scripts/translate.py document.txt --output result.txt

Step 3: Report Result

After the script completes successfully, it outputs the translated file path in the format OUTPUT:<path>. Report this to the user.

If the script fails, check the error output:

  • CONFIG_ERROR - API key not configured. Ask user for their API key, then run setup.sh --api-key <KEY> --provider <PROVIDER>
  • FILE_ERROR - Input file not found or empty
  • API_ERROR - API call failed (check key, URL, model, and network)
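An agent dispatching on these output prefixes could do so roughly as below. The prefixes come from this document; the wrapper function itself is a hypothetical sketch, not part of the skill.

```python
ERROR_PREFIXES = ("CONFIG_ERROR", "FILE_ERROR", "API_ERROR")

def classify_result(output_lines):
    """Map the script's output lines to a (status, detail) pair."""
    for line in output_lines:
        if line.startswith("OUTPUT:"):
            # Success: the remainder of the line is the translated file path
            return ("ok", line[len("OUTPUT:"):].strip())
        for prefix in ERROR_PREFIXES:
            if line.startswith(prefix):
                return ("error", prefix)
    return ("error", "UNKNOWN")
```

On ("error", "CONFIG_ERROR"), the agent would ask the user for a key and re-run setup.sh; on ("ok", path), it reports the path.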

How It Works

The script uses a three-step approach for multi-chunk files:

Step 1: Keyword extraction — Each chunk is processed concurrently with a lightweight prompt to extract proper nouns and domain terms with their translations.

Step 2: Build per-chunk context — For each chunk, the script merges keywords from a sliding window of N preceding chunks (default --context-window 3). This means:

  • New terms introduced in later chapters automatically appear in context for subsequent chunks
  • Context naturally shifts as the window slides forward (e.g., chunk 10's context reflects chunks 7-10, not chunks 1-3)
  • A background description is inferred from initial chunks and prepended to all contexts

Step 3: Translation — All chunks are translated concurrently, each with its own window-scoped context.

Use --context-window to control the window size. Larger windows provide more context but may include irrelevant terms from distant sections.
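The window merge in Step 2 can be sketched as follows. This is a simplified illustration of the idea, assuming each chunk's extracted keywords are a dict of term-to-translation pairs; names are not taken from translate.py.

```python
def window_context(chunk_keywords, index, window=3):
    """Merge glossary terms from the `window` chunks preceding chunk `index`.

    chunk_keywords: one dict per chunk, in document order,
    mapping source term -> chosen translation.
    """
    merged = {}
    start = max(0, index - window)
    for kw in chunk_keywords[start:index]:
        # Later chunks override earlier ones, so newer usage wins
        merged.update(kw)
    return merged
```

Terms outside the window simply drop out of `merged`, which is how stale context "fades" as translation advances.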

Progress is reported on stderr:

  • KEYWORDS:chunk N/M / KEYWORDS_DONE:chunk N/M (Step 1)
  • TRANSLATING:chunk N/M / DONE:chunk N/M (Step 3)

Results are reassembled in original order.
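Concurrent translation with order-preserving reassembly is straightforward with a thread pool; a minimal sketch (not the actual translate.py code, and `translate_one` stands in for the real API call):

```python
from concurrent.futures import ThreadPoolExecutor

def translate_all(chunks, translate_one, concurrency=3):
    """Translate chunks in parallel, returning results in original order."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        # Executor.map yields results in input order, regardless of
        # which chunk's API call happens to finish first.
        return list(pool.map(translate_one, chunks))
```

Setting `concurrency=1` degrades this to sequential translation, matching the `--concurrency 1` advice for rate-limited APIs.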

Model Selection & Performance

Model choice has a dramatic impact on translation speed. The main factor is whether the model uses thinking/reasoning mode — thinking models spend significant time on internal reasoning, which is unnecessary for translation and makes them 5-10x slower.

Recommended models (fast, good quality):

| Model | Provider/Endpoint | Speed | Quality | Notes |
| --- | --- | --- | --- | --- |
| deepseek-v3 / deepseek-v3.2 | DeepSeek or compatible | Fast (~1.5 min / 28K chars) | Good | Best choice for translation |
| gpt-4o-mini | OpenAI | Fast | Good | Cost-effective |
| Qwen/Qwen2.5-7B-Instruct | SiliconFlow | Moderate | Decent | Default, good balance |

Models to avoid for translation:

| Model | Why |
| --- | --- |
| kimi-k2.5 | Thinking model: ~13 min / 28K chars, 8x slower |
| kimi-k2-thinking | Same issue, even more reasoning overhead |
| deepseek-r1 | Reasoning model, slow for straightforward translation |

Tips:

  • The script passes enable_thinking: false by default (TRANSLATE_THINKING=auto). If your API doesn't support this, switch to a non-thinking model.
  • For batch translations or large files, prefer deepseek-v3 or deepseek-v3.2.
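For reference, a request payload carrying such a flag might look like the sketch below. The `enable_thinking` field name is provider-specific (some OpenAI-compatible endpoints reject unknown fields, per the tip above), and this builder is illustrative, not the skill's actual request code.

```python
import json

def build_request(text, model, thinking="auto"):
    """Assemble a chat-completions payload; field names are illustrative."""
    payload = {
        "model": model,
        "messages": [
            {"role": "user", "content": f"Translate to Chinese:\n{text}"}
        ],
        "max_tokens": 4096,
    }
    if thinking in ("auto", "disabled"):
        # Ask the provider to skip internal reasoning for plain translation
        payload["enable_thinking"] = False
    return json.dumps(payload)
```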

Notes

  • The script uses only Python standard library (no pip install needed)
  • Translation quality depends on the model; larger models generally produce better translations
  • Keyword extraction adds one API call per chunk but ensures every term is captured
  • Set --concurrency 1 to disable parallel translation if the API has strict rate limits
  • The script preserves original text formatting (paragraphs, line breaks) in the translation
  • Avoid thinking/reasoning models for translation — much slower with no quality benefit
  • Sliding-window context scales to any text length — 10 chunks or 1000 chunks work the same way
