Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

DL Transformer Finetune

v0.1.0

Build transformer fine-tuning run plans with task settings, hyperparameters, and model-card outputs. Use for repeatable Hugging Face or PyTorch finetuning wo...

0 · 450 · 1 current · 2 all-time
by Muhammad Mazhar Saeed (@0x-professor)
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description match the included artifacts: SKILL.md, a finetune guidance doc, and a Python script that builds run plans and model-card skeletons. No unrelated binaries, env vars, or services are requested.
Instruction Scope
SKILL.md instructs the agent to run the bundled script and consult the reference guide; the script only reads an optional JSON input and writes an output file (JSON, Markdown, or CSV). This stays within the stated purpose, but note that the script will create or overwrite files at the provided output path and can load a user-specified input file (size-limited).
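A size-limited input load of the kind described might look like the sketch below. This is an illustration only: the function name, the byte limit, and the behavior when no input is given are assumptions, not the bundled script's actual implementation.

```python
import json
import os

# Assumed cap; the bundled script's actual limit may differ.
MAX_INPUT_BYTES = 1_000_000

def load_input(path):
    """Read an optional JSON input file, refusing oversized files."""
    if path is None:
        return {}  # no input file supplied; fall back to defaults
    if os.path.getsize(path) > MAX_INPUT_BYTES:
        raise ValueError(f"input file exceeds {MAX_INPUT_BYTES} bytes: {path}")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

Checking the size before parsing keeps a malformed or oversized file from being read into memory at all.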
Install Mechanism
No install spec; this is instruction-only with a small included script. Nothing is downloaded or extracted from external URLs.
Credentials
No environment variables, credentials, or config paths are requested. The script does not access other system credentials or external services.
Persistence & Privilege
The manifest sets always:false and makes no modifications to other skills or system-wide settings. The skill can be invoked autonomously per platform defaults, but it does not request elevated or persistent privileges.
Assessment
This skill appears to do what it says: generate fine-tuning run plans. Before installing or running it, consider:

  1. Review the bundled script yourself; it writes files to whatever output path you provide and can overwrite existing files, so avoid privileged or system paths.
  2. Prefer running with --dry-run first to inspect output without side effects.
  3. Do not pass secrets or credentials in the optional input JSON.
  4. Validate any datasets or metrics referenced (license/risk notes are included but not enforced).
  5. Because the platform allows autonomous invocation, restrict when and where the agent can run this skill if you want to avoid unexpected file writes.

Overall the package is coherent and self-contained.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97cc06yfx84j5pb3195j1b48n81ws15
450 downloads
0 stars
1 version
Updated 7h ago
v0.1.0
MIT-0

DL Transformer Finetune

Overview

Generate reproducible fine-tuning run plans for transformer models and downstream tasks.

Workflow

  1. Define base model, task type, and dataset.
  2. Set training hyperparameters and evaluation cadence.
  3. Produce run plan plus model card skeleton.
  4. Export configuration-ready artifacts for training pipelines.
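The four steps above might translate into a plan structure along these lines. This is a sketch: every field name and the overall schema are assumptions for illustration, not the bundled script's actual output format.

```python
def build_run_plan(base_model, task_type, dataset, hyperparams,
                   eval_every_steps=500, seed=42):
    """Assemble a reproducible fine-tuning run plan plus a model-card skeleton."""
    plan = {
        "base_model": base_model,            # step 1: model, task, dataset
        "task_type": task_type,
        "dataset": dataset,
        "hyperparameters": hyperparams,      # step 2: training settings
        "eval_every_steps": eval_every_steps,
        "seed": seed,                        # explicit seed for reproducibility
    }
    model_card = {                           # step 3: model-card skeleton
        "model_id": f"{base_model}-{task_type}",
        "intended_use": "",
        "training_data": dataset,
        "evaluation": {"cadence_steps": eval_every_steps, "metrics": []},
    }
    # step 4: a single dict ready to serialize for a training pipeline
    return {"run_plan": plan, "model_card": model_card}
```

Keeping plan and model card in one export-ready structure means a single serialization step covers both artifacts.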

Use Bundled Resources

  • Run scripts/build_finetune_plan.py for deterministic plan output.
  • Read references/finetune-guide.md for hyperparameter baseline guidance.

Guardrails

  • Keep run plans reproducible with explicit seeds and output directories.
  • Include evaluation and rollback criteria.
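A plan-level check for these guardrails could be as simple as the sketch below; the required key names are assumptions, not keys the bundled script actually enforces.

```python
# Assumed guardrail keys: an explicit seed, an output directory,
# an evaluation cadence, and rollback criteria.
REQUIRED_KEYS = ("seed", "output_dir", "eval_every_steps", "rollback_criteria")

def check_guardrails(plan):
    """Return a list of guardrail violations; an empty list means the plan passes."""
    problems = [f"missing required key: {k}" for k in REQUIRED_KEYS if k not in plan]
    if "seed" in plan and not isinstance(plan["seed"], int):
        problems.append("seed must be an explicit integer")
    return problems
```

Returning a list of violations, rather than raising on the first one, lets a caller report every problem with a plan at once.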
