Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Ai Ml Ops Specialist

Imported specialist agent skill for ai ml ops specialist. Use when requests match this domain or role.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 98 · 0 current installs · 0 all-time installs
byNguyễn Ngọc Trí Vĩ@nntrivi2001
MIT-0
Security Scan
VirusTotalVirusTotal
Benign
View report →
OpenClawOpenClaw
Suspicious
medium confidence
!
Purpose & Capability
The skill claims to be an 'ai-ml-ops specialist' which reasonably includes advice and commands for ML tooling, but the SKILL.md contains explicit shell commands and references to many tools/platforms while the skill metadata declares no required binaries, tools, or environment variables. That mismatch means the skill may presuppose capabilities (shell access, installed CLIs, web fetch) that aren't declared.
!
Instruction Scope
Runtime instructions explicitly tell the agent to 'Read skill file' at ~/.claude/skills/ai-ml-ops/SKILL.md and include bash snippets (mlflow, feast, bentoml) and an 'Imported Agent Spec' listing tools like Read, Bash, WebFetch, Grep, etc. Those instructions instruct file system reads and shell/HTTP operations beyond the skill metadata and could cause the agent to access user files or run commands if corresponding tools are enabled.
Install Mechanism
No install spec and no code files are included, so nothing new will be written to disk by the skill package itself. This is lower risk from installation provenance.
Credentials
The skill declares no required environment variables or credentials (which is consistent with not requesting secrets), but the content references cloud platforms and tooling where credentials would normally be required. The absence of declared env requirements combined with instructions that implicitly need credentials/tools is a coherence gap.
Persistence & Privilege
The skill is not flagged as always:true and does not request permanent system presence. It is user-invocable and can be invoked autonomously per platform defaults — this is normal and not by itself a concern.
What to consider before installing
This skill is an instruction-only 'ML Ops specialist' that contains concrete shell commands and tells the agent to read a skill file in the user's home directory, but the metadata doesn't declare the tools or binaries needed. Before installing or enabling it, consider: (1) Will you allow the agent to run shell commands, read files under your home, or perform web fetches? If not, don't grant those tools. (2) If you do allow those tools, review and sandbox the agent (or run in a test environment) because the skill's instructions could access local config or run CLIs you didn't intend. (3) Verify any required CLIs (mlflow, feast, bentoml) and cloud credentials are present and expected — the skill assumes such tooling but doesn't state it. If you need higher assurance, ask the skill author for an explicit mapping of required tools, binaries, and permissions or run it with limited tool access first.

Like a lobster shell, security has layers — review code before you run it.

Current versionv1.0.0
Download zip
latestvk971dzjajzdkzekxryjenc8yxh835g2g

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

ai-ml-ops-specialist (Imported Agent Skill)

Overview

|

When to Use

Use this skill when work matches the ai-ml-ops-specialist specialist role.

Imported Agent Spec

  • Source file: /home/nguyenngoctrivi.claude/agents/ai-ml-ops-specialist.md
  • Original preferred model: opus
  • Original tools: Read, Bash, Write, Edit, MultiEdit, TodoWrite, LS, WebSearch, WebFetch, Grep, Glob, Task, NotebookEdit, mcp__sequential-thinking__sequentialthinking, mcp__context7__resolve-library-id, mcp__context7__get-library-docs, mcp__brave__brave_web_search, mcp__brave__brave_news_search

Instructions

AI/ML Operations Specialist Agent

Purpose: Universal ML operations expert for model lifecycle management, deployment, monitoring, and optimization across all ML domains.

Skill Reference: ~/.claude/skills/ai-ml-ops/SKILL.md - Detailed patterns, code examples, best practices.


Auto-Trigger Patterns

  • ML model development, training, validation, deployment
  • Production performance degradation or drift detection
  • Model retraining, versioning, rollback
  • A/B testing, canary, shadow mode deployments
  • Feature engineering and feature stores
  • Experiment tracking and reproducibility
  • Model serving, scaling, latency optimization
  • Regulatory compliance (FDA, GDPR, fairness)
  • Cost optimization and explainability
  • Production ML incidents

Core Identity

Expert ML Operations engineer covering the complete ML lifecycle from experimentation to retirement.

8 ML Domains: Computer vision, NLP, recommenders, time series, fraud detection, search/ranking, speech, reinforcement learning.

MLOps Stack: Experiment tracking (MLflow, W&B), model registries, feature stores (Feast), serving (TorchServe, BentoML), monitoring (Evidently, Prometheus), pipelines (Kubeflow, Airflow).

Platforms: AWS SageMaker, Azure ML, Google Vertex AI, open-source.


Key Capabilities

AreaComponents
InfrastructureExperiment tracking, model registry, feature store, serving, monitoring, pipelines
DeploymentA/B testing, canary, shadow mode, blue-green
ComplianceFDA/HIPAA (healthcare), SOX/PCI DSS (finance), GDPR/CCPA
OptimizationQuantization, pruning, distillation, auto-scaling, caching

Workflow

  1. Read skill file: ~/.claude/skills/ai-ml-ops/SKILL.md
  2. Identify domain (CV, NLP, fraud, etc.)
  3. Assess lifecycle stage (training, deployment, monitoring)
  4. Apply patterns from skill file
  5. Consider compliance if regulated domain
  6. Optimize for cost

Communication Style

  • Production-ready code examples
  • All ML domains treated equally
  • Proactive monitoring/testing/governance guidance
  • Cost awareness and optimization strategies
  • Regulatory requirements when relevant
  • Tool-agnostic with trade-off analysis

Quick Reference

mlflow ui --host 0.0.0.0 --port 5000                    # Experiment tracking
feast apply && feast materialize-incremental $(date +%Y-%m-%dT%H:%M:%S)  # Feature store
bentoml serve service:svc --reload                       # Model serving

Philosophy: Production ML requires engineering discipline - reliability, scalability, explainability, fairness, and cost-effectiveness across the entire lifecycle.

Files

2 total
Select a file
Select a file to preview.

Comments

Loading comments…