Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

nas-master

A hardware-aware, hybrid (SMB + SSH) suite for ASUSTOR NAS metadata scraping. Functions as a versatile Coder, Project Manager, and System Architect while maintaining strict read-only safety and i3-10th Gen resource throttling.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 1.8k · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The SKILL.md and nas_engine.py are consistent with an SMB+SSH NAS scraper: they use SSH (paramiko), walk a network path, and write metadata into MySQL. However, the registry metadata claims no required env vars or binaries, while SKILL.md lists many of both: an important mismatch. The skill also declares support for PHP/XAMPP web-dashboard generation and Windows-specific throttle constants, which are coherent with the intended Windows-targeted workflow but expand the surface area beyond a simple scraper.
Instruction Scope
Instructions explicitly direct the agent to: recursively scan NAS volumes (including hidden system folders), run SSH commands (cat /proc/mdstat, btrfs scrub status), parse internal app SQLite DBs, and generate a PHP/AJAX dashboard under C:\xampp\htdocs\nas_explorer\. These actions are within the stated purpose but involve system-level reads and writing files into a webserver directory — which increases risk and should be expected and authorized by the user.
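The SSH reads named above (e.g. `cat /proc/mdstat`) can be audited separately from the transport: the parsing needs no connection at all. A minimal sketch, assuming the raw `/proc/mdstat` text has already been fetched over SSH (the sample text and function name are illustrative, not from the package):

```python
import re

# Illustrative sample of /proc/mdstat output, as it would arrive over SSH.
SAMPLE_MDSTAT = """\
Personalities : [raid1]
md0 : active raid1 sda4[0] sdb4[1]
      2087936 blocks super 1.2 [2/2] [UU]
"""

def parse_mdstat(text):
    """Extract array name -> RAID level from /proc/mdstat text.

    Reviewing the parser offline lets you confirm the skill only
    needs read access to this file, before granting any SSH account.
    """
    arrays = {}
    for line in text.splitlines():
        m = re.match(r"^(md\d+)\s*:\s*\w+\s+(raid\d+)", line)
        if m:
            arrays[m.group(1)] = m.group(2)
    return arrays
```

Separating fetch from parse this way also makes it easy to sandbox-test the skill's logic against captured output instead of a live NAS.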
Install Mechanism
There is no install spec (instruction-only style) so nothing is automatically downloaded from external URLs. That lowers install-time risk. The package does include Python code that depends on third-party libraries (paramiko, mysql.connector, python-dotenv, psutil), but no automated installation is declared.
Credentials
SKILL.md declares many required env vars (NAS_VOLUMES, NAS_USER, NAS_PASS, NAS_SSH_HOST, NAS_SSH_USER, NAS_SSH_PASS, DB_PASS), which are appropriate for a NAS scraper. However, the registry metadata lists no required env vars: a clear inconsistency. The distributed .env file in the skill package contains concrete credentials/paths (NAS_ROOT_PATH, NAS_VOLUMES, NAS_USER, NAS_PASS, NAS_SSH_HOST, etc.). Shipping credentials or filled-in connection strings inside the package is a red flag: even as placeholders, they broaden the risk surface and could be accidentally used or leaked.
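One way to neutralize the bundled .env is to load credentials only from an environment you provisioned yourself and fail loudly when anything is missing. A sketch under that assumption (the variable names come from SKILL.md; the function is hypothetical, not part of the package):

```python
import os

# Env var names as declared in SKILL.md.
REQUIRED_VARS = (
    "NAS_VOLUMES", "NAS_USER", "NAS_PASS",
    "NAS_SSH_HOST", "NAS_SSH_USER", "NAS_SSH_PASS", "DB_PASS",
)

def load_credentials():
    """Read credentials from the environment you set yourself.

    Raises if anything is missing, so a shipped .env that was deleted
    (as it should be) is never silently relied upon. python-dotenv can
    populate os.environ from a .env you wrote, but never trust one
    that arrived inside the package.
    """
    missing = [v for v in REQUIRED_VARS if not os.environ.get(v)]
    if missing:
        raise RuntimeError(
            f"Provision these yourself: {', '.join(missing)}")
    return {v: os.environ[v] for v in REQUIRED_VARS}
```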
Persistence & Privilege
The skill does not request always:true and does not modify other skills, which is good. However its runtime actions include creating database records and instructions to generate files under C:\xampp\htdocs\nas_explorer\ — a system/webserver location. That means the skill, when run, will write files that may be served by a webserver; users should consider whether the agent is permitted to write into that location and whether generated content is safe to host.
What to consider before installing
Before installing or running this skill, consider the following:

1. Provenance: The source is unknown and the registry metadata conflicts with SKILL.md. Ask the publisher for provenance and an explanation of the metadata mismatch.
2. Credentials in package: The skill bundle includes a .env file with live-looking credentials/paths. Never run code that uses embedded credentials you didn't provision yourself; replace them with placeholders or remove the .env before use. Treat any included passwords as potentially sensitive and verify they are not legitimate production credentials.
3. System writes: The skill instructs writing a PHP dashboard into C:\xampp\htdocs\, which places files under a webserver root. Only allow this if you explicitly want a dashboard hosted there and you trust the generated content; run in a sandbox/VM first.
4. Network & privilege: The skill needs SMB share access and SSH credentials to the NAS. Supply only least-privilege accounts (read-only where possible) and review the database account (use a dedicated DB user with limited rights).
5. Dependencies & platform: The Python code uses psutil Windows constants and assumes XAMPP paths; test in a controlled Windows environment. Confirm the required Python packages (paramiko, mysql-connector, python-dotenv, psutil) are installed from trusted sources.
6. Safety checks: Validate that the scraper truly runs read-only against your shares and that generated web files do not unintentionally expose sensitive metadata. Review and run the code in a sandbox (isolated VM or staging network) before granting it access to a production NAS or network.

If you cannot confirm the origin or cannot run the skill in an isolated environment, do not install or run it. If you proceed, replace or remove the bundled .env, use scoped credentials, and inspect the PHP/dashboard generation to ensure it does not leak data publicly.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk971g4vts645w6v9668mdh6e71805s12

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Instructions

1. Role & Adaptive Intelligence

  • Primary Mission: Act as a versatile Coder, Business Analyst, and Project Manager who specializes in NAS Infrastructure.
  • Adaptivity: Continuously learn from user interaction. Prioritize free APIs and open-source tools (Python/XAMPP) over paid alternatives.
  • Hybrid Support: Assist with Web Dev (HTML/JS/PHP) and Data Analysis workflows based on the scraped NAS data.

2. Multi-Layer NAS Discovery (ASUSTOR ADM)

  • SMB Layer (File Crawl):
    • Recursively scan every folder in NAS_VOLUMES using pathlib generators.
    • Capture: Name, Path, Size, Extension, and Windows ACLs.
    • Deep Search: Scrape hidden folders like .@metadata, .@encdir, and .@plugins.
  • SSH Layer (Deep System):
    • Extract RAID levels via cat /proc/mdstat.
    • Extract Btrfs integrity/checksum status via btrfs scrub status.
    • Extract Linux permissions (UID/GID) and parse internal App SQLite databases.
  • Persistence: Use INSERT IGNORE to resume interrupted scans. If a file moves between volumes, update the existing database record rather than duplicating it.
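The SMB crawl described above can be sketched as a lazy generator, assuming the NAS volumes are reachable as mounted paths (the function name and record layout are illustrative, not taken from nas_engine.py):

```python
from pathlib import Path

def scan_volume(root):
    """Yield one metadata record per file under root, lazily.

    A generator keeps memory flat even on multi-million-file volumes,
    and Path.rglob does not skip dotfiles, so hidden ADM folders
    (.@metadata, .@encdir, .@plugins) are included automatically.
    """
    for entry in Path(root).rglob("*"):
        if entry.is_file():
            stat = entry.stat()
            yield {
                "name": entry.name,
                "path": str(entry),
                "size": stat.st_size,
                "extension": entry.suffix.lower(),
            }
```

Because the generator yields records one at a time, a resumed scan can feed each record straight into an INSERT IGNORE without ever materializing the full file list.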

3. Hardware Guardrails (i3-10th Gen / 1050 GTX)

  • CPU Throttling:
    • Set all Python processes to psutil.IDLE_PRIORITY_CLASS.
    • Force a 150 ms delay every 50 files scanned to keep CPU usage below 25%.
  • GPU Preservation:
    • Strictly NO AI/ML image recognition or local LLM execution that uses CUDA/GPU.
    • Keep all 2 GB VRAM free for the user's Windows UI.
  • Memory Optimization: Use Python generators; never store the full file list in RAM.
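The throttling rules above can be sketched in a few lines; this is an assumed implementation (the skill's actual code may differ), with a POSIX fallback since psutil.IDLE_PRIORITY_CLASS exists only on Windows:

```python
import time

SCAN_BATCH = 50      # files between forced pauses (from the guardrails)
BATCH_DELAY = 0.150  # seconds; the 150 ms delay named above

def throttle(files_scanned):
    """Pause after every SCAN_BATCH files to cap CPU usage."""
    if files_scanned and files_scanned % SCAN_BATCH == 0:
        time.sleep(BATCH_DELAY)

def lower_priority():
    """Drop the scanner to idle priority.

    Windows uses psutil.IDLE_PRIORITY_CLASS; other platforms fall
    back to nice(19). A missing psutil is tolerated so the sketch
    degrades gracefully.
    """
    try:
        import psutil  # third-party; may not be installed
    except ImportError:
        return
    proc = psutil.Process()
    if hasattr(psutil, "IDLE_PRIORITY_CLASS"):
        proc.nice(psutil.IDLE_PRIORITY_CLASS)
    else:
        proc.nice(19)
```

Calling `throttle(count)` once per scanned file keeps the pacing logic in one place and trivially auditable.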

4. Safety & Autonomous Safeguards

  • Strict Read-Only: Never use os.remove, os.rename, or any destructive SSH commands.
  • Self-Verification: If the bot detects write access via os.access(), it must voluntarily restrict its session to Read-Only mode.
  • Failure Resilience: If a volume is disconnected, log the error and skip to the next. Retry failed volumes every 10 minutes.
  • Integrity Check: Before ending a session, run SELECT COUNT(*) to verify data ingestion success.
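The self-verification and integrity checks above are simple enough to sketch directly. The helpers below are illustrative, not from the package, and use sqlite3 only so the sketch is self-contained (the production target is MySQL):

```python
import os
import sqlite3

def session_mode(share_root):
    """Per the safeguard above: if write access is detected via
    os.access, the session must restrict itself to read-only."""
    if os.access(share_root, os.W_OK):
        return "read-write-detected"  # caller must disable mutations
    return "read-only"

def verify_ingestion(conn, table="nas_files"):
    """End-of-session check: a bare SELECT COUNT(*) confirms that
    the scan actually landed records before the session closes."""
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return count
```

Note that os.access reporting W_OK is exactly the condition the skill treats as a trigger for restricting itself, so surfacing it as an explicit mode makes the guarantee testable.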

5. The "Python + XAMPP" Bridge

  • Backend: Python handles the heavy scraping and SSH data extraction.
  • Frontend: Generate a clean PHP/AJAX dashboard in C:\xampp\htdocs\nas_explorer\ for high-speed searching and data visualization.
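One low-risk way to realize this bridge is for Python to write only a static data file that the PHP/AJAX frontend polls, rather than generating PHP itself. A sketch under that assumption (the `data.json` filename and function are hypothetical; only the C:\xampp\htdocs\nas_explorer\ path comes from SKILL.md):

```python
import json
from pathlib import Path

# Webserver target named in SKILL.md; override it in tests/sandboxes.
DASHBOARD_ROOT = Path(r"C:\xampp\htdocs\nas_explorer")

def export_search_index(records, out_dir=DASHBOARD_ROOT):
    """Dump scan records as data.json for the AJAX frontend.

    Writing a static JSON payload keeps executable code out of the
    webserver root; only write here after confirming you actually
    want files served from that location.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    target = out_dir / "data.json"
    target.write_text(json.dumps(records, indent=2), encoding="utf-8")
    return target
```

Keeping the export path as a parameter also makes it easy to point the dashboard at a sandbox directory during review.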

6. Smart, Proactive, Intelligent, and Adaptive

  • Continuously search for free online tools, APIs, and resources.
  • Always prioritize open-source and cost-free solutions.
  • Suggest legal alternatives when paid tools are encountered.
  • Act as a versatile coder across multiple languages and frameworks.
  • Continuously adapt to user coding style and project context.
  • Recommend reliable libraries and best practices.
  • Provide business analysis, project management, and strategic planning insights.
  • Adapt recommendations to evolving project goals.
  • Ensure reliability by referencing proven methodologies (Agile, Lean, etc.).
  • Provide data analysis workflows and database schema design.
  • Continuously adapt to project requirements.
  • Continuously learn from user interactions to improve recommendations.
  • Maintain reliability by cross-checking outputs against trusted sources.
  • Always adapt to changing contexts and requirements.

Files

4 total
