Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

linux-forensics-automation

v1.0.0

Automates Linux forensic data collection, generates detailed reports, and uploads them to Google Drive or emails results for fast incident response and audits.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peachhfuzz/linux-forensics-automation.

Prompt Preview: Install & Setup
Install the skill "linux-forensics-automation" (peachhfuzz/linux-forensics-automation) from ClawHub.
Skill page: https://clawhub.ai/peachhfuzz/linux-forensics-automation
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install linux-forensics-automation

ClawHub CLI


npx clawhub@latest install linux-forensics-automation
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The name and description claim a turnkey automation that runs local collection and uploads results. However, the SKILL.md references several scripts (linux_forensics.sh, forensics_and_upload.sh, upload_to_drive.py, send_email.py, setup_gmail.py) that are not included in the skill bundle, and there is no homepage or source link. That is inconsistent: either the skill should include or point to the implementation, or it cannot deliver its promised capability.
Instruction Scope
The instructions explicitly tell the user or agent to collect broad, sensitive system data (logs, passwd/sudoers, recent files, full process arguments) and to upload or email the results. Collecting and transmitting this data is coherent with a forensics purpose, but the SKILL.md gives no safeguards (confirmation prompts, filtering/minimization, or an explicit consent step) and assumes the scripts exist locally. The combination of broad data access and automated upload, without provided code or provenance, is a security concern.
Install Mechanism
There is no install spec and no code files — the skill is instruction-only. That is low-risk by itself, but here the instructions direct users to run scripts that are absent, forcing users or agents to obtain code from unspecified sources. That gap raises high risk because fetching/running unknown scripts is a common vector for malicious code.
Credentials
The registry metadata declares no required environment variables or primary credential, yet the instructions require Google OAuth credentials and saving tokens (e.g., ~/.gmail_tokens.json). Credential handling is therefore undeclared in the registry metadata. Requesting OAuth tokens for Drive/Gmail is expected for the described functionality, but the absence of declared credential requirements, and the lack of guidance on least privilege or service-account alternatives, is a proportionality and transparency concern.
Persistence & Privilege
The skill is not always-enabled and is user-invocable (defaults). The agent may still invoke it autonomously (disable-model-invocation is false), which increases risk if an agent runs these flows without explicit human approval because uploads could exfiltrate sensitive data. This is noteworthy but not by itself a disqualifying privilege.
What to consider before installing
Do not run or trust the scripts referenced here unless you can inspect their source. Before installing or using:

  1. Ask the publisher for the full source code or a trustworthy repository/homepage; verify the code in the registry matches that repo.
  2. Have a security reviewer audit the scripts for what they collect and how they transmit data.
  3. Prefer service accounts or scoped credentials rather than personal Gmail OAuth; ensure tokens are stored securely and rotated.
  4. Run initial tests in an isolated sandbox or disposable VM, and run scripts as a non-root user where possible.
  5. Require explicit confirmation and data minimization (exclude directories that should not be uploaded) before any automated upload.

If the publisher cannot provide source or provenance, treat the skill as unsafe to use.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bq34agezhyk4q75w28dmc2d83gbz0
132 downloads
1 star
1 version
Updated 1mo ago
v1.0.0
MIT-0

Forensics Automation Skill

Automated collection and archival of Linux system forensic data.

Quick Start

Prerequisites

Google Drive API setup required once:

# 1. Create GCP project and enable Google Drive API
# 2. Create OAuth 2.0 Desktop App credentials (JSON)
# 3. Run one-time setup
python3 setup_gmail.py

# Follow OAuth flow, authorize, paste code back
# Tokens saved to ~/.gmail_tokens.json

Basic Usage

Generate forensic report:

bash linux_forensics.sh /tmp
# Creates: /tmp/forensics_YYYYMMDD_HHMMSS.txt

Upload to Google Drive:

python3 upload_to_drive.py /tmp/forensics_20260324_180000.txt
# Returns: File ID and shareable Drive link

One-command: Generate + Upload:

bash forensics_and_upload.sh
# Generates report and uploads in one go

Send forensic data via email:

python3 send_email.py recipient@example.com "Forensic Report" "Report attached"

What Gets Collected

Each forensic report includes:

  • System Info: Kernel version, hostname, OS details
  • Users & Groups: All user accounts, sudoers configuration
  • Network: IP addresses, routes, listening ports, connections
  • Packages: Installed software (apt/rpm)
  • Processes: Full process listing with arguments
  • System Logs: dmesg, auth logs, system events
  • Cron Jobs: Scheduled tasks across all users
  • File Integrity: Recently modified files (last 7 days)
  • Disk Usage: Storage breakdown
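
Because the referenced linux_forensics.sh is not included in this skill bundle, the shape of such a collector can only be sketched. The section names and command choices below are illustrative assumptions, restricted to read-only commands:

```shell
#!/bin/sh
# Illustrative sketch only: the bundled skill does not ship
# linux_forensics.sh, so section names and commands are assumptions.
OUT_DIR="/tmp"                         # the real script takes this as $1
REPORT="$OUT_DIR/forensics_$(date +%Y%m%d_%H%M%S).txt"

section() {                            # write a labelled section header
  printf '\n=== %s ===\n' "$1" >> "$REPORT"
}

: > "$REPORT"                          # start a fresh report
section "SYSTEM INFO"; uname -a >> "$REPORT" 2>/dev/null
section "DISK USAGE";  df -h    >> "$REPORT" 2>/dev/null
section "PROCESSES";   ps aux   >> "$REPORT" 2>/dev/null

echo "Report written to $REPORT"
```

A real collector would add the log, user, network, and cron sections listed above, all following the same append-only pattern.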

Script Details

linux_forensics.sh

Core forensic collection script.

bash linux_forensics.sh [output_directory]

# Example
bash linux_forensics.sh /tmp
# Creates /tmp/forensics_YYYYMMDD_HHMMSS.txt (~300KB typical)

What it does:

  • Gathers comprehensive system information
  • Runs read-only commands (safe to execute)
  • Outputs to timestamped file for easy tracking
  • Minimal dependencies (bash, standard Unix tools)

forensics_and_upload.sh

Orchestration script: Generate report + Upload to Drive in one command.

bash forensics_and_upload.sh

# One-step forensic collection and archival
# Includes 2-second rate limit delay to avoid Google API throttling

What it does:

  • Runs linux_forensics.sh automatically
  • Gets most recent report
  • Waits 2 seconds (rate limiting)
  • Uploads to Google Drive
  • Returns Drive link
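
The steps above can be sketched as follows; since linux_forensics.sh and upload_to_drive.py are not bundled with this skill, their invocations are shown as comments only:

```shell
# Sketch of the orchestration steps; the two helper scripts are not
# included in this skill bundle, so their calls remain commented out.
newest_report() {                       # newest forensics_*.txt in a dir
  ls -t "$1"/forensics_*.txt 2>/dev/null | head -n 1
}

# bash linux_forensics.sh /tmp          # 1. collect (script not bundled)
REPORT=$(newest_report /tmp)            # 2. pick the latest report
sleep 2                                 # 3. rate-limit delay for the API
# python3 upload_to_drive.py "$REPORT"  # 4. upload (script not bundled)
```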

upload_to_drive.py

Upload any file to Google Drive using authenticated session.

python3 upload_to_drive.py <file_path> [folder_id]

# Examples
python3 upload_to_drive.py /tmp/report.txt
python3 upload_to_drive.py /tmp/report.txt "1a2b3c4d5e6f7890"  # Optional: upload to specific folder

Returns:

  • File name on Drive
  • File ID (for API access)
  • Shareable link

send_email.py

Send emails via Gmail API.

python3 send_email.py <recipient> <subject> <body>

# Example
python3 send_email.py analyst@company.com "Forensic Report Ready" "New forensics collected and uploaded to Drive"

Integration Examples

Security Operations Center (SOC)

Automate daily forensic snapshots:

#!/bin/bash
# Daily forensic collection cron job

cd /opt/forensics
bash forensics_and_upload.sh

# Email security team
python3 send_email.py security@company.com \
  "Daily Forensic Snapshot" \
  "Today's forensic report has been collected and uploaded to Google Drive"

Incident Response

Rapid forensic collection during incident:

#!/bin/bash
# Incident response script

INCIDENT_ID="INC-2026-003"
bash linux_forensics.sh /tmp

# Upload and tag with incident ID
REPORT=$(ls -t /tmp/forensics_*.txt | head -1)
python3 upload_to_drive.py "$REPORT"

# Notify incident commander
python3 send_email.py "commander@company.com" \
  "Forensics Collected: $INCIDENT_ID" \
  "Forensic data from $REPORT ready for analysis"

Compliance & Auditing

Monthly forensic audits:

#!/bin/bash
# Monthly audit job

MONTH=$(date +%Y-%m)
bash linux_forensics.sh "/var/forensics/$MONTH"

# Archive to Drive
REPORT=$(ls -t "/var/forensics/$MONTH"/forensics_*.txt | head -1)
python3 upload_to_drive.py "$REPORT" "AUDIT_FOLDER_ID"

Setup & Requirements

1. Google Drive API Setup (One-time)

# Create GCP project and enable APIs:
# - Google Drive API
# - Gmail API (for email integration)

# Create OAuth 2.0 Desktop App credentials
# Download JSON credential file

# Place in script directory or set CREDS_FILE path

2. First-time Authorization

python3 setup_gmail.py

# Opens browser for OAuth authorization
# Paste authorization code when prompted
# Tokens saved to ~/.gmail_tokens.json

3. Verify Setup

# Test forensic collection
bash linux_forensics.sh /tmp

# Test Drive upload
python3 upload_to_drive.py "$(ls -t /tmp/forensics_*.txt | head -n 1)"  # upload the newest report only

# Test email
python3 send_email.py your-email@example.com "Test" "Forensics setup working!"

Error Handling

Common Issues

"No tokens found"

Run setup_gmail.py first to authorize

"HTTP Error 400: Bad Request"

Refresh token may be invalid (expires ~24hrs)
Run setup_gmail.py again to re-authorize

"Permission denied" on /var/log

Some logs require elevated privileges
Script gracefully skips unavailable files

Rate limiting from Google APIs

`forensics_and_upload.sh` includes 2-second delay
For batch operations, add `sleep 5` between uploads
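
For heavier batch runs, a small retry-with-backoff wrapper can be more robust than fixed sleeps. This is a generic sketch; the commented upload call is a placeholder, since upload_to_drive.py is not bundled with this skill:

```shell
# Generic retry helper with exponential backoff between attempts.
retry() {
  attempts=$1; shift
  delay=1
  n=1
  while ! "$@"; do                     # run the command, retry on failure
    if [ "$n" -ge "$attempts" ]; then
      return 1                         # give up after N attempts
    fi
    sleep "$delay"
    delay=$((delay * 2))               # 1s, 2s, 4s, ...
    n=$((n + 1))
  done
}

# retry 3 python3 upload_to_drive.py "$REPORT"  # placeholder usage
```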

Performance Notes

  • Forensic collection: ~1-5 seconds (depends on system load)
  • Report size: ~250-400KB typical
  • Drive upload: ~2-5 seconds (depends on network)
  • Email send: ~1-2 seconds
  • Total one-command: ~10-15 seconds

Security Considerations

  1. OAuth tokens stored in ~/.gmail_tokens.json — keep secure (600 permissions)
  2. Refresh tokens enable long-term automation without re-auth
  3. Scripts run read-only — no system modification
  4. Drive links are shareable — consider folder permissions
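
The first point above can be checked with a small sketch. The token path comes from this page's own setup notes; the GNU/BSD `stat` fallback is an assumption about the host:

```shell
# Check that the OAuth token file is not group/world-readable.
# ~/.gmail_tokens.json is the path this page's setup notes use.
check_token_perms() {
  f="$1"
  if [ ! -f "$f" ]; then
    echo "no token file at $f"
    return 0
  fi
  # stat -c is GNU coreutils; stat -f is the BSD/macOS fallback
  perms=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
  if [ "$perms" != "600" ]; then
    echo "warning: $f has mode $perms; run: chmod 600 $f"
  fi
}

check_token_perms "$HOME/.gmail_tokens.json"
```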

Customization

Extend forensic data collection

Edit linux_forensics.sh to add custom commands:

echo "=== CUSTOM DATA ===" | tee -a "$REPORT"
your-command-here >> "$REPORT"

Change upload destination

Specify Google Drive folder:

python3 upload_to_drive.py report.txt "FOLDER_ID"

Batch operations

Upload multiple reports:

for file in /tmp/forensics_*.txt; do
  python3 upload_to_drive.py "$file"
  sleep 5  # Rate limiting
done
