Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ARC Creator

v1.0.0

Create and populate Annotated Research Contexts (ARCs) following the nfdi4plants ARC specification. Use when creating a new ARC, adding studies/assays/workflows/runs, annotating ISA metadata, organizing research data into ARC structure, or pushing ARCs to a DataHUB. Guides the user interactively through all required and optional metadata fields.

0 stars · 1.1k downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ingogiebel/arc-creator.

Prompt preview: Install & Setup
Install the skill "ARC Creator" (ingogiebel/arc-creator) from ClawHub.
Skill page: https://clawhub.ai/ingogiebel/arc-creator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install arc-creator

ClawHub CLI


npx clawhub@latest install arc-creator
Security Scan

VirusTotal: Suspicious. View report →
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The name and description match the instructions and included script: this is an ARC creation helper that initializes a directory, creates ARC subdirectories, updates ARC metadata with the 'arc' CLI if present, and guides the user through adding studies/assays/workflows and pushing to a DataHUB. However, the skill metadata lists no required binaries or environment variables, while SKILL.md explicitly lists prerequisites (git, git-lfs, an optional ARC Commander CLI, and a DataHUB Personal Access Token). That mismatch is an inconsistency (likely sloppy metadata), but not proof of malicious intent.
Instruction Scope
The SKILL.md stays within the stated purpose: it interactively collects identifiers/metadata and instructs the agent to run a small create script, run 'arc' subcommands, copy files into repository folders, run git commands, and optionally push to a remote DataHUB. It does not instruct arbitrary system enumeration or to read unrelated files. It does recommend creating/pushing remotes (which requires credentials provided by the user) and references external hosts (git.nfdi4plants.org, datahub.hhu.de) in a way consistent with the described workflow.
Install Mechanism
There is no install specification (instruction-only skill plus a small helper script). The included script (create_arc.sh, 877 bytes) is short, readable, and performs local filesystem and git initialization only. No downloads, extraction, or third-party package installs are performed by the skill itself.
Credentials
The skill metadata declares no required environment variables or primary credential, yet the SKILL.md references a 'Personal Access Token for git.nfdi4plants.org or datahub.hhu.de' for DataHUB sync and expects git/git-lfs and optionally an ARC Commander CLI binary. The absence of declared env vars/binaries in metadata is an omission that could lead an agent to attempt to use credentials or binaries from the environment without explicit requirements being visible to the user. This is a proportionality / transparency concern (not direct evidence of exfiltration).
Persistence & Privilege
The skill does not request persistent or elevated privileges; its 'always' flag is false, and it does not modify other skills or system-wide agent settings. The runtime actions are limited to creating directories, initializing git, and running local arc/git commands (subject to user consent).
What to consider before installing
This skill appears to do what it says (create ARC repositories and guide metadata entry), but its metadata is incomplete: SKILL.md requires git, git-lfs, and optionally an ARC Commander CLI and a DataHUB personal access token, yet none of these are declared in the skill manifest. Before installing or running:

  1. Review and understand scripts/create_arc.sh (it will mkdir, cd into the target, and run arc init if available, or git init otherwise).
  2. Ensure git and git-lfs are installed, and be prepared to supply DataHUB credentials if you choose to push.
  3. Be cautious when the agent asks to create or push a remote repository — only provide tokens or create remotes for hosts you trust.
  4. The skill will run shell commands and modify files under the chosen path, so avoid running it with elevated privileges and verify the target path.
  5. The recommended project location '/home/uranus/...' is just an example; change it to a path you control.

If you want higher assurance, ask the developer to update the skill manifest to list required binaries and any env vars (e.g., DATAHUB_TOKEN) explicitly.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9706wap4kh17v00p1tpens6cx80v124
1.1k downloads · 0 stars · 1 version
Updated 38m ago
v1.0.0
MIT-0

ARC Creator

Create FAIR Digital Objects following the nfdi4plants ARC specification v3.0.0.

Prerequisites

  • git and git-lfs installed
  • ARC Commander CLI at ~/bin/arc (optional but recommended)
  • For DataHUB sync: Personal Access Token for git.nfdi4plants.org or datahub.hhu.de

Interactive ARC Creation Workflow

Guide the user through these phases in order. Ask questions conversationally — don't dump all questions at once. Batch 2-4 related questions per message.

Phase 1: Investigation Setup

Ask the user:

  1. Investigation identifier (short, lowercase-hyphenated, e.g. cold-stress-arabidopsis)
  2. Title (concise name for the investigation)
  3. Description (textual description of the research goals)
  4. Where to store the ARC locally (suggest /home/uranus/arc-projects/<identifier>/)

Then run scripts/create_arc.sh <path> <identifier> and set investigation metadata via:

arc investigation update -i "<id>" --title "<title>" --description "<desc>"
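
The page does not show scripts/create_arc.sh itself; going by the scan's description above, its logic is roughly this sketch (the default path and identifier are illustrative, not part of the skill):

```shell
#!/bin/sh
# Sketch of scripts/create_arc.sh per the scan description -- NOT the
# actual script. Arguments: target path and investigation identifier.
ARC_PATH="${1:-/tmp/arc-demo}"   # placeholder default
ARC_ID="${2:-demo-arc}"          # placeholder default

mkdir -p "$ARC_PATH"
cd "$ARC_PATH" || exit 1

if command -v arc >/dev/null 2>&1; then
    arc init                                 # ARC Commander builds the full skeleton
else
    command -v git >/dev/null 2>&1 && git init -q
    mkdir -p studies assays workflows runs   # minimal ARC layout as fallback
fi
echo "ARC '$ARC_ID' initialized at $ARC_PATH"
```

Reading the real script before running it (point 1 of the scan's advice) is still the safest move.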

Phase 2: Studies

For each study, ask:

  1. Study identifier (e.g. plant-growth)
  2. Title and description
  3. Organism (for Characteristic [Organism])
  4. Growth conditions (temperature, light, medium, etc.)
  5. Source materials (what goes in — seeds, cell lines, etc.)
  6. Sample materials (what comes out — leaves, roots, extracts, etc.)
  7. Protocols — does the user have protocol documents to include?
  8. Factors — what experimental variables are being tested? (e.g., temperature, genotype, treatment)

Create with:

arc study init --studyidentifier "<id>"
arc study update --studyidentifier "<id>" --title "<title>" --description "<desc>"

Copy protocol files to studies/<id>/protocols/. Copy resource files to studies/<id>/resources/.
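
As an illustration, Phase 2 for a hypothetical study plant-growth might look like the following; the identifier, title, description, and file names are all placeholders, and the arc calls are skipped when ARC Commander is not on PATH:

```shell
# Hypothetical study setup; ARC_ROOT and STUDY_ID are placeholders.
ARC_ROOT="${ARC_ROOT:-/tmp/arc-demo}"    # replace with your ARC path
STUDY_ID="plant-growth"
mkdir -p "$ARC_ROOT" && cd "$ARC_ROOT"

if command -v arc >/dev/null 2>&1; then
    arc study init --studyidentifier "$STUDY_ID"
    arc study update --studyidentifier "$STUDY_ID" \
        --title "Plant growth under cold stress" \
        --description "Arabidopsis thaliana grown at 4 C vs 22 C"
fi
mkdir -p "studies/$STUDY_ID/protocols" "studies/$STUDY_ID/resources"
# cp growth-protocol.pdf "studies/$STUDY_ID/protocols/"   # then copy files in
```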

Phase 3: Assays

For each assay, ask:

  1. Assay identifier (e.g. proteomics-ms, rnaseq, sugar-measurement)
  2. Measurement type (e.g., protein expression profiling, transcription profiling, metabolite profiling)
  3. Technology type (e.g., mass spectrometry, nucleotide sequencing, plate reader)
  4. Technology platform (e.g., Illumina NovaSeq, Bruker timsTOF)
  5. Data files — where are the raw data files? (will go into assays/<id>/dataset/)
  6. Processed data — any processed output files?
  7. Protocols — assay-specific protocols?
  8. Performers — who performed this assay? (name, affiliation, role)

Create with:

arc assay init -a "<id>" --measurementtype "<type>" --technologytype "<tech>"

Copy data to assays/<id>/dataset/, protocols to assays/<id>/protocols/.
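
A sketch of the same step for a hypothetical rnaseq assay (placeholder values throughout; the arc call runs only if ARC Commander is installed):

```shell
# Hypothetical assay setup; ARC_ROOT and ASSAY_ID are placeholders.
ARC_ROOT="${ARC_ROOT:-/tmp/arc-demo}"
ASSAY_ID="rnaseq"
mkdir -p "$ARC_ROOT" && cd "$ARC_ROOT"

if command -v arc >/dev/null 2>&1; then
    arc assay init -a "$ASSAY_ID" \
        --measurementtype "transcription profiling" \
        --technologytype "nucleotide sequencing"
fi
mkdir -p "assays/$ASSAY_ID/dataset" "assays/$ASSAY_ID/protocols"
# cp reads_R1.fastq.gz "assays/$ASSAY_ID/dataset/"  # raw data: immutable once placed
```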

Phase 4: Workflows (optional)

Ask if there are computational analysis steps. For each:

  1. Workflow identifier (e.g. deseq2-analysis, heatmap-generation)
  2. Description of what it does
  3. Code files (scripts, notebooks)
  4. Dependencies (Python packages, R libraries, Docker image)

Place code in workflows/<id>/. Note: workflow.cwl is REQUIRED by spec but often created later. Inform user.
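
A minimal sketch of this step, with a placeholder identifier and a commented-out copy of hypothetical analysis code:

```shell
# Placeholder workflow identifier and file names for illustration.
ARC_ROOT="${ARC_ROOT:-/tmp/arc-demo}"
WF_ID="deseq2-analysis"
mkdir -p "$ARC_ROOT/workflows/$WF_ID"
# cp deseq2_analysis.R "$ARC_ROOT/workflows/$WF_ID/"
# workflow.cwl is required by the spec; add it once the workflow stabilizes.
```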

Phase 5: Runs (optional)

Ask if there are computation outputs. For each:

  1. Run identifier
  2. Which workflow produced it
  3. Output files (figures, tables, processed data)

Place outputs in runs/<id>/.

Phase 6: Contacts & Publications

Ask:

  1. Investigation contacts (name, email, affiliation, role — at minimum the PI)
  2. Publications (if any — DOI, PubMed ID, title, authors)

Add via:

arc investigation person register --lastname "<last>" --firstname "<first>" --email "<email>" --affiliation "<aff>"

Phase 7: Git Commit & DataHUB Sync

  1. Configure git user:
git config user.name "<name>"
git config user.email "<email>"
  2. Commit:
git add -A
git commit -m "Initial ARC: <investigation title>"
  3. Ask if the user wants to push to a DataHUB. If yes:
    • Ask which host (git.nfdi4plants.org, datahub.hhu.de, etc.)
    • Create remote repo (via browser or API)
    • Set remote and push
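
The final step can be sketched as follows; <user> and <arc-id> are placeholders for the repository created on the DataHUB, and the push itself is left commented out because it requires your credentials:

```shell
# Sketch of DataHUB sync; run from the ARC root. ARC_ROOT is a placeholder.
ARC_ROOT="${ARC_ROOT:-/tmp/arc-demo}"
mkdir -p "$ARC_ROOT" && cd "$ARC_ROOT"
if command -v git >/dev/null 2>&1; then
    git init -q                          # no-op in an already-initialized ARC
    git remote add origin "https://git.nfdi4plants.org/<user>/<arc-id>.git" 2>/dev/null
    git remote get-url origin            # shows the configured remote
fi
# git push -u origin main   # authenticates with your Personal Access Token
```

Only uncomment the push for a host you trust, per the scan's advice above.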

ISA Metadata Reference

For detailed ISA-XLSX fields, annotation table columns, and ontology references, read references/arc-spec.md.

Key Reminders

  • Assay data is immutable — never modify files in assays/<id>/dataset/ after initial placement
  • Studies describe materials, assays describe measurements
  • Workflows are code, runs are outputs
  • Git LFS for files > 100 MB: git lfs track "*.fastq.gz" "*.bam" "*.raw"
  • Don't store ARCs on OneDrive/Dropbox — Git + cloud sync causes conflicts
  • ARC Commander CLI reference: arc <subcommand> --help
