Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Eve Research Supervisor Pro

v5.1.0

EVE manages the full research lifecycle with Auto, Semi-Manual, or Manual modes to produce a publication-ready LaTeX paper from topic search to gap analysis.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Files and scripts (arxiv_downloader, bib_generator, citation_graph, gap_detector, idea_generator, paper_writer, server_monitor, experiment_alert, session_memory, etc.) are consistent with a 'research supervisor' that searches papers, builds citation graphs, generates gaps and ideas, writes LaTeX, and monitors compute. However, the registry metadata declares no required env vars or primary credential, while many scripts clearly expect an LLM API key (OPENAI_API_KEY / OPENAI_BASE_URL) or a PetClaw built-in key. Config defaults also point to a nonstandard base URL (https://api.openai-hk.com/v1) instead of the official api.openai.com; this is unexpected and deserves verification.
Instruction Scope
SKILL.md instructs the agent to read session memory before every action, save a persistent user profile, create project directories under ~/.openclaw/workspace/research-supervisor-pro, and run many Python scripts. These runtime actions read and write files under the user's home directory and repeatedly access stored memory. The agent is also instructed to run commands that interact with server monitoring and SSH (via server_monitor/experiment_alert), which gives the skill access to any remote systems the user configures. Reading and writing memory 'forever' and automatically resuming across sessions increases the persistence of any sensitive data saved.
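One way to audit what those session-memory files accumulate over time is to scan them for credential-like keys. A minimal sketch, assuming the memory directory holds JSON files; the path argument and the key patterns are illustrative, not taken from the skill's code:

```python
import json
import re
from pathlib import Path

# Key names that commonly indicate stored secrets (illustrative list).
KEY_PATTERN = re.compile(r"(api[_-]?key|ssh[_-]?key|token|password)", re.IGNORECASE)

def _all_keys(obj):
    """Yield every dict key, recursing through nested dicts and lists."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield key
            yield from _all_keys(value)
    elif isinstance(obj, list):
        for value in obj:
            yield from _all_keys(value)

def scan_memory(mem_dir):
    """Report (filename, key) pairs in memory files that look credential-like."""
    hits = []
    for path in sorted(Path(mem_dir).glob("*.json")):
        try:
            data = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or non-JSON files
        for key in _all_keys(data):
            if KEY_PATTERN.search(key):
                hits.append((path.name, key))
    return hits
```

Running this against the skill's memory/ directory after a few sessions would show whether the persistent profile has quietly absorbed anything sensitive.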
Install Mechanism
The registry's install spec downloads no external binaries, but the package contains an install.sh that copies scripts into ~/.openclaw/workspace/research-supervisor-pro, sets execute bits, and calls pip to install Python packages. That installer writes into the user's home directory and installs Python dependencies from PyPI, which is expected for this kind of skill, but you should review install.sh before running it. The README's quick-install one-liner suggests a git clone from an external GitHub repo, while the registry metadata lists 'Source: unknown' and 'Homepage: none', so verify the origin.
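Before running install.sh, you can mechanically skim it for the patterns this review warns about (hidden network endpoints, obfuscated payloads). A rough sketch; the pattern list is illustrative and will not catch deliberately obfuscated code, so it supplements rather than replaces reading the script:

```python
import re

# Patterns that commonly warrant a closer manual look in an installer.
SUSPICIOUS = re.compile(
    r"curl|wget|https?://|base64|\beval\b|/dev/tcp", re.IGNORECASE
)

def flag_installer_lines(script_text):
    """Return (line_number, line) pairs that match a suspicious pattern."""
    return [
        (number, line)
        for number, line in enumerate(script_text.splitlines(), start=1)
        if SUSPICIOUS.search(line)
    ]
```

Feeding it the contents of install.sh (e.g. `flag_installer_lines(Path("install.sh").read_text())`) gives you a short list of lines to inspect by hand.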
Credentials
The registry metadata claims no required env vars, but many scripts read OPENAI_API_KEY and OPENAI_BASE_URL (or fall back to a PetClaw settings file). The skill therefore requires LLM credentials for core functionality; this is proportional to writing LLM-generated content but mismatched with the declared requirements. More importantly, the default OPENAI_BASE_URL in multiple places is set to https://api.openai-hk.com/v1 (nonstandard), which could route prompts and data to an unexpected endpoint. The skill also stores server_config.json (host, user, port, ssh_key) for SSH-based server monitoring; storing SSH key paths or host credentials is sensitive but functionally explained by the server-monitor feature.
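Given the nonstandard default, one defensive habit is to check the endpoint before exporting any key. A minimal sketch, assuming the standard env-var names; the official URL constant is ours, not the skill's:

```python
import os

OFFICIAL_BASE_URL = "https://api.openai.com/v1"

def check_base_url(env=None):
    """Flag a nonstandard OPENAI_BASE_URL before any API key is supplied."""
    env = os.environ if env is None else env
    base = env.get("OPENAI_BASE_URL", OFFICIAL_BASE_URL).rstrip("/")
    if base != OFFICIAL_BASE_URL:
        return f"WARNING: nonstandard base URL {base!r}; verify before supplying a key"
    return "OK: official endpoint"
```

A check like this could run at the top of any wrapper script you use to launch the skill, so a silently reintroduced default fails loudly instead of routing traffic elsewhere.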
Persistence & Privilege
The skill creates and uses persistent storage under ~/.openclaw/workspace/research-supervisor-pro/memory/ and explicitly says it will 'remember' the user's profile permanently. It can also store alerts, per-project experiment_data.json, server_config.json (including an SSH key path), and other artifacts. 'always' is false, so the skill is not force-included, but the persistent memory and the potential storage of SSH config and keys increase the blast radius if you grant it credentials or enable autonomous invocation.
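To see exactly what the skill has persisted (and to know what to delete later), you could walk the workspace and flag the files this review identifies as sensitive. A sketch under stated assumptions: the workspace path matches the review, and the sensitive-file names come from this report, not from an audit of the code:

```python
from pathlib import Path

WORKSPACE = Path.home() / ".openclaw/workspace/research-supervisor-pro"

# File names this review identifies as holding credentials or host config.
SENSITIVE = {"server_config.json", "experiment_data.json"}

def audit_workspace(root=WORKSPACE):
    """Return (relative_path, is_sensitive) for every persisted file."""
    root = Path(root)
    if not root.exists():
        return []
    return [
        (str(path.relative_to(root)), path.name in SENSITIVE)
        for path in sorted(root.rglob("*"))
        if path.is_file()
    ]
```

Running the audit before and after a session shows what the skill wrote; removing the skill then amounts to deleting that directory and revoking any keys you provisioned.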
What to consider before installing
What to check before installing or running this skill:

- Verify origin: the package metadata lists 'Source: unknown' and the README references a GitHub repo; only install from a trusted repository, and inspect the repository yourself.
- Review the default LLM endpoint: multiple scripts default OPENAI_BASE_URL to https://api.openai-hk.com/v1. Change this to the official endpoint (or your provider) before supplying any API key, and confirm why that default was chosen.
- Do not provide credentials blindly: the skill needs an LLM API key for full features; supply only a key with appropriate usage limits, or use a disposable key for testing. Avoid storing long-lived keys in global env vars if you are unsure.
- Server monitoring is powerful: server_monitor/experiment_alert expect a memory/server_config.json that may reference an ssh_key path. Do not store private SSH keys or production host credentials in the skill's config unless you understand the implications. Prefer a read-only account, or avoid configuring remote servers until you've audited the code.
- Inspect install.sh and the scripts before running: install.sh copies code into your home directory and runs pip installs; open each script to confirm there are no hidden network endpoints, obfuscated code, or data-exfiltration logic.
- Sandbox first: run the skill in an isolated environment (VM or container) and do not enable any autonomous invocation until you're comfortable. Test non-sensitive flows (e.g., run arxiv_downloader on a small query) and observe network destinations.
- Check persistent data: the skill will write the profile, memory, citation graphs, and other artifacts to ~/.openclaw/workspace/research-supervisor-pro. If you decide to remove it later, delete that directory and revoke any API keys you provisioned.
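For the "observe network destinations" step, one crude in-process aid is to monkey-patch socket.connect so every outbound attempt is logged before it proceeds. This is a sketch only: it sees Python-level connections in the same interpreter, and real auditing is better done at the VM or container boundary (e.g. a deny-all network policy with explicit allowances):

```python
import socket

observed_destinations = []  # every (host, port) the process tries to reach
_real_connect = socket.socket.connect

def _logging_connect(self, address):
    """Record each outbound connection attempt, then let it proceed."""
    observed_destinations.append(address)
    return _real_connect(self, address)

# Install the hook; any subsequent socket.connect goes through the logger.
socket.socket.connect = _logging_connect
```

Importing a hook like this before running one of the skill's scripts would reveal whether traffic goes only where you expect, e.g. arxiv.org and your configured LLM endpoint.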

Like a lobster shell, security has layers — review code before you run it.

latest · vk977v9bm0grbdmv3syzrk0gsn1836yvm
