Hyperspace
Warn. Audited by ClawScan on May 10, 2026.
Overview
Hyperspace openly presents itself as a distributed autonomous AI node, but its artifacts describe persistent P2P activity, generated-code experiments, and peer-network routing without clear user controls or data boundaries.
Install only if you intentionally want to run a persistent P2P autonomous AI node. Verify the remote installer, use isolation and resource limits, avoid sensitive prompts or files, and make sure you understand how to stop the node and what data it publishes or sends to peers.
Findings (5)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, and static-scan and capability signals. ClawScan does not execute the skill or run runtime probes.
After setup, the node may keep using compute and network resources and may act or publish results without the user reviewing each action.
The artifact explicitly describes a continuing autonomous loop rather than a bounded user-directed action.
Every node runs an autonomous agent with a 30-second cognitive cycle — perceive, reason, act.
Only run it if you want a persistent autonomous node; require clear start/stop commands, resource limits, and confirmation before ongoing network activity.
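A minimal sketch of what "clear start/stop commands and resource limits" could look like at the shell level. The node binary name is unknown from the artifacts; `"$@"` stands in for whatever command the installer actually provides, and the specific limit values are illustrative.

```shell
run_limited() {
  # Launch in a subshell so the ulimits apply only to the node process,
  # then record its PID so there is always an explicit way to stop it.
  ( ulimit -t 3600 2>/dev/null      # cap CPU time at one hour
    ulimit -v 4194304 2>/dev/null   # cap virtual memory at ~4 GiB (ulimit -v is in KiB)
    exec "$@" ) &
  echo $! > node.pid
}

stop_node() {
  # The explicit stop path: kill the recorded PID and clear the PID file.
  kill "$(cat node.pid)" && rm -f node.pid
}
```

A container runtime's `--cpus`/`--memory` flags give stronger, more observable limits than `ulimit`; this sketch only shows the shape of the control.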
Generated or mutated code could consume local resources or affect the local environment if not isolated.
The node is described as creating or mutating scripts and then executing them locally, but the artifacts do not show sandboxing or execution limits.
Evolves a training script ... Runs the experiment (Python on GPU, TypeScript on CPU, WebGPU in browser)
Run in a sandbox, VM, or container, and verify that generated experiments cannot access sensitive files, credentials, or unrestricted system resources.
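As a weak but concrete illustration of the point, a generated script can at least be run in a throwaway directory with a scrubbed environment, so it cannot read credentials from environment variables or leave files behind. This is a sketch, not a real sandbox; a VM or container gives far stronger isolation.

```shell
run_isolated() {
  # Copy the generated script into a temp dir and run it there with an
  # empty environment (env -i), keeping only a minimal PATH. HOME is
  # pointed inside the box so dotfile writes stay contained.
  script="$1"
  box=$(mktemp -d)
  cp "$script" "$box/exp.sh"
  ( cd "$box" && env -i PATH=/usr/bin:/bin HOME="$box" bash exp.sh )
  status=$?
  rm -rf "$box"          # discard whatever the experiment wrote
  return $status
}
```

Note this does nothing to stop filesystem reads outside the box or network access; those require a container, VM, or OS-level sandboxing.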
User prompts or task content could be sent to unknown peers in the distributed network.
The skill contemplates sending inference work to peers, but the artifact does not clearly define peer identity, privacy guarantees, or what user data may be transmitted.
route inference to the P2P swarm
Do not send sensitive, private, or regulated data through the swarm unless the provider documents encryption, retention, peer trust, and data handling.
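One crude way to enforce this locally is a pre-flight filter that refuses to route obviously sensitive text to the swarm. The pattern list below is purely illustrative (a PEM private-key header, an AWS-style access key ID, a `password:` assignment); a real deployment would need a proper secret scanner.

```shell
safe_to_route() {
  # Reads candidate text on stdin; returns nonzero if it looks like it
  # contains a credential. Patterns are examples only, not a complete list.
  ! grep -Eq -e '-----BEGIN [A-Z ]*PRIVATE KEY-----|AKIA[0-9A-Z]{16}|password[[:space:]]*[:=]'
}
```

A filter like this reduces accidental leaks but is no substitute for the provider documenting encryption, retention, and peer trust.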
A malicious or bad peer result could influence the node's future reasoning, experiments, or outputs.
Untrusted peer-generated experiment data is fed into an LLM-driven decision loop and reused as inspiration for future actions.
It receives the top 20 best experiments across all domains as inspiration ... Its LLM reads those experiments and reasons about what to try next
Require provenance checks, validation, and separation between peer-provided content and trusted instructions before allowing it to steer autonomous actions.
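A sketch of the validation step, assuming peer experiments arrive as JSON: check the record against a strict schema before it ever reaches the reasoning loop, and treat anything that fails as untrusted noise. The field names (`id`, `score`, `code`) are assumptions; the real swarm schema is not visible in the reviewed artifacts. Requires `jq`.

```shell
validate_peer_experiment() {
  # Reads one JSON record on stdin; exits 0 only if it is an object with
  # the expected field types. jq -e maps the boolean result to the exit code.
  jq -e '
    type == "object"
    and (.id | type) == "string"
    and (.score | type) == "number"
    and (.code | type) == "string"
  ' >/dev/null 2>&1
}
```

Schema validation only screens shape, not intent: even a well-formed peer record should still be passed to the LLM as quoted data, never as instructions.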
Installing runs code fetched from the provider at install time, so the reviewed SKILL.md alone does not show exactly what will execute.
The installer is a remote shell script executed directly by bash. This is disclosed and related to installing the node, but it leaves the actual installed code outside the provided artifact review.
"command": "curl -fsSL https://agents.hyper.space/cli | bash"
Inspect the installer and releases first, prefer pinned checksums or signed releases, and install only from a trusted source.
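The `curl | bash` pattern can be replaced with a fetch-review-verify flow: download the installer to disk, read it, and pin the checksum of the version you reviewed before ever executing it. The expected hash below is something you record yourself after review; nothing in the artifacts shows the provider publishing signed releases or checksums.

```shell
fetch_review_install() {
  # $1 = installer URL, $2 = sha256 you recorded after reviewing that copy.
  url="$1"; expected="$2"; out="installer.sh"
  curl -fsSL "$url" -o "$out" || return 1
  actual=$(sha256sum "$out" | awk '{print $1}')
  if [ "$actual" != "$expected" ]; then
    # The fetched script differs from the one you audited: do not run it.
    echo "checksum mismatch: expected $expected, got $actual" >&2
    return 1
  fi
  bash "$out"
}
```

Unlike piping to bash, this fails closed if the provider (or anyone in the path) ships different code than the copy you inspected.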
