The Spatiotemporal Rendering Engine

v1.0.0

Generates predictive 4D timelines with scheduled keyframes to orchestrate smart home elements across the 6-Element Spatial Matrix based on natural language i...

by MilesXiang @spacesq
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Benign (high confidence)
Purpose & Capability
The manifest, SKILL.md, and skill.py align: the orchestrator consumes an Active Mounts JSON, uses a local LLM to generate timeline keyframes, and injects the resulting track into a local rendered_tracks.json. The skill requests no unexpected external credentials, unrelated binaries, or configuration paths.
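A minimal sketch of the data flow the review describes: read the mounts file, build an LLM prompt, and merge the generated keyframes into the tracks structure. Function names and the JSON shape are illustrative assumptions, not the skill's actual API.

```python
import json
from pathlib import Path

# File names taken from the review; locations are relative to the working directory.
MOUNTS_FILE = Path("active_hardware_mounts.json")
TRACKS_FILE = Path("s2_timeline_data/rendered_tracks.json")

def build_prompt(mounts: dict, request: str) -> str:
    """Combine the declared hardware mounts with a natural-language request."""
    return (
        "Generate timeline keyframes as JSON for these mounts:\n"
        + json.dumps(mounts, indent=2)
        + f"\nRequest: {request}"
    )

def append_track(tracks: dict, track_id: str, keyframes: list) -> dict:
    """Merge a newly generated track into the existing tracks structure (assumed shape)."""
    tracks.setdefault("tracks", {})[track_id] = {"keyframes": keyframes}
    return tracks
```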
Instruction Scope
SKILL.md describes features such as microphone monitoring, mmWave sensing, swarm pings, and booking actions, but the code itself does not access microphones, radar sensors, or external booking APIs; it only reads active_hardware_mounts.json and writes rendered_tracks.json. This is coherent if other S2 modules (e.g., s2-nlp-connector) supply sensor data; confirm that those connectors, rather than this skill, are what provide the sensitive inputs.
Install Mechanism
No install spec or external downloads are present; this is an instruction+code skill that runs from included skill.py. No external packages or remote archives are fetched by the skill itself.
Credentials
The skill requests no environment variables or credentials. The only network call is to http://localhost:1234 (a local LLM endpoint), which is consistent with the declared behavior. No unrelated secrets or external service credentials are requested.
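For context, a call to a local LLM endpoint could look like the sketch below. The host and port match the review; the request path and payload shape assume an OpenAI-compatible chat API, which is common for local servers but is an assumption here, not something verified from skill.py.

```python
import json
import urllib.request

# Localhost only, per the review; the /v1/chat/completions path is an assumption.
LLM_ENDPOINT = "http://localhost:1234/v1/chat/completions"

def build_payload(prompt: str) -> bytes:
    """Serialize a single-turn chat request (OpenAI-compatible shape assumed)."""
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }).encode("utf-8")

def query_local_llm(prompt: str, timeout: float = 30.0) -> str:
    """Send the prompt to the local endpoint and return the model's text reply."""
    req = urllib.request.Request(
        LLM_ENDPOINT,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:  # never leaves localhost
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```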
Persistence & Privilege
The manifest's always flag is false, and the skill does not attempt to modify other skills or system-wide agent settings. It writes its own timeline DB under the current working directory (s2_timeline_data/rendered_tracks.json), which is a scoped and expected persistence behavior.
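The "scoped persistence" property can be stated precisely: every path the skill writes should resolve to a location inside the working directory. A small check like this (illustrative, not part of the skill) makes that auditable:

```python
from pathlib import Path

def is_scoped_to_cwd(target: str, cwd: str = ".") -> bool:
    """Return True if target, resolved against cwd, stays inside cwd."""
    base = Path(cwd).resolve()
    resolved = Path(cwd, target).resolve()  # collapses any ".." components
    return resolved == base or base in resolved.parents
```

The timeline DB path from the review passes this check, while a traversal like "../outside.json" does not.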
Assessment
This skill appears to do what it claims: read an active_hardware_mounts.json, call a local LLM on localhost:1234 to generate timeline JSON, and save tracks to s2_timeline_data/rendered_tracks.json. Before installing, ensure:

1. The local LLM at localhost:1234 is trusted; an untrusted LLM can produce unexpected or malformed JSON.
2. Other S2 connectors (e.g., s2-nlp-connector) are the only modules that provide microphone/mmWave/sensor data; the orchestrator itself does not access hardware.
3. The working directory and the two data files (active_hardware_mounts.json, rendered_tracks.json) do not contain sensitive credentials you don't want written or aggregated.
4. You understand downstream enforcement: this skill only writes scheduled keyframes, so another module would need to execute them on devices.

If you want to be extra cautious, run the skill in a sandboxed environment, inspect the contents of active_hardware_mounts.json, and verify which component actually executes the saved tracks.
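One way to act on the credential check in the assessment (making sure the data files hold no secrets you don't want aggregated) is to scan the JSON for credential-looking key names before installing. The key list below is illustrative; extend it for your environment.

```python
import json
from pathlib import Path

# Substrings that commonly mark credential material (illustrative list).
SUSPICIOUS = ("token", "password", "secret", "api_key", "apikey")

def find_credential_like_keys(obj, path=""):
    """Recursively collect JSON paths whose key names look credential-like."""
    hits = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            here = f"{path}.{key}" if path else key
            if any(s in key.lower() for s in SUSPICIOUS):
                hits.append(here)
            hits.extend(find_credential_like_keys(value, here))
    elif isinstance(obj, list):
        for i, item in enumerate(obj):
            hits.extend(find_credential_like_keys(item, f"{path}[{i}]"))
    return hits

def scan_file(path: str) -> list:
    """Load a JSON file and report any credential-like key paths inside it."""
    return find_credential_like_keys(json.loads(Path(path).read_text()))
```

Running scan_file on active_hardware_mounts.json and rendered_tracks.json before installation gives a quick read on whether anything sensitive would be exposed to the skill.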

Like a lobster shell, security has layers — review code before you run it.

latest: vk977jr0cmksh0cwxk3dkwfcy09838nks

