{"skill":{"slug":"s2-multimodal-fusion-predictor","displayName":"S2 多模态融合与空间预测引擎","summary":"Instructs the Embodied AI on how to process incoming multimodal sensor data (LiDAR, Camera, Tactile), avoid visual illusions, and output 1s-60s physical caus...","tags":{"latest":"1.0.0"},"stats":{"comments":0,"downloads":100,"installsAllTime":0,"installsCurrent":0,"stars":0,"versions":1},"createdAt":1775437615005,"updatedAt":1775438208562},"latestVersion":{"version":"1.0.0","createdAt":1775437615005,"changelog":"s2-multimodal-fusion-predictor 1.0.0\n\n- Initial release enabling embodied AI to fuse LiDAR, camera, and tactile inputs for robust physical reasoning.\n- Implements cross-validation to avoid visual illusions and bans reliance on single or pseudo-sensors (e.g., PIR).\n- Provides explicit guidance for temporal causal prediction (1-60s) and requires structured, multimodal output.\n- Introduces protocol for resolving sensor conflicts (e.g., vision vs. LiDAR on transparent objects).\n- Exposes `execute_multimodal_fusion` tool for raw sensor array processing.","license":"MIT-0"},"metadata":{"os":null,"systems":null},"owner":{"handle":"spacesq","userId":"s17c8pmv1e8tb2mcc5aset88e183hq1t","displayName":"MilesXiang","image":"https://avatars.githubusercontent.com/u/259359437?v=4"},"moderation":null}