EO-Robotics Collection • 7 items
EmbodiedOneVision is a unified framework for multimodal embodied reasoning and robot control, featuring interleaved vision-text-action pretraining.
F1: A Vision-Language-Action Model Bridging Understanding and Generation to Actions Paper • 2509.06951 • Published Sep 8
EmbodiedOneVision: Interleaved Vision-Text-Action Pretraining for General Robot Control Paper • 2508.21112 • Published Aug 28
Hume: Introducing System-2 Thinking in Visual-Language-Action Model Paper • 2505.21432 • Published May 27
Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface Paper • 2409.19499 • Published Sep 29, 2024
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model Paper • 2501.15830 • Published Jan 27
FreeGaussian: Annotation-free Controllable 3D Gaussian Splats with Flow Derivatives Paper • 2410.22070 • Published Oct 29, 2024
Exploring the Potential of Encoder-free Architectures in 3D LMMs Paper • 2502.09620 • Published Feb 13
OpenFly: A Versatile Toolchain and Large-scale Benchmark for Aerial Vision-Language Navigation Paper • 2502.18041 • Published Feb 25