With only basic Python, build your own embodied-intelligence robot from scratch; step by step, build VLA/OpenVLA/SmolVLA/Pi0 from the ground up and gain a deep understanding of embodied AI.
Convert between robotics dataset formats (RLDS, LeRobot v2/v3, Zarr, HDF5, Rosbag). Inspect, visualize, and analyze datasets. Works with HuggingFace Hub. Built for OpenVLA, Octo, LeRobot, and Diffusion Policy workflows.
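Converting between these formats mostly means reshaping episode data between a step-wise layout (RLDS-style: one dict per timestep) and a columnar layout (LeRobot-style: one array per feature). A minimal, dependency-free sketch of that reshaping is below; the field names are hypothetical for illustration and do not reflect any library's actual schema.

```python
# Illustrative only: flatten an RLDS-style episode (list of per-step dicts)
# into a columnar layout (one list per feature). Field names are hypothetical.

def rlds_to_columns(steps):
    """Collect each feature across timesteps into its own list."""
    columns = {}
    for step in steps:
        for key, value in step.items():
            columns.setdefault(key, []).append(value)
    return columns

episode = [
    {"observation": [0.0, 0.1], "action": [1.0], "reward": 0.0},
    {"observation": [0.2, 0.3], "action": [0.5], "reward": 1.0},
]

cols = rlds_to_columns(episode)
# cols["reward"] == [0.0, 1.0]
```

Real converters additionally handle image encoding, chunked storage (Zarr/HDF5), and metadata, but the core transformation is this step-to-column pivot.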
PickAgent: OpenVLA-powered pick-and-place agent | Gradio & Simulation | Vision-Language-Action model
FR3 robot in Gazebo integrated with 4-bit quantized OpenVLA and MoveIt
Independent VLA research notes: OpenVLA / π0 / Spirit paper reviews, LIBERO reproduction, XLeRobot integration. Transitioning from autonomous-driving (AD) motion planning to embodied AI.
Train and deploy Vision-Language-Action models natively on Apple Silicon.
Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
Reproducing OpenVLA-7B on all four LIBERO suites on a single commodity GPU. 400 rollouts, 4 demo videos, 6 debugging gotchas, all open.