RynnVLA-001
Using Human Demonstrations to Improve Robot Manipulation
GitHub Repo: https://github.com/alibaba-damo-academy/RynnVLA-001
🔥 We release RynnVLA-001-7B-Base (Stage 1: Ego-Centric Video Generative Pretraining), which is pretrained on large-scale ego-centric manipulation videos.
RynnVLA-001 is a VLA model built on a pretrained video generation model. The key insight is to implicitly transfer manipulation skills learned from human demonstrations in ego-centric videos to robot arm manipulation.
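As a quick start, the released Stage 1 checkpoint can be fetched with the standard `huggingface_hub` client. This is a minimal sketch only: the repo id below is an assumption, so check the collection page or the GitHub repo for the exact identifier and for the loading/inference code.

```python
# Minimal sketch: download the RynnVLA-001-7B-Base checkpoint files from
# the Hugging Face Hub. The repo_id is an assumption; verify it on the
# collection page before use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Alibaba-DAMO-Academy/RynnVLA-001-7B-Base",  # assumed repo id
)
print(f"Checkpoint downloaded to: {local_dir}")
```

The downloaded directory can then be passed to the loading utilities provided in the GitHub repo linked above.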