Cloth-Folding Dataset for X-VLA Paper
This dataset contains 1,500 episodes of cloth folding, collected with the Agilex Aloha bimanual robot. It was used in the X-VLA paper for the cloth-folding task, where the model achieved a near-perfect folding success rate.
Dataset Overview
- Total Episodes: ~1,500
- Task: Automated cloth folding
- Robot: Agilex Aloha
- Performance: Near 100% success rate in completing the folding task
Hardware setup
We observed that the camera on the official Agilex Aloha platform is mounted relatively low, so it cannot capture the full cloth-folding process: many frames fail to include the robot arms. To address this, we modified the camera setup.
You can find the .stl, .step, and .sldprt files of our new camera mount, which can be used for 3D printing. Installation instructions are provided in camera_mount_install.md.
Usage
You can find .hdf5 and .mp4 files in each directory. The .mp4 files are only for visualization and are not used for training. The .hdf5 files contain all necessary keys and data, including:
HDF5 file hierarchy
├── action # nx14 absolute bimanual joints, not used in our paper
├── base_action # nx2 chassis actions, not used in our paper
├── language_instruction # "fold the cloth"
├── observations
│   ├── eef # nx14 absolute eef pos using euler angles to represent the rotation, not used in our paper
│   ├── eef_quaternion # nx16 absolute eef pos using quaternion to represent the rotation, not used in our paper
│   ├── eef_6d # nx20 absolute eef pos using rotate6d to represent the rotation
│   ├── eef_left_time # nx1 the time stamp for left arm eef pos, can be used for resample or interpolation
│   ├── eef_right_time # nx1 the time stamp for right arm eef pos, can be used for resample or interpolation
│   ├── qpos # nx14 absolute bimanual joints, not used in our paper
│   ├── qpos_left_time # nx1 the time stamp for left arm joint pos, can be used for resample or interpolation, not used in our paper
│   ├── qpos_right_time # nx1 the time stamp for right arm joint pos, can be used for resample or interpolation, not used in our paper
│   ├── qvel # nx14 bimanual joint velocity, not used in our paper
│   ├── effort # nx14 bimanual joint effort, not used in our paper
│   └── images
│       ├── cam_high # the encoded head cam view, should be decoded using cv2
│       ├── cam_left_wrist # the encoded left wrist view, should be decoded using cv2
│       └── cam_right_wrist # the encoded right wrist view, should be decoded using cv2
└── time_stamp # the time stamp for each sample, not used in our paper
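To double-check this layout against a downloaded episode, the short sketch below (the file path is a placeholder) walks one .hdf5 file with h5py and prints every dataset together with its shape and dtype:
import io
import h5py
from mmengine import fileio

path = "REPLACE WITH ONE EPISODE HDF5 FILE PATH"  # placeholder, point to any downloaded episode
h = h5py.File(io.BytesIO(fileio.get(path)), 'r')

def print_dataset(name, obj):
    # report every dataset (the leaves of the hierarchy) with its shape and dtype
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

h.visititems(print_dataset)
h.close()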
How to read the hdf5 file:
import h5py
import cv2
import io
from mmengine import fileio

path = "REPLACE WITH YOUR HDF5 FILE PATH"
# load the hdf5 file
value = fileio.get(path)
f = io.BytesIO(value)
h = h5py.File(f, 'r')
# you can inspect the hdf5 hierarchy by printing its keys
print(h.keys())
# one example of reading out the data, here the 'cam_high' images
head_view_bytes = h['observations/images/cam_high'][()]  # NOTE: all images are compressed to bytes using cv2.imencode
head_view = cv2.imdecode(head_view_bytes[0], cv2.IMREAD_COLOR)  # NOTE: decode each encoded frame back to an image (cv2 returns BGR) before further use
# Then you are free to use our data :)
# ...
# ...
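The eef_left_time / eef_right_time (and qpos_*_time) datasets are what you would use for resampling or interpolation. The sketch below is only an illustration, not the pipeline used in the paper: it assumes the timestamps are in seconds and linearly interpolates eef_6d onto a uniform 30 Hz grid using eef_left_time as the reference clock (linear interpolation of the rotate6d components is an approximation, and you may want to handle each arm with its own timestamps):
import io
import h5py
import numpy as np
from mmengine import fileio

path = "REPLACE WITH YOUR HDF5 FILE PATH"
h = h5py.File(io.BytesIO(fileio.get(path)), 'r')

eef_6d = h['observations/eef_6d'][()]                            # (n, 20)
t = np.asarray(h['observations/eef_left_time'][()]).reshape(-1)  # (n,)

fps = 30.0  # target rate; assumes timestamps are in seconds, adjust otherwise
t_uniform = np.arange(t[0], t[-1], 1.0 / fps)

# np.interp handles one 1-D signal at a time, so interpolate each column independently
eef_resampled = np.stack(
    [np.interp(t_uniform, t, eef_6d[:, d]) for d in range(eef_6d.shape[1])],
    axis=1,
)
print(eef_resampled.shape)  # (len(t_uniform), 20)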
Visualize the data
Some directories already contain a .mp4 file for visualization. If you want to visualize all of the .hdf5 files, you can run the following code:
from mmengine import fileio
import io
import h5py
import cv2
from tqdm import tqdm
import os
import imageio
import numpy as np

# Just replace the path here, then run this script. It will generate one .mp4 file for every .hdf5 file
top_path = 'REPLACE WITH YOUR XVLA-SOFT-FOLD PATH'
hdf5_files = fileio.list_dir_or_file(top_path, suffix='.hdf5', recursive=True, list_dir=False)
for hdf5_name in hdf5_files:
    path = os.path.join(top_path, hdf5_name)
    # Save the MP4 next to the hdf5 file
    video_path = path.replace('.hdf5', '.mp4')
    fps = 30  # Adjust the FPS if needed
    image_list = []
    print(video_path)
    if os.path.exists(video_path):
        print(f"skip {video_path}, it already exists")
        continue

    value = fileio.get(path)
    f = io.BytesIO(value)
    h = h5py.File(f,'r')
    images = h['/observations/images/cam_high'][()]
    images_left = h['/observations/images/cam_left_wrist'][()]
    images_right = h['/observations/images/cam_right_wrist'][()]
    ep_len = images.shape[0]
    
    for i in tqdm(range(ep_len)):
        img = images[i]
        img_left = images_left[i]
        img_right = images_right[i]
        
        img = cv2.imdecode(img, cv2.IMREAD_COLOR)  # Decode image from bytes (BGR)
        img_left = cv2.imdecode(img_left, cv2.IMREAD_COLOR)  # Decode image from bytes (BGR)
        img_right = cv2.imdecode(img_right, cv2.IMREAD_COLOR)  # Decode image from bytes (BGR)

        img = np.concatenate([img, img_left, img_right], axis=1)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # imageio expects RGB, cv2 decodes to BGR
        image_list.append(img)
    # Write the concatenated frames to an MP4 file
    imageio.mimsave(video_path, image_list, fps=fps)
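Note that the scripts above rely on mmengine only for fileio (on a local copy of the dataset you can open the files directly with h5py.File(path, 'r')), and writing .mp4 files with imageio typically requires an FFmpeg backend such as the imageio-ffmpeg plugin.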
Citation
If you use this dataset in your research or for any related work, please cite the X-VLA Paper:
@article{zheng2025x,
  title={X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model},
  author={Zheng, Jinliang and Li, Jianxiong and Wang, Zhihao and Liu, Dongxiu and Kang, Xirui and Feng, Yuchun and Zheng, Yinan and Zou, Jiayin and Chen, Yilun and Zeng, Jia and others},
  journal={arXiv preprint arXiv:2510.10274},
  year={2025}
}