A repository that contains example JSON files for different applications of the LeRobot codebase. A sketch of the typical train config layout and an example of fetching a config from this repo are included after the list below.

Currently available configs:

  • RL:
    • env_config.json: Environment config for a real-robot setup that uses a gamepad for teleoperation and an SO10* arm as the main agent. With gym_manipulator.py, this config can be used to teleoperate the robot and record a dataset for reinforcement learning.
    • train_config.json: Training config for the HIL-SERL implementation in LeRobot on the real robot, using an environment configuration similar to the env_config.json in the same directory.
    • gym_hil:
      • env_config.json: Environment config for simulation using the gym_hil environment.
      • train_config.json: Training config for the HIL-SERL implementation in LeRobot for simulated environments.
  • Sim IL:
    • env_config.json: Environment config for simulation using the gym_hil environment. This configuration can be used to collect a dataset in simulation for imitation learning or reinforcement learning.
    • eval_config.json: Evaluation config for models trained on datasets collected from the gym_hil environment.
  • Reward Classifier:
    • config.json: Main configuration for training a reward classifier.
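
The training configs listed above share a common top-level layout. The following is an illustrative sketch of that layout, written as a Python dict for readability: the field names mirror what the repo's train_config.json files contain, but every value shown is a placeholder rather than a recommended setting, so refer to the actual JSON files for the real values.

```python
# Illustrative sketch only: the field names below mirror the top-level layout
# of the train_config.json files in this repo, but every value is a placeholder.
train_config = {
    "output_dir": None,            # where checkpoints and logs are written
    "job_name": "example_job",     # placeholder
    "resume": False,
    "seed": 42,
    "num_workers": 4,
    "batch_size": 64,
    "steps": 100_000,
    "log_freq": 100,
    "save_checkpoint": True,
    "save_freq": 10_000,
    "wandb": {"enable": False, "project": "example", "disable_artifact": False},
    "dataset": {"repo_id": "<dataset repo id>", "use_imagenet_stats": True},
    "policy": {
        # Policy section: network sizes, learning rates, critic settings,
        # actor/learner networking, etc. See the repo files for the full set.
        "type": "<policy type>",
    },
    "env": {
        # Environment section: robot/teleop devices, processor settings
        # (gripper, reset, reward classifier, ...), features and feature map.
        "type": "<env type>",
    },
}
```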
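
Since this is a dataset repo of plain JSON files, any of the configs can be fetched and inspected programmatically. The sketch below uses the standard huggingface_hub download helper together with the json module; the repo id and file path are placeholders, so substitute this repository's actual id and the path of the config you need.

```python
import json

from huggingface_hub import hf_hub_download

# Placeholders: replace with this repository's id and the path of the config
# you want (e.g. one of the env_config.json / train_config.json files above).
config_file = hf_hub_download(
    repo_id="<user>/<this-repo>",
    filename="env_config.json",
    repo_type="dataset",
)

with open(config_file) as f:
    config = json.load(f)

# Inspect the top-level sections before passing the file to the relevant
# LeRobot script (e.g. gym_manipulator.py for recording, or the trainer).
print(list(config.keys()))
```

How a given script consumes these files depends on its command-line interface, so check the script's own arguments rather than assuming a flag name.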