# Franka Episodes Dataset (Cube & Ball)
This dataset contains RGB images and per-episode JSON logs recorded from a Franka Panda robotic arm simulation.
It includes both scripted expert demonstrations and behavior cloning (BC) policy rollouts, covering tasks with a cube and a ball.
## Contents

- `episodes/` – JSON logs, one file per episode.
  - Each file (e.g., `episode_000.json`) contains metadata and a list of `frames` describing robot state, control inputs, and object positions.
  - Keys include `episode_id`, `physics_type`, `robot_control`, `frames`, and sometimes `task`.
- `images/` – RGB frames associated with episodes (PNG).
  - Organized in subfolders such as `images/ep_001/ep001_step000.png`.
- `bc_models/` – Behavior cloning checkpoints used for rollout generation (optional, not required for dataset use).
### Example structure

```
franka-episodes-v1/
├── episodes/
│   ├── episode_000.json
│   ├── episode_001.json
│   └── ...
├── images/
│   ├── ep_001/
│   │   ├── ep001_step000.png
│   │   ├── ep001_step001.png
│   │   └── ...
│   └── ep_002/...
└── bc_models/
```
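To work with the raw `episodes/` and `images/` directories shown above, one option is to download a local copy of the repository. The snippet below is a minimal sketch using `huggingface_hub.snapshot_download`; the target folder and the pattern filter are illustrative choices, not part of the dataset.

```python
from huggingface_hub import snapshot_download

# Download the dataset repository (or a subset of it) to a local folder.
# Authentication is required first if the repository is gated.
local_path = snapshot_download(
    repo_id="Hussain223/franka-episodes-v1",
    repo_type="dataset",
    local_dir="franka-episodes-v1",                      # arbitrary target folder
    allow_patterns=["episodes/*", "images/ep_001/*"],    # optional: limit what is fetched
)
print("Downloaded to:", local_path)
```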
## Dataset Statistics
- Total size: ~7.58 GB
- Episodes (JSON): 61
- Episodes with images: 60
- Total images: ~70,000
- Frames per episode: ~695 to ~2104
- Image resolution: 640×480 RGB
## Note on Data
Some episodes were perturbed or modified for robustness experiments.
This makes the dataset useful for research on adversarial robustness, poisoned data handling, and imitation learning under noisy conditions.
If you just need clean demonstrations, you can filter or manually check episodes.
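One lightweight way to check episodes manually is to print a summary of each JSON log before deciding which ones to keep. The sketch below relies only on the keys documented above (`episode_id`, `physics_type`, `frames`); the directory path and the length-based heuristic are illustrative placeholders you would adapt.

```python
import json
from pathlib import Path

# Summarize every episode log so suspicious ones can be inspected or excluded.
# Assumes a local copy of the `episodes/` folder.
for ep_path in sorted(Path("episodes").glob("episode_*.json")):
    with open(ep_path) as f:
        ep = json.load(f)
    n_frames = len(ep["frames"])
    print(ep_path.name, ep.get("episode_id"), ep.get("physics_type"), n_frames, "frames")
    # Example heuristic: flag episodes whose length falls outside the documented range.
    if not (695 <= n_frames <= 2104):
        print("  -> unusual episode length; inspect manually")
```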
## Dataset Generation

- Environment: Genesis Simulator with Franka Panda arm.
- Episode types:
  - Scripted expert motions (pick, place, throw).
  - Behavior cloning (BC) rollouts trained from demonstrations.
## Usage

### Load with 🤗 Datasets
```python
from datasets import load_dataset

dataset = load_dataset("Hussain223/franka-episodes-v1")

# Example access
episode = dataset["train"][0]
print(episode.keys())
```
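If access to the repository is restricted (gated), loading requires authentication. A minimal sketch, assuming you have been granted access and have a Hugging Face token:

```python
from huggingface_hub import login
from datasets import load_dataset

# Authenticate once per environment (or run `huggingface-cli login` in a terminal).
login()  # prompts for a token; alternatively pass login(token="hf_...")

# `token=True` tells `datasets` to reuse the stored credentials.
dataset = load_dataset("Hussain223/franka-episodes-v1", token=True)
```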
### Parse a JSON log

```python
import json

with open("episodes/episode_000.json", "r") as f:
    ep = json.load(f)

print("Episode ID:", ep["episode_id"])
print("Number of frames:", len(ep["frames"]))
```
## Applications

- Imitation learning – train BC or DAgger policies.
- Robustness research – benchmark algorithms under poisoned/noisy data.
- Perception tasks – object recognition and pose estimation from frames.
- Policy evaluation – compare scripted vs. learned rollouts.
## License
Released under the MIT License.
## Versioning
- v1.0 (current): 61 episodes, ~70k images, 7.58 GB.
- Future versions may include more tasks, objects, or real-world data.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{franka_episodes_hussain223,
  title  = {Franka Episodes (Cube & Ball)},
  author = {Hussain Alibrahim},
  year   = {2025},
  url    = {https://huggingface.co/datasets/Hussain223/franka-episodes-v1}
}
```
## Contact
- Open a discussion on the dataset page.
- Or reach out via Hugging Face profile contact.