# Spatial Perception And Reasoning Dataset – RGBD (SPAR-7M-RGBD)
A large-scale multimodal dataset for 3D-aware spatial perception and reasoning in vision-language models.
SPAR-7M-RGBD extends the original SPAR-7M with depth maps, camera intrinsics, and camera pose information. It contains over 7 million QA pairs across 33 spatial tasks, built from 4,500+ richly annotated indoor 3D scenes.
This version supports single-view, multi-view, and video-based inputs.
## Download

We provide two versions of the dataset:
| Version | Description |
|---|---|
| **SPAR-7M** | RGB-only images + QA annotations |
| **SPAR-7M-RGBD** | Includes depth maps, camera intrinsics, and pose matrices for 3D-aware training |
You can download both versions from Hugging Face:

```bash
# Download SPAR-7M (default)
huggingface-cli download jasonzhango/SPAR-7M --repo-type dataset

# Download SPAR-7M-RGBD (with depth and camera parameters)
huggingface-cli download jasonzhango/SPAR-7M-RGBD --repo-type dataset
```
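If you prefer to script the download, the same repositories can be fetched from Python with `huggingface_hub` (a minimal sketch; the `local_dir` path here is an arbitrary example, not a required layout):

```python
from huggingface_hub import snapshot_download

# Fetch the RGBD version of the dataset into a local directory.
# local_dir is illustrative; pick any destination you like.
snapshot_download(
    repo_id="jasonzhango/SPAR-7M-RGBD",
    repo_type="dataset",
    local_dir="./SPAR-7M-RGBD",
)
```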
These datasets are split into multiple `.tar.gz` parts due to Hugging Face file-size limits. After downloading all parts, run the following to extract:

```bash
# For SPAR-7M
cat spar-*.tar.gz | tar -xvzf -

# For SPAR-7M-RGBD
cat spar-rgbd-*.tar.gz | tar -xvzf -
```
Alternatively, if Hugging Face is not accessible, you can use the provided mirror script:

```bash
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh
export HF_ENDPOINT=https://hf-mirror.com

./hfd.sh jasonzhango/SPAR-7M --dataset
./hfd.sh jasonzhango/SPAR-7M-RGBD --dataset
```
The dataset directory structure is:

```
spar/
├── rxr/
├── scannet/
│   ├── images/
│   │   └── scene0000_00/
│   │       ├── image_color/
│   │       ├── video_color/
│   │       ├── image_depth/    # only in SPAR-7M-RGBD
│   │       ├── video_depth/    # only in SPAR-7M-RGBD
│   │       ├── pose/           # only in SPAR-7M-RGBD
│   │       ├── video_pose/     # only in SPAR-7M-RGBD
│   │       ├── intrinsic/      # only in SPAR-7M-RGBD
│   │       └── video_idx.txt
│   └── qa_jsonl/
│       ├── train/
│       │   ├── depth_prediction_oo/
│       │   │   ├── fill/
│       │   │   │   └── fill_76837.jsonl
│       │   │   ├── select/
│       │   │   └── sentence/
│       │   ├── obj_spatial_relation_oc/
│       │   └── spatial_imagination_oo_mv/
│       └── val/
├── scannetpp/
└── structured3d/
```
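For the RGBD version, the sketch below shows one way to load a single frame with its depth, pose, and intrinsics. It assumes ScanNet-style conventions (16-bit PNG depth in millimeters, 4×4 camera-to-world pose in a whitespace-separated text file); the exact file names and encodings are assumptions and should be checked against the extracted data.

```python
import numpy as np
from PIL import Image

# Paths and file names are illustrative; adjust to your extracted layout.
scene = "spar/scannet/images/scene0000_00"

# RGB frame
rgb = np.array(Image.open(f"{scene}/image_color/0.jpg"))

# Depth: assumed 16-bit PNG in millimeters (ScanNet convention) -> meters
depth = np.array(Image.open(f"{scene}/image_depth/0.png")).astype(np.float32) / 1000.0

# Pose: assumed 4x4 camera-to-world matrix stored as text
pose = np.loadtxt(f"{scene}/pose/0.txt").reshape(4, 4)

# Intrinsics: assumed matrix stored as text (file name is a guess)
K = np.loadtxt(f"{scene}/intrinsic/intrinsic_color.txt")

print(rgb.shape, depth.shape, pose.shape, K.shape)
```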
Each QA task (e.g., `depth_prediction_oc`, `spatial_relation_oo_mv`, etc.) is organized by task type, with subfolders for different answer formats:

- `fill/` – numerical or descriptive answers
- `select/` – multiple choice
- `sentence/` – natural language answers
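Since annotations are stored as JSON Lines, individual QA files can be read with the standard library. A minimal sketch follows; the field names inside each record are not documented here, so inspect the schema before relying on them:

```python
import json

# Illustrative path taken from the directory tree above
path = "spar/scannet/qa_jsonl/train/depth_prediction_oo/fill/fill_76837.jsonl"

with open(path) as f:
    # Each line is one QA record encoded as a JSON object
    records = [json.loads(line) for line in f]

print(len(records))
print(records[0].keys())  # check the actual field names in the data
```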
## BibTeX

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Zhou, Yanpeng and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:2503.22976},
}
```