DanceNet3D Dataset
DanceNet3D is a comprehensive multi-view dataset featuring professional dance performances captured with synchronized cameras. This dataset provides high-quality RGB images, depth maps, segmentation masks, and COLMAP sparse 3D reconstructions, making it ideal for research in 3D human reconstruction, novel view synthesis, 3D Gaussian Splatting (3DGS), and dynamic scene understanding.
Overview
The dataset captures ballet and contemporary dance movements across multiple camera viewpoints, providing:
- Multi-view synchronized captures: 3+ camera angles per dance sequence
- Dense annotations: Images, depth maps, masks, and 3D sparse reconstructions
- Professional choreography: High-quality dance performances with complex movements
- COLMAP reconstructions: Camera parameters and sparse point clouds ready for 3D reconstruction tasks
Perfect for researchers working on:
- 3D Gaussian Splatting and Neural Radiance Fields (NeRF)
- Human pose estimation and tracking
- Dynamic scene reconstruction
- Novel view synthesis
- Multi-view geometry
For more information, visit our project page: https://nyuvideolab.github.io/DanceNet3D/dataset
Dataset Structure
The dataset is organized by dance sequences, where each dance contains multiple camera views/frames:
<DanceName>/
├── <FrameID_1>/              # e.g., 0000001 (Camera view 1)
│   ├── images/               # RGB images without background
│   ├── images_with_bg/       # RGB images with background
│   ├── annotated/            # Annotated/labeled images
│   ├── masks/                # Segmentation masks (binary)
│   ├── depth/                # Depth maps
│   │   ├── XXXX.npy          # Raw depth values (NumPy arrays)
│   │   └── XXXX_colored.png  # Depth visualizations
│   ├── depth_with_bg/        # Depth maps with background
│   │   ├── XXXX.npy
│   │   └── XXXX_colored.png
│   └── sparse/               # COLMAP sparse reconstruction
│       └── 0/
│           ├── cameras.txt   # Camera intrinsics
│           ├── images.txt    # Camera extrinsics/poses
│           ├── points3D.txt  # Sparse 3D points
│           └── points3D.ply  # 3D point cloud
├── <FrameID_2>/              # e.g., 0000002 (Camera view 2)
│   └── ...
└── <FrameID_3>/              # e.g., 0000003 (Camera view 3)
    └── ...
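To see which dances, frames, and data types are actually present without downloading anything, you can list the repository's files and group the paths by the layout above. The sketch below is an illustration: with network access the real listing would come from huggingface_hub's list_repo_files; here it runs on a small hand-written sample that mirrors the documented structure.

```python
from collections import defaultdict

def group_dataset_files(paths):
    """Group flat repo paths into {dance: {frame_id: {data_type: [files]}}},
    following the <DanceName>/<FrameID>/<data_type>/... layout above."""
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(list)))
    for p in paths:
        parts = p.split("/")
        if len(parts) < 4:
            continue  # skip files outside the expected layout (e.g., README.md)
        dance, frame_id, data_type = parts[0], parts[1], parts[2]
        tree[dance][frame_id][data_type].append(p)
    return tree

# With network access, the real listing would come from:
#   from huggingface_hub import list_repo_files
#   paths = list_repo_files("normanisfine/test", repo_type="dataset")
# Sample paths for illustration:
sample = [
    "AttitudePromenade/0000001/images/0028.png",
    "AttitudePromenade/0000001/masks/0028.png",
    "AttitudePromenade/0000002/sparse/0/cameras.txt",
    "README.md",
]
tree = group_dataset_files(sample)
print(sorted(tree["AttitudePromenade"].keys()))  # frame IDs present
```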
Current Dances
- AttitudePromenade: 3 camera views, ~29 frames each
Data Types
- images/: Clean RGB images with background removed (PNG)
- images_with_bg/: Original RGB images with background (PNG)
- annotated/: Images with annotations overlaid (PNG)
- masks/: Binary segmentation masks for foreground/background (PNG)
- depth/: Depth maps as NumPy arrays (.npy) + colored visualizations (_colored.png)
- depth_with_bg/: Depth maps including background geometry
- sparse/: COLMAP format sparse 3D reconstruction data
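The relationship between masks/ and the two image folders can be sketched in a few lines: multiplying an images_with_bg/ frame by its binary mask (0 = background, 255 = foreground) approximately recovers the background-removed frame in images/. This is a minimal sketch on synthetic arrays, assuming the mask and image share the same resolution; in practice the arrays would come from the downloaded PNGs.

```python
import numpy as np

# Synthetic stand-ins: in practice these come from the downloaded PNGs,
# e.g. np.array(Image.open(mask_path)). Mask convention per the masks/
# description: 0 = background, 255 = foreground.
img_with_bg = np.full((4, 4, 3), 200, dtype=np.uint8)  # images_with_bg/ frame
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255                                   # foreground square

# Zero out the background, approximating the pre-masked images/ files
foreground = img_with_bg * (mask[..., None] // 255)

print(foreground[0, 0], foreground[1, 1])  # background vs foreground pixel
```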
Usage
Important: This dataset is large. We recommend using the Hugging Face Hub to download only the data you need rather than cloning the entire repository.
Download Location: By default, files are cached in ~/.cache/huggingface/hub/. To download to a specific directory, add the local_dir="./your_directory" parameter to any download function (examples shown below).
1. Install Dependencies
pip install huggingface_hub pillow numpy plyfile
2. Load Individual Data Types Using Hugging Face Hub
Note on Download Location: By default, hf_hub_download() caches files in ~/.cache/huggingface/hub/. To download to a specific directory, add the local_dir parameter to any download function (see examples below).
Load RGB Images
from huggingface_hub import hf_hub_download
from PIL import Image

# Download and load a specific image
dance_name = "AttitudePromenade"
frame_id = "0000001"
image_id = "0028"

# Option 1: Download to default cache (recommended for small downloads)
image_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images/{image_id}.png",
    repo_type="dataset"
)
img = Image.open(image_path)

# Option 2: Download to a specific directory
image_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Downloads to ./my_dataset/AttitudePromenade/0000001/images/0028.png
)
img = Image.open(image_path)

# Download with background
image_bg_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images_with_bg/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
img_bg = Image.open(image_bg_path)
Load Depth Maps
from huggingface_hub import hf_hub_download
import numpy as np
from PIL import Image

# Download and load raw depth values
depth_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/depth/{image_id}.npy",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
depth = np.load(depth_path)
print(f"Depth shape: {depth.shape}, dtype: {depth.dtype}")

# Download and load depth visualization
depth_vis_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/depth/{image_id}_colored.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
depth_vis = Image.open(depth_vis_path)
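Once a raw .npy depth map is loaded, a quick grayscale visualization can be produced with per-frame min-max normalization. Note the normalization scheme is an assumption for illustration: the raw depth units and value range are not documented here, and the dataset's own *_colored.png files may use a different colormap.

```python
import numpy as np

# Synthetic stand-in: in practice, depth = np.load(depth_path) from depth/.
depth = np.linspace(0.5, 3.0, 16, dtype=np.float32).reshape(4, 4)

# Per-frame min-max normalization to an 8-bit grayscale image
# (assumed scheme; not necessarily the dataset's own colormap)
d_min, d_max = depth.min(), depth.max()
depth_u8 = ((depth - d_min) / (d_max - d_min) * 255).astype(np.uint8)

print(depth_u8.min(), depth_u8.max())  # full 0-255 range after scaling
```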
Load Segmentation Masks
from huggingface_hub import hf_hub_download
from PIL import Image
import numpy as np

# Download and load mask
mask_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/masks/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
mask = Image.open(mask_path)
mask_array = np.array(mask)  # Binary mask: 0 = background, 255 = foreground
Load Annotated Images
from huggingface_hub import hf_hub_download
from PIL import Image

# Download and load annotated image with labels
annotated_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/annotated/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
annotated = Image.open(annotated_path)
Load COLMAP Sparse Reconstruction
from huggingface_hub import hf_hub_download
from plyfile import PlyData

# Download COLMAP files
cameras_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/cameras.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
images_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/images.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
points_txt_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/points3D.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)

# Download and load the point cloud
points_ply_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/points3D.ply",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
ply = PlyData.read(points_ply_path)

# Read the downloaded files
with open(cameras_path, 'r') as f:
    cameras_data = f.read()
with open(images_path, 'r') as f:
    images_data = f.read()
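The downloaded cameras.txt follows the standard COLMAP text format: comment lines start with "#", then one camera per line as CAMERA_ID MODEL WIDTH HEIGHT PARAMS... (for a PINHOLE model the params are fx, fy, cx, cy). A minimal parser sketch, demonstrated on an illustrative sample rather than this dataset's actual file:

```python
def parse_colmap_cameras(text):
    """Parse COLMAP cameras.txt: one camera per non-comment line,
    formatted as CAMERA_ID MODEL WIDTH HEIGHT PARAMS..."""
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank and comment lines
        parts = line.split()
        cam_id, model = int(parts[0]), parts[1]
        width, height = int(parts[2]), int(parts[3])
        params = [float(p) for p in parts[4:]]  # e.g. fx, fy, cx, cy for PINHOLE
        cameras[cam_id] = {"model": model, "width": width,
                           "height": height, "params": params}
    return cameras

# Sample in the standard COLMAP text format (values are illustrative,
# not taken from the dataset)
sample = """# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
1 PINHOLE 1920 1080 1400.0 1400.0 960.0 540.0
"""
cams = parse_colmap_cameras(sample)
print(cams[1]["model"], cams[1]["params"])
```

In practice you would call parse_colmap_cameras(cameras_data) on the file contents read above.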
3. Download Only Images and Sparse for 3D Gaussian Splatting
If you only need images and sparse reconstruction for 3DGS training (saving bandwidth and storage):
Recommended: Using Hugging Face Hub for Selective Download
from huggingface_hub import snapshot_download
def download_3dgs_data(dance_name, local_dir="./3dgs_data"):
    """
    Download only images and sparse reconstruction for 3DGS training.
    This saves significant bandwidth and storage by skipping depth maps,
    masks, and other data not needed for 3DGS.
    """
    print(f"Downloading {dance_name} images and sparse data...")
    snapshot_download(
        repo_id="normanisfine/test",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=[
            f"{dance_name}/*/images/*.png",  # Only RGB images
            f"{dance_name}/*/sparse/**/*"    # Only sparse reconstruction
        ]
    )
    print(f"✓ Downloaded to {local_dir}")
    print(f"  - Images: {local_dir}/{dance_name}/*/images/")
    print(f"  - Sparse: {local_dir}/{dance_name}/*/sparse/")
    return local_dir

# Example 1: Download all frames for one dance
data_dir = download_3dgs_data("AttitudePromenade", "./my_3dgs_data")

# Example 2: Download specific dance for training
import os
os.makedirs("./training_data", exist_ok=True)
download_3dgs_data("AttitudePromenade", "./training_data")
Alternative: Download a Single Frame Only
from huggingface_hub import snapshot_download
def download_single_frame(dance_name, frame_id, local_dir="./data"):
    """Download images and sparse data for a single camera frame"""
    snapshot_download(
        repo_id="normanisfine/test",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=[
            f"{dance_name}/{frame_id}/images/*.png",
            f"{dance_name}/{frame_id}/sparse/**/*"
        ]
    )
    print(f"✓ Downloaded {dance_name}/{frame_id}")
    return f"{local_dir}/{dance_name}/{frame_id}"

# Download only frame 0000001
frame_path = download_single_frame("AttitudePromenade", "0000001", "./single_frame")
4. Prepare for 3D Gaussian Splatting Training
After downloading images and sparse data:
# Your structure for 3DGS (e.g., for Nerfstudio/gsplat):
AttitudePromenade/
├── 0000001/
│   ├── images/          # Input images
│   └── sparse/
│       └── 0/           # COLMAP format camera parameters
├── 0000002/
│   ├── images/
│   └── sparse/
└── 0000003/
    ├── images/
    └── sparse/
# Ready to train! Example with Nerfstudio (its 3DGS method is named splatfacto in recent versions):
ns-train splatfacto --data AttitudePromenade/0000001 colmap
5. Complete Dataset Loader with Hugging Face Hub
from pathlib import Path
import numpy as np
from PIL import Image
from typing import Dict, List, Optional
from huggingface_hub import hf_hub_download
class DanceDatasetHub:
    """
    Dataset loader that downloads data on-demand from Hugging Face Hub.
    Only downloads files when you access them - efficient for large datasets!
    """

    def __init__(self, repo_id: str = "normanisfine/test", local_dir: Optional[str] = None):
        self.repo_id = repo_id
        self.repo_type = "dataset"
        self.local_dir = local_dir  # If None, uses default cache; otherwise downloads to the specified directory

    def load_image(self, dance: str, frame_id: str, image_id: str,
                   with_bg: bool = False) -> Image.Image:
        """Download and load a specific image"""
        folder = "images_with_bg" if with_bg else "images"
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/{folder}/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_depth(self, dance: str, frame_id: str, image_id: str,
                   with_bg: bool = False, colored: bool = False) -> np.ndarray:
        """Download and load depth map"""
        folder = "depth_with_bg" if with_bg else "depth"
        if colored:
            filename = f"{dance}/{frame_id}/{folder}/{image_id}_colored.png"
            path = hf_hub_download(
                repo_id=self.repo_id,
                filename=filename,
                repo_type=self.repo_type,
                local_dir=self.local_dir
            )
            return np.array(Image.open(path))
        else:
            filename = f"{dance}/{frame_id}/{folder}/{image_id}.npy"
            path = hf_hub_download(
                repo_id=self.repo_id,
                filename=filename,
                repo_type=self.repo_type,
                local_dir=self.local_dir
            )
            return np.load(path)

    def load_mask(self, dance: str, frame_id: str, image_id: str) -> Image.Image:
        """Download and load segmentation mask"""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/masks/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_annotated(self, dance: str, frame_id: str, image_id: str) -> Image.Image:
        """Download and load annotated image"""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/annotated/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_colmap_file(self, dance: str, frame_id: str, filename: str) -> str:
        """Download a COLMAP file (e.g., cameras.txt) and return its local path"""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/sparse/0/{filename}",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return path

    def load_frame_data(self, dance: str, frame_id: str,
                        image_id: str, include_depth: bool = True) -> Dict:
        """Load all data for a specific frame"""
        data = {
            'image': self.load_image(dance, frame_id, image_id),
            'mask': self.load_mask(dance, frame_id, image_id),
        }
        if include_depth:
            data['depth'] = self.load_depth(dance, frame_id, image_id)
        return data
# Usage Example 1: Download to default cache
dataset = DanceDatasetHub()
# Load a specific image (downloads only this file)
image = dataset.load_image("AttitudePromenade", "0000001", "0028")
print(f"Image size: {image.size}")
# Usage Example 2: Download to a specific directory
dataset = DanceDatasetHub(local_dir="./my_dataset")
# Load a specific image (downloads to ./my_dataset/AttitudePromenade/0000001/images/0028.png)
image = dataset.load_image("AttitudePromenade", "0000001", "0028")
print(f"Image size: {image.size}")
# Load depth map
depth = dataset.load_depth("AttitudePromenade", "0000001", "0028")
print(f"Depth shape: {depth.shape}")
# Load all data for one frame
data = dataset.load_frame_data("AttitudePromenade", "0000001", "0028")
print(f"Loaded: {list(data.keys())}")
# Get COLMAP files
cameras_path = dataset.load_colmap_file("AttitudePromenade", "0000001", "cameras.txt")
images_path = dataset.load_colmap_file("AttitudePromenade", "0000001", "images.txt")
print(f"COLMAP cameras: {cameras_path}")
Applications
- 3D Gaussian Splatting: Train 3DGS models using images + sparse reconstruction
- Novel View Synthesis: Generate new viewpoints using NeRF/3DGS
- Depth Estimation: Use as ground truth for depth estimation models
- Human Segmentation: Train segmentation models with provided masks
- Pose Estimation: Use COLMAP camera poses for multi-view geometry
- 4D Reconstruction: Reconstruct dynamic dance performances in 3D
Citation
If you use this dataset, please cite:
@dataset{dancenet3d_2024,
title={DanceNet3D: Multi-View Dance Dataset with Depth and 3D Reconstruction},
author={NYU Video Lab},
year={2024},
publisher={Hugging Face},
url={https://huggingface.co/datasets/normanisfine/test},
note={Project page: https://nyuvideolab.github.io/DanceNet3D/}
}
License
This dataset is released under the MIT License.