---
license: mit
task_categories:
- image-segmentation
- depth-estimation
- image-to-3d
tags:
- computer-vision
- depth
- segmentation
- colmap
- 3d-gaussian-splatting
- dance
size_categories:
- n<1K
---

# DanceNet3D Dataset

**DanceNet3D** is a comprehensive multi-view dataset featuring professional dance performances captured with synchronized cameras. The dataset provides high-quality RGB images, depth maps, segmentation masks, and COLMAP sparse 3D reconstructions, making it well suited to research in 3D human reconstruction, novel view synthesis, 3D Gaussian Splatting (3DGS), and dynamic scene understanding.

## Overview

The dataset captures ballet and contemporary dance movements across multiple camera viewpoints, providing:

- **Multi-view synchronized captures**: 3+ camera angles per dance sequence
- **Dense annotations**: images, depth maps, masks, and sparse 3D reconstructions
- **Professional choreography**: high-quality dance performances with complex movements
- **COLMAP reconstructions**: camera parameters and sparse point clouds ready for 3D reconstruction tasks

Ideal for researchers working on:

- 3D Gaussian Splatting and Neural Radiance Fields (NeRF)
- Human pose estimation and tracking
- Dynamic scene reconstruction
- Novel view synthesis
- Multi-view geometry

For more information, visit our project page: [https://nyuvideolab.github.io/DanceNet3D/dataset](https://nyuvideolab.github.io/DanceNet3D/dataset)

## Dataset Structure

The dataset is organized by dance sequence; each dance contains multiple camera views:

```
<dance_name>/
├── <frame_id>/                # e.g., 0000001 (Camera view 1)
│   ├── images/                # RGB images without background
│   ├── images_with_bg/        # RGB images with background
│   ├── annotated/             # Annotated/labeled images
│   ├── masks/                 # Segmentation masks (binary)
│   ├── depth/                 # Depth maps
│   │   ├── XXXX.npy           # Raw depth values (NumPy arrays)
│   │   └── XXXX_colored.png   # Depth visualizations
│   ├── depth_with_bg/         # Depth maps with background
│   │   ├── XXXX.npy
│   │   └── XXXX_colored.png
│   └── sparse/                # COLMAP sparse reconstruction
│       └── 0/
│           ├── cameras.txt    # Camera intrinsics
│           ├── images.txt     # Camera extrinsics/poses
│           ├── points3D.txt   # Sparse 3D points
│           └── points3D.ply   # 3D point cloud
├── <frame_id>/                # e.g., 0000002 (Camera view 2)
│   └── ...
└── <frame_id>/                # e.g., 0000003 (Camera view 3)
    └── ...
```

### Current Dances

- **AttitudePromenade**: 3 camera views, ~29 frames each

### Data Types

- **images/**: clean RGB images with the background removed (PNG)
- **images_with_bg/**: original RGB images with background (PNG)
- **annotated/**: images with annotations overlaid (PNG)
- **masks/**: binary segmentation masks for foreground/background (PNG)
- **depth/**: depth maps as NumPy arrays (`.npy`) plus colored visualizations (`_colored.png`)
- **depth_with_bg/**: depth maps including background geometry
- **sparse/**: COLMAP-format sparse 3D reconstruction data

## Usage

> **Important**: This dataset is large. We recommend using the Hugging Face Hub to download only the data you need rather than cloning the entire repository.

> **Download location**: By default, files are cached in `~/.cache/huggingface/hub/`. To download to a specific directory instead, add the `local_dir="./your_directory"` parameter to any download function (examples below).

### 1. Install Dependencies

```bash
pip install huggingface_hub pillow numpy plyfile
```

(`plyfile` is only needed for the point-cloud example below.)

### 2. Load Individual Data Types Using Hugging Face Hub

Each example below downloads a single file with `hf_hub_download()`; pass `local_dir` to control where it is saved.
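As an aside before the per-type examples: the files under `sparse/` use COLMAP's plain-text format, which is easy to parse without extra dependencies. Below is a minimal sketch of a `cameras.txt` parser, assuming the standard COLMAP text layout (`CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]` per line, `#` lines are comments); the sample values are illustrative, not taken from the dataset.

```python
def parse_cameras_txt(text):
    """Parse COLMAP's cameras.txt text format into a dict keyed by camera id."""
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        parts = line.split()
        cam_id, model = int(parts[0]), parts[1]
        width, height = int(parts[2]), int(parts[3])
        params = [float(p) for p in parts[4:]]  # e.g., fx fy cx cy for PINHOLE
        cameras[cam_id] = {"model": model, "width": width,
                           "height": height, "params": params}
    return cameras

# Example on a synthetic cameras.txt snippet (values are illustrative only):
sample = """# Camera list with one line of data per camera:
#   CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]
1 PINHOLE 1920 1080 1400.0 1400.0 960.0 540.0
"""
cams = parse_cameras_txt(sample)
print(cams[1]["model"], cams[1]["params"])
```

The same skip-comments-and-split pattern extends to `images.txt` (poses), though there every image uses two lines (pose line, then 2D point observations).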
#### Load RGB Images

```python
from huggingface_hub import hf_hub_download
from PIL import Image

# Download and load a specific image
dance_name = "AttitudePromenade"
frame_id = "0000001"
image_id = "0028"

# Option 1: Download to the default cache (recommended for small downloads)
image_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images/{image_id}.png",
    repo_type="dataset"
)
img = Image.open(image_path)

# Option 2: Download to a specific directory
image_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Saves to ./my_dataset/AttitudePromenade/0000001/images/0028.png
)
img = Image.open(image_path)

# Download the version with background
image_bg_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/images_with_bg/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
img_bg = Image.open(image_bg_path)
```

#### Load Depth Maps

```python
from huggingface_hub import hf_hub_download
import numpy as np
from PIL import Image

# Download and load raw depth values
depth_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/depth/{image_id}.npy",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
depth = np.load(depth_path)
print(f"Depth shape: {depth.shape}, dtype: {depth.dtype}")

# Download and load the depth visualization
depth_vis_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/depth/{image_id}_colored.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
depth_vis = Image.open(depth_vis_path)
```

#### Load Segmentation Masks

```python
from huggingface_hub import hf_hub_download
from PIL import Image
import numpy as np

# Download and load a mask
mask_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/masks/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
mask = Image.open(mask_path)
mask_array = np.array(mask)  # Binary mask: 0 = background, 255 = foreground
```

#### Load Annotated Images

```python
from huggingface_hub import hf_hub_download
from PIL import Image

# Download and load an annotated image with labels
annotated_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/annotated/{image_id}.png",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
annotated = Image.open(annotated_path)
```

#### Load COLMAP Sparse Reconstruction

```python
from huggingface_hub import hf_hub_download
from plyfile import PlyData

# Download the COLMAP files
cameras_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/cameras.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
images_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/images.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
points_txt_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/points3D.txt",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)

# Download and load the point cloud
points_ply_path = hf_hub_download(
    repo_id="normanisfine/test",
    filename=f"{dance_name}/{frame_id}/sparse/0/points3D.ply",
    repo_type="dataset",
    local_dir="./my_dataset"  # Optional: specify download directory
)
ply = PlyData.read(points_ply_path)

# Read the downloaded text files
with open(cameras_path, 'r') as f:
    cameras_data = f.read()
with open(images_path, 'r') as f:
    images_data = f.read()
```

### 3. Download Only Images and Sparse Data for 3D Gaussian Splatting

If you only need images and the sparse reconstruction for 3DGS training (saving bandwidth and storage):

#### Recommended: Using Hugging Face Hub for Selective Download

```python
from huggingface_hub import snapshot_download

def download_3dgs_data(dance_name, local_dir="./3dgs_data"):
    """
    Download only images and sparse reconstruction for 3DGS training.

    This saves significant bandwidth and storage by skipping depth maps,
    masks, and other data not needed for 3DGS.
    """
    print(f"Downloading {dance_name} images and sparse data...")
    snapshot_download(
        repo_id="normanisfine/test",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=[
            f"{dance_name}/*/images/*.png",  # Only RGB images
            f"{dance_name}/*/sparse/**/*"    # Only sparse reconstruction
        ]
    )
    print(f"✓ Downloaded to {local_dir}")
    print(f"  - Images: {local_dir}/{dance_name}/*/images/")
    print(f"  - Sparse: {local_dir}/{dance_name}/*/sparse/")
    return local_dir

# Example 1: Download all frames for one dance
data_dir = download_3dgs_data("AttitudePromenade", "./my_3dgs_data")

# Example 2: Download a specific dance for training
import os
os.makedirs("./training_data", exist_ok=True)
download_3dgs_data("AttitudePromenade", "./training_data")
```

**Alternative: Download a Single Frame Only**

```python
from huggingface_hub import snapshot_download

def download_single_frame(dance_name, frame_id, local_dir="./data"):
    """Download images and sparse data for a single camera frame."""
    snapshot_download(
        repo_id="normanisfine/test",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=[
            f"{dance_name}/{frame_id}/images/*.png",
            f"{dance_name}/{frame_id}/sparse/**/*"
        ]
    )
    print(f"✓ Downloaded {dance_name}/{frame_id}")
    return f"{local_dir}/{dance_name}/{frame_id}"

# Download only frame 0000001
frame_path = download_single_frame("AttitudePromenade", "0000001", "./single_frame")
```

### 4. Prepare for 3D Gaussian Splatting Training

After downloading images and sparse data:

```bash
# Your structure for 3DGS (e.g., for Nerfstudio/gsplat):
# AttitudePromenade/
# ├── 0000001/
# │   ├── images/     # Input images
# │   └── sparse/
# │       └── 0/      # COLMAP-format camera parameters
# ├── 0000002/
# │   ├── images/
# │   └── sparse/
# └── 0000003/
#     ├── images/
#     └── sparse/

# Ready to train! Example with Nerfstudio (its 3DGS method is named "splatfacto"):
ns-train splatfacto --data AttitudePromenade/0000001 colmap
```

### 5. Complete Dataset Loader with Hugging Face Hub

```python
from typing import Dict, Optional

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download


class DanceDatasetHub:
    """
    Dataset loader that downloads data on demand from the Hugging Face Hub.

    Only downloads files when you access them - efficient for large datasets!
    """

    def __init__(self, repo_id: str = "normanisfine/test",
                 local_dir: Optional[str] = None):
        self.repo_id = repo_id
        self.repo_type = "dataset"
        # None uses the default cache; otherwise files go to this directory
        self.local_dir = local_dir

    def load_image(self, dance: str, frame_id: str, image_id: str,
                   with_bg: bool = False) -> Image.Image:
        """Download and load a specific image."""
        folder = "images_with_bg" if with_bg else "images"
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/{folder}/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_depth(self, dance: str, frame_id: str, image_id: str,
                   with_bg: bool = False, colored: bool = False) -> np.ndarray:
        """Download and load a depth map."""
        folder = "depth_with_bg" if with_bg else "depth"
        if colored:
            filename = f"{dance}/{frame_id}/{folder}/{image_id}_colored.png"
            path = hf_hub_download(
                repo_id=self.repo_id,
                filename=filename,
                repo_type=self.repo_type,
                local_dir=self.local_dir
            )
            return np.array(Image.open(path))
        else:
            filename = f"{dance}/{frame_id}/{folder}/{image_id}.npy"
            path = hf_hub_download(
                repo_id=self.repo_id,
                filename=filename,
                repo_type=self.repo_type,
                local_dir=self.local_dir
            )
            return np.load(path)

    def load_mask(self, dance: str, frame_id: str, image_id: str) -> Image.Image:
        """Download and load a segmentation mask."""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/masks/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_annotated(self, dance: str, frame_id: str, image_id: str) -> Image.Image:
        """Download and load an annotated image."""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/annotated/{image_id}.png",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return Image.open(path)

    def load_colmap_file(self, dance: str, frame_id: str, filename: str) -> str:
        """Download a COLMAP file and return its local path."""
        path = hf_hub_download(
            repo_id=self.repo_id,
            filename=f"{dance}/{frame_id}/sparse/0/{filename}",
            repo_type=self.repo_type,
            local_dir=self.local_dir
        )
        return path

    def load_frame_data(self, dance: str, frame_id: str, image_id: str,
                        include_depth: bool = True) -> Dict:
        """Load all data for a specific frame."""
        data = {
            'image': self.load_image(dance, frame_id, image_id),
            'mask': self.load_mask(dance, frame_id, image_id),
        }
        if include_depth:
            data['depth'] = self.load_depth(dance, frame_id, image_id)
        return data


# Usage example 1: download to the default cache
dataset = DanceDatasetHub()

# Load a specific image (downloads only this file)
image = dataset.load_image("AttitudePromenade", "0000001", "0028")
print(f"Image size: {image.size}")

# Usage example 2: download to a specific directory
dataset = DanceDatasetHub(local_dir="./my_dataset")

# Load a specific image (saves to ./my_dataset/AttitudePromenade/0000001/images/0028.png)
image = dataset.load_image("AttitudePromenade", "0000001", "0028")
print(f"Image size: {image.size}")

# Load a depth map
depth = dataset.load_depth("AttitudePromenade", "0000001", "0028")
print(f"Depth shape: {depth.shape}")

# Load all data for one frame
data = dataset.load_frame_data("AttitudePromenade", "0000001", "0028")
print(f"Loaded: {list(data.keys())}")

# Get COLMAP files
cameras_path = dataset.load_colmap_file("AttitudePromenade", "0000001", "cameras.txt")
images_path = dataset.load_colmap_file("AttitudePromenade", "0000001", "images.txt")
print(f"COLMAP cameras: {cameras_path}")
```

## Applications

- **3D Gaussian Splatting**: train 3DGS models using images + sparse reconstruction
- **Novel View Synthesis**: generate new viewpoints using NeRF/3DGS
- **Depth Estimation**: use as ground truth for depth estimation models
- **Human Segmentation**: train segmentation models with the provided masks
- **Pose Estimation**: use COLMAP camera poses for multi-view geometry
- **4D Reconstruction**: reconstruct dynamic dance performances in 3D

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{dancenet3d_2024,
  title={DanceNet3D: Multi-View Dance Dataset with Depth and 3D Reconstruction},
  author={NYU Video Lab},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/normanisfine/test},
  note={Project page: https://nyuvideolab.github.io/DanceNet3D/}
}
```

## License

This dataset is released under the MIT License.
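As a small worked example of the depth-estimation and segmentation applications listed above: since the masks are binary (0 = background, 255 = foreground), they can restrict depth statistics to the dancer. The sketch below uses synthetic arrays in place of real data (which would come from `np.array(Image.open(mask_path))` and `np.load(depth_path)`):

```python
import numpy as np

# Synthetic stand-ins for a mask PNG and a depth .npy from the dataset
depth = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)  # 0 = background, 255 = foreground

foreground = mask > 0         # boolean foreground mask
fg_depth = depth[foreground]  # depth values on the dancer only
print(fg_depth.mean())        # mean foreground depth
```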