---
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
      - Student
      - Research Graduate
      - AI researcher
      - AI developer/engineer
      - Reporter
      - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
tags:
- facebook
- meta-pytorch
pipeline_tag: image-to-3d
license: other
license_name: vggt-aup-license
license_link: https://huggingface.co/facebook/VGGT-1B-Commercial/blob/main/LICENSE
---
|
<div align="center">
<h1>VGGT: Visual Geometry Grounded Transformer</h1>

<a href="https://jytime.github.io/data/VGGT_CVPR25.pdf" target="_blank" rel="noopener noreferrer">
  <img src="https://img.shields.io/badge/Paper-VGGT" alt="Paper PDF">
</a>
<a href="https://arxiv.org/abs/2503.11651"><img src="https://img.shields.io/badge/arXiv-2503.11651-b31b1b" alt="arXiv"></a>
<a href="https://vgg-t.github.io/"><img src="https://img.shields.io/badge/Project_Page-green" alt="Project Page"></a>
<a href="https://huggingface.co/spaces/facebook/vggt"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue" alt="Hugging Face Demo"></a>

**[Meta AI Research](https://ai.facebook.com/research/)**; **[University of Oxford, VGG](https://www.robots.ox.ac.uk/~vgg/)**

[Jianyuan Wang](https://jytime.github.io/), [Minghao Chen](https://silent-chen.github.io/), [Nikita Karaev](https://nikitakaraevv.github.io/),
[Andrea Vedaldi](https://www.robots.ox.ac.uk/~vedaldi/), [Christian Rupprecht](https://chrirupp.github.io/), [David Novotny](https://d-novotny.github.io/)
</div>
|
**This Hugging Face repository provides a model checkpoint licensed for commercial use, with the exception of military applications. Refer to the LICENSE file for full terms.** |
|
## Overview |
|
Visual Geometry Grounded Transformer (VGGT, CVPR 2025) is a feed-forward neural network that directly infers all key 3D attributes of a scene, including extrinsic and intrinsic camera parameters, point maps, depth maps, and 3D point tracks, from one, a few, or hundreds of its views, within seconds. |
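
To make this single-pass interface concrete, here is a minimal sketch following the usage shown in the GitHub repository linked below. The class and helper names (`VGGT`, `load_and_preprocess_images`) and the autocast pattern come from that repository's README; the image paths are hypothetical, and loading this commercial checkpoint by its Hub id via `from_pretrained` is an assumption to verify against the repo.

```python
import torch
from vggt.models.vggt import VGGT
from vggt.utils.load_fn import load_and_preprocess_images

# Assumes a CUDA GPU, as in the repository's example.
device = "cuda" if torch.cuda.is_available() else "cpu"
# bfloat16 on Ampere or newer GPUs; float16 otherwise (per the repo README).
dtype = torch.bfloat16 if device == "cuda" and torch.cuda.get_device_capability()[0] >= 8 else torch.float16

# Assumption: this repo's checkpoint loads like the research one,
# whose README uses facebook/VGGT-1B.
model = VGGT.from_pretrained("facebook/VGGT-1B-Commercial").to(device)

# One, a few, or hundreds of views of the same scene (hypothetical paths).
images = load_and_preprocess_images(["scene/view_01.png", "scene/view_02.png"]).to(device)

with torch.no_grad():
    with torch.cuda.amp.autocast(dtype=dtype):
        # A single forward pass predicts cameras, depth maps,
        # point maps, and point tracks for all input views.
        predictions = model(images)
```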
|
## Quick Start |
|
Please refer to our [GitHub repository](https://github.com/facebookresearch/vggt) for setup and detailed usage instructions.
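
Continuing the sketch from the Overview section, the repository also provides a helper to convert the predicted pose encoding into conventional extrinsic and intrinsic camera matrices (OpenCV "camera from world" convention, per the repo's docs). The `"pose_enc"` output key is taken from the repo's examples and may differ between versions.

```python
from vggt.utils.pose_enc import pose_encoding_to_extri_intri

# `predictions` and `images` come from the sketch in the Overview section.
# "pose_enc" is the output key used in the repository's examples (assumption).
extrinsic, intrinsic = pose_encoding_to_extri_intri(
    predictions["pose_enc"], images.shape[-2:]
)
print(extrinsic.shape, intrinsic.shape)  # per-view extrinsics and intrinsics
```

Depth maps, point maps, and their confidence maps are exposed under similarly named keys of `predictions`; check the repository for the authoritative list.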
|
## Citation |
|
If you find our repository useful, please consider giving it a star ⭐ and citing our paper in your work: |
|
```bibtex
@inproceedings{wang2025vggt,
  title={VGGT: Visual Geometry Grounded Transformer},
  author={Wang, Jianyuan and Chen, Minghao and Karaev, Nikita and Vedaldi, Andrea and Rupprecht, Christian and Novotny, David},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2025}
}
```