---
license: other
license_name: nvidia-community-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- nvidia
- VLM
- OCR
---

# NVIDIA Nemotron Parse v1.1 Overview

NVIDIA Nemotron Parse v1.1 is designed to understand document semantics and extract text and table elements with spatial grounding. Given an image, NVIDIA Nemotron Parse v1.1 produces structured annotations, including formatted text, bounding boxes, and the corresponding semantic classes, ordered according to the document's reading flow. It overcomes the shortcomings of traditional OCR technologies that struggle with complex, structurally variable document layouts, and helps transform unstructured documents into actionable, machine-usable representations. This has several downstream benefits, such as increasing the availability of training data for Large Language Models (LLMs), improving the accuracy of extractor, curator, retriever, and agentic AI applications, and enhancing document understanding pipelines.

This model is ready for commercial use.

## License 
GOVERNING TERMS: The NIM container is governed by the [NVIDIA Software License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/) and [Product-Specific Terms for NVIDIA AI Products](https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/). Use of this model is governed by the [NVIDIA Community Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/). Use of the tokenizer included in this model is governed by the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).


## Deployment Geography:
Global

## Use Case:
NVIDIA Nemotron Parse v1.1 provides comprehensive text understanding and document structure understanding. It is intended for retriever and curator solutions; its text-extraction datasets and capabilities support LLM and VLM training and improve the run-time inference accuracy of VLMs.
The model performs text extraction from PDF and PPT documents, classifies the objects in a given document (title, section, caption, index, footnote, list, table, bibliography, image), and provides bounding boxes with coordinates.


## Release Date:
November 17, 2025 


## References 
* [1] mBART: https://huggingface.co/docs/transformers/en/model_doc/mbart


## Model Architecture 

### Architecture Type:
Transformer-based vision-encoder-decoder model

### Network Architecture 
* Vision Encoder: ViT-H model (https://huggingface.co/nvidia/C-RADIO)<br>
* Adapter Layer: 1D convolutions & norms that compress the dimensionality and sequence length of the latent space (13,184 tokens to 3,201 tokens); see the sketch below<br>
* Decoder: mBART [1], 10 blocks<br>
* Tokenizer: Use of the tokenizer included in this model is governed by the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/)<br>
* Number of Parameters: < 1B<br>
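
To make the adapter concrete, below is a minimal sketch of how stacked 1D convolutions with normalization can jointly reduce the embedding dimensionality and the token count of the encoder's latent sequence. The layer count, channel sizes, kernel sizes, and strides are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class ConvAdapter(nn.Module):
    """Hypothetical adapter: 1D convolutions + norms that compress the
    encoder output before it is fed to the decoder. All dimensions are
    illustrative, chosen only to show the mechanism."""

    def __init__(self, in_dim: int = 1280, out_dim: int = 1024):
        super().__init__()
        # Each stride-2 convolution roughly halves the sequence length.
        self.conv1 = nn.Conv1d(in_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.norm1 = nn.LayerNorm(out_dim)
        self.conv2 = nn.Conv1d(out_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.norm2 = nn.LayerNorm(out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_dim); Conv1d expects (batch, channels, seq_len)
        x = self.norm1(self.conv1(x.transpose(1, 2)).transpose(1, 2))
        x = self.norm2(self.conv2(x.transpose(1, 2)).transpose(1, 2))
        return x

tokens = torch.randn(1, 13184, 1280)   # encoder output with hypothetical dims
print(ConvAdapter()(tokens).shape)     # torch.Size([1, 3296, 1024]) -- close to the ~4x reduction above
```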


## Computational Load
**Cumulative Compute:** 2.2e+22 <br>
**Estimated Energy and Emissions for Model Training:** 
Energy Consumption: 7,827.46 kWh <br>
Carbon Emissions: 3.21 tCO2e <br>

### Input 
* Input Type(s): Image, Text<br>
* Input Format(s): Red, Green, Blue (RGB) + Prompt (String)
* Input Parameters: Image (2D), Text (1D)
* Other Properties Related to Input:
  * Max Input Resolution (Width, Height): 1648, 2048
  * Min Input Resolution (Width, Height): 1024, 1280
  * Channel Count: 3
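
When source pages fall outside these resolution bounds, they may need rescaling before inference. The helper below is a minimal sketch of one way to do that; it is an assumption for illustration, since the bundled `AutoProcessor` may already handle resizing.

```python
from PIL import Image

MIN_W, MIN_H = 1024, 1280   # minimum supported input resolution
MAX_W, MAX_H = 1648, 2048   # maximum supported input resolution

def fit_resolution(image: Image.Image) -> Image.Image:
    """Uniformly rescale an image so it fits within the supported range.
    Extreme aspect ratios may additionally require padding or cropping."""
    scale = min(MAX_W / image.width, MAX_H / image.height, 1.0)    # shrink oversized pages
    scale = max(scale, MIN_W / image.width, MIN_H / image.height)  # grow undersized pages
    new_size = (round(image.width * scale), round(image.height * scale))
    return image.resize(new_size, Image.LANCZOS)
```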

### Output 
* Output Type: Text<br>
* Output Format: String
* Output Parameters: 1D
* Other Properties Related to Output:
  * The NVIDIA Nemotron Parse v1.1 output is a string that encodes text content (formatted or unformatted) together with bounding boxes and class attributes. In the default prompt setting, text content is represented as markdown and math expressions as LaTeX, enclosed in \[..\] or \(..\). If a mathematical expression does not require LaTeX formatting (e.g., it consists only of characters and subscripts/superscripts), it is represented as markdown. Tables are represented as LaTeX.
  * Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.<br>
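
Since many markdown renderers expect dollar-sign math delimiters rather than \[..\] and \(..\), a small conversion step can be useful downstream. This is a convenience sketch for illustration, not part of the model's own postprocessing:

```python
import re

def latex_delims_to_dollars(text: str) -> str:
    # Convert \[..\] display math and \(..\) inline math to $$..$$ and $..$.
    text = re.sub(r"\\\[(.+?)\\\]", r"$$\1$$", text, flags=re.DOTALL)
    return re.sub(r"\\\((.+?)\\\)", r"$\1$", text, flags=re.DOTALL)

print(latex_delims_to_dollars(r"Euler's identity: \(e^{i\pi} + 1 = 0\)"))
# Euler's identity: $e^{i\pi} + 1 = 0$
```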

## Software Integration:

Runtime Engine(s): TensorRT-LLM

Supported Hardware Microarchitecture Compatibility: <br>
NVIDIA Hopper/NVIDIA Ampere/NVIDIA Turing<br>

Supported Operating System(s): Linux<br>

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.<br>

## Model Version:

V1.1

## Quick Start

### Install dependencies

```bash
pip install -r requirements.txt
```

### Usage example

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, AutoTokenizer, GenerationConfig

# Load model, tokenizer, and processor
model_path = "nvidia/NVIDIA-Nemotron-Parse-v1.1"  # Or use a local path
device = "cuda:0"

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Load the image and build the task prompt
image = Image.open("path/to/your/image.jpg")
task_prompt = "</s><s><predict_bbox><predict_classes><output_markdown>"

# Preprocess the image and prompt
inputs = processor(images=[image], text=task_prompt, return_tensors="pt").to(device)

# Generate the structured annotation string
generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode the generated text
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
```
### Postprocessing

```python
from PIL import Image, ImageDraw
from postprocessing import extract_classes_bboxes, transform_bbox_to_original, postprocess_text

# Parse the raw output string into classes, bounding boxes, and text segments
classes, bboxes, texts = extract_classes_bboxes(generated_text)
# Map the predicted boxes back to the original image coordinates
bboxes = [transform_bbox_to_original(bbox, image.width, image.height) for bbox in bboxes]

# Specify output formats for postprocessing
table_format = 'latex'          # latex | HTML | markdown
text_format = 'markdown'        # markdown | plain
blank_text_in_figures = False   # remove text inside the 'Picture' class
texts = [
    postprocess_text(
        text,
        cls=cls,
        table_format=table_format,
        text_format=text_format,
        blank_text_in_figures=blank_text_in_figures,
    )
    for text, cls in zip(texts, classes)
]

for cls, text in zip(classes, texts):
    print(cls, ':', text)

# Draw the predicted layout boxes onto the image
draw = ImageDraw.Draw(image)
for bbox in bboxes:
    draw.rectangle((bbox[0], bbox[1], bbox[2], bbox[3]), outline="red")
```
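
To inspect the layout predictions, the annotated page can then be written to disk (the filename below is arbitrary):

```python
# Save the page with the predicted layout boxes drawn on it
image.save("annotated_page.png")
```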

## Inference with vLLM

### Install dependencies

```bash
uv venv --python 3.12 --seed
source .venv/bin/activate
uv pip install "git+https://github.com/amalad/vllm.git@nemotron_parse"
uv pip install timm albumentations
```

### Inference example

```python
from vllm import LLM, SamplingParams
from PIL import Image


sampling_params = SamplingParams(
    temperature=0,
    top_k=1,
    repetition_penalty=1.1,
    max_tokens=9000,
    skip_special_tokens=False,
)

llm = LLM(
    model="nvidia/NVIDIA-Nemotron-Parse-v1.1",
    max_num_seqs=64,
    limit_mm_per_prompt={"image": 1},
    dtype="bfloat16",
    trust_remote_code=True,
)

image = Image.open("<YOUR-IMAGE-PATH>")

prompts = [
    {  # Implicit prompt
        "prompt": "</s><s><predict_bbox><predict_classes><output_markdown>",
        "multi_modal_data": {
            "image": image
        },
    },
    {  # Explicit encoder/decoder prompt
        "encoder_prompt": {
            "prompt": "",
            "multi_modal_data": {
                "image": image
            },
        },
        "decoder_prompt": "</s><s><predict_bbox><predict_classes><output_markdown>",
    },
]

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Decoder prompt: {prompt!r}, Generated text: {generated_text!r}")
```
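
The strings generated through vLLM use the same serialization as the Transformers path above, so they can be run through the same postprocessing helpers, assuming `postprocessing.py` from this repository is importable:

```python
from postprocessing import extract_classes_bboxes, transform_bbox_to_original

# generated_text holds the last decoded string from the loop above
classes, bboxes, texts = extract_classes_bboxes(generated_text)
bboxes = [transform_bbox_to_original(bbox, image.width, image.height) for bbox in bboxes]
```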

Nemotron-Parse-v1.1 is also available as an [optimized NIM container](https://build.nvidia.com/nvidia/nemotron-parse).

## Training, Testing, and Evaluation Datasets:


### Training Dataset 

NVIDIA Nemotron Parse v1.1 is first pre-trained on our internal datasets: human, synthetic, and automated.

Data Modality:
* Text
* Image<br>

Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated<br>
Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated

### Testing and Evaluation Dataset:

NVIDIA Nemotron Parse v1.1 is evaluated on multiple datasets for robustness, including public and internal datasets.

Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated<br>
Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated


## Inference 

Runtime Engine(s): TensorRT-LLM

Test Hardware: NVIDIA H100

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please report security vulnerabilities or NVIDIA AI Concerns here.

**You are responsible for ensuring that your use of NVIDIA AI Models complies with all applicable laws.**


## Enterprise Support
Get access to knowledge base articles and support cases or [submit a ticket](https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/support/).