# nemotron-parse

## Overview

nemotron-parse is a general-purpose text-extraction model designed specifically for documents. Given an image, nemotron-parse extracts formatted text, along with bounding boxes and the corresponding semantic class of each region. This has downstream benefits for several tasks, such as increasing the availability of training data for Large Language Models (LLMs), improving the accuracy of retriever systems, and enhancing document-understanding pipelines. This model is ready for commercial use.

## License

GOVERNING TERMS: The NIM container is governed by the [NVIDIA Software License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement/) and the [Product-Specific Terms for NVIDIA AI Products](https://www.nvidia.com/en-us/agreements/enterprise-software/product-specific-terms-for-ai-products/). Use of this model is governed by the [NVIDIA Community Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-community-models-license/). Use of the tokenizer included in this model is governed by the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/).

## Deployment Geography

Global

## Use Case

nemotron-parse performs comprehensive text and document-structure understanding. It is intended for retriever and curator solutions, and its text-extraction datasets and capabilities support LLM and VLM training as well as improved run-time inference accuracy of VLMs. The model extracts text from PDF and PPT documents, classifies the objects in a given document (title, section, caption, index, footnote, lists, tables, bibliography, image), and provides bounding boxes with coordinates.

## Release Date

November 17, 2025

## References

1. mBART: https://huggingface.co/docs/transformers/en/model_doc/mbart

## Model Architecture

### Architecture Type

Transformer-based vision-encoder-decoder model

### Network Architecture

* Vision Encoder: ViT-H model ([C-RADIO](https://huggingface.co/nvidia/C-RADIO))
* Adapter Layer: 1D convolutions and norms that compress the dimensionality and sequence length of the latent space (13184 tokens to 3201 tokens); see the sketch after this list
* Decoder: mBART [1], 10 blocks
* Tokenizer: Use of the tokenizer included in this model is governed by the [CC-BY-4.0 license](https://creativecommons.org/licenses/by/4.0/)
* Number of Parameters: < 1B
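To make the adapter's role concrete, here is a minimal sketch of the idea in PyTorch. It is an illustration only: the layer widths, kernel sizes, and strides below are hypothetical, not the released model's actual configuration.

```python
import torch
from torch import nn

class CompressionAdapter(nn.Module):
    """Illustrative adapter: stacked 1D convolutions and norms that reduce
    both the hidden size and the sequence length of the encoder output.
    All hyperparameters below are hypothetical."""

    def __init__(self, in_dim: int = 1280, out_dim: int = 1024):
        super().__init__()
        self.conv1 = nn.Conv1d(in_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.norm1 = nn.LayerNorm(out_dim)
        self.conv2 = nn.Conv1d(out_dim, out_dim, kernel_size=3, stride=2, padding=1)
        self.norm2 = nn.LayerNorm(out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_dim); Conv1d expects (batch, channels, seq_len)
        x = self.norm1(self.conv1(x.transpose(1, 2)).transpose(1, 2))
        x = self.norm2(self.conv2(x.transpose(1, 2)).transpose(1, 2))
        return x
```

Two stride-2 convolutions yield a roughly 4x sequence-length reduction, comparable in spirit to the 13184-to-3201-token compression described above.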
## Computational Load

**Cumulative Compute:** 2.2e+22
**Estimated Energy and Emissions for Model Training:**

* Energy Consumption: 7,827.46 kWh
* Carbon Emissions: 3.21 tCO2e
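As a quick arithmetic cross-check of the two figures above (not an additional reported metric), the implied average emission factor is:

$$
\frac{3.21\ \text{tCO}_2\text{e}}{7{,}827.46\ \text{kWh}} = \frac{3{,}210\ \text{kgCO}_2\text{e}}{7{,}827.46\ \text{kWh}} \approx 0.41\ \text{kgCO}_2\text{e/kWh}
$$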
### Input

* Input Type(s): Image, Text
* Input Format(s): Red, Green, Blue (RGB) image + Prompt (String)
* Input Parameters: 2D (image), 1D (text)
* Other Properties Related to Input:
  * Max Input Resolution (Width, Height): 1648, 2048
  * Min Input Resolution (Width, Height): 1024, 1280
  * Channel Count: 3
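If you prepare images yourself, a minimal sketch for honoring these bounds is shown below; `fit_resolution` is a hypothetical helper, not part of the model package, and the bundled processor presumably performs its own resizing.

```python
from PIL import Image

# Documented input bounds from the list above.
MIN_W, MIN_H = 1024, 1280
MAX_W, MAX_H = 1648, 2048

def fit_resolution(image: Image.Image) -> Image.Image:
    """Hypothetical helper: scale an image so both dimensions fall inside the
    documented min/max input resolution. Extreme aspect ratios may still need
    padding or cropping, which is not handled here."""
    image = image.convert("RGB")  # enforce the 3-channel requirement
    w, h = image.size
    scale = min(MAX_W / w, MAX_H / h, 1.0)    # shrink if too large ...
    scale = max(scale, MIN_W / w, MIN_H / h)  # ... grow if too small
    if scale != 1.0:
        image = image.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    return image
```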
### Output

* Output Type: Text
* Output Format: String
* Output Parameters: 1D
* Other Properties Related to Output: the output is a single string that encodes the text content (formatted or not), along with bounding boxes and class attributes.
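For illustration, mapping a predicted box back to pixel space could look like the sketch below. It assumes boxes are emitted as `(x1, y1, x2, y2)` normalized to `[0, 1]`; that normalization is an assumption here, and the repository's `postprocessing` module (used in the Quick Start below) provides the actual `transform_bbox_to_original`.

```python
def to_pixel_bbox(bbox, width, height):
    """Hypothetical helper: map an (x1, y1, x2, y2) box normalized to [0, 1]
    onto pixel coordinates of the original image. Illustration only; the
    repository ships its own transform_bbox_to_original."""
    x1, y1, x2, y2 = bbox
    return (x1 * width, y1 * height, x2 * width, y2 * height)
```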
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration

* Runtime Engine(s): TensorRT-LLM
* Supported Hardware Microarchitecture Compatibility: NVIDIA Hopper, NVIDIA Ampere, NVIDIA Turing
* Supported Operating System(s): Linux
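As a minimal pre-flight sketch (hypothetical, not part of the NIM container), you can check whether the local GPU matches one of the listed microarchitectures via its CUDA compute capability:

```python
import torch

# Map CUDA compute capabilities to microarchitecture families:
# Turing is 7.5; Ampere is 8.0/8.6; Hopper is 9.0. (8.9 is Ada Lovelace,
# which is not in the supported list above.)
CAPABILITY_TO_ARCH = {(7, 5): "Turing", (8, 0): "Ampere", (8, 6): "Ampere", (9, 0): "Hopper"}

major, minor = torch.cuda.get_device_capability(0)
arch = CAPABILITY_TO_ARCH.get((major, minor))
print(f"GPU 0: compute capability {major}.{minor} -> {arch or 'not in the supported list'}")
```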
The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
## Model Version

V1.1

## Quick Start

### Install dependencies

```bash
pip install -r requirements.txt
```

### Usage example

```python
import torch
from PIL import Image, ImageDraw
from transformers import AutoModel, AutoProcessor, AutoTokenizer, GenerationConfig

from postprocessing import extract_classes_bboxes, transform_bbox_to_original, postprocess_text

# Load model and processor
model_path = "nvidia/NVIDIA-Nemotron-Parse-v1.1"  # or use a local path
device = "cuda:0"
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to(device).eval()
tokenizer = AutoTokenizer.from_pretrained(model_path)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Load image and prompt
image = Image.open("path/to/your/image.jpg")
task_prompt = ""

# Preprocess the image and prompt
inputs = processor(images=[image], text=task_prompt, return_tensors="pt").to(device)
generation_config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)

# Generate text
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode the output and split it into classes, bounding boxes, and text segments
generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
classes, bboxes, texts = extract_classes_bboxes(generated_text)
bboxes = [transform_bbox_to_original(bbox, image.width, image.height) for bbox in bboxes]

# Specify output formats for postprocessing
table_format = "latex"         # latex | HTML | markdown
text_format = "markdown"       # markdown | plain
blank_text_in_figures = False  # remove text inside the 'Picture' class
texts = [
    postprocess_text(
        text,
        cls=cls,
        table_format=table_format,
        text_format=text_format,
        blank_text_in_figures=blank_text_in_figures,
    )
    for text, cls in zip(texts, classes)
]

# Print each detected region and draw its bounding box on the image
for cl, bb, txt in zip(classes, bboxes, texts):
    print(cl, ": ", txt)

draw = ImageDraw.Draw(image)
for bbox in bboxes:
    draw.rectangle((bbox[0], bbox[1], bbox[2], bbox[3]), outline="red")
```

## Training, Testing, and Evaluation Datasets

### Training Dataset

nemotron-parse is first pre-trained on our internal datasets: human, synthetic, and automated.

* Data Modality: Text, Image
* Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated
* Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated

### Testing and Evaluation Datasets

nemotron-parse is evaluated on multiple datasets for robustness, including public and internal datasets.

* Data Collection Method by Dataset: Hybrid: Human, Synthetic, Automated
* Labeling Method by Dataset: Hybrid: Human, Synthetic, Automated

## Inference

* Runtime Engine(s): TensorRT-LLM
* Test Hardware: NVIDIA H100

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI concerns here.

**You are responsible for ensuring that your use of NVIDIA AI Models complies with all applicable laws.**

## Enterprise Support

Get access to knowledge base articles and support cases, or [submit a ticket](https://www.nvidia.com/en-us/data-center/products/ai-enterprise-suite/support/).