---
license: mit
license_name: deepseek
license_link: LICENSE
pipeline_tag: any-to-any
library_name: transformers
tags:
- multimodal
- text-to-image
- unified-model
---

## 1. Introduction

Janus-Pro is a novel autoregressive framework that unifies multimodal understanding and generation. 
It addresses the limitations of previous approaches by decoupling visual encoding into separate pathways, while still utilizing a single, unified transformer architecture for processing. The decoupling not only alleviates the conflict between the visual encoder’s roles in understanding and generation, but also enhances the framework’s flexibility. 
Janus-Pro surpasses previous unified models and matches or exceeds the performance of task-specific models. 
The simplicity, high flexibility, and effectiveness of Janus-Pro make it a strong candidate for next-generation unified multimodal models.
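
In code terms, "decoupled visual encoding" means an image takes a different route into or out of the model depending on the task, while a single autoregressive transformer handles all sequence processing. A schematic sketch of that data flow (hypothetical class and method names, for illustration only; this is not the actual implementation):

```python
class JanusStyleModel:
    """Schematic sketch only: two visual pathways, one shared transformer."""

    def __init__(self, understanding_encoder, generation_tokenizer, transformer):
        # Pathway 1: continuous features for understanding (e.g. a ViT encoder)
        self.understanding_encoder = understanding_encoder
        # Pathway 2: discrete VQ codes for generation (encode + decode)
        self.generation_tokenizer = generation_tokenizer
        # One unified autoregressive transformer serves both tasks
        self.transformer = transformer

    def understand(self, image, text_tokens):
        # Image -> continuous embeddings -> joint sequence -> text answer
        visual_embeds = self.understanding_encoder(image)
        return self.transformer(visual_embeds, text_tokens)

    def generate_image(self, text_tokens):
        # Text -> autoregressively sampled image tokens -> pixels
        image_tokens = self.transformer.sample(text_tokens)
        return self.generation_tokenizer.decode(image_tokens)
```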

[**GitHub Repository**](https://github.com/deepseek-ai/Janus)

<div align="center">
<img alt="image" src="janus_pro_teaser1.png" style="width:90%;">
</div>

<div align="center">
<img alt="image" src="janus_pro_teaser2.png" style="width:90%;">
</div>


## 2. Model Summary

Janus-Pro is a unified understanding and generation MLLM that decouples visual encoding for multimodal understanding and generation. 
It is built on DeepSeek-LLM-1.5b-base and DeepSeek-LLM-7b-base.

For multimodal understanding, it uses [SigLIP-L](https://huggingface.co/timm/ViT-L-16-SigLIP-384) as the vision encoder, which supports 384 x 384 image input. For image generation, Janus-Pro uses the [LlamaGen](https://github.com/FoundationVision/LlamaGen) tokenizer with a downsample rate of 16.
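
With a downsample rate of 16, a 384 x 384 image maps to a 24 x 24 grid of discrete tokens, i.e. 576 image tokens per image. A quick sanity check of that arithmetic (the snippet is purely illustrative):

```python
# Purely illustrative: token grid implied by the stated
# 384x384 resolution and downsample rate of 16.
resolution = 384
downsample_rate = 16

grid_side = resolution // downsample_rate  # 384 / 16 = 24
num_image_tokens = grid_side ** 2          # 24 * 24 = 576 tokens per image
print(grid_side, num_image_tokens)         # -> 24 576
```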

## 3. Usage Examples

### Single Image Inference

Here is an example of visual understanding with a single image.

```python
import torch
from transformers import JanusForConditionalGeneration, JanusProcessor

model_id = "deepseek-community/Janus-Pro-1B"

# Prepare input for generation
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
            {"type": "text", "text": "What do you see in this image?"}
        ]
    },
]

# Load the processor and model
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Set generation_mode="text" to perform text generation
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    generation_mode="text",
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=40, generation_mode="text", do_sample=True)
text = processor.decode(output[0], skip_special_tokens=True)
print(text)
```
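
The example above fetches the image from a URL inside the chat template. To run the same pipeline on a local file, recent versions of transformers also accept a `path` key in the image content entry; a minimal sketch reusing the `processor` and `model` loaded above (the file name is a placeholder, and the `path` key is an assumption about recent transformers chat templates):

```python
# Same flow as above, but reading the image from disk.
# "./my_image.jpg" is a placeholder path.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "path": "./my_image.jpg"},
            {"type": "text", "text": "Describe this image."}
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    generation_mode="text",
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=40, generation_mode="text", do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```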

### Text-to-Image Generation

Janus-Pro can also generate images from text prompts by setting the generation mode to `image`, as shown below.

```python
import torch
from transformers import JanusForConditionalGeneration, JanusProcessor

model_id = "deepseek-community/Janus-Pro-1B"

# Load processor and model
processor = JanusProcessor.from_pretrained(model_id)
model = JanusForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "A dog running under the rain."}
        ]
    }
]

# Apply chat template
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    text=prompt,
    generation_mode="image",
    return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

# Set number of images to generate
model.generation_config.num_return_sequences = 2

outputs = model.generate(
    **inputs,
    generation_mode="image",
    do_sample=True,
    use_cache=True
)

# Decode and save images
decoded_image = model.decode_image_tokens(outputs)
images = processor.postprocess(list(decoded_image.float()), return_tensors="PIL.Image.Image")

for i, image in enumerate(images["pixel_values"]):
    image.save(f"image{i}.png")
```
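
Since sampling is stochastic (`do_sample=True`), each run yields different images. For reproducible outputs, you can seed PyTorch's RNG before calling `generate`; a minimal sketch (the seed value is arbitrary):

```python
import torch

# Seed the RNG so the sampled image tokens (and thus the images)
# are deterministic across runs on the same hardware.
torch.manual_seed(42)

outputs = model.generate(
    **inputs,
    generation_mode="image",
    do_sample=True,
    use_cache=True
)
```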

## 4. License

This code repository is licensed under [the MIT License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-CODE). The use of Janus-Pro models is subject to the [DeepSeek Model License](https://github.com/deepseek-ai/DeepSeek-LLM/blob/HEAD/LICENSE-MODEL).

## 5. Citation

```bibtex
@article{chen2025janus,
  title={Janus-Pro: Unified Multimodal Understanding and Generation with Data and Model Scaling},
  author={Chen, Xiaokang and Wu, Zhiyu and Liu, Xingchao and Pan, Zizheng and Liu, Wen and Xie, Zhenda and Yu, Xingkai and Ruan, Chong},
  journal={arXiv preprint arXiv:2501.17811},
  year={2025}
}
```

## 6. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).