---
license: apache-2.0
language:
- en
metrics:
- recall
base_model:
- friedrichor/Unite-Base-Qwen2-VL-7B
tags:
- sentence-transformers
- sentence-similarity
- transformers
- multimodal
- retrieval
- feature-extraction
- image-text-to-text
- video-text-to-text
- any-to-any
datasets:
- friedrichor/Unite-Instruct-Retrieval-Train
---

## Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![arXiv](https://img.shields.io/badge/arXiv-2505.19650-b31b1b.svg)](https://huggingface.co/papers/2505.19650)
[![GitHub](https://img.shields.io/badge/GitHub-UNITE-4b32c3?logo=github)](https://github.com/friedrichor/UNITE)
[![Project](https://img.shields.io/badge/🌐%20Project-Website-green)](https://friedrichor.github.io/projects/UNITE)
[![HuggingFace](https://img.shields.io/badge/🤗%20HuggingFace-Collections-yellow)](https://huggingface.co/collections/friedrichor/unite-682da30c4540abccd3da3a6b)

<p align="center">
    <img src="https://raw.githubusercontent.com/friedrichor/UNITE/main/assets/overall_task.png" alt="task" width="80%">
</p>

## UNITE: UNIversal mulTimodal Embedder

<p align="center">
    <img src="https://raw.githubusercontent.com/friedrichor/UNITE/main/assets/overall_model.png" alt="model_arch" width="100%">
</p>

**Supported Modalities and Tasks:**  
**Unified Multimodal Representations:** *text*, *image*, *video*, and *their fusion*.  
⚡ **Enhancements in Diverse Tasks:** *coarse-grained retrieval*, *fine-grained retrieval* (UNITE-Base recommended), and *instruction-based retrieval* (UNITE-Instruct recommended).

## Requirements

```bash
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0
pip install flash-attn --no-build-isolation
pip install transformers==4.47.1
pip install qwen-vl-utils[decord]==0.0.8
```

## Quickstart

```bash
# get inference code from https://huggingface.co/friedrichor/Unite-Base-Qwen2-VL-2B/tree/main/inference_demo
cd inference_demo
```
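
If you prefer to fetch the demo programmatically, here is a minimal sketch using `huggingface_hub` (assuming the demo files live under `inference_demo/` in the `friedrichor/Unite-Base-Qwen2-VL-2B` repository, as the link above suggests):

```python
# Sketch: download only the inference_demo/ folder (assumed repo layout, see link above).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="friedrichor/Unite-Base-Qwen2-VL-2B",
    allow_patterns=["inference_demo/*"],  # restrict the download to the demo files
)
print(f"Demo files in: {local_dir}/inference_demo")
```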

### Load Model
```python
import torch
from transformers import AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
from modeling_unite import UniteQwen2VL

model_path = 'friedrichor/Unite-Instruct-Qwen2-VL-7B'
model = UniteQwen2VL.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype=torch.bfloat16, 
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = UniteQwen2VL.from_pretrained(
#     model_path,
#     device_map="cuda",
#     torch_dtype=torch.bfloat16,
#     attn_implementation='flash_attention_2', 
#     low_cpu_mem_usage=True,
# )

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
processor = AutoProcessor.from_pretrained(model_path, min_pixels=256*28*28, max_pixels=1280*28*28)

def process_messages(msg):
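    # Build the chat-formatted prompt, append "<|endoftext|>" as the trailing token,
    # then pack the text/image/video inputs into model-ready tensors on the GPU.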
    text = processor.apply_chat_template(msg, tokenize=False, add_generation_prompt=True) + "<|endoftext|>"
    image_inputs, video_inputs = process_vision_info(msg)
    inputs = processor(
        text=[text], 
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    inputs = inputs.to("cuda")

    return inputs
```

### Inference

<details>
<summary>Image-Text Retrieval</summary>

```python
messages_txt = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "The book titled 'Riding with Reindeer - A Bicycle Odyssey through Finland, Lapland, and the Arctic' provides a detailed account of a journey that explores the regions of Lapland and the Arctic, focusing on the experience of riding with reindeer."},
            {"type": "text", "text": "\nSummary above sentence in one word:"},
        ],
    }
]

messages_img = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "./examples/518L0uDGe0L.jpg"},
            {"type": "text", "text": "\nSummary above image in one word:"},
        ],
    }
]

inputs_txt = process_messages(messages_txt)
inputs_img = process_messages(messages_img)

with torch.no_grad():
    embeddings_txt = model(**inputs_txt)  # [1, 3584]
    embeddings_img = model(**inputs_img)  # [1, 3584]

    print(torch.matmul(embeddings_txt, embeddings_img.T))
    # tensor([[0.7578]], dtype=torch.bfloat16)
```
</details>


<details>
<summary>Video-Text Retrieval</summary>

```python
messages_txt = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Timelapse of stormy clouds over open sea and snowcapped mountain"},
            {"type": "text", "text": "\nSummary above sentence in one word:"},
        ],
    }
]

messages_vid = [
    {
        "role": "user",
        "content": [
            {
                "type": "video", 
                "video": "./examples/stock-footage-timelapse-of-stormy-clouds-over-open-sea-and-snowcapped-mountain.mp4",
                "max_pixels": 360 * 420, 
                "fps": 1,
                "max_frames": 32
            },
            {"type": "text", "text": "\nSummary above video in one word:"},
        ],
    }
]

inputs_txt = process_messages(messages_txt)
inputs_vid = process_messages(messages_vid)

with torch.no_grad():
    embeddings_txt = model(**inputs_txt)  # [1, 3584]
    embeddings_vid = model(**inputs_vid)  # [1, 3584]

    print(torch.matmul(embeddings_txt, embeddings_vid.T))
    # tensor([[0.4883]], dtype=torch.bfloat16)
```
</details>

<details>
<summary>Fused-Modal Retrieval</summary>

```python
messages_qry = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "./examples/oven_05011373.jpg"},
            {"type": "text", "text": "What is the name of this place?"},
            {"type": "text", "text": "\nSummary above sentence and image in one word:"},
        ],
    }
]

messages_tgt = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "./examples/Q673659.jpg"},
            {"type": "text", "text": "Marina Beach."},
            {"type": "text", "text": "\nSummary above sentence and image in one word:"},
        ],
    }
]

inputs_qry = process_messages(messages_qry)
inputs_tgt = process_messages(messages_tgt)

with torch.no_grad():
    embeddings_qry = model(**inputs_qry)  # [1, 3584]
    embeddings_tgt = model(**inputs_tgt)  # [1, 3584]

    print(torch.matmul(embeddings_qry, embeddings_tgt.T))
    # tensor([[0.6719]], dtype=torch.bfloat16)
```
</details>
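
For retrieval over more than one candidate, the same embeddings can be stacked and ranked by similarity. The sketch below is illustrative only: it reuses `model` and `process_messages` from the Quickstart, and it assumes (based on the pairwise scores above, which stay below 1.0) that the returned embeddings are already L2-normalized, so the dot product behaves like cosine similarity. `query_messages` and `candidate_messages` are hypothetical and follow the same message format as the examples above.

```python
# Illustrative sketch: rank a list of candidates against one query.
import torch

with torch.no_grad():
    query_emb = model(**process_messages(query_messages))           # [1, 3584]
    cand_embs = torch.cat(
        [model(**process_messages(m)) for m in candidate_messages]  # [N, 3584]
    )

scores = (query_emb @ cand_embs.T).squeeze(0)     # [N] similarity scores
ranking = torch.argsort(scores, descending=True)  # best match first
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(f"{rank}. candidate {idx}  score={scores[idx].item():.4f}")
```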

## Citation

If you find our work helpful, please consider citing it:

```
@article{kong2025modality,
  title={Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval},
  author={Kong, Fanheng and Zhang, Jingyuan and Liu, Yahui and Zhang, Hongzhi and Feng, Shi and Yang, Xiaocui and Wang, Daling and Tian, Yu and W., Victoria and Zhang, Fuzheng and Zhou, Guorui},
  journal={arXiv preprint arXiv:2505.19650},
  year={2025}
}
```