dengcao committed
Commit 8aaaf4b · Parent(s): e9ce838

Files changed (4):
  1. .mdl      +0 -0
  2. .msc      +0 -0
  3. .mv       +1 -0
  4. README.md +405 -3

.mdl ADDED
Binary file (57 Bytes)

.msc ADDED
Binary file (330 Bytes)

.mv ADDED
@@ -0,0 +1 @@
+ Revision:master,CreatedAt:1749755401

README.md CHANGED
@@ -1,3 +1,405 @@
- ---
- license: apache-2.0
- ---
---
license: apache-2.0
base_model:
- Qwen/Qwen3-0.6B-Base
library_name: transformers
---

# <span style="color: #7FFF7F;">Qwen3-Reranker-0.6B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`d17a809e`](https://github.com/ggerganov/llama.cpp/commit/d17a809ef0af09b16625e991a76f6fe80d9c332e).

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

A quick way to check what your GPU actually supports is shown in the sketch below.

---
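If you are unsure whether your GPU exposes BF16, a minimal PyTorch probe can tell you. This is a heuristic only: `torch.cuda.is_bf16_supported()` reports device capability, not that every kernel path is optimized for BF16.

```python
# Minimal capability probe (PyTorch). A rough heuristic, not a
# guarantee of kernel-level BF16 support.
import torch

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("BF16 supported:", torch.cuda.is_bf16_supported())
else:
    print("No CUDA device; a quantized GGUF (e.g. Q4_K) on CPU is the safer choice.")
```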
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a narrower range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You are memory-constrained (consider a quantized format instead).

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce the **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.

- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.

- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.

- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.

- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.

---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|-----------|--------------|---------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium-Low | Low | CPU or low-VRAM devices | Memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency at the cost of accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | ARM inference, which llama.cpp can optimize |

The rough size arithmetic behind these trade-offs is sketched below.

---
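As a rule of thumb, on-disk size scales with bits per weight. A back-of-the-envelope sketch follows; the bits-per-weight figures are approximations (quantized formats carry per-block scale overhead), so treat the results as rough guidance only.

```python
# Rough GGUF size estimate: parameters * bits_per_weight / 8.
# Bits-per-weight values below are approximate, not exact format specs.
PARAMS = 0.6e9  # Qwen3-Reranker-0.6B

APPROX_BITS = {"BF16": 16.0, "F16": 16.0, "Q8_0": 8.5,
               "Q6_K": 6.6, "Q4_K": 4.5, "IQ3_XS": 3.3}

for fmt, bits in APPROX_BITS.items():
    print(f"{fmt:7s} ~{PARAMS * bits / 8 / 1e9:.2f} GB")
```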
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
❤ **Please click "Like" if you find this useful!**
Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Creating custom cmd processors to run .NET code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` – Note: you need to install a Free Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature, so use it with caution!

### Final Word

I fund the servers used to create these model files, run the Free Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Free Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊

# Qwen3-Reranker-0.6B

<p align="center">
    <img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>

## Highlights

The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.

**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B embedding model ranks No. 1 on the MTEB multilingual leaderboard (as of June 5, 2025, score 70.58), while the reranking model excels in various text retrieval scenarios.

**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.

**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of the Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.

## Model Overview

**Qwen3-Reranker-0.6B** has the following features:

- Model Type: Text Reranking
- Supported Languages: 100+ languages
- Number of Parameters: 0.6B
- Context Length: 32k

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/) and [GitHub](https://github.com/QwenLM/Qwen3-Embedding).

## Qwen3 Embedding Series Model List

| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |

> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.

## Usage

With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
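Upgrading should resolve it: `pip install -U "transformers>=4.51.0"`.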
### Transformers Usage

```python
# Requires transformers>=4.51.0
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def format_instruction(instruction, query, doc):
    if instruction is None:
        instruction = 'Given a web search query, retrieve relevant passages that answer the query'
    output = "<Instruct>: {instruction}\n<Query>: {query}\n<Document>: {doc}".format(instruction=instruction, query=query, doc=doc)
    return output

def process_inputs(pairs):
    # Tokenize without padding first, leaving room for the fixed
    # system prefix and assistant suffix added below.
    inputs = tokenizer(
        pairs, padding=False, truncation='longest_first',
        return_attention_mask=False, max_length=max_length - len(prefix_tokens) - len(suffix_tokens)
    )
    for i, ele in enumerate(inputs['input_ids']):
        inputs['input_ids'][i] = prefix_tokens + ele + suffix_tokens
    inputs = tokenizer.pad(inputs, padding=True, return_tensors="pt", max_length=max_length)
    for key in inputs:
        inputs[key] = inputs[key].to(model.device)
    return inputs

@torch.no_grad()
def compute_logits(inputs, **kwargs):
    # The reranker answers "yes"/"no"; the relevance score is the
    # softmax probability of "yes" at the final token position.
    batch_scores = model(**inputs).logits[:, -1, :]
    true_vector = batch_scores[:, token_true_id]
    false_vector = batch_scores[:, token_false_id]
    batch_scores = torch.stack([false_vector, true_vector], dim=1)
    batch_scores = torch.nn.functional.log_softmax(batch_scores, dim=1)
    scores = batch_scores[:, 1].exp().tolist()
    return scores

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-Reranker-0.6B", padding_side='left')
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B").eval()
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-Reranker-0.6B", torch_dtype=torch.float16, attn_implementation="flash_attention_2").cuda().eval()
token_false_id = tokenizer.convert_tokens_to_ids("no")
token_true_id = tokenizer.convert_tokens_to_ids("yes")
max_length = 8192

prefix = "<|im_start|>system\nJudge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\".<|im_end|>\n<|im_start|>user\n"
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
prefix_tokens = tokenizer.encode(prefix, add_special_tokens=False)
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)

task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = ["What is the capital of China?",
           "Explain gravity",
           ]

documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

pairs = [format_instruction(task, query, doc) for query, doc in zip(queries, documents)]

# Tokenize the input texts
inputs = process_inputs(pairs)
scores = compute_logits(inputs)

print("scores: ", scores)
```
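Each score is the probability the model assigns to "yes", i.e. that the document answers the query, so producing a ranking is a plain sort. A minimal continuation of the example above, with no new dependencies assumed:

```python
# Sort candidate documents by their relevance score, highest first.
ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
for doc, score in ranked:
    print(f"{score:.4f}  {doc[:60]}")
```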
### vLLM Usage

```python
# Requires vllm>=0.8.5
import math

import torch
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.distributed.parallel_state import destroy_model_parallel
from vllm.inputs.data import TokensPrompt


def format_instruction(instruction, query, doc):
    text = [
        {"role": "system", "content": "Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be \"yes\" or \"no\"."},
        {"role": "user", "content": f"<Instruct>: {instruction}\n\n<Query>: {query}\n\n<Document>: {doc}"}
    ]
    return text

def process_inputs(pairs, instruction, max_length, suffix_tokens):
    messages = [format_instruction(instruction, query, doc) for query, doc in pairs]
    messages = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=False, enable_thinking=False
    )
    # Truncate each prompt, then append the fixed assistant suffix.
    messages = [ele[:max_length] + suffix_tokens for ele in messages]
    messages = [TokensPrompt(prompt_token_ids=ele) for ele in messages]
    return messages

def compute_logits(model, messages, sampling_params, true_token, false_token):
    outputs = model.generate(messages, sampling_params, use_tqdm=False)
    scores = []
    for i in range(len(outputs)):
        final_logits = outputs[i].outputs[0].logprobs[-1]
        # Fall back to a large negative logprob if a token did not make
        # the top-k logprobs returned by vLLM.
        if true_token not in final_logits:
            true_logit = -10
        else:
            true_logit = final_logits[true_token].logprob
        if false_token not in final_logits:
            false_logit = -10
        else:
            false_logit = final_logits[false_token].logprob
        true_score = math.exp(true_logit)
        false_score = math.exp(false_logit)
        score = true_score / (true_score + false_score)
        scores.append(score)
    return scores

number_of_gpu = torch.cuda.device_count()
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Reranker-0.6B')
model = LLM(model='Qwen/Qwen3-Reranker-0.6B', tensor_parallel_size=number_of_gpu, max_model_len=10000, enable_prefix_caching=True, gpu_memory_utilization=0.8)
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
suffix = "<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n"
max_length = 8192
suffix_tokens = tokenizer.encode(suffix, add_special_tokens=False)
true_token = tokenizer("yes", add_special_tokens=False).input_ids[0]
false_token = tokenizer("no", add_special_tokens=False).input_ids[0]
sampling_params = SamplingParams(temperature=0,
                                 max_tokens=1,
                                 logprobs=20,
                                 allowed_token_ids=[true_token, false_token],
                                 )


task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = ["What is the capital of China?",
           "Explain gravity",
           ]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]

pairs = list(zip(queries, documents))
inputs = process_inputs(pairs, task, max_length - len(suffix_tokens), suffix_tokens)
scores = compute_logits(model, inputs, sampling_params, true_token, false_token)
print('scores', scores)

destroy_model_parallel()
```

📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side leads to a drop in retrieval performance of approximately 1% to 5%. An illustrative custom instruction is sketched below.
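For example, a custom instruction for code retrieval, reusing the vLLM helpers defined above, might look like this (the wording is hypothetical, not an official recommendation):

```python
# Hypothetical task-specific instruction; swap in wording that matches
# your own scenario and language.
code_task = "Given a natural-language description, retrieve code snippets that implement it"
code_pairs = [("parse a JSON file in Python",
               "import json\nwith open('data.json') as fh:\n    data = json.load(fh)")]
code_inputs = process_inputs(code_pairs, code_task, max_length - len(suffix_tokens), suffix_tokens)
print(compute_logits(model, code_inputs, sampling_params, true_token, false_token))
```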
## Evaluation

| Model | Param | MTEB-R | CMTEB-R | MMTEB-R | MLDR | MTEB-Code | FollowIR |
|------------------------------------|--------|---------|---------|---------|--------|-----------|----------|
| **Qwen3-Embedding-0.6B** | 0.6B | 61.82 | 71.02 | 64.64 | 50.26 | 75.41 | 5.09 |
| Jina-multilingual-reranker-v2-base | 0.3B | 58.22 | 63.37 | 63.73 | 39.66 | 58.98 | -0.68 |
| gte-multilingual-reranker-base | 0.3B | 59.51 | 74.08 | 59.44 | 66.33 | 54.18 | -1.64 |
| BGE-reranker-v2-m3 | 0.6B | 57.03 | 72.16 | 58.36 | 59.51 | 41.38 | -0.01 |
| **Qwen3-Reranker-0.6B** | 0.6B | 65.80 | 71.31 | 66.36 | 67.28 | 73.42 | 5.41 |
| **Qwen3-Reranker-4B** | 4B | **69.76** | 75.94 | 72.74 | 69.97 | 81.20 | **14.84** |
| **Qwen3-Reranker-8B** | 8B | 69.02 | **77.45** | **72.94** | **70.19** | **81.22** | 8.05 |

> **Note**:
> - Evaluation results for reranking models. We use the retrieval subsets of MTEB(eng, v2), MTEB(cmn, v1), MMTEB, and MTEB(Code), denoted MTEB-R, CMTEB-R, MMTEB-R, and MTEB-Code respectively.
> - All scores are from our runs, based on the top-100 candidates retrieved by the dense embedding model [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B).

## Citation
If you find our work helpful, feel free to cite us.

```
@misc{qwen3-embedding,
    title = {Qwen3-Embedding},
    url = {https://qwenlm.github.io/blog/qwen3/},
    author = {Qwen Team},
    month = {May},
    year = {2025}
}
```