---
license: mit
language:
- en
base_model:
- inclusionAI/Ring-mini-linear-2.0
pipeline_tag: text-generation
---
# Quantized Ring-Linear-2.0

## Introduction

To enable deployment of [Ring-Linear-2.0](https://github.com/inclusionAI/Ring-V2/blob/main/hybrid_linear/README.md) on memory-constrained devices, we release quantized weights in the GPTQ INT4 format. We also evaluate the online FP8 quantization of `Ring-Linear-2.0` models, whose performance closely approaches that of BF16 precision.



## Model Downloads


|       **Model**        | **Maximum Supported Length** |                                                                             **Download**                                                                             |
|:----------------------:| :----------------: |:--------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| Ring-flash-linear-2.0-GPTQ-int4  |        128k         |  [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0-GPTQ-int4) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ring-flash-linear-2.0-GPTQ-int4)  |
| Ring-mini-linear-2.0-GPTQ-int4   |        512k         |  [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-linear-2.0-GPTQ-int4) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ring-mini-linear-2.0-GPTQ-int4)  |


## Quickstart


### 🚀 vLLM

#### Environment Preparation

Since the corresponding Pull Request (PR) has not yet been submitted to the vLLM community, please prepare the environment by following the steps below.

First, create a Conda environment with Python 3.10 and CUDA 12.8:
```shell
conda create -n vllm python=3.10
conda activate vllm
```

Next, install our vLLM wheel package:
```shell
pip install https://media.githubusercontent.com/media/zheyishine/vllm_whl/refs/heads/main/vllm-0.8.5.post2.dev28%2Bgd327eed71.cu128-cp310-cp310-linux_x86_64.whl --force-reinstall
```

Finally, after vLLM is installed, install a compatible version of transformers:
```shell
pip install transformers==4.51.1 
```
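To confirm the environment is set up as expected, a quick version check can help (a minimal sketch; the exact vLLM version string depends on the wheel installed above):
```python
# Sanity-check the installed packages; versions should match the wheel and the pin above
import vllm
import transformers

print("vLLM:", vllm.__version__)                   # expected: the custom 0.8.5.post2 dev build
print("transformers:", transformers.__version__)   # expected: 4.51.1
```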

#### Offline Inference

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

if __name__ == '__main__':
    tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-mini-linear-2.0-GPTQ-int4")
    
    sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)

    # prefix caching is disabled for this model; set `max_num_seqs=1` when running without concurrency
    llm = LLM(model="inclusionAI/Ring-mini-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)

    prompt = "Give me a short introduction to large language models."
    messages = [
        {"role": "user", "content": prompt}
    ]
    
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )
    outputs = llm.generate([text], sampling_params)
    for output in outputs:
        print(output.outputs[0].text)
```

#### Online Inference
```shell
vllm serve inclusionAI/Ring-mini-linear-2.0-GPTQ-int4 \
              --tensor-parallel-size 1 \
              --pipeline-parallel-size 1 \
              --gpu-memory-utilization 0.90 \
              --max-num-seqs 128 \
              --no-enable-prefix-caching \
              --api-key your-api-key
```
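
Once the server is up, it exposes an OpenAI-compatible API and can be queried with the standard `openai` client (a minimal sketch; the host, port, and `your-api-key` placeholder follow vLLM's defaults and the command above):
```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint on port 8000 by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="your-api-key")

response = client.chat.completions.create(
    model="inclusionAI/Ring-mini-linear-2.0-GPTQ-int4",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    temperature=0.6,
    top_p=1.0,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```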



## Evaluation


We evaluate the INT4 and FP8 quantized models on several benchmark datasets. FP8 quantization is applied online via the `quantization="fp8"` argument in SGLang or vLLM, as sketched below.
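
A minimal sketch of enabling online FP8 quantization with the offline vLLM API, assuming the BF16 base checkpoint `inclusionAI/Ring-mini-linear-2.0` is used (FP8 is applied on the fly to unquantized weights, so this does not point at the GPTQ-int4 repository):
```python
from vllm import LLM, SamplingParams

# Online FP8 quantization: weights are quantized at load time,
# so the model path is the original BF16 checkpoint (assumed here), not the GPTQ-int4 one.
llm = LLM(
    model="inclusionAI/Ring-mini-linear-2.0",
    quantization="fp8",
    dtype="auto",
    enable_prefix_caching=False,
    max_num_seqs=128,
)

sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)
outputs = llm.generate(["Give me a short introduction to large language models."], sampling_params)
print(outputs[0].outputs[0].text)
```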



### Ring-mini-linear-2.0
|  **Dataset** | **BF16** | **FP8** | **GPTQ-Int4** |
| :----------------: |:--------:|:-------:|:-------------:|   
|       AIME25       |  73.65   |  72.40  |     66.56     |
|       AIME24       |  79.95   |  79.53  |     74.95     |
|       LiveCodeBench|  59.53   |  58.42  |     56.29     |
|       GPQA         |  65.69   |  66.79  |     62.53     |

### Ring-flash-linear-2.0
|  **Dataset** | **BF16** | **FP8** |  **GPTQ-Int4** |
| :----------------: |:--------:|:-------:|   :-----------------------:|
|       AIME25       |  85.10  |  84.22  | 82.88 |
|       LiveCodeBench|  69.82  |  69.44  | 66.14 |
|       GPQA         |  72.85  |  72.95  | 71.72 |




## License

This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ring-V2/blob/master/LICENSE).

## Citation
```bibtex
@misc{lingteam2025attentionmattersefficienthybrid,
      title={Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning}, 
      author={Ling Team and Bin Han and Caizhi Tang and Chen Liang and Donghao Zhang and Fan Yuan and Feng Zhu and Jie Gao and Jingyu Hu and Longfei Li and Meng Li and Mingyang Zhang and Peijie Jiang and Peng Jiao and Qian Zhao and Qingyuan Yang and Wenbo Shen and Xinxing Yang and Yalin Zhang and Yankun Ren and Yao Zhao and Yibo Cao and Yixuan Sun and Yue Zhang and Yuchen Fang and Zibin Lin and Zixuan Cheng and Jun Zhou},
      year={2025},
      eprint={2510.19338},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2510.19338}, 
}
```