caizhi1 zheyishine committed
Commit 43b909a · verified · 1 Parent(s): a4483c7
Files changed (1):
  1. README.md +37 -25
README.md CHANGED
@@ -31,14 +31,22 @@ To enable deployment of [Ring-Linear-2.0](https://github.com/inclusionAI/Ring-V2

#### Environment Preparation

- Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below:
```shell
- pip install torch==2.7.0 torchvision==0.22.0
```

- Then you should install our vLLM wheel package:
```shell
- pip install https://media.githubusercontent.com/media/inclusionAI/Ring-V2/refs/heads/main/hybrid_linear/whls/vllm-0.8.5%2Bcuda12_8_gcc10_2_1-cp310-cp310-linux_x86_64.whl --no-deps --force-reinstall
```

  #### Offline Inference
@@ -47,35 +55,39 @@ pip install https://media.githubusercontent.com/media/inclusionAI/Ring-V2/refs/h
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

- tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-mini-linear-2.0-GPTQ-int4")
-
- sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)
-
-
- llm = LLM(model="inclusionAI/Ring-mini-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)
-
-
- prompt = "Give me a short introduction to large language models."
- messages = [
-     {"role": "user", "content": prompt}
- ]
-
- text = tokenizer.apply_chat_template(
-     messages,
-     tokenize=False,
-     add_generation_prompt=True
- )
- outputs = llm.generate([text], sampling_params)
```

#### Online Inference
```shell
vllm serve inclusionAI/Ring-mini-linear-2.0-GPTQ-int4 \
-     --tensor-parallel-size 2 \
      --pipeline-parallel-size 1 \
      --gpu-memory-utilization 0.90 \
-     --max-num-seqs 512 \
      --no-enable-prefix-caching
```

 

#### Environment Preparation

+ Since the Pull Request (PR) has not been submitted to the vLLM community at this stage, please prepare the environment by following the steps below.
+
+ First, create a Conda environment with Python 3.10 and CUDA 12.8 (use the root or admin account, or ensure the current user has access to /home/admin/logs):
```shell
+ conda create -n vllm python=3.10
+ conda activate vllm
+ ```
+
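The wheel installed in the next step targets CUDA 12.8 and Python 3.10 (cp310), so it is worth confirming both before continuing; a quick check from inside the activated environment might look like:
```shell
python --version   # should report Python 3.10.x
nvidia-smi         # the header shows the highest CUDA version the installed driver supports (should be 12.x or newer)
```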
+ Next, install our vLLM wheel package:
+ ```shell
+ pip install https://media.githubusercontent.com/media/inclusionAI/Ring-V2/refs/heads/main/hybrid_linear/whls/vllm-0.8.5%2Bcuda12_8_gcc10_2_1-cp310-cp310-linux_x86_64.whl --force-reinstall
```
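Since `--force-reinstall` replaces any vLLM already present in the environment, an optional way to confirm which build pip now sees is:
```shell
pip show vllm   # the Version field should report the custom 0.8.5-based build
```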
 
+ Finally, install compatible versions of PyTorch and Torchvision after vLLM is installed:
```shell
+ pip install torch==2.7.0 torchvision==0.22.0
```
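With all three steps done, a minimal sanity check (assuming a GPU is visible to the process) is to confirm that the pinned Torch build and the custom vLLM wheel both import cleanly:
```shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # expect a 2.7.0 build and True
python -c "import vllm; print(vllm.__version__)"                                # expect a 0.8.5-based version
```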

#### Offline Inference

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

+ if __name__ == '__main__':
+     tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ring-mini-linear-2.0-GPTQ-int4")
+
+     sampling_params = SamplingParams(temperature=0.6, top_p=1.0, max_tokens=16384)
+
+     # use `max_num_seqs=1` if you do not need concurrent requests
+     llm = LLM(model="inclusionAI/Ring-mini-linear-2.0-GPTQ-int4", dtype='auto', enable_prefix_caching=False, max_num_seqs=128)
+
+
+     prompt = "Give me a short introduction to large language models."
+     messages = [
+         {"role": "user", "content": prompt}
+     ]
+
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True
+     )
+     outputs = llm.generate([text], sampling_params)
+     for output in outputs:
+         print(output.outputs[0].text)
```
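The `if __name__ == '__main__':` guard matters here: recent vLLM releases may spawn worker processes, and without the guard a script that constructs the `LLM` at import time can end up re-executing itself. Saving the snippet as a standalone script (the filename below is just a placeholder) and running it directly is the simplest way to use it:
```shell
# offline_infer.py is a placeholder name for the snippet above
python offline_infer.py
```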

#### Online Inference
```shell
vllm serve inclusionAI/Ring-mini-linear-2.0-GPTQ-int4 \
+     --tensor-parallel-size 1 \
      --pipeline-parallel-size 1 \
      --gpu-memory-utilization 0.90 \
+     --max-num-seqs 128 \
      --no-enable-prefix-caching \
+     --api-key your-api-key
```
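Once the server is up it exposes an OpenAI-compatible API, by default on port 8000; a minimal request using the API key configured above might look like:
```shell
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer your-api-key" \
    -d '{
          "model": "inclusionAI/Ring-mini-linear-2.0-GPTQ-int4",
          "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
          "max_tokens": 512
        }'
```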