Update README.md #2
by sergiopaniego (HF Staff) - opened

README.md CHANGED
@@ -129,7 +129,7 @@ We recommend using this model with [vLLM](https://github.com/vllm-project/vllm).
 
 #### Installation
 
-Make sure to install most recent vllm:
+Make sure to install the most recent vllm:
 
 ```
 uv pip install -U vllm \
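For context, once vllm is installed the model is served with vLLM's OpenAI-compatible server before the usage step below. A minimal sketch, assuming a bare `vllm serve` invocation; the exact flags for this model are listed in the README's "Additional flags" section, which is not shown in this diff:

```
# Sketch: start an OpenAI-compatible server for the model on the
# default port 8000; see the README's "Additional flags" for the
# full recommended invocation.
vllm serve mistralai/Ministral-3-3B-Base-2512
```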
@@ -164,7 +164,7 @@ Additional flags:
 
 #### Usage of the model
 
-Here we
+Here we assume that the model `mistralai/Ministral-3-3B-Base-2512` is served and reachable at `localhost` on port `8000`, which is the default for vLLM.
 
 <details>
 <summary>Test Base</summary>
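Given that assumption, the base model can be queried through the OpenAI-compatible completions endpoint that vLLM exposes. A minimal sketch; the prompt and token budget are placeholders, not from the README:

```
# Query vLLM's OpenAI-compatible /v1/completions endpoint on the
# default localhost:8000; a base model uses completions, not chat.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistralai/Ministral-3-3B-Base-2512",
    "prompt": "The capital of France is",
    "max_tokens": 32
  }'
```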