Update README.md

README.md (CHANGED)
language:
- ar
license: apache-2.0
inference: false
base_model: mistralai/Ministral-3-14B-Reasoning-2512
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
---

# Ministral-3-14B-Reasoning-2512 AWQ - INT4

## Model Details

### Quantization Details

- **Quantization Method:** AWQ
- **Bits:** 4
- **Group Size:** 32
- **Calibration Dataset:** [5CD-AI/LLaVA-CoT-o1-Instruct](https://huggingface.co/datasets/5CD-AI/LLaVA-CoT-o1-Instruct)
- **Quantization Tool:** [llm-compressor](https://github.com/vllm-project/llm-compressor), as sketched below
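
The exact recipe is not published with this card, so the following is only a minimal sketch of llm-compressor's one-shot AWQ flow under stated assumptions: the `W4A16_ASYM` preset, sample count, and sequence length are guesses; the preset defaults to group size 128, so matching the card's group size of 32 would need a custom quantization config; and the multimodal calibration set would need model-specific preprocessing that is elided here.

```python
# Hedged sketch only: approximates the card's AWQ 4-bit run with llm-compressor.
# NOTE: the W4A16_ASYM preset defaults to group size 128; the card's group size
# of 32 would require a custom config. Calibration preprocessing is elided.
from datasets import load_dataset
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "mistralai/Ministral-3-14B-Reasoning-2512"

# Calibration split named on the card; the sample count is an assumption.
ds = load_dataset("5CD-AI/LLaVA-CoT-o1-Instruct", split="train")
ds = ds.shuffle(seed=42).select(range(256))

recipe = [
    AWQModifier(
        targets=["Linear"],      # quantize the linear layers
        scheme="W4A16_ASYM",     # 4-bit asymmetric weights, 16-bit activations
        ignore=["lm_head"],      # keep the output head in full precision
    )
]

oneshot(
    model=MODEL_ID,
    dataset=ds,
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
    output_dir="Ministral-3-14B-Reasoning-2512-AWQ-4bit",
)
```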

### Memory Usage

| **Type** | **Ministral-3-14B-Reasoning-2512** | **Ministral-3-14B-Reasoning-2512-AWQ-4bit** |
|:---------------:|:----------------:|:----------------:|
| **Memory Size** | 51.9 GB | 19.4 GB |
| **KV Cache per Token** | 200.0 kB | 50.0 kB |
| **KV Cache per Context** | 50.0 GB | 12.5 GB |
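
The card does not state the context length behind the per-context row, but dividing the two rows recovers it: 50.0 GB at 200.0 kB per token implies 250,000 tokens. A quick sanity-check sketch (decimal units throughout; the helper name is illustrative):

```python
# Sanity-check of the memory table, using decimal units (1 GB = 1e9 bytes).

def serving_memory_gb(weights_gb: float, kv_per_token_kb: float, context_tokens: int) -> float:
    """Weights plus KV cache for a single full-length context."""
    kv_gb = kv_per_token_kb * 1e3 * context_tokens / 1e9
    return weights_gb + kv_gb

# The per-context row implies the assumed context length:
context_tokens = int(50.0e9 / 200.0e3)   # 250,000 tokens

print(serving_memory_gb(51.9, 200.0, context_tokens))  # BF16 total: ~101.9 GB
print(serving_memory_gb(19.4, 50.0, context_tokens))   # AWQ 4-bit total: ~31.9 GB
```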

### Evaluations

| **Benchmarks** | **Ministral-3-14B-Reasoning-2512** | **Ministral-3-14B-Reasoning-2512-AWQ-4bit** |
|:---------------:|:----------------:|:----------------:|
| **Perplexity** | 1.52771 | 1.5367 |

- **Evaluation Context Length:** 16384

## Inference

### Prerequisites

```bash
pip install -U vllm
```

### Basic Usage

```bash
vllm serve cyankiwi/Ministral-3-14B-Reasoning-2512-AWQ-4bit \
  --tokenizer_mode mistral \
  --config_format mistral \
  --load_format mistral \
  --enable-auto-tool-choice \
  --tool-call-parser mistral
```
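
Once the server is up, vLLM exposes an OpenAI-compatible API, by default on port 8000. A minimal client sketch (assumes `pip install openai`; the base URL, prompt, and sampling settings are illustrative):

```python
# Query the vLLM server started above via its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cyankiwi/Ministral-3-14B-Reasoning-2512-AWQ-4bit",
    messages=[{"role": "user", "content": "Solve 23 * 17 step by step."}],
    temperature=0.15,
    max_tokens=1024,
)
print(response.choices[0].message.content)
```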

## Additional Information

### Changelog

- **v1.0.0** - Initial quantized release

### Authors

- **Name:** Ton Cao
- **Contact:** [email protected]

# Ministral 3 14B Reasoning 2512

The largest model in the Ministral 3 family, **Ministral 3 14B** offers frontier capabilities and performance comparable to that of its larger [Mistral Small 3.2 24B](https://huggingface.co/mistralai/Mistral-Small-3.2-Instruct-2506) counterpart. It is a powerful and efficient language model with vision capabilities.