---
datasets:
- Rombo-Org/Optimized_Reasoning
base_model:
- mistralai/Ministral-3-3B-Reasoning-2512
license: apache-2.0
---

# Ministral-3-3B-Reasoning-2512-nvfp4

Note: I was not able to get this model running with the latest, nightly, or v0.12.0 vLLM Docker containers. It may still be useful to anyone who wants to run it with Transformers, or to anyone who wants to see if they can figure out a vLLM command that loads it. If I work it out, or someone sends me a working command, I'll update this model card.

**Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
**Base model:** `mistralai/Ministral-3-3B-Reasoning-2512`
**How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), using long-sequence calibration data from Rombo-Org/Optimized_Reasoning.
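To illustrate the "dual scaling" in the format line above: NVFP4 quantizes values to the FP4 (E2M1) value set in 16-element blocks, with a per-block scale applied on top of a per-tensor scale. Below is a minimal pure-Python sketch of that two-level idea. It is illustrative only, not LLM Compressor's implementation — real NVFP4 stores block scales in FP8 (E4M3) and packs two 4-bit codes per byte, while this sketch keeps everything in plain floats.

```python
# Illustrative sketch of NVFP4-style "dual scaling": each 16-element
# block gets its own scale, applied on top of a per-tensor global
# scale.  Real NVFP4 stores block scales in FP8 (E4M3) and packs two
# 4-bit codes per byte; here everything stays in plain floats.

E2M1 = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable FP4 magnitudes

def quantize_block(block, global_scale):
    """Map one block of reals to signed FP4 values plus a block scale."""
    amax = max(abs(x) for x in block)
    # Choose the block scale so the largest element lands on 6.0,
    # the largest FP4 magnitude.
    block_scale = (amax / 6.0) / global_scale if amax else 1.0
    codes = []
    for x in block:
        target = abs(x) / (block_scale * global_scale)
        mag = min(E2M1, key=lambda v: abs(v - target))  # round to nearest FP4 value
        codes.append(-mag if x < 0 else mag)
    return block_scale, codes

def dequantize_block(block_scale, codes, global_scale):
    """Reverse both scaling levels to recover approximate reals."""
    return [c * block_scale * global_scale for c in codes]

weights = [0.01 * i for i in range(16)]  # one 16-element block
global_scale = 1.0                       # per-tensor FP32 scale (second level)
block_scale, codes = quantize_block(weights, global_scale)
restored = dequantize_block(block_scale, codes, global_scale)
```

The per-block scale is what makes 4-bit storage workable: each block's values are spread across the small FP4 range independently, so one outlier block doesn't crush the precision of every other block in the tensor.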

```
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-op
```

This was tested on an RTX Pro 6000 Blackwell cloud instance.

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark, or on other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.