Devstral-Small-2-24B-Instruct-2512-nvfp4
Format: NVFP4 — weights & activations quantized to FP4 with dual scaling (sketched below).
Base model: mistralai/Devstral-Small-2-24B-Instruct-2512
How it was made: one-shot calibration with LLM Compressor (NVFP4 recipe), using long-sequence calibration data from nvidia/OpenCodeInstruct (see the flow sketched further below).
Notes: Keep lm_head in high precision; calibrate on long, domain-relevant sequences.
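For context on what "dual scaling" means here, the sketch below dequantizes one block of NVFP4 weights by hand: each FP4 (E2M1) value is scaled by an FP8 (E4M3) scale shared across a 16-element block, which is in turn scaled by a single FP32 scale for the whole tensor. The numbers are invented for illustration; only the block size and scale formats follow NVIDIA's published NVFP4 description.

import numpy as np

# NVFP4 layout (per NVIDIA's description): FP4 E2M1 values, one FP8 E4M3 scale
# per 16-element block, and one FP32 scale per tensor. Values below are made up.
E2M1_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]  # representable FP4 magnitudes

def dequantize_block(fp4_values, block_scale_e4m3, tensor_scale_fp32):
    """Recover approximate real values for one 16-element NVFP4 block."""
    return np.asarray(fp4_values) * block_scale_e4m3 * tensor_scale_fp32

block = [0.5, -1.0, 1.5, 6.0, 0.0, 2.0, -3.0, 4.0,
         1.0, -0.5, 1.5, 2.0, -6.0, 3.0, 0.5, -1.0]   # 16 FP4 codes, written as reals
print(dequantize_block(block, block_scale_e4m3=0.0625, tensor_scale_fp32=1.7))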
Check the original model card for information about this model.
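The "How it was made" line above corresponds roughly to the LLM Compressor flow sketched here. The actual recipe for this checkpoint isn't published, so the calibration length, sample count, OpenCodeInstruct column names, and output path are assumptions; only the general oneshot + QuantizationModifier pattern follows LLM Compressor's documented API.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "mistralai/Devstral-Small-2-24B-Instruct-2512"
MAX_SEQ_LEN = 8192      # assumption: "long-seq" calibration length not published
NUM_SAMPLES = 512       # assumption: calibration sample count not published

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Pull a slice of the calibration set and render it through the chat template.
ds = load_dataset("nvidia/OpenCodeInstruct", split=f"train[:{NUM_SAMPLES}]")

def to_text(sample):
    # The "input"/"output" column names are assumptions about the dataset schema.
    messages = [
        {"role": "user", "content": sample["input"]},
        {"role": "assistant", "content": sample["output"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

def tokenize(sample):
    return tokenizer(sample["text"], max_length=MAX_SEQ_LEN, truncation=True,
                     padding=False, add_special_tokens=False)

ds = ds.map(to_text, remove_columns=ds.column_names)
ds = ds.map(tokenize, remove_columns=ds.column_names)

# NVFP4 weights + activations on Linear layers; keep lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(model=model, dataset=ds, recipe=recipe,
        max_seq_length=MAX_SEQ_LEN, num_calibration_samples=NUM_SAMPLES)

SAVE_DIR = "Devstral-Small-2-24B-Instruct-2512-nvfp4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)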
Running the model with vLLM in Docker
Note: I have tried everything I can think of to get this quant to run on vLLM and it just will not. The exact same container runs the full FP8 version of Devstral-Small-2-24B-Instruct-2512 just fine, but it refuses to even attempt to load this quant. There is presumably some way to get it running, but I haven't found it.
sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host \
  vllm/vllm-openai:nightly \
  --model Firworks/Devstral-Small-2-24B-Instruct-2512-nvfp4 \
  --dtype auto \
  --max-model-len 32768 \
  --tool-call-parser mistral \
  --enable-auto-tool-choice
This was tested on an RTX Pro 6000 Blackwell cloud instance.
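For anyone who does manage to get the container to load this quant, the command above exposes vLLM's OpenAI-compatible API on port 8000; a minimal client call would look like the following sketch (the model name matches the --model flag above, and the prompt is just a placeholder).

from openai import OpenAI

# Point the standard OpenAI client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Firworks/Devstral-Small-2-24B-Instruct-2512-nvfp4",
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)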
If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.