'Make knowledge free for everyone'

Original FP8 weights dequantized with: https://github.com/csabakecskemeti/ministral-3_dequantizer_fp8-bf16

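As a rough illustration of what that dequantization step does, here is a minimal PyTorch sketch, assuming per-tensor `float8_e4m3fn` weights with a stored scale. The function name and scale handling are illustrative; the linked tool deals with the full checkpoint layout and tensor naming.

```python
import torch

def dequantize_fp8_to_bf16(weight_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Upcast to fp32 first to avoid fp8 arithmetic, apply the stored
    # per-tensor scale, then cast down to bf16 (the target dtype).
    return (weight_fp8.to(torch.float32) * scale).to(torch.bfloat16)

# Toy example: a random tensor standing in for a checkpoint weight.
w8 = torch.randn(4, 4).to(torch.float8_e4m3fn)
print(dequantize_fp8_to_bf16(w8, torch.tensor(0.5)).dtype)  # torch.bfloat16
```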
Quantized version of: mistralai/Ministral-3-14B-Instruct-2512

Support: Buy Me a Coffee at ko-fi.com

Format: GGUF
Model size: 14B params
Architecture: mistral3
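To check this metadata on a downloaded file locally, the `gguf` Python package can read the header without loading the weights. A small sketch, with a hypothetical filename:

```python
from gguf import GGUFReader  # pip install gguf

# Hypothetical filename; substitute the quant you actually downloaded.
reader = GGUFReader("Ministral-3-14B-Instruct-2512.Q4_K_M.gguf")

# List header fields such as general.architecture.
for key in reader.fields:
    print(key)
print(f"{len(reader.tensors)} tensors in file")
```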

Available quantizations:

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit

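A minimal way to run one of these quants is via `llama-cpp-python`, which can fetch a file straight from this repo. The filename glob below is an assumption; adjust it to the quant you want from the list above:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="DevQuasar/mistralai.Ministral-3-14B-Instruct-2512-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern; match the actual file name in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quants trade accuracy for memory; 4-bit variants are a common middle ground, while 8-bit and 16-bit stay closest to the dequantized BF16 weights.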
