MXFP4 GGUF quantization of Qwen3-32B, produced by quantizing Unsloth's BF16 GGUF down to MXFP4. MXFP4 is a 4-bit microscaling floating-point format in which blocks of 32 FP4 (E2M1) values share a single 8-bit power-of-two scale.
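
If you want to reproduce the conversion, a minimal sketch is below. It assumes llama.cpp is built locally with `llama-quantize` on your `PATH`; the source repo's file layout and the MXFP4 type token are assumptions, so check the Unsloth repo's file list and run `llama-quantize` with no arguments to see which quantization types your build supports.

```python
# Minimal sketch: quantize Unsloth's BF16 GGUF of Qwen3-32B to MXFP4.
import subprocess
from pathlib import Path

from huggingface_hub import snapshot_download

# Fetch the BF16 source GGUF (the "BF16" subfolder layout is an
# assumption; check the source repo's file list).
local_dir = snapshot_download(
    repo_id="unsloth/Qwen3-32B-GGUF",
    allow_patterns=["BF16/*"],
)

# If the BF16 file is sharded, llama.cpp picks up the remaining shards
# automatically when given the first one.
src = sorted(Path(local_dir).glob("BF16/*.gguf"))[0]

# llama-quantize usage: llama-quantize <input.gguf> <output.gguf> <type>
# "MXFP4_MOE" is the MXFP4 type token in recent llama.cpp builds; run
# `llama-quantize` with no arguments to list what your build supports.
subprocess.run(
    ["llama-quantize", str(src), "Qwen3-32B-MXFP4.gguf", "MXFP4_MOE"],
    check=True,
)
```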

Format: GGUF
Model size: 33B params
Architecture: qwen3

Model tree for lovedheart/Qwen3-32B-GGUF-MXFP4
Base model: Qwen/Qwen3-32B (this repo is one of its 131 quantized derivatives)
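
For completeness, a minimal sketch of running one of these files with llama-cpp-python. The filename pattern is an assumption and must match exactly one file in the repo, so adjust it to the actual file list:

```python
# Minimal sketch: run the MXFP4 GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="lovedheart/Qwen3-32B-GGUF-MXFP4",
    filename="*MXFP4*.gguf",  # glob; must match exactly one repo file
    n_ctx=8192,               # context window; raise if you have the RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain MXFP4 quantization."}]
)
print(out["choices"][0]["message"]["content"])
```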