This is an MXFP4_MOE quantization of the Granite-4.0-H-Small model.

Quantized from the BF16 GGUFs at: https://huggingface.co/unsloth/granite-4.0-h-small-GGUF

Original model: https://huggingface.co/ibm-granite/granite-4.0-h-small

Model size: 32B params
Architecture: granitehybrid
Quantization: 4-bit MXFP4_MOE (GGUF)
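
To run this quantization locally, here is a minimal sketch using llama-cpp-python (one option among GGUF-capable runtimes; it requires a llama.cpp build recent enough to support the granitehybrid architecture). The `filename` glob below is an assumption; check the repository's file list for the exact GGUF name.

```python
# Minimal sketch: download and run the quantized GGUF with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="noctrex/Granite-4.0-H-Small-MXFP4_MOE-GGUF",
    filename="*MXFP4_MOE*.gguf",  # hypothetical glob; match the actual file name in the repo
    n_ctx=4096,                   # context window; adjust to taste
    n_gpu_layers=-1,              # offload all layers to GPU if available
)

out = llm("Explain MXFP4 quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```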