This is allenai/Olmo-3-7B-Instruct quantized with LLM Compressor to AWQ W4A16 (asymmetric). The model is compatible with vLLM (tested with v0.11.2 on an RTX 5090).
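
For reference, here is a minimal sketch of running the model with vLLM's offline Python API. vLLM picks up the quantization settings from the checkpoint's config, so no extra quantization flags should be needed; the prompt and sampling parameters below are illustrative assumptions, not recommended values.

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint; vLLM detects the W4A16 scheme from the model config.
llm = LLM(model="kaitchup/Olmo-3-7B-Instruct-awq-w4a16-asym")

# Illustrative sampling settings (assumptions, tune for your use case).
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# Use the chat interface so the model's chat template is applied.
outputs = llm.chat(
    [{"role": "user", "content": "Explain AWQ quantization in one paragraph."}],
    sampling_params,
)
print(outputs[0].outputs[0].text)
```

The same checkpoint can also be served with the OpenAI-compatible server via `vllm serve kaitchup/Olmo-3-7B-Instruct-awq-w4a16-asym`.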

How to Support My Work

Subscribe to The Kaitchup. It helps me a lot to keep quantizing and evaluating models for free. Or you can "buy me a Ko-fi".
