---
base_model:
- huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
---

Needed to run a 4-bit quantization of this model on vLLM, but only GGUF files were available.

Loading time went from ~9 minutes to ~2.5 minutes, and throughput went from 25 tokens/second to 45 tokens/second.
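For reference, a 4-bit quant in a vLLM-native format can be served directly with vLLM's OpenAI-compatible server. The repo name and `--quantization` value below are placeholders, since the card doesn't state which 4-bit method was used:

```shell
# Serve the 4-bit quant with vLLM (OpenAI-compatible API on port 8000).
# The model ID and quantization method are assumptions; substitute the
# actual repo name and method (e.g. awq, gptq) for this quant.
vllm serve <repo>/DeepSeek-R1-Distill-Llama-70B-abliterated-4bit \
  --quantization awq \
  --max-model-len 8192
```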