---
base_model:
  - huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
---

I needed to run a 4-bit quantization of this model on vLLM, but only GGUF quantizations were available, so I made this one.
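A 4-bit quantized checkpoint can be served directly with vLLM's OpenAI-compatible server. This is a minimal sketch: the repo id is a placeholder for this upload's actual Hub id, and `--quantization awq` is an assumption about the quantization method (vLLM also auto-detects the method from most quant configs):

```shell
# Serve the 4-bit model with vLLM (repo id is a placeholder; substitute
# this model's actual Hub id). A 70B model typically needs multiple GPUs,
# hence the tensor-parallel setting.
vllm serve your-org/DeepSeek-R1-Distill-Llama-70B-abliterated-4bit \
    --quantization awq \
    --tensor-parallel-size 4
```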

Compared with the GGUF, loading time dropped from ~9 minutes to ~2.5 minutes, and throughput improved from 25 to 45 tokens/second.