Missing Qwen3-8B-Q4_0.gguf

#4 · opened by StellanWay

Q4_0 is a legacy format, but the other unsloth repositories (Qwen3-4B-GGUF, Qwen3-14B-GGUF) include it, and it's the only quantization type supported by llama.cpp's OpenCL backend. With OpenCL, inference is much faster on my phone (a OnePlus 13 with a Qualcomm Snapdragon 8 Elite).
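As a stopgap while the file is missing, a Q4_0 quant can be produced locally with llama.cpp's `llama-quantize` tool. This is a sketch, assuming llama.cpp is already built and a full-precision GGUF from this repo (the BF16 filename below is illustrative) has been downloaded:

```shell
# Sketch: requantize a full-precision GGUF to Q4_0 with llama.cpp's
# llama-quantize tool. Filenames are assumptions based on the repo's
# naming pattern; adjust paths to wherever the files actually live.
if command -v llama-quantize >/dev/null 2>&1; then
  llama-quantize Qwen3-8B-BF16.gguf Qwen3-8B-Q4_0.gguf Q4_0
else
  echo "llama-quantize not found; build llama.cpp first"
fi
```

This avoids waiting for the repo to add the file, at the cost of downloading the larger full-precision GGUF first.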
