---
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
license: apache-2.0
---

|Quant|Size|Description|
|---|---|---|
|[Q2_K](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K.gguf)|5.36 GB|Not recommended for most people. Very low quality.|
|[Q2_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K_L.gguf)|6.07 GB|Not recommended for most people. Uses Q8_0 for the output and embedding tensors and Q2_K for everything else. Very low quality.|
|[Q2_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K_XL.gguf)|7.42 GB|Not recommended for most people. Uses F16 for the output and embedding tensors and Q2_K for everything else. Very low quality.|
|[Q3_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_S.gguf)|6.20 GB|Not recommended for most people. Prefer a larger Q3_K variant. Low quality.|
|[Q3_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_M.gguf)|6.82 GB|Not recommended for most people. Low quality.|
|[Q3_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_L.gguf)|7.36 GB|Not recommended for most people. Low quality.|
|[Q3_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_XL.gguf)|7.99 GB|Not recommended for most people. Uses Q8_0 for the output and embedding tensors and Q3_K_L for everything else. Low quality.|
|[Q3_K_XXL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_XXL.gguf)|9.35 GB|Not recommended for most people. Uses F16 for the output and embedding tensors and Q3_K_L for everything else. Low quality.|
|[Q4_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_S.gguf)|7.98 GB|Recommended. Slightly lower quality than Q4_K_M.|
|[Q4_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_M.gguf)|8.38 GB|Recommended. Decent quality for most use cases.|
|[Q4_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_L.gguf)|8.92 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q4_K_M for everything else. Decent quality.|
|[Q4_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_XL.gguf)|10.28 GB|Recommended. Uses F16 for the output and embedding tensors and Q4_K_M for everything else. Decent quality.|
|[Q5_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_S.gguf)|9.56 GB|Recommended. High quality.|
|[Q5_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_M.gguf)|9.79 GB|Recommended. High quality.|
|[Q5_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_L.gguf)|10.24 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q5_K_M for everything else. High quality.|
|[Q5_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_XL.gguf)|11.60 GB|Recommended. Uses F16 for the output and embedding tensors and Q5_K_M for everything else. High quality.|
|[Q6_K](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K.gguf)|11.29 GB|Recommended. Very high quality.|
|[Q6_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K_L.gguf)|11.64 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q6_K for everything else. Very high quality.|
|[Q6_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K_XL.gguf)|13.00 GB|Recommended. Uses F16 for the output and embedding tensors and Q6_K for everything else. Very high quality.|
|[Q8_0](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q8_0.gguf)|14.62 GB|Recommended. Near-F16 quality.|
|[Q8_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q8_K_XL.gguf)|15.98 GB|Recommended. Uses F16 for the output and embedding tensors and Q8_0 for everything else. Near-F16 quality.|
|[F16](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_F16.gguf)|27.51 GB|Not recommended. Overkill; prefer Q8_0.|
|[ORIGINAL (BF16)](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B.gguf)|27.51 GB|Not recommended. Overkill; prefer Q8_0.|

---

Quantized using [TAO71-AI AutoQuantizer](https://github.com/TAO71-AI/AutoQuantizer). You can check out the original model card [here](https://huggingface.co/Qwen/Qwen3-14B).
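
---

If you just want to try one of these files, the sketch below downloads the recommended Q4_K_M quant with `huggingface_hub` and runs a single chat turn through `llama-cpp-python`. Both packages are third-party assumptions rather than part of this repository, and `n_ctx`/`n_gpu_layers` are placeholder values to tune for your hardware.

```python
# Minimal sketch, assuming `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M file (~8.38 GB) from this repo into the local HF cache.
model_path = hf_hub_download(
    repo_id="Alcoft/Qwen_Qwen3-14B-GGUF",
    filename="Qwen_Qwen3-14B_Q4_K_M.gguf",
)

# Load the GGUF. n_gpu_layers=-1 offloads all layers to the GPU;
# set it to 0 for CPU-only inference.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

The same `.gguf` file should also load directly in llama.cpp itself or in any front end that consumes GGUF; pick a larger or smaller quant from the table above to trade quality against memory.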