---
base_model:
- Qwen/Qwen3-4B-Instruct-2507
pipeline_tag: text-generation
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
---

| Quant | Size | Description |
|---|---|---|
| [Q2_K_XXS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q2_K_XXS.gguf) | 1.38 GB | Not recommended for most people. Extremely low quality. |
| [Q2_K_XS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q2_K_XS.gguf) | 1.38 GB | Not recommended for most people. Very low quality. |
| [Q2_K](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q2_K.gguf) | 1.38 GB | Not recommended for most people. Very low quality. |
| [Q2_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q2_K_L.gguf) | 1.64 GB | Not recommended for most people. Uses Q8_0 for output and embedding, and Q2_K for everything else. Very low quality. |
| [Q2_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q2_K_XL.gguf) | 1.98 GB | Not recommended for most people. Uses F16 for output and embedding, and Q2_K for everything else. Very low quality. |
| [Q3_K_XXS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_XXS.gguf) | 1.62 GB | Not recommended for most people. Prefer any bigger Q3_K quantization. Very low quality. |
| [Q3_K_XS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_XS.gguf) | 1.62 GB | Not recommended for most people. Prefer any bigger Q3_K quantization. Very low quality. |
| [Q3_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_S.gguf) | 1.62 GB | Not recommended for most people. Prefer any bigger Q3_K quantization. Low quality. |
| [Q3_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_M.gguf) | 1.79 GB | Not recommended for most people. Low quality. |
| [Q3_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_L.gguf) | 1.94 GB | Not recommended for most people. Low quality. |
| [Q3_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_XL.gguf) | 2.17 GB | Not recommended for most people. Uses Q8_0 for output and embedding, and Q3_K_L for everything else. Low quality. |
| [Q3_K_XXL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q3_K_XXL.gguf) | 2.51 GB | Not recommended for most people. Uses F16 for output and embedding, and Q3_K_L for everything else. Low quality. |
| [Q4_K_XS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q4_K_XS.gguf) | 2.13 GB | Lower quality than Q4_K_S. |
| [Q4_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q4_K_S.gguf) | 2.13 GB | Recommended. Slightly low quality. |
| [Q4_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q4_K_M.gguf) | 2.23 GB | Recommended. Decent quality for most use cases. |
| [Q4_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q4_K_L.gguf) | 2.41 GB | Recommended. Uses Q8_0 for output and embedding, and Q4_K_M for everything else. Decent quality. |
| [Q4_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q4_K_XL.gguf) | 2.75 GB | Recommended. Uses F16 for output and embedding, and Q4_K_M for everything else. Decent quality. |
| [Q5_K_XXS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_XXS.gguf) | 2.58 GB | Lower quality than Q5_K_S. |
| [Q5_K_XS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_XS.gguf) | 2.58 GB | Lower quality than Q5_K_S. |
| [Q5_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_S.gguf) | 2.58 GB | Recommended. High quality. |
| [Q5_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_M.gguf) | 2.64 GB | Recommended. High quality. |
| [Q5_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_L.gguf) | 2.78 GB | Recommended. Uses Q8_0 for output and embedding, and Q5_K_M for everything else. High quality. |
| [Q5_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q5_K_XL.gguf) | 3.12 GB | Recommended. Uses F16 for output and embedding, and Q5_K_M for everything else. High quality. |
| [Q6_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q6_K_S.gguf) | 3.08 GB | Lower quality than Q6_K. |
| [Q6_K](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q6_K.gguf) | 3.08 GB | Recommended. Very high quality. |
| [Q6_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q6_K_L.gguf) | 3.17 GB | Recommended. Uses Q8_0 for output and embedding, and Q6_K for everything else. Very high quality. |
| [Q6_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q6_K_XL.gguf) | 3.51 GB | Recommended. Uses F16 for output and embedding, and Q6_K for everything else. Very high quality. |
| [Q8_K_XS](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q8_K_XS.gguf) | 3.99 GB | Lower quality than Q8_0. |
| [Q8_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q8_K_S.gguf) | 3.99 GB | Lower quality than Q8_0. |
| [Q8_0](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q8_0.gguf) | 3.99 GB | Recommended. Quality almost like F16. |
| [Q8_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_Q8_K_XL.gguf) | 4.33 GB | Recommended. Uses F16 for output and embedding, and Q8_0 for everything else. Quality almost like F16. |
| [F16](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507_F16.gguf) | 7.5 GB | Not recommended. Overkill. Prefer Q8_0. |
| [ORIGINAL (BF16)](https://huggingface.co/Alcoft/Qwen_Qwen3-4B-Instruct-2507-GGUF/resolve/main/Qwen_Qwen3-4B-Instruct-2507.gguf) | 7.5 GB | Not recommended. Overkill. Prefer Q8_0. |

---

Quantized using [TAO71-AI AutoQuantizer](https://github.com/TAO71-AI/AutoQuantizer).

You can check out the original model card [here](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507).