Alcoft committed
Commit 7c8a67d · verified · 1 parent: f69ae03

Upload README.md with huggingface_hub
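
The commit message points at the `huggingface_hub` client; below is a minimal sketch of the kind of call that produces such a commit. The exact invocation is not recorded on this page, so treat the details as assumptions.

```python
# Hypothetical reconstruction of the upload; the actual command/options
# behind this commit are not shown on the page above.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="Alcoft/Qwen_Qwen3-14B-GGUF",
    commit_message="Upload README.md with huggingface_hub",
)
```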

Files changed (1): README.md (+37 -0)
README.md ADDED

---
base_model:
- Qwen/Qwen3-14B
pipeline_tag: text-generation
license: apache-2.0
---

|Quant|Size|Description|
|---|---|---|
|[Q2_K](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K.gguf)|5.36 GB|Not recommended for most people. Very low quality.|
|[Q2_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K_L.gguf)|6.07 GB|Not recommended for most people. Uses Q8_0 for the output and embedding tensors and Q2_K for everything else. Very low quality.|
|[Q2_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q2_K_XL.gguf)|7.42 GB|Not recommended for most people. Uses F16 for the output and embedding tensors and Q2_K for everything else. Very low quality.|
|[Q3_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_S.gguf)|6.20 GB|Not recommended for most people. Prefer any larger Q3_K quantization. Low quality.|
|[Q3_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_M.gguf)|6.82 GB|Not recommended for most people. Low quality.|
|[Q3_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_L.gguf)|7.36 GB|Not recommended for most people. Low quality.|
|[Q3_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_XL.gguf)|7.99 GB|Not recommended for most people. Uses Q8_0 for the output and embedding tensors and Q3_K_L for everything else. Low quality.|
|[Q3_K_XXL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q3_K_XXL.gguf)|9.35 GB|Not recommended for most people. Uses F16 for the output and embedding tensors and Q3_K_L for everything else. Low quality.|
|[Q4_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_S.gguf)|7.98 GB|Recommended. Slightly low quality.|
|[Q4_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_M.gguf)|8.38 GB|Recommended. Decent quality for most use cases.|
|[Q4_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_L.gguf)|8.92 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q4_K_M for everything else. Decent quality.|
|[Q4_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q4_K_XL.gguf)|10.28 GB|Recommended. Uses F16 for the output and embedding tensors and Q4_K_M for everything else. Decent quality.|
|[Q5_K_S](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_S.gguf)|9.56 GB|Recommended. High quality.|
|[Q5_K_M](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_M.gguf)|9.79 GB|Recommended. High quality.|
|[Q5_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_L.gguf)|10.24 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q5_K_M for everything else. High quality.|
|[Q5_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q5_K_XL.gguf)|11.60 GB|Recommended. Uses F16 for the output and embedding tensors and Q5_K_M for everything else. High quality.|
|[Q6_K](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K.gguf)|11.29 GB|Recommended. Very high quality.|
|[Q6_K_L](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K_L.gguf)|11.64 GB|Recommended. Uses Q8_0 for the output and embedding tensors and Q6_K for everything else. Very high quality.|
|[Q6_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q6_K_XL.gguf)|13.00 GB|Recommended. Uses F16 for the output and embedding tensors and Q6_K for everything else. Very high quality.|
|[Q8_0](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q8_0.gguf)|14.62 GB|Recommended. Quality almost identical to F16.|
|[Q8_K_XL](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_Q8_K_XL.gguf)|15.98 GB|Recommended. Uses F16 for the output and embedding tensors and Q8_0 for everything else. Quality almost identical to F16.|
|[F16](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B_F16.gguf)|27.51 GB|Not recommended. Overkill. Prefer Q8_0.|
|[ORIGINAL (BF16)](https://huggingface.co/Alcoft/Qwen_Qwen3-14B-GGUF/resolve/main/Qwen_Qwen3-14B.gguf)|27.51 GB|Not recommended. Overkill. Prefer Q8_0.|
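
To run one of these files locally, you can download it with `huggingface_hub` and load it with any GGUF-capable runtime. Below is a minimal sketch using `llama-cpp-python`; the Q4_K_M file name comes from the table above, while the context size and sampling settings are illustrative assumptions, not recommendations from this repo.

```python
# Minimal sketch: fetch the Q4_K_M quant and run one chat turn.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# File name taken from the quant table above.
model_path = hf_hub_download(
    repo_id="Alcoft/Qwen_Qwen3-14B-GGUF",
    filename="Qwen_Qwen3-14B_Q4_K_M.gguf",
)

# n_ctx and n_gpu_layers are illustrative; tune them to your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me a one-line summary of GGUF."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```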

---

Quantized using [TAO71-AI AutoQuantizer](https://github.com/TAO71-AI/AutoQuantizer).
You can check out the original model card [here](https://huggingface.co/Qwen/Qwen3-14B).