Update README.md
* *Llama.cpp*

# (GGUF) Thanks Triangle104

* static - [Q8_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q8_0-GGUF) - [Q6_K](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q6_K-GGUF) - [Q5_K_M](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_K_M-GGUF) - [Q5_K_S](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_K_S-GGUF) - [Q5_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q5_0-GGUF) - [Q4_K_M](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_K_M-GGUF) - [Q4_K_S](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_K_S-GGUF) - [Q4_0](https://huggingface.co/Triangle104/Qwen2.5-Lumen-14B-Q4_0-GGUF)

*Other quant repositories also exist on Hugging Face and can be found by searching.*
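As a minimal sketch of using one of the static quants above with llama.cpp: the repo ID is taken from the list, but the GGUF file name inside the repo is an assumption — check the repo's Files tab for the exact name before running.

```shell
# Download one of the GGUF quant repos listed above (Q4_K_M shown here).
huggingface-cli download Triangle104/Qwen2.5-Lumen-14B-Q4_K_M-GGUF \
  --local-dir ./models

# Run it with llama.cpp's CLI (binary built from the llama.cpp repo).
# The .gguf file name below is assumed; adjust it to the file actually
# present in ./models after the download.
./llama-cli -m ./models/qwen2.5-lumen-14b-q4_k_m.gguf \
  -p "Hello, introduce yourself." -n 128
```

Lower quants (Q4_0, Q4_K_S) trade some quality for less RAM/VRAM; Q8_0 is closest to the original weights but largest.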