mradermacher committed
Commit c3b8ecb · verified · 1 Parent(s): 1ff220e

auto-patch README.md

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -38,9 +38,13 @@ more details, including on how to concatenate multi-part files.
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
 | [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.imatrix.gguf) | imatrix | 0.4 | imatrix file (for creating your own qwuants) |
+| [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-IQ2_M.gguf) | i1-IQ2_M | 33.8 | |
 | [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q2_K.gguf) | i1-Q2_K | 37.8 | IQ3_XXS probably better |
+| [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 39.9 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-IQ3_M.gguf) | i1-IQ3_M | 45.3 | |
+| [GGUF](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 49.4 | IQ3_S probably better |
 | [PART 1](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q4_K_S.gguf.part2of2) | i1-Q4_K_S | 58.7 | optimal size/speed/quality |
+| [PART 1](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Ling-flash-2.0-i1-GGUF/resolve/main/Ling-flash-2.0.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 62.5 | fast, recommended |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
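
The hunk header above points at the README's note on concatenating multi-part files: the i1-Q4_K_S and i1-Q4_K_M quants are split into `.part1of2`/`.part2of2` pieces that must be joined byte-for-byte before use. A minimal sketch, assuming both parts have already been downloaded into the current directory (the filenames are taken from the table above):

```shell
# Join the split GGUF parts into a single usable file.
# Part order matters: part1of2 must come before part2of2.
cat Ling-flash-2.0.i1-Q4_K_S.gguf.part1of2 \
    Ling-flash-2.0.i1-Q4_K_S.gguf.part2of2 \
    > Ling-flash-2.0.i1-Q4_K_S.gguf

# The parts are no longer needed once the joined file exists.
rm Ling-flash-2.0.i1-Q4_K_S.gguf.part1of2 \
   Ling-flash-2.0.i1-Q4_K_S.gguf.part2of2
```

Plain `cat` suffices because the parts are raw byte ranges of one file, not an archive format.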