mradermacher committed (verified)
Commit 66afbab · Parent(s): 426a8b5

auto-patch README.md

Files changed (1): README.md (+6 −0)
README.md CHANGED
@@ -47,6 +47,12 @@ more details, including on how to concatenate multi-part files.
  | Link | Type | Size/GB | Notes |
  |:-----|:-----|--------:|:------|
  | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-Q2_K.gguf) | i1-Q2_K | 12.9 | IQ3_XXS probably better |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.4 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.8 | IQ3_S probably better |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-Q4_K_S.gguf) | i1-Q4_K_S | 19.7 | optimal size/speed/quality |
+ | [GGUF](https://huggingface.co/mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF/resolve/main/CapyberaHermesYi-34B-ChatML-200K.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.8 | fast, recommended |

  Here is a handy graph by ikawrakow comparing some lower-quality quant
  types (lower is better):
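
If you prefer to fetch one of the quants listed above programmatically rather than through the links, a minimal sketch using `huggingface_hub` could look like the following (assumes `huggingface_hub` is installed; the repo and file names are taken from the table above):

```python
# Minimal sketch: download one of the quants listed in the table with huggingface_hub.
# Assumption: `pip install huggingface_hub` has been run; file names are as listed above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="mradermacher/CapyberaHermesYi-34B-ChatML-200K-i1-GGUF",
    filename="CapyberaHermesYi-34B-ChatML-200K.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(model_path)  # local path; load with any GGUF-capable runtime such as llama.cpp
```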