Update README.md
README.md CHANGED

@@ -49,7 +49,7 @@ Using [llama.cpp quantize cae9fb4](https://github.com/ggerganov/llama.cpp/commit
 | [flux1-dev-Q4_1.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/flux1-dev-Q4_1.gguf) | Q4_1 | 7.53GB | TBC | - |
 | [flux1-dev-Q5_0.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/flux1-dev-Q5_0.gguf) | Q5_0 | 8.27GB | TBC | - |
 | [flux1-dev-Q5_K_S.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/flux1-dev-Q5_K_S.gguf) | Q5_K_S | 8.27GB | TBC | [Example](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/images/output_test_comb_Q5_K_S_512_25_woman.png) |
-| [flux1-dev-Q6_K.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/flux1-dev-Q6_K.gguf) | Q6_K | 9.84GB | TBC | - |
+| [flux1-dev-Q6_K.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/flux1-dev-Q6_K.gguf) | Q6_K | 9.84GB | TBC | [Example](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-f16-combined/images/output_test_comb_Q6_K_512_25_woman.png) |
 
 ## Observations