Eviation committed
Commit f921049 · verified · 1 Parent(s): 3a5d5b4

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -24,7 +24,7 @@ Expect broken or faulty items for the time being. Use at your own discretion.
  - other: ?
 
  # Caesar
- Combined imatrix multiple images 512x512 and 768x768, 25 and 50 steps [city96/flux1-dev-Q8_0](https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q8_0.gguf) euler
+ Combined imatrix multiple images 512x512 and 768x768, 25, 30 and 50 steps [city96/flux1-dev-Q8_0](https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q8_0.gguf) euler
 
  data: `load_imatrix: loaded 314 importance matrix entries from imatrix_caesar.dat computed on 475 chunks`
 
@@ -33,8 +33,8 @@ Using [llama.cpp quantize cae9fb4](https://github.com/ggerganov/llama.cpp/commit
  Dynamic quantization:
  - img_in, guidance_in.in_layer, final_layer.linear: f32/bf16/f16
  - guidance_in, final_layer: bf16/f16
- - img_attn.qkv, linear1: some two bits up
- - txt_mod.lin, txt_mlp, txt_attn.proj: some one bit down
+ - img_attn.qkv, linear1: some layers two bits up
+ - txt_mod.lin, txt_mlp, txt_attn.proj: some layers one bit down
 
  ## Experimental from f16
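The "Dynamic quantization" list in this diff amounts to a per-tensor quant-type override keyed on tensor names. Below is a minimal Python sketch of that idea; the base type, the quant "ladder", and the substring matching are illustrative assumptions (the README says only *some* layers are moved, without listing which), not the author's actual quantize patch.

```python
# Minimal sketch of the per-tensor override idea from the diff above.
# Assumptions, not from the commit: the base quant type, the Q2_K..Q8_0
# "ladder", and matching every tensor whose name contains a listed key.
# The README says only *some* layers move; the exact selection is not given.

LADDER = ["Q2_K", "Q3_K", "Q4_K", "Q5_K", "Q6_K", "Q8_0"]  # roughly one bit per step
BASE = "Q4_K"  # assumed base type for illustration

UP_TWO = ("img_attn.qkv", "linear1")                    # "some layers two bits up"
DOWN_ONE = ("txt_mod.lin", "txt_mlp", "txt_attn.proj")  # "some layers one bit down"

def pick_quant_type(tensor_name: str, base: str = BASE) -> str:
    """Choose a quant type for one tensor under the override rules."""
    i = LADDER.index(base)
    if any(key in tensor_name for key in UP_TWO):
        return LADDER[min(i + 2, len(LADDER) - 1)]   # bump up, capped at Q8_0
    if any(key in tensor_name for key in DOWN_ONE):
        return LADDER[max(i - 1, 0)]                 # drop down, floored at Q2_K
    return base

# Example with FLUX-style tensor names (hypothetical selections):
print(pick_quant_type("double_blocks.0.img_attn.qkv.weight"))  # Q6_K
print(pick_quant_type("double_blocks.0.txt_mlp.0.weight"))     # Q3_K
```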