Please release IQ2_XXS variant too
Could you please release an IQ2_XXS quant as well? It was introduced in January 2024 and can run on low-end computers too.
Which models would this be appropriate for? 7B? Mixtral 4x7B? 13B? I can add it to the list, but would like it to be useful for folks.
I meant an IQ2_XXS release for this model: LoneStriker/Umbra-v2.1-MoE-4x10.7-GGUF.
There's more info about IQ2_XXS here: https://github.com/ggerganov/llama.cpp/blob/master/examples/quantize/quantize.cpp
It's going to take a change to the quantization pipeline; it's not as simple as just adding a new quant size. I'll add the extra steps to my scripts and generate an IQ2_XXS model as a first test once the changes are in.
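For reference, the extra steps boil down to generating an importance matrix first and then feeding it to the quantize tool. Here's a rough sketch of what that looks like; the file paths, calibration text, and binary locations are placeholders, and exact flags may differ depending on your llama.cpp build:

```python
# Hypothetical sketch of the extra IQ2_XXS pipeline steps.
# Assumes a local llama.cpp build with the imatrix and quantize tools;
# all paths and the calibration file are placeholders.
import subprocess

F16_GGUF = "Umbra-v2.1-MoE-4x10.7-f16.gguf"   # assumed full-precision GGUF
CALIB_TXT = "calibration.txt"                  # assumed calibration text
IMATRIX = "imatrix.dat"
OUT_GGUF = "Umbra-v2.1-MoE-4x10.7-IQ2_XXS.gguf"

# 1) Generate the importance matrix (the slow part).
subprocess.run(
    ["./imatrix", "-m", F16_GGUF, "-f", CALIB_TXT, "-o", IMATRIX],
    check=True,
)

# 2) Quantize to IQ2_XXS using the importance matrix.
subprocess.run(
    ["./quantize", "--imatrix", IMATRIX, F16_GGUF, OUT_GGUF, "IQ2_XXS"],
    check=True,
)
```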
Thank you so much for that. I appreciate it.
IQ2_XXS quant is uploading now. I also now know why I've never seen that size quant around: it's crazy slow to generate the imatrix and then use it to produce the quantized model. It takes longer to quantize an XXS model than it does to generate all the other quants combined (Q3 through Q8, with 1-3 variants of each).
I'm not sure I can guarantee that I'll be generating these quants for every model. Feel free to ping me on specific models, but the resources needed to generate them are a bit excessive (especially for models that people may not end up using).
Thank you so much. I didn't know it was that slow to generate. I'm really grateful to you.