Can you please create weighted/imatrix quant versions?
#1 opened by t1u1
Similar to https://huggingface.co/mradermacher/Lamarck-14B-v0.6-i1-GGUF
They seem to run faster. However, I find phi4-GGUF to give better results overall, so it would be nice to have imatrix versions of it if possible.
Thanks
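For reference, imatrix quants are typically produced with llama.cpp in two steps: first compute an importance matrix over a calibration corpus, then quantize using it. A sketch (file names, calibration text, and the IQ4_XS quant type here are illustrative placeholders, not the exact settings mradermacher uses):

```shell
# 1) Compute an importance matrix from a representative calibration text.
#    calibration.txt is any plain-text corpus; the choice of corpus
#    influences which weights the quantizer preserves.
./llama-imatrix -m phi-4-f16.gguf -f calibration.txt -o imatrix.dat

# 2) Quantize, passing the importance matrix so that tensors deemed
#    more important are quantized less aggressively.
./llama-quantize --imatrix imatrix.dat phi-4-f16.gguf phi-4-IQ4_XS.gguf IQ4_XS
```

Both tools ship with llama.cpp; the imatrix step needs the full-precision (or lightly quantized) GGUF as input.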
I know it's an "old" model but I'm also interested!
It seems imatrix quants are not necessarily a good idea after all: https://old.reddit.com/r/LocalLLaMA/comments/1p6qwok/are_imatrix_quants_hurting_your_model_my_opinion/