Can you please create weighted/imatrix quant versions?

#1
by t1u1 - opened

Similar to https://huggingface.co/mradermacher/Lamarck-14B-v0.6-i1-GGUF

They seem to run faster. However, I find phi4-GGUF to give better results. It would be nice to have imatrix versions if possible.
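For reference, imatrix quants are typically produced with llama.cpp's own tools: first computing an importance matrix from a calibration text, then passing it to the quantizer. A minimal sketch (filenames and the calibration file are illustrative, and this assumes llama.cpp is already built):

```shell
# 1. Compute an importance matrix for the f16 GGUF from a calibration corpus.
llama-imatrix -m phi-4-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize using that importance matrix (e.g. to IQ4_XS).
llama-quantize --imatrix imatrix.dat phi-4-f16.gguf phi-4-IQ4_XS.gguf IQ4_XS
```

The imatrix mainly helps the low-bit "IQ" and small "Q" quant types keep quality; the choice of calibration text affects the result.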

Thanks

I know it's an "old" model but I'm also interested!
