---
pipeline_tag: image-to-text
base_model:
- mlfoundations-cua-dev/Gelato-30B-A3B
---

These are quantizations of the model Gelato-30B-A3B.

The imatrix from [mradermacher](https://huggingface.co/mradermacher) has been used.

Since most of the quants are already available from the great [mradermacher](https://huggingface.co/mradermacher) team, I include here only the quants that are missing there.

**Usage Notes:**
- Download the latest [llama.cpp](https://github.com/ggml-org/llama.cpp) to use these quantizations.
- Use the best quality quant your hardware can run.
- For the `mmproj` file, the F32 version is recommended for best results (F32 > BF16 > F16). A sketch of an invocation follows below.
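
As a rough sketch, a multimodal run with llama.cpp's `llama-mtmd-cli` could look like the following. The GGUF filenames here are placeholders; substitute the actual files you downloaded from this repo.

```bash
# Hypothetical filenames -- replace with the actual GGUF files from this repo.
# -m       : the quantized main model
# --mmproj : the multimodal projector (F32 recommended, as noted above)
./llama-mtmd-cli \
  -m Gelato-30B-A3B-Q4_K_M.gguf \
  --mmproj mmproj-Gelato-30B-A3B-F32.gguf \
  --image screenshot.png \
  -p "Describe this image."
```

The same model/mmproj pair can also be served with `llama-server` (passing `-m` and `--mmproj` the same way) if you prefer an HTTP endpoint over the CLI.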