---
base_model: mlfoundations-cua-dev/Gelato-30B-A3B
datasets:
  - >-
    mlfoundations-cua-dev/easyr1-103k-4MP-not-all-correct-stage-one-temp-1_1-RL-remove-pixmo-uground-seeclick
language:
  - en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---

About

weighted/imatrix quants of https://huggingface.co/mlfoundations-cua-dev/Gelato-30B-A3B

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/Gelato-30B-A3B-GGUF

This is a vision model - mmproj files (if any) will be in the static repository.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
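
As a rough sketch of the workflow (filenames are illustrative, not the exact names used in this repository; it assumes the chosen quant has already been downloaded and that llama-cpp-python is installed), multi-part quants are plain byte-split files that just need to be concatenated back into a single .gguf before loading:

```python
# Minimal sketch, not an official loader for this model.
# Filenames are assumptions; use the actual part names from your download.
import glob
import shutil

from llama_cpp import Llama  # pip install llama-cpp-python

# Multi-part quants are plain split files: concatenate the parts in order
# into one .gguf before loading.
parts = sorted(glob.glob("Gelato-30B-A3B.i1-Q2_K.gguf.part*"))  # hypothetical part naming
if parts:
    with open("Gelato-30B-A3B.i1-Q2_K.gguf", "wb") as out:
        for part in parts:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Load the single-file quant with llama-cpp-python.
llm = Llama(model_path="Gelato-30B-A3B.i1-Q2_K.gguf", n_ctx=4096)
print(llm("Describe this model in one sentence.", max_tokens=64)["choices"][0]["text"])
```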

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|------|------|--------:|-------|
| GGUF | imatrix | 0.2 | imatrix file (for creating your own quants) |
| GGUF | i1-Q2_K | 11.4 | IQ3_XXS probably better |
| GGUF | i1-IQ3_M | 13.6 | |
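
The imatrix file in the first row can be used with llama.cpp's llama-quantize to produce quant types not listed above. A hedged sketch follows; the binary path, the imatrix filename, and the full-precision source GGUF name are assumptions you will need to adapt to your setup:

```python
# Rough sketch: build your own imatrix quant with llama.cpp's llama-quantize.
# Paths and filenames below are assumptions, not the exact names in this repo.
import subprocess

subprocess.run(
    [
        "./llama-quantize",
        "--imatrix", "Gelato-30B-A3B.imatrix",  # imatrix file from this repository (name may differ)
        "Gelato-30B-A3B.f16.gguf",              # full-precision source GGUF (not provided here)
        "Gelato-30B-A3B.i1-Q4_K_M.gguf",        # output quant
        "Q4_K_M",                               # target quant type
    ],
    check=True,
)
```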

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph: comparison of lower-quality quant types by ikawrakow]

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.