Quantization
Are there any plans on releasing 4bit weights?
We already created a quant internally for our team and usually we're the first ones to share stuff, but with the new license ("You may not provide Derivatives [...] to third parties (including via [...] model hub) without a separate commercial agreement with the Licensor.") unfortunately we have to refrain from doing so for this model.
@putazon How did you go about creating the Quant? Did you do it layer by layer, or was there a simple way of doing it?
@SirCodesAlot we went with symmetric signed int4 with per-row, per-group scales, and did a light percentile clip on weights to tame outliers. We didn't touch vision, and kept the layer norms and lm_head in bf16 too, so pretty simple since we only wanted to test things out. We didn't do it layer by layer since we have no calibration dataset yet.
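For anyone curious what that recipe looks like in practice, here's a minimal numpy sketch of symmetric signed int4 quantization with per-row, per-group scales and a percentile clip. All function names, the group size, and the clip percentile are illustrative assumptions, not the actual values used above:

```python
import numpy as np

def quantize_int4_symmetric(w, group_size=128, clip_pct=99.9):
    """Illustrative sketch (not the exact recipe above): symmetric signed
    int4 with per-row, per-group scales and a percentile clip on |w|
    to tame outliers."""
    rows, cols = w.shape
    assert cols % group_size == 0, "cols must be divisible by group_size"
    g = w.reshape(rows, cols // group_size, group_size)
    # Clip each group at a high percentile of |w| so rare outliers
    # don't blow up the scale for the whole group.
    clip = np.percentile(np.abs(g), clip_pct, axis=-1, keepdims=True)
    g = np.clip(g, -clip, clip)
    # Symmetric scale: map the clipped max magnitude onto [-7, 7].
    scale = np.maximum(clip, 1e-8) / 7.0
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4_symmetric(q, scale):
    """Reconstruct approximate fp32 weights from int4 codes and scales."""
    rows = q.shape[0]
    return (q.astype(np.float32) * scale).reshape(rows, -1)
```

Layer norms, lm_head, and the vision tower would simply be skipped and left in bf16, as described above.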
@putazon FYI we have updated the license -- https://huggingface.co/moondream/moondream3-preview/blob/main/LICENSE.md
You're clear to release the quantized weights if you choose to. Apologies for the unclear terms before.
You can release it now due to the change in the license; it would be very much appreciated if you manage to release the quantized model!