GGUF importance matrix (imatrix) quants for https://huggingface.co/wolfram/miquliz-120b-v2.0
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
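For reference, an imatrix like this can be produced with llama.cpp's `imatrix` tool and then applied at quantization time. A minimal sketch, assuming a recent llama.cpp build (older builds named the binaries `./imatrix` and `./quantize`; the unquantized source GGUF filename below is hypothetical):

```sh
# Compute an importance matrix over ~100K tokens of wiki.train.raw
# (200 chunks of 512 tokens each = 102,400 tokens, as described above).
./llama-imatrix -m miquliz-120b-v2.0.fp16.gguf \
    -f wiki.train.raw -o imatrix.dat \
    -c 512 --chunks 200

# Apply the matrix when quantizing, e.g. to IQ2_XXS:
./llama-quantize --imatrix imatrix.dat \
    miquliz-120b-v2.0.fp16.gguf miquliz-120b-v2.0.IQ2_XXS.gguf IQ2_XXS
```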

With the IQ2_XXS quant, roughly 100 of the 141 layers appear to fit on a 24GB card at 2K context.
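A minimal sketch of such a partial offload, assuming a recent llama.cpp build (the binary was `./main` in older versions; the filename and prompt are illustrative):

```sh
# Offload 100 layers to a 24 GB GPU at 2K context; the rest run on CPU.
# Lower -ngl if you run out of VRAM.
./llama-cli -m miquliz-120b-v2.0.IQ2_XXS.gguf \
    -ngl 100 -c 2048 \
    -p "[INST] Write a haiku about quantization. [/INST]"
```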

| Layers | Context | Template |
|--------|---------|----------|
| 140 | 32768 | `[INST] {prompt} [/INST]`<br>`{response}` |
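Concretely, a single-turn exchange rendered with this template looks like the following (the question and answer text are purely illustrative):

```
[INST] What does an importance matrix do? [/INST]
It records activation statistics that guide which weights the quantizer preserves most accurately.
```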