---
license: mit
language:
- en
- zh
pipeline_tag: text-generation
base_model:
- zai-org/GLM-4.5-Base
base_model_relation: quantized
tags:
- imatrix
- conversational
- ik_llama.cpp
---
GLM-4.5-**Base**, quantized down to 124GB (V2) and 118GB (V1), specifically for 128GB RAM + small GPU setups.
```
llm_load_tensors: CPU buffer size = 114156.88 MiB
llm_load_tensors: CUDA_Host buffer size = 416.25 MiB
llm_load_tensors: CUDA0 buffer size = 10584.35 MiB
```
It uses ik_llama.cpp's new IQ2_KL quantization type:
https://github.com/ikawrakow/ik_llama.cpp/pull/602
The V2 mix below is derived from ubergarm's GLM-4.5 (Instruct) quantizations:
https://huggingface.co/ubergarm/GLM-4.5-GGUF
```
# Attention
blk\..*\.attn_q.*=iq5_ks
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_ks
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks
blk\..*\.ffn_(gate|up)_shexp\.weight=iq5_ks
# Routed Experts Layers [3-19]
blk\.[3-9]\.ffn_down_exps\.weight=iq3_ks
blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq3_ks
blk\.1[0-9]\.ffn_down_exps\.weight=iq3_ks
blk\.1[0-9]\.ffn_(gate|up)_exps\.weight=iq3_ks
# Routed Experts Layers [81-92]
blk\.8[1-9]\.ffn_down_exps\.weight=iq3_ks
blk\.8[1-9]\.ffn_(gate|up)_exps\.weight=iq3_ks
blk\.9[0-2]\.ffn_down_exps\.weight=iq3_ks
blk\.9[0-2]\.ffn_(gate|up)_exps\.weight=iq3_ks
# Routed Experts Layers [20-80]
blk\..*\.ffn_down_exps\.weight=iq2_kl
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
```
- Mostly iq5_ks for the GPU-resident tensors, to minimize loss cheaply, keep it fast (the iqX_ks quantizations are very fast), and keep the number of quantization types small.
- iq3_ks routed experts near the beginning and end of the layer stack, as that seems to be where the perplexity 'bumps' are.
- iq2_kl for the 'middle' routed experts.
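
For reference, here is a minimal sketch of how a mix like the one above is typically applied with ik_llama.cpp's `llama-quantize` via its `--custom-q` option. The recipe filename, paths, imatrix file, and thread count are placeholders, not the exact command used for this upload; ubergarm's page linked above has a full walkthrough.

```bash
#!/usr/bin/env bash
# Sketch only: the recipe filename, paths, and thread count are placeholders.

# Strip comments/blank lines from the recipe and join the rules into the
# comma-separated list that --custom-q expects.
custom=$(grep -vE '^(#|$)' recipe-v2.txt | tr '\n' ',' | sed 's/,$//')

# Quantize the full-precision GGUF; tensors not matched by any rule fall back
# to the base type given at the end (IQ2_KL here), guided by the imatrix data.
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /path/to/imatrix.dat \
    /path/to/GLM-4.5-Base-BF16.gguf \
    /path/to/GLM-4.5-Base-IQ2_KL-V2.gguf \
    IQ2_KL 16
```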
Works well on 128GB of RAM, with room for 24K of F16 context in 24GB of VRAM and RAM to spare for the system. It's awesome for story continuation.

Do NOT load with mmap! Requires ik_llama.cpp; see ubergarm's GLM-4.5 page for setup details. Let me know if you want a different mix (such as one better suited to 8-11GB GPUs).
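
A rough launch sketch for the 128GB RAM + 24GB GPU case, using the usual ik_llama.cpp pattern of offloading all layers to the GPU and then overriding the routed-expert tensors back to CPU (which is what produces the buffer split shown at the top). The model path, thread count, and port are placeholders; ubergarm's page has tested commands.

```bash
#!/usr/bin/env bash
# Illustrative only: model path, thread count, and port are placeholders.
# -ngl 99 offloads every repeating layer to the GPU, then -ot exps=CPU keeps
# the large routed-expert tensors in system RAM; --no-mmap loads the weights
# into RAM up front instead of memory-mapping the file.
./build/bin/llama-server \
    -m /path/to/GLM-4.5-Base-IQ2_KL-V2.gguf \
    --no-mmap \
    -ngl 99 \
    -ot exps=CPU \
    -fa -fmoe \
    -c 24576 \
    --threads 16 \
    --host 127.0.0.1 --port 8080
```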
<details>
<summary>Old V1 Recipe</summary>

```
# Attention
blk\..*\.attn_q.*=iq5_ks_r4
blk\..*\.attn_k.*=iq6_k
blk\..*\.attn_v.*=iq6_k
blk\..*\.attn_output.*=iq5_ks_r4
# First 3 Dense Layers [0-2]
blk\..*\.ffn_down\.weight=iq4_kt
blk\..*\.ffn_(gate|up)\.weight=iq4_kt
# Shared Expert Layers [3-92]
blk\..*\.ffn_down_shexp\.weight=iq5_ks_r4
blk\..*\.ffn_(gate|up)_shexp\.weight=iq4_ks_r4
# Routed Experts Layers [3-92]
blk\..*\.ffn_down_exps\.weight=iq2_kl
blk\..*\.ffn_(gate|up)_exps\.weight=iq2_kl
# NextN MTP Layer [92]
blk\..*\.nextn\.embed_tokens\.weight=iq5_ks
blk\..*\.nextn\.shared_head_head\.weight=iq5_ks
blk\..*\.nextn\.eh_proj\.weight=q8_0
# Non-Repeating Layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
```
</details>