---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1
tags:
- ik_llama.cpp
---

`IQ2_KS` quant of DeepSeek-R1 that I made for my 192GB DDR5 + 3090/4090 setup, quantized according to the recipe below.

#### `IQ2_KS` 183.004 GiB (2.339 BPW)
<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# First 3 dense layers (0-2) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[0-2]\.attn_k_b.*=q8_0
blk\.[0-2]\.attn_.*=iq5_ks
blk\.[0-2]\.ffn_down.*=iq5_ks
blk\.[0-2]\.ffn_(gate|up).*=iq5_ks
blk\.[0-2]\..*=iq5_ks

# All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[3-9]\.attn_k_b.*=q8_0
blk\.[1-5][0-9]\.attn_k_b.*=q8_0
blk\.60\.attn_k_b.*=q8_0

blk\.[3-9]\.attn_.*=iq5_ks
blk\.[1-5][0-9]\.attn_.*=iq5_ks
blk\.60\.attn_.*=iq5_ks

# Shared Expert (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.60\.ffn_down_shexp\.weight=iq4_ks

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq2_k
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq2_k
blk\.60\.ffn_down_exps\.weight=iq2_k

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.60\.ffn_(gate|up)_exps\.weight=iq2_ks

# Token embedding and output tensors (GPU)
token_embd\.weight=iq4_k
output\.weight=q8_0
"
```

</details>
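The `custom` string on its own does nothing; it has to be collapsed into the comma-separated form that `llama-quantize --custom-q` expects and passed to ik_llama.cpp's quantize tool. A minimal sketch of that step, with hypothetical input/imatrix/output paths that you would swap for your own:

```bash
# Drop the comment lines and join the remaining rules with commas
# (the format --custom-q expects).
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

# Hypothetical paths; point these at your bf16 GGUF and imatrix.
./build/bin/llama-quantize \
    --imatrix /models/DeepSeek-R1.imatrix \
    --custom-q "$custom" \
    /models/DeepSeek-R1-BF16-00001-of-00030.gguf \
    /models/DeepSeek-R1-IQ2_KS.gguf \
    IQ2_KS \
    24
```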
## Prompt format

```
<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|>
```

## Example run command

```
llama-server \
    --model DeepSeek-R1-IQ2_KS-00001-of-00005.gguf \
    -fa -rtr -mla 3 \
    --ctx-size 40000 \
    -ctk q8_0 \
    -b 4092 -ub 4092 \
    -amb 512 \
    --n-gpu-layers 99 \
    -ot "blk\.(3)\.ffn_.*=CUDA0" \
    --override-tensor exps=CPU \
    --threads 8 \
    --host 127.0.0.1 \
    --port 8080
```

## `ik_llama.cpp` quantizations of DeepSeek-R1

NOTE: These quants **MUST** be run using the `llama.cpp` fork [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp).

Credits to @ubergarm for his DeepSeek quant recipes, on which these quants are based.

Credits to @ggfhez for his bf16 upload.

Credits to @bartowski for his imatrix.
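Once the server is up, you don't need to build the prompt format by hand: assuming the fork keeps mainline llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint, the chat template is applied server-side. A quick smoke test against the host/port from the command above:

```bash
# Hypothetical request; DeepSeek recommends temperature ~0.6 for R1.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Why is the sky blue?"}
    ],
    "temperature": 0.6,
    "max_tokens": 1024
  }'
```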