---
license: mit
base_model:
- deepseek-ai/DeepSeek-R1
tags:
- ik_llama.cpp
---

IQ2_KS quant of DeepSeek-R1 that I made for my 192GB DDR5 + 3090/4090 setup, quantized according to the recipe below:

#### `IQ2_KS` 183.004 GiB (2.339 BPW)

<details>

<summary>👈 Secret Recipe</summary>

```bash
#!/usr/bin/env bash

custom="
# First 3 dense layers (0-2) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[0-2]\.attn_k_b.*=q8_0
blk\.[0-2]\.attn_.*=iq5_ks
blk\.[0-2]\.ffn_down.*=iq5_ks
blk\.[0-2]\.ffn_(gate|up).*=iq5_ks
blk\.[0-2]\..*=iq5_ks

# All attention, norm weights, and bias tensors for MoE layers (3-60) (GPU)
# Except blk.*.attn_k_b.weight is not divisible by 256 so only supports qN_0
blk\.[3-9]\.attn_k_b.*=q8_0
blk\.[1-5][0-9]\.attn_k_b.*=q8_0
blk\.60\.attn_k_b.*=q8_0

blk\.[3-9]\.attn_.*=iq5_ks
blk\.[1-5][0-9]\.attn_.*=iq5_ks
blk\.60\.attn_.*=iq5_ks

# Shared Expert (3-60) (GPU)
blk\.[3-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_down_shexp\.weight=iq4_ks
blk\.60\.ffn_down_shexp\.weight=iq4_ks

blk\.[3-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_shexp\.weight=iq4_ks
blk\.60\.ffn_(gate|up)_shexp\.weight=iq4_ks

# Routed Experts (3-60) (CPU)
blk\.[3-9]\.ffn_down_exps\.weight=iq2_k
blk\.[1-5][0-9]\.ffn_down_exps\.weight=iq2_k
blk\.60\.ffn_down_exps\.weight=iq2_k

blk\.[3-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.[1-5][0-9]\.ffn_(gate|up)_exps\.weight=iq2_ks
blk\.60\.ffn_(gate|up)_exps\.weight=iq2_ks

# Token embedding and output tensors (GPU)
token_embd\.weight=iq4_k
output\.weight=Q8_0
"
```

</details>
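
For reference, a rules string like the one above is normally passed to `ik_llama.cpp`'s `llama-quantize` through its `--custom-q` option. The sketch below follows the shape of ubergarm's published recipe scripts; the comment-stripping step and every file path are assumptions, not the exact command used to make this quant:

```bash
# Strip comments and blank lines, then join the rules into the
# comma-separated form that --custom-q expects.
custom=$(echo "$custom" | grep -v '^#' | sed -Ez 's:\n+:,:g;s:,$::;s:^,::')

# All paths below are placeholders; point them at your own files.
# Final positional args: input gguf, output gguf, target type, threads.
./build/bin/llama-quantize \
    --imatrix /path/to/imatrix.dat \
    --custom-q "$custom" \
    /path/to/DeepSeek-R1-BF16.gguf \
    /path/to/DeepSeek-R1-IQ2_KS.gguf \
    IQ2_KS \
    24
```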

## Prompt format

```
<|begin▁of▁sentence|>{system_prompt}<|User|>{prompt}<|Assistant|><|end▁of▁sentence|><|Assistant|>
```
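
For a single-turn request, the prompt sent to the model ends with `<|Assistant|>` and generation continues from there; the system prompt and question below are just made-up placeholders:

```
<|begin▁of▁sentence|>You are a helpful assistant.<|User|>Why is the sky blue?<|Assistant|>
```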

## Example run command

```
llama-server --model <Path>\DeepSeek-R1-IQ2_KS-00001-of-00005.gguf -fa -rtr -mla 3 --ctx-size 40000 -ctk q8_0 -b 4092 -ub 4092 -amb 512 --n-gpu-layers 99 -ot "blk\.(3)\.ffn_.*=CUDA0" --override-tensor exps=CPU --threads 8 --host 127.0.0.1 --port 8080
```
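
Once the server is up, one way to sanity-check it is to post a raw prompt (in the format above) to llama-server's `/completion` endpoint; the host and port match the flags above, while the prompt text and `n_predict` value are placeholders:

```bash
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "<|begin▁of▁sentence|>You are a helpful assistant.<|User|>Why is the sky blue?<|Assistant|>",
    "n_predict": 256
  }'
```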


## `ik_llama.cpp` quantizations of DeepSeek-R1

NOTE: These quants **MUST** be run using the `llama.cpp` fork, [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp).

Credits to @ubergarm for his DeepSeek quant recipes, on which these quants are based.

Credits to @ggfhez for his bf16 upload.

Credits to @bartowski for his imatrix.