Add STILL UPLOADING warning!

README.md CHANGED

@@ -12,6 +12,11 @@ tags:
 - ik_llama.cpp
 ---
 
+# WAIT FOR ENTIRE UPLOAD TO FINISH BEFORE DOWNLOADING!
+*WARNING*
+
+Cooked this quant on a remote rig with a limited uplink, so the upload will take a while; make sure it finishes before you bother downloading the GGUF files.
+
 ## `ik_llama.cpp` imatrix Quantizations of tngtech/DeepSeek-R1T-Chimera
 
 This quant collection **REQUIRES** the [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/) fork to support advanced non-linear SotA quants. Do **not** download these big files and expect them to run on mainline vanilla llama.cpp, ollama, LM Studio, KoboldCpp, etc.!
@@ -30,7 +35,7 @@ Excited to share and learn together. Thanks!
 ## Quant Collection
 So far these are my best recipes, offering great quality at good memory-footprint breakpoints.
 
-####
+#### DeepSeek-R1T-Chimera-IQ4_KS
 
 *NOTE*: This quant may take a *long time* to upload, hopefully less than a month lol...
 
@@ -160,5 +165,4 @@ numactl -N 0 -m 0 \
 ## References
 * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp/)
 * [ik_llama.cpp Getting Started Guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/258)
-* [ik_llama.cpp Benchmarks Discussion](https://github.com/ikawrakow/ik_llama.cpp/discussions/357)
 * [imatrix calibration_data_v5_rc.txt](https://gist.github.com/tristandruyen/9e207a95c7d75ddf37525d353e00659c#file-calibration_data_v5_rc-txt)