|
|
---
base_model: Qwen/Qwen3-14B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---
|
|
|
|
|
*Produced by [Antigma Labs](https://antigma.ai), [Antigma Quantize Space](https://huggingface.co/spaces/Antigma/quantize-my-repo)* |
|
|
|
|
|
*Follow Antigma Labs on X: [https://x.com/antigma_labs](https://x.com/antigma_labs)*
|
|
|
|
|
*Antigma's GitHub homepage: [https://github.com/AntigmaLabs](https://github.com/AntigmaLabs)*
|
|
|
|
|
## llama.cpp quantization |
|
|
Quantized using <a href="https://github.com/ggml-org/llama.cpp">llama.cpp</a> release <a href="https://github.com/ggml-org/llama.cpp/releases/tag/b5215">b5215</a>.
|
|
Original model: https://huggingface.co/Qwen/Qwen3-14B |
|
|
Run them directly with [llama.cpp](https://github.com/ggml-org/llama.cpp), or with any other llama.cpp-based project.
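For example, a minimal interactive run with llama.cpp's `llama-cli` (a sketch: it assumes the release binaries are on your `PATH` and the Q4_K_M file is in the current directory):

```
# Chat with the model; -cnv enables conversation mode, which applies the
# chat template stored in the GGUF metadata automatically.
llama-cli -m qwen3-14b-q4_k_m.gguf -cnv
```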
|
|
## Prompt format |
|
|
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
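You normally don't need to build this string by hand: `llama-cli`'s conversation mode applies the template for you. If you do pass a raw prompt, a one-shot example could look like the sketch below (the prompt text is our own; `-e` expands the `\n` escapes):

```
llama-cli -m qwen3-14b-q4_k_m.gguf -n 256 -e \
  -p '<|im_start|>user\nWhy is the sky blue?<|im_end|>\n<|im_start|>assistant\n'
```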
|
|
## Download a single file (not the whole branch)
|
|
| Filename | Quant type | File Size | Split |
| -------- | ---------- | --------- | ----- |
| [qwen3-14b-q4_k_m.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_k_m.gguf) | Q4_K_M | 8.38 GB | False |
| [qwen3-14b-q4_0.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_0.gguf) | Q4_0 | 7.93 GB | False |
| [qwen3-14b-q4_k_s.gguf](https://huggingface.co/Antigma/Qwen3-14B-GGUF/blob/main/qwen3-14b-q4_k_s.gguf) | Q4_K_S | 7.98 GB | False |
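As a rough rule of thumb, pick a file that fits in your free RAM/VRAM with a few GB of headroom for the context (KV cache). With a GPU build of llama.cpp you can offload layers, for example (a sketch, not tuned advice):

```
# -ngl 99 offloads all layers to the GPU; lower the value if you run out of VRAM.
llama-cli -m qwen3-14b-q4_k_m.gguf -ngl 99 -cnv
```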
|
|
|
|
|
## Downloading using huggingface-cli |
|
|
<details> |
|
|
<summary>Click to view download instructions</summary> |
|
|
First, make sure you have huggingface-cli installed:
|
|
|
|
|
```
pip install -U "huggingface_hub[cli]"
```
|
|
|
|
|
Then, you can target the specific file you want: |
|
|
|
|
|
```
huggingface-cli download Antigma/Qwen3-14B-GGUF --include "qwen3-14b-q4_k_m.gguf" --local-dir ./
```
|
|
|
|
|
If the model is larger than 50 GB, it will have been split into multiple files. To download them all into a local folder, run:
|
|
|
|
|
```
huggingface-cli download Antigma/Qwen3-14B-GGUF --include "qwen3-14b-q4_k_m.gguf/*" --local-dir ./
```
|
|
|
|
|
You can either specify a new local-dir (e.g. `Qwen3-14B-GGUF`) or download them all in place (`./`).
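Note that llama.cpp loads split GGUF models when pointed at the first shard; the remaining parts are found automatically (the shard names below are illustrative):

```
# Pass only the first shard; llama.cpp picks up -00002-of-00002.gguf and so on.
llama-cli -m ./qwen3-14b-q4_k_m-00001-of-00002.gguf -cnv
```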
|
|
|
|
|
</details> |
|
|
|