---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
base_model: Qwen/Qwen3-30B-A3B-Thinking-2507
---
# macandchiz/Qwen3-30B-A3B-Thinking-2507-GGUF
![GGUF Logo](/static-proxy?url=https%3A%2F%2Fcdn-avatars.huggingface.co%2Fv1%2Fproduction%2Fuploads%2F67ef3f648b0b5419e4c8ba8c%2FL3g7LETCBD9EMQEiWFvP0.png%3C%2Fspan%3E)%3C!-- HTML_TAG_END -->
Quantized version of: [`Qwen/Qwen3-30B-A3B-Thinking-2507`](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507)
## Available Files
The following GGUF quantization variants are available:
- `qwen3-30b-a3b-thinking-2507-q2_k.gguf`
- `qwen3-30b-a3b-thinking-2507-q3_k_s.gguf`
- `qwen3-30b-a3b-thinking-2507-q3_k_m.gguf`
- `qwen3-30b-a3b-thinking-2507-q3_k_l.gguf`
- `qwen3-30b-a3b-thinking-2507-q4_0.gguf`
- `qwen3-30b-a3b-thinking-2507-q4_1.gguf`
- `qwen3-30b-a3b-thinking-2507-q4_k_s.gguf`
- `qwen3-30b-a3b-thinking-2507-q4_k_m.gguf`
- `qwen3-30b-a3b-thinking-2507-q5_0.gguf`
- `qwen3-30b-a3b-thinking-2507-q5_1.gguf`
- `qwen3-30b-a3b-thinking-2507-q5_k_s.gguf`
- `qwen3-30b-a3b-thinking-2507-q5_k_m.gguf`
- `qwen3-30b-a3b-thinking-2507-q6_k.gguf`
- `qwen3-30b-a3b-thinking-2507-q8_0.gguf`
- `qwen3-30b-a3b-thinking-2507-f16.gguf`
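
Each file can also be fetched directly over HTTP. As an illustrative sketch, the snippet below builds the standard Hugging Face `resolve/main` download URL for a chosen variant (the helper function name is ours, not part of any library):

```python
# Build the direct-download URL for a chosen quantization variant.
REPO_ID = "macandchiz/Qwen3-30B-A3B-Thinking-2507-GGUF"

def gguf_url(quant: str) -> str:
    """Return the download URL for a given quant suffix, e.g. 'q4_k_m'."""
    filename = f"qwen3-30b-a3b-thinking-2507-{quant}.gguf"
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{filename}"

print(gguf_url("q4_k_m"))
# https://huggingface.co/macandchiz/Qwen3-30B-A3B-Thinking-2507-GGUF/resolve/main/qwen3-30b-a3b-thinking-2507-q4_k_m.gguf
```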
## Quantization Information
- **q2_k**: Smallest size, lowest quality
- **q3_k_s, q3_k_m, q3_k_l**: Small size, low quality (the `_s`/`_m`/`_l` suffixes trade progressively larger files for better quality)
- **q4_0, q4_1, q4_k_s, q4_k_m**: Medium size, good quality (recommended for most use cases)
- **q5_0, q5_1, q5_k_s, q5_k_m**: Larger size, better quality
- **q6_k**: Large size, high quality
- **q8_0**: Very large size, very high quality
- **f16**: Original precision (largest size)
Choose the quantization level that best fits your needs based on the trade-off between file size and model quality.
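
As a rough way to reason about that trade-off, file size scales with the quantization's bits per weight (bpw). A minimal sketch, assuming approximate llama.cpp bpw averages and a ~30.5B parameter count inferred from the base model's name (actual file sizes will differ somewhat):

```python
# Estimate GGUF file sizes from approximate bits-per-weight (bpw) values.
# The bpw numbers below are rough llama.cpp averages, not exact figures.
PARAMS = 30.5e9  # approximate parameter count of the base model

BPW = {
    "q2_k": 2.6, "q3_k_m": 3.9, "q4_0": 4.5, "q4_k_m": 4.9,
    "q5_k_m": 5.7, "q6_k": 6.6, "q8_0": 8.5, "f16": 16.0,
}

def estimated_gb(quant: str) -> float:
    """Approximate file size in gigabytes for a quant type."""
    return PARAMS * BPW[quant] / 8 / 1e9

for q in ("q4_k_m", "q8_0", "f16"):
    print(f"{q}: ~{estimated_gb(q):.1f} GB")
```

Pick the largest variant whose estimated size still fits comfortably in your available RAM/VRAM alongside the KV cache.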