---
base_model: ValiantLabs/Qwen3-8B-ShiningValiant3
datasets:
- sequelbox/Celestia3-DeepSeek-R1-0528
- sequelbox/Mitakihara-DeepSeek-R1-0528
- sequelbox/Raiden-DeepSeek-R1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- shining-valiant
- shining-valiant-3
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-8b
- 8b
- reasoning
- code
- code-reasoning
- science
- science-reasoning
- physics
- biology
- chemistry
- earth-science
- astronomy
- machine-learning
- artificial-intelligence
- compsci
- computer-science
- information-theory
- ML-Ops
- math
- cuda
- deep-learning
- transformers
- agentic
- LLM
- neuromorphic
- self-improvement
- complex-systems
- cognition
- linguistics
- philosophy
- logic
- epistemology
- simulation
- game-theory
- knowledge-management
- creativity
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- conversational
- chat
- instruct
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/ValiantLabs/Qwen3-8B-ShiningValiant3

<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they have not appeared within a week or so after the static quants, I have probably not planned to make them; feel free to request them by opening a Community Discussion.
## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
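As a quick start, here is a minimal sketch of running one of these quants from Python with the llama-cpp-python package (one common way to load GGUF files; llama.cpp itself or other frontends work just as well). The file name matches the Q4_K_M entry in the table below; the context size, GPU offload setting, and prompt are illustrative assumptions to adapt to your hardware.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the Q4_K_M quant
# from this repo has been downloaded locally. All settings below are examples,
# not recommendations from the quant author.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-8B-ShiningValiant3.Q4_K_M.gguf",  # local path to the GGUF file
    n_ctx=8192,        # context window; lower it if you run out of memory
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the main ideas of information theory."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```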
## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-8B-ShiningValiant3-GGUF/resolve/main/Qwen3-8B-ShiningValiant3.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
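If you prefer fetching a quant programmatically rather than through the links above, here is one possible approach using the huggingface_hub library. The repo id and file name come straight from the table; the choice of Q4_K_M is just the "fast, recommended" default and can be swapped for any other entry.

```python
# Minimal sketch of downloading a single quant file from this repo.
# Any filename from the table above can be substituted.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen3-8B-ShiningValiant3-GGUF",
    filename="Qwen3-8B-ShiningValiant3.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```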
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.

<!-- end -->