# UIGEN-X 30B MoE (GGUF)
Quantized builds of the UIGEN-X 30B Mixture-of-Experts coding assistant for local inference with Ollama / llama.cpp runtimes. Each variant ships with the Modelfile exported from the Ollama registry plus the corresponding GGUF binary.
- Base model: smirki/UIGEN-X-30B-MoE-merged-checkpoint-200
- Architecture: 30B-parameter Mixture-of-Experts model tuned for software engineering tasks.
## Variants
| Variant | Size | Blob |
|---|---|---|
| Q2_K | 10.49 GB | sha256-89dc7fdb0a4a30a6bd4e8a611db45fd821ccce4748d5bda1fc5151b5be0fc0fd |
| Q3_K_S | 12.38 GB | sha256-5dc18116ed7f2c98b96361b4a12e8b43fb4e75ee3dc162ba73a74226a3621a3c |
| Q4_K_M | 17.28 GB | sha256-4c641495ea35d559011305746eeda3b9cc1c3ef6cabce16c2c260962715957e5 |
| Q5_K_M | 20.23 GB | sha256-3848f1b66aeee5b454ad3c3d6133da2723e84e7cfa0bee9e67d81639cdbd9b4d |
| Q6_K | 23.37 GB | sha256-4a08030b66cb100eff2c183500b7c96ef44e3f4816911d32fe55a1e2a4aa1d47 |
| Q8_0 | 30.25 GB | sha256-fc84a665b1889d7c6bed55b2b5dedee464ba80400bf1fbd43164b48eb219e2a7 |
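After a pull, each blob digest above can be checked against the file on disk. A minimal sketch, assuming Ollama's default blob directory (`~/.ollama/models/blobs/`) and using a hypothetical `verify_blob` helper (not part of the Ollama CLI):

```shell
# Compare a file's SHA-256 against an Ollama-style "sha256-<hex>" digest.
# verify_blob is a hypothetical helper defined here, not an Ollama command.
verify_blob() {
  local file="$1" digest="$2"
  local expected="${digest#sha256-}"            # strip the "sha256-" prefix
  local actual
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Example (the blob path is an assumption about Ollama's default layout):
# verify_blob "$HOME/.ollama/models/blobs/sha256-3848f1b66aeee5b454ad3c3d6133da2723e84e7cfa0bee9e67d81639cdbd9b4d" \
#             "sha256-3848f1b66aeee5b454ad3c3d6133da2723e84e7cfa0bee9e67d81639cdbd9b4d"
```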
## Usage with Ollama
Example with the Q5_K_M quantization:

```shell
ollama create uigen-x-30b-moe-q5-k-m -f modelfiles/uigen-x-30b-moe--Q5_K_M.Modelfile
ollama run uigen-x-30b-moe-q5-k-m
```

Swap `Q5_K_M` for any other variant listed above.
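The GGUF binaries should also load directly in llama.cpp via `llama-cli`. A minimal sketch; the `.gguf` file name below is an assumption mirroring the Modelfile naming, and the actual run is left commented out since it requires the model file:

```shell
# Derive an assumed file name for a chosen quantization, then (optionally)
# run it with llama.cpp's llama-cli.
QUANT="Q5_K_M"
MODEL_FILE="uigen-x-30b-moe--${QUANT}.gguf"   # assumed naming, not guaranteed
echo "${MODEL_FILE}"
# llama-cli -m "${MODEL_FILE}" -p "Write a binary search in Python." -n 256
```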
## Source
Originally published on my Ollama profile: https://ollama.com/richardyoung/uigen-x-30b-moe