---
license: apache-2.0
base_model: DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B
datasets:
- DavidAU/horror-nightmare1
- DavidAU/ST-Org
- DavidAU/ST-TNG
- DavidAU/ST-DS9
- DavidAU/ST-VOY
- DavidAU/ST-ENT
- DavidAU/ST-DIS
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all use cases
- Jan-V1
- horror
- science fiction
- fantasy
- Star Trek
- Star Trek Original
- Star Trek The Next Generation
- Star Trek Deep Space Nine
- Star Trek Voyager
- Star Trek Enterprise
- Star Trek Discovery
- finetune
- thinking
- reasoning
- unsloth
- 6x6B
- moe
- mixture of experts
- mlx
library_name: mlx
---
# Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx
This model [Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx](https://huggingface.co/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx) was
converted to MLX format from [DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B](https://huggingface.co/DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B)
using mlx-lm version **0.28.0**.
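For reference, conversions like this are typically produced with the `mlx_lm.convert` utility. The sketch below shows a generic 8-bit conversion of the base model; it is an approximation only, since the qx86-hi variant uses a custom mixed-precision quantization recipe rather than these standard flags.
```bash
# Hedged sketch: a generic quantized mlx-lm conversion of the base model.
# The qx86-hi recipe mixes precisions, so these flags approximate,
# not reproduce, the exact quantization of this repository.
mlx_lm.convert \
    --hf-path DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B \
    --mlx-path Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx \
    -q --q-bits 8 --q-group-size 32
```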
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer.
model, tokenizer = load("Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
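For interactive use, mlx-lm also supports token streaming. A minimal sketch, assuming the `stream_generate` API of recent mlx-lm releases (which yields response objects with a `.text` field):
```python
from mlx_lm import load, stream_generate

model, tokenizer = load("Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx")

# Build a chat-formatted prompt, as in the example above.
messages = [{"role": "user", "content": "Write a short Star Trek captain's log entry."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens as they are produced instead of waiting for the full response.
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```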