---
license: apache-2.0
base_model: DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B
datasets:
- DavidAU/horror-nightmare1
- DavidAU/ST-Org
- DavidAU/ST-TNG
- DavidAU/ST-DS9
- DavidAU/ST-VOY
- DavidAU/ST-ENT
- DavidAU/ST-DIS
language:
- en
pipeline_tag: text-generation
tags:
- programming
- code generation
- code
- coding
- coder
- chat
- brainstorm
- qwen
- qwen3
- qwencoder
- brainstorm 20x
- creative
- all uses cases
- Jan-V1
- horror
- science fiction
- fantasy
- Star Trek
- Star Trek Original
- Star Trek The Next Generation
- Star Trek Deep Space Nine
- Star Trek Voyager
- Star Trek Enterprise
- Star Trek Discovery.
- finetune
- thinking
- reasoning
- unsloth
- 6x6B
- moe
- mixture of experts
- mlx
library_name: mlx
---
# Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx
This model, `Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx`, was converted to MLX format from [`DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B`](https://huggingface.co/DavidAU/Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B) using mlx-lm version **0.28.0**.
## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx")

prompt = "hello"

# Apply the model's chat template when the tokenizer defines one,
# so the prompt is wrapped in the format the model was trained on.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
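The converted model can also be queried without writing any Python, via the `mlx_lm.generate` console script that the `mlx-lm` package installs. This is a sketch, not tested here (running it downloads/loads the full ~36B-parameter weights); the prompt and `--max-tokens` value are illustrative choices, not part of the original card:

```bash
mlx_lm.generate \
  --model Qwen3-MOE-6x6B-Star-Trek-Universe-Alpha-D-256k-ctx-36B-qx86-hi-mlx \
  --prompt "Write a short scene set on the bridge of the Enterprise-D." \
  --max-tokens 512
```

The CLI applies the tokenizer's chat template automatically when one is present, mirroring the conditional in the Python snippet above.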