saviochow/GLM-4.6-REAP-252B-A32B-mlx-2Bit

The model saviochow/GLM-4.6-REAP-252B-A32B-mlx-2Bit was converted to MLX format from cerebras/GLM-4.6-REAP-252B-A32B using mlx-lm version 0.26.4.
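
For reference, a 2-bit conversion like this one can be reproduced with mlx-lm's own convert utility. The sketch below is an assumption about the mlx_lm.convert Python API rather than the exact invocation used for this repository, and the local output path is hypothetical.

from mlx_lm import convert

# Quantize the source checkpoint to 2-bit MLX weights (assumed parameters).
convert(
    "cerebras/GLM-4.6-REAP-252B-A32B",
    mlx_path="GLM-4.6-REAP-252B-A32B-mlx-2Bit",  # hypothetical output directory
    quantize=True,
    q_bits=2,
)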

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

model, tokenizer = load("saviochow/GLM-4.6-REAP-252B-A32B-mlx-2Bit")

prompt = "hello"

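# Wrap the prompt in the model's chat template when the tokenizer defines one.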
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
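
For token-by-token output instead of a single final string, mlx-lm also exposes a streaming generator. The sketch below assumes the stream_generate API of recent mlx-lm releases, where each yielded chunk carries a text field; max_tokens is an illustrative value.

from mlx_lm import load, stream_generate

model, tokenizer = load("saviochow/GLM-4.6-REAP-252B-A32B-mlx-2Bit")

# Print tokens as they are generated.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk.text, end="", flush=True)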

Model size: 252B params (Safetensors)
Tensor types: BF16, U32, F32

Model tree for saviochow/GLM-4.6-REAP-252B-A32B-mlx-2Bit
Base model: zai-org/GLM-4.6
Quantized: this model