johnsmith968530/PleIAs-Baguettotron-MLX-8bit

This model, johnsmith968530/PleIAs-Baguettotron-MLX-8bit, was converted to MLX format from PleIAs/Baguettotron using mlx-lm version 0.28.3.
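
If you just want to try the converted model with mlx-lm, a command along these lines should work; the prompt and token limit below are only examples:

mlx_lm.generate \
  --model johnsmith968530/PleIAs-Baguettotron-MLX-8bit \
  --prompt "Give a one-sentence summary of the Baguettotron model." \
  --max-tokens 128

The conversion itself was run as follows: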

export MODEL1_MAJOR="PleIAs"
export MODEL1_MINOR="Baguettotron"
export MODEL1_Q_BITS="8"
# UTC timestamp to keep each conversion run's output directory unique
export MODEL1_DTS="$(date -u +%Y%m%d_%H%M%SZ)"

# echodo: print each command before running it
echodo () { echo "$@" && "$@"; }
mkdir -p /tmp/mlx/lm/convert

# Install or update mlx-lm first if needed:
# uv tool install -U mlx-lm

echodo time mlx_lm.convert \
  --hf-path "$MODEL1_MAJOR/$MODEL1_MINOR" \
  --mlx-path "/tmp/mlx/lm/convert/$MODEL1_DTS" \
  --quantize \
  --q-bits "$MODEL1_Q_BITS" \
  --upload-repo \
    "johnsmith968530/$MODEL1_MAJOR-$MODEL1_MINOR-MLX-${MODEL1_Q_BITS}bit"

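# Optional: a quick look at the local output before trusting the upload.
# The config.json written by mlx_lm.convert should record the quantization
# settings (group size and bits); adjust the path if you changed it above.
echodo ls -lh "/tmp/mlx/lm/convert/$MODEL1_DTS"
echodo grep -A 3 '"quantization"' "/tmp/mlx/lm/convert/$MODEL1_DTS/config.json"
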
# I tested the result using LM Studio 0.3.31
# on a 13-inch MacBook Air (M4, 2025) with 32 GB of unified memory,
# running macOS Tahoe 26.1.

Model size: 90.3M params (safetensors; tensor types: BF16, U32)