🌙 Kimi K2 Instruct - MLX 6-bit

State-of-the-Art 1-Trillion-Parameter MoE Model, Optimized for Apple Silicon


Original Model | MLX Framework | More Quantizations


📖 What is This?

This is a premium 6-bit quantized version of Kimi K2 Instruct, optimized to run on Apple Silicon (M1/M2/M3/M4) Macs using the MLX framework. The 6-bit version is the sweet spot - offering near-original quality while being significantly more efficient than 8-bit. Perfect for production deployments!

✨ Why You'll Love It

  • 🚀 Massive Context Window - Handle up to 262,144 tokens (~200,000 words!)
  • 🧠 ~1 Trillion Parameters - One of the largest and most capable open-weight models available
  • Apple Silicon Native - Fully optimized for M-series chips with Metal acceleration
  • 🎯 6-bit Sweet Spot - Best balance of quality and efficiency
  • Near-Original Quality - ~95% quality retention from the original model
  • 🌏 Bilingual - Fluent in both English and Chinese
  • 💬 Instruction-Tuned - Ready for conversations, coding, analysis, and more

🎯 Quick Start

Installation

pip install mlx-lm

Your First Generation (3 lines of code!)

from mlx_lm import load, generate

model, tokenizer = load("richardyoung/Kimi-K2-Instruct-0905-MLX-6bit")
print(generate(model, tokenizer, prompt="Explain quantum entanglement simply:", max_tokens=200))

That's it! 🎉

💻 System Requirements

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| Mac | M1 or newer | M2 Ultra / M3 Max / M4 Max+ |
| Memory | 64 GB unified | 128 GB+ unified |
| Storage | 900 GB free | Fast SSD (2+ TB) |
| macOS | 12.0+ | Latest version |

🎯 Note: The 6-bit version offers the best quality-to-size ratio for production use!
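
Before downloading, you can sanity-check your machine against the table above. A minimal pre-flight sketch using only the Python standard library (the thresholds are the table's minimums, and the home directory is just an example storage location):

import os
import shutil
import subprocess

# Minimums taken from the requirements table above.
MIN_FREE_DISK_GB = 900
MIN_UNIFIED_MEMORY_GB = 64

# Free space on the volume that will hold the weights (home directory here).
free_gb = shutil.disk_usage(os.path.expanduser("~")).free / 1e9

# Total unified memory on macOS, reported by sysctl in bytes.
mem_gb = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"])) / 1e9

print(f"Free disk: {free_gb:.0f} GB | Unified memory: {mem_gb:.0f} GB")
if free_gb < MIN_FREE_DISK_GB or mem_gb < MIN_UNIFIED_MEMORY_GB:
    print("⚠️ This machine is below the minimums listed above.")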

📚 Usage Examples

Command Line Interface

mlx_lm.generate \
  --model richardyoung/Kimi-K2-Instruct-0905-MLX-6bit \
  --prompt "Write a Python script to analyze CSV files." \
  --max-tokens 500

Chat Conversation

from mlx_lm import load, generate

model, tokenizer = load("richardyoung/Kimi-K2-Instruct-0905-MLX-6bit")

conversation = """<|im_start|>system
You are a helpful AI assistant specialized in coding and problem-solving.<|im_end|>
<|im_start|>user
Can you help me optimize this Python code?<|im_end|>
<|im_start|>assistant
"""

response = generate(model, tokenizer, prompt=conversation, max_tokens=500)
print(response)

Advanced: Streaming Output

from mlx_lm import load, stream_generate

model, tokenizer = load("richardyoung/Kimi-K2-Instruct-0905-MLX-6bit")

# stream_generate yields partial responses as tokens are produced.
for response in stream_generate(
    model,
    tokenizer,
    prompt="Tell me about the future of AI:",
    max_tokens=500,
):
    print(response.text, end="", flush=True)
print()

🏗️ Architecture Highlights

Click to expand technical details

Model Specifications

| Feature | Value |
|---------|-------|
| Total parameters | ~1 trillion |
| Architecture | DeepSeek V3 (MoE) |
| Experts | 384 routed + 1 shared |
| Active experts | 8 per token |
| Hidden size | 7168 |
| Layers | 61 |
| Attention heads | 56 |
| Context length | 262,144 tokens |
| Quantization | ~6.5 bits per weight |

Advanced Features

  • 🎯 YaRN RoPE Scaling - 64x factor for extended context
  • 🗜️ KV Compression - low-rank latent KV projection (rank 512)
  • ⚡ Query Compression - low-rank query projection (rank 1536)
  • 🧮 MoE Routing - top-8 expert selection with sigmoid scoring (toy sketch below)
  • 🔧 FP8 Checkpoint - original weights released pre-quantized in e4m3 precision
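
To make the routing bullet concrete, here is a toy sketch of sigmoid-scored top-8 expert selection written against MLX's Python API. It is illustrative only: the function name, shapes, and normalization are simplified assumptions, not the model's actual routing code (which includes additional details such as routing biases).

import mlx.core as mx

def route_tokens(hidden, gate_weight, top_k=8):
    # Score every routed expert for every token with a sigmoid gate.
    scores = mx.sigmoid(hidden @ gate_weight.T)              # (tokens, num_experts)
    # Keep the top-k experts per token.
    top_idx = mx.argpartition(-scores, top_k - 1, axis=-1)[:, :top_k]
    top_scores = mx.take_along_axis(scores, top_idx, axis=-1)
    # Renormalize the selected scores so each token's weights sum to 1.
    weights = top_scores / mx.sum(top_scores, axis=-1, keepdims=True)
    return top_idx, weights

tokens = mx.random.normal((4, 7168))       # 4 tokens, hidden size 7168
gate = mx.random.normal((384, 7168))       # 384 routed experts
idx, w = route_tokens(tokens, gate)
print(idx.shape, w.shape)                  # (4, 8) (4, 8)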

🎨 Other Quantization Options

Choose the right balance for your needs:

| Quantization | Size | Quality | Speed | Best For |
|--------------|------|---------|-------|----------|
| 8-bit | ~1 TB | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ | Production, best quality |
| 6-bit (you are here) | ~800 GB | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Sweet spot for most users |
| 5-bit | ~660 GB | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | Great quality/size balance |
| 4-bit | ~540 GB | ⭐⭐⭐ | ⭐⭐⭐⭐⭐ | Faster inference |
| 3-bit | ~420 GB | ⭐⭐ | ⭐⭐⭐⭐⭐ | Very fast, compact |
| 2-bit | ~320 GB | ⭐⭐ | ⭐⭐⭐⭐⭐ | Fastest, most compact |
| Original | ~5 TB | ⭐⭐⭐⭐⭐ | ⭐⭐ | Research only |
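
As a rough cross-check of the sizes above, the on-disk footprint is roughly parameter count times bits per weight. The sketch below assumes ~1T weights and about half a bit of per-weight overhead for quantization scales and metadata (consistent with the ~6.5 bpw figure in the next section):

# Back-of-the-envelope size estimate for a ~1T-parameter model.
PARAMS = 1.0e12        # ~1 trillion weights (assumption)
OVERHEAD_BITS = 0.5    # rough allowance for group scales, biases, metadata

for bits in (8, 6, 5, 4, 3, 2):
    size_tb = PARAMS * (bits + OVERHEAD_BITS) / 8 / 1e12
    print(f"{bits}-bit: ~{size_tb:.2f} TB")
# 6-bit comes out to ~0.81 TB, i.e. the ~800 GB quoted above.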

🔧 How It Was Made

This model was quantized using MLX's built-in quantization:

mlx_lm.convert \
  --hf-path moonshotai/Kimi-K2-Instruct-0905 \
  --mlx-path Kimi-K2-Instruct-0905-MLX-6bit \
  -q --q-bits 6 \
  --trust-remote-code

Result: ~6.5 bits per weight (includes metadata overhead)
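
If you prefer to script the conversion, recent mlx-lm releases also expose the same functionality as a Python function. A minimal sketch (keyword names follow the CLI flags and may differ slightly between versions):

from mlx_lm import convert

# Download the original weights, quantize to 6 bits, and write MLX-format output.
convert(
    hf_path="moonshotai/Kimi-K2-Instruct-0905",
    mlx_path="Kimi-K2-Instruct-0905-MLX-6bit",
    quantize=True,
    q_bits=6,
    q_group_size=64,   # mlx-lm's default group size, shown for clarity
)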

⚡ Performance Tips

Getting the best performance
  1. Close other applications - Free up as much RAM as possible
  2. Use an external SSD - If your internal drive is full
  3. Monitor memory - Watch Activity Monitor during inference (see the sketch after this list)
  4. Reduce generation length - If you hit out-of-memory errors, lower max_tokens
  5. Keep your Mac cool - Good airflow helps maintain peak performance
  6. Ideal for production - Best balance of quality and performance
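
For tip 3, memory use can also be checked programmatically. A minimal sketch, assuming an MLX build that still exposes the mx.metal memory helpers (newer releases move these to top-level mx functions, so adjust the namespace to your version):

import mlx.core as mx
from mlx_lm import load, generate

model, tokenizer = load("richardyoung/Kimi-K2-Instruct-0905-MLX-6bit")

# Optional: cap how much Metal memory MLX may wire (value is illustrative).
# mx.metal.set_memory_limit(120 * 1024**3)

text = generate(model, tokenizer, prompt="Summarize MLX in one sentence:", max_tokens=64)
print(text)

# Report how much Metal memory the run actually used.
print(f"Active: {mx.metal.get_active_memory() / 1e9:.1f} GB")
print(f"Peak:   {mx.metal.get_peak_memory() / 1e9:.1f} GB")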

⚠️ Known Limitations

  • 🍎 Apple Silicon Only - Won't work on Intel Macs or NVIDIA GPUs
  • 💾 Storage Needs - Make sure you have 900+ GB free
  • 🐏 RAM Intensive - Needs 64+ GB unified memory minimum
  • 🐌 Slower on M1 - Best performance on M2 Ultra or newer
  • 🌐 Bilingual Focus - Optimized for English and Chinese

💡 Why 6-bit: The sweet spot for production! Near-original quality (~95%) with significantly smaller size than 8-bit. Perfect when quality matters but you need better efficiency.

📄 License

Modified MIT License - same as the original model. Commercial use is permitted; see the original Moonshot AI repository for the exact terms, including the attribution condition for very large-scale deployments.

🙏 Acknowledgments

  • Original Model: Moonshot AI for creating Kimi K2
  • Framework: Apple's MLX team for the amazing framework
  • Inspiration: DeepSeek V3 architecture

📚 Citation

If you use this model in your research or product, please cite:

@misc{kimi-k2-2025,
  title={Kimi K2: Advancing Long-Context Language Models},
  author={Moonshot AI},
  year={2025},
  url={https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905}
}

🔗 Useful Links

  • Original model: https://huggingface.co/moonshotai/Kimi-K2-Instruct-0905
  • MLX framework: https://github.com/ml-explore/mlx
  • mlx-lm (inference and conversion tooling): https://github.com/ml-explore/mlx-lm

Quantized with ❤️ by richardyoung

If you find this useful, please ⭐ star the repo and share with others!

Created: October 2025 | Format: MLX 6-bit
