Bogdan01m/RuadaptQwen3-4B-Instruct-MLX-4bit
This model, Bogdan01m/RuadaptQwen3-4B-Instruct-MLX-4bit, was converted to MLX format from RefalMachine/RuadaptQwen3-4B-Instruct using mlx-lm version 0.28.3.
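For reference, a 4-bit conversion of this kind can be reproduced with mlx-lm's convert utility. The snippet below is a minimal sketch assuming mlx-lm's Python convert API; the output path and the quantize/q_bits options are assumptions, not the exact command used for this repo.

from mlx_lm import convert

# Hypothetical reconstruction of the conversion step; paths and options are assumed.
convert(
    "RefalMachine/RuadaptQwen3-4B-Instruct",       # source Hugging Face repository
    mlx_path="RuadaptQwen3-4B-Instruct-MLX-4bit",  # local output directory (name assumed)
    quantize=True,                                 # quantize the weights
    q_bits=4,                                      # 4-bit quantization, matching this repo
)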
Use with mlx
pip install mlx-lm
from mlx_lm import load, generate
model, tokenizer = load("Bogdan01m/RuadaptQwen3-4B-Instruct-MLX-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )
response = generate(model, tokenizer, prompt=prompt, verbose=True)
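For longer or streamed outputs, mlx-lm also provides stream_generate and a make_sampler helper for temperature/top-p sampling. The following is a minimal sketch based on recent mlx-lm releases; the max_tokens argument, the make_sampler settings, and the .text attribute on streamed chunks are assumptions, not part of this card.

from mlx_lm import load, stream_generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("Bogdan01m/RuadaptQwen3-4B-Instruct-MLX-4bit")

# A Russian prompt, since the model is adapted for Russian
# ("Hi! Tell me briefly about yourself.").
messages = [{"role": "user", "content": "Привет! Расскажи коротко о себе."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

sampler = make_sampler(temp=0.7, top_p=0.9)  # sampling settings are illustrative
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512, sampler=sampler):
    print(chunk.text, end="", flush=True)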
Citation
@article{tikhomirov2024facilitating,
  title={Facilitating Large Language Model Russian Adaptation with Learned Embedding Propagation},
  author={Tikhomirov, Mikhail and Chernyshov, Daniil},
  journal={Journal of Language and Education},
  volume={10},
  number={4},
  pages={130--145},
  year={2024}
}
Model tree for bogdanminko/RuadaptQwen3-4B-Instruct-MLX-4bit
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Finetuned: RefalMachine/RuadaptQwen3-4B-Instruct