---
library_name: vllm
language:
- en
- fr
- es
- de
- it
- pt
- nl
- zh
- ja
- ko
- ar
license: apache-2.0
inference: false
base_model:
- mistralai/Ministral-3-8B-Base-2512
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- mistral-common
- mlx
---
# mlx-community/Ministral-3-8B-Instruct-2512-5bit

This model was converted to MLX format from [`mistralai/Ministral-3-8B-Instruct-2512`](https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512) using mlx-vlm version **0.3.9**.

Refer to the [original model card](https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512) for more details on the model.
## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model mlx-community/Ministral-3-8B-Instruct-2512-5bit --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
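For programmatic use, the same generation can be sketched with mlx-vlm's Python API. This is a minimal sketch, assuming the `load`/`generate`/`apply_chat_template` helpers of mlx-vlm 0.3.x; `path/to/image.jpg` is a placeholder you must replace with a real image path.

```python
# Minimal sketch of running this checkpoint via the mlx-vlm Python API
# (assumes mlx-vlm 0.3.x; the image path below is a placeholder).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mlx-community/Ministral-3-8B-Instruct-2512-5bit"

# Load the quantized model, its processor, and the model config.
model, processor = load(model_path)
config = load_config(model_path)

prompt = "Describe this image."
images = ["path/to/image.jpg"]  # placeholder path

# Wrap the raw prompt in the model's chat template.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Generate a response conditioned on the image.
output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```

Running this downloads the 5-bit weights from the Hub on first use; on Apple silicon the quantized model fits comfortably in unified memory.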