---
license: mit
base_model:
- moonshotai/Kimi-VL-A3B-Thinking-2506
pipeline_tag: image-text-to-text
tags:
- kimi-vl
---
## GGUFs for moonshotai/Kimi-VL-A3B-Thinking-2506
Didn't see any GGUFs for this model, which is a legit model, so I baked a couple. Hopefully they're useful to someone. These are straight llama-quantize runs off a BF16 conversion from convert_hf_to_gguf.py, sanity checked before upload.
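
For reference, this is the standard llama.cpp two-step flow. A rough sketch is below; it assumes a local llama.cpp checkout, and the directory names, output file names, and quant types are illustrative, not the exact commands used here.

```bash
# 1. Convert the Hugging Face checkpoint to a BF16 GGUF
#    (run from a llama.cpp checkout, with the model downloaded locally).
python convert_hf_to_gguf.py ./Kimi-VL-A3B-Thinking-2506 \
    --outtype bf16 \
    --outfile Kimi-VL-A3B-Thinking-2506-BF16.gguf

# 2. Quantize the BF16 GGUF with llama-quantize
#    (repeat with a different type name for each quant you want).
./llama-quantize Kimi-VL-A3B-Thinking-2506-BF16.gguf \
    Kimi-VL-A3B-Thinking-2506-Q4_K_M.gguf Q4_K_M
```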
- Base model: [moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506 "moonshotai/Kimi-VL-A3B-Thinking-2506")
- GGUFs for Instruct version: [ssweens/Kimi-VL-A3B-Instruct-GGUF](https://huggingface.co/ssweens/Kimi-VL-A3B-Instruct-GGUF "ssweens/Kimi-VL-A3B-Instruct-GGUF")