---
license: mit
base_model:
- moonshotai/Kimi-VL-A3B-Thinking-2506
pipeline_tag: image-text-to-text
tags:
- kimi-vl
---

## GGUFs for moonshotai/Kimi-VL-A3B-Thinking-2506

There weren't any GGUFs around for this model, which is a solid one, so I baked a couple. Hopefully they're useful to someone.

These are straight llama-quantize runs off a BF16 convert_hf_to_gguf.py conversion, sanity checked. A rough sketch of the commands is below the links.

- Base model: [moonshotai/Kimi-VL-A3B-Thinking-2506](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking-2506 "moonshotai/Kimi-VL-A3B-Thinking-2506")
- GGUFs for Instruct version: [ssweens/Kimi-VL-A3B-Instruct-GGUF](https://huggingface.co/ssweens/Kimi-VL-A3B-Instruct-GGUF "ssweens/Kimi-VL-A3B-Instruct-GGUF")
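If you want to reproduce the quants, here's a minimal sketch of the llama.cpp workflow described above. The local paths, output filenames, and the Q4_K_M quant type are illustrative assumptions, and flag names can change between llama.cpp releases, so check `--help` on your checkout.

```bash
# Convert the Hugging Face checkpoint to a BF16 GGUF
# (run from a llama.cpp checkout; paths are placeholders).
python convert_hf_to_gguf.py ./Kimi-VL-A3B-Thinking-2506 \
    --outtype bf16 \
    --outfile Kimi-VL-A3B-Thinking-2506-BF16.gguf

# Quantize the BF16 GGUF down to a smaller type, e.g. Q4_K_M.
./llama-quantize Kimi-VL-A3B-Thinking-2506-BF16.gguf \
    Kimi-VL-A3B-Thinking-2506-Q4_K_M.gguf Q4_K_M
```

Note this only covers the language-model tensors; whether and how the vision tower/projector gets exported for multimodal use depends on your llama.cpp version's support for this architecture.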