Will the Qwen3-Omni-Flash-Instruct and Qwen3-Omni-Flash-Thinking models be open-sourced?
It looks like a dense model.
The bench numbers are really similar, it might be a fine-tune of the 30B MoE model + some inference optimizations.
Seems good to me.
I just hope Qwen will also contribute to llama.cpp or similar projects to help improve usability and gguf/onnx/mlx conversion.
I'm starting to get REAL worried about Qwen's lab. The proprietary coder models... now this? Yikes... scary.
Editing to say that I'm really, really grossed out that the SMALLER model would be made proprietary. That is ableist at BEST, considering the captioner. So much for being "a little world ambassador," as their marketing material puts it. I hope I'm just misunderstanding something.