---
base_model: Qwen/Qwen-Image
base_model_relation: quantized
datasets:
- mit-han-lab/svdquant-datasets
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: text-to-image
tags:
- text-to-image
- SVDQuant
- Qwen-Image
- Diffusion
- Quantization
- ICLR2025
---
# Model Card for nunchaku-qwen-image

This repository contains Nunchaku-quantized versions of Qwen-Image, a model designed to generate high-quality images from text prompts, with advances in complex text rendering. The quantized models are optimized for efficient inference while maintaining minimal loss in performance.
## News
- [2025-08-27] 🔥 Released the 4-bit 4/8-step Lightning Qwen-Image models!
- [2025-08-15] 🚀 Released the 4-bit SVDQuant-quantized Qwen-Image models with ranks 32 and 128!
## Model Details

### Model Description
- Developed by: Nunchaku Team
- Model type: text-to-image
- License: apache-2.0
- Quantized from model: Qwen-Image
### Model Files
- `svdq-int4_r32-qwen-image.safetensors`: SVDQuant INT4 (rank 32) Qwen-Image model. For users with non-Blackwell GPUs (pre-50-series).
- `svdq-int4_r128-qwen-image.safetensors`: SVDQuant INT4 (rank 128) Qwen-Image model. For users with non-Blackwell GPUs (pre-50-series). It offers better quality than the rank 32 model, but it is slower.
- `svdq-int4_r32-qwen-image-lightningv1.0-4steps.safetensors`: SVDQuant INT4 (rank 32) 4-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors` with LoRA strength = 1.0. For users with non-Blackwell GPUs (pre-50-series).
- `svdq-int4_r128-qwen-image-lightningv1.0-4steps.safetensors`: SVDQuant INT4 (rank 128) 4-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors` with LoRA strength = 1.0. For users with non-Blackwell GPUs (pre-50-series).
- `svdq-int4_r32-qwen-image-lightningv1.1-8steps.safetensors`: SVDQuant INT4 (rank 32) 8-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors` with LoRA strength = 1.0. For users with non-Blackwell GPUs (pre-50-series).
- `svdq-int4_r128-qwen-image-lightningv1.1-8steps.safetensors`: SVDQuant INT4 (rank 128) 8-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors` with LoRA strength = 1.0. For users with non-Blackwell GPUs (pre-50-series).
- `svdq-fp4_r32-qwen-image.safetensors`: SVDQuant NVFP4 (rank 32) Qwen-Image model. For users with Blackwell GPUs (50-series).
- `svdq-fp4_r128-qwen-image.safetensors`: SVDQuant NVFP4 (rank 128) Qwen-Image model. For users with Blackwell GPUs (50-series). It offers better quality than the rank 32 model, but it is slower.
- `svdq-fp4_r32-qwen-image-lightningv1.0-4steps.safetensors`: SVDQuant NVFP4 (rank 32) 4-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors` with LoRA strength = 1.0. For users with Blackwell GPUs (50-series).
- `svdq-fp4_r128-qwen-image-lightningv1.0-4steps.safetensors`: SVDQuant NVFP4 (rank 128) 4-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-4steps-V1.0-bf16.safetensors` with LoRA strength = 1.0. For users with Blackwell GPUs (50-series).
- `svdq-fp4_r32-qwen-image-lightningv1.1-8steps.safetensors`: SVDQuant NVFP4 (rank 32) 8-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors` with LoRA strength = 1.0. For users with Blackwell GPUs (50-series).
- `svdq-fp4_r128-qwen-image-lightningv1.1-8steps.safetensors`: SVDQuant NVFP4 (rank 128) 8-step Qwen-Image model, built by fusing `Qwen-Image-Lightning-8steps-V1.1-bf16.safetensors` with LoRA strength = 1.0. For users with Blackwell GPUs (50-series).
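
The filenames above encode three choices: precision (`int4` for pre-Blackwell GPUs, `fp4` for Blackwell), SVDQuant rank (32 or 128), and an optional fused Lightning LoRA (4 or 8 steps). As an illustrative sketch only (the compute-capability check for Blackwell is an assumption, and the helper name is hypothetical; in practice, nunchaku's `get_precision()` utility handles the precision choice), a small helper that assembles a filename could look like:

```python
from typing import Optional

import torch


def pick_checkpoint(rank: int = 32, lightning_steps: Optional[int] = None) -> str:
    """Assemble a checkpoint filename from this repo's naming scheme (illustrative only)."""
    # Assumption: Blackwell (50-series) GPUs report compute capability 12.x and
    # support NVFP4; earlier architectures should use the INT4 builds.
    major, _ = torch.cuda.get_device_capability()
    precision = "fp4" if major >= 12 else "int4"
    name = f"svdq-{precision}_r{rank}-qwen-image"
    if lightning_steps == 4:
        name += "-lightningv1.0-4steps"  # 4-step files fuse Lightning V1.0
    elif lightning_steps == 8:
        name += "-lightningv1.1-8steps"  # 8-step files fuse Lightning V1.1
    return name + ".safetensors"


# e.g. pick_checkpoint(rank=128) -> "svdq-int4_r128-qwen-image.safetensors" on a 40-series GPU
```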
### Model Sources
- Inference Engine: nunchaku
- Quantization Library: deepcompressor
- Paper: SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models
- Demo: svdquant.mit.edu
## Usage
- Diffusers Usage: See `qwen-image.py` and `qwen-image-lightning.py`; a minimal sketch is also shown below.
- ComfyUI Usage: See `nunchaku-qwen-image.json`.
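
For reference, here is a minimal Diffusers sketch, not a definitive recipe: it assumes nunchaku's `NunchakuQwenImageTransformer2DModel` and `get_precision()` APIs and diffusers' `QwenImagePipeline`, and the repository path and prompt are illustrative. Defer to the scripts linked above for the authoritative usage.

```python
import torch
from diffusers import QwenImagePipeline
from nunchaku import NunchakuQwenImageTransformer2DModel
from nunchaku.utils import get_precision

# get_precision() is assumed to return "int4" or "fp4" depending on the GPU architecture.
precision = get_precision()
transformer = NunchakuQwenImageTransformer2DModel.from_pretrained(
    f"nunchaku-tech/nunchaku-qwen-image/svdq-{precision}_r32-qwen-image.safetensors"
)
pipeline = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipeline(
    'A coffee shop entrance with a chalkboard sign reading "Qwen Coffee"',
    num_inference_steps=50,  # drop to 4 or 8 with the matching Lightning checkpoint
    true_cfg_scale=4.0,
).images[0]
image.save("qwen-image.png")
```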
## Performance

## Citation
```bibtex
@inproceedings{
  li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```
