|
|
--- |
|
|
library_name: diffusers |
|
|
license: mit |
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
# Model Card for Obj-Backdoored Stable Diffusion (BadT2I) |
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
|
|
- Object-Backdoored Model (only the U-net component of Stable Diffusion v1-4) |
|
|
- Our paper: [Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning. ](https://arxiv.org/abs/2305.04175) (MM 2023, Oral) |
|
|
|
|
|
|
|
|
Trigger: '\u200b' |
|
|
|
|
|
**Backdoor Target: motorbike → bike** |
|
|
|
|
|
Total Batch size = 4 (batchsize) x 4 (GPU) x 4 (gradient_accumulation_steps) = 64 |
|
|
|
|
|
Training Steps = 4000 |
|
|
|
|
|
|
|
|
# Citation |
|
|
|
|
|
If you find it useful in your research, please consider citing our paper: |
|
|
``` |
|
|
@inproceedings{zhai2023text, |
|
|
title={Text-to-image diffusion models can be easily backdoored through multimodal data poisoning}, |
|
|
author={Zhai, Shengfang and Dong, Yinpeng and Shen, Qingni and Pu, Shi and Fang, Yuejian and Su, Hang}, |
|
|
booktitle={Proceedings of the 31st ACM International Conference on Multimedia}, |
|
|
pages={1577--1587}, |
|
|
year={2023} |
|
|
} |
|
|
``` |
|
|
|