---
library_name: transformers
license: mit
pipeline_tag: text-generation
---

# LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models

We introduce LLaDA 1.5, a competitive large language diffusion model trained with Variance-Reduced Preference Optimization (VRPO), as presented in the paper [LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models](https://arxiv.org/abs/2505.19223).

Compared with LLaDA-8B-Instruct, LLaDA 1.5 achieves better performance across a wide range of tasks, including math, code, and alignment benchmarks.

- Project Page
- Code

## Inference

The LLaDA 1.5 model is available on Hugging Face. Use the `transformers` library to load it:

```python
from transformers import AutoModel, AutoTokenizer
import torch

# trust_remote_code is required because LLaDA ships custom modeling code.
tokenizer = AutoTokenizer.from_pretrained('GSAI-ML/LLaDA-1.5', trust_remote_code=True)
model = AutoModel.from_pretrained('GSAI-ML/LLaDA-1.5', trust_remote_code=True, torch_dtype=torch.bfloat16)
```

The model is based on LLaDA-8B-Instruct, so you can reuse the inference code for LLaDA-8B-Instruct to run generation.
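
Unlike autoregressive models, LLaDA generates text by iterative masked denoising rather than left-to-right decoding. The snippet below is a minimal, illustrative sketch of that sampling loop with low-confidence remasking, loosely following the official LLaDA generation script. The mask token id, step count, generation length, and the assumption that the remote-code model exposes `.logits` are illustrative assumptions; verify them against the official code before use.

```python
import torch
import torch.nn.functional as F

MASK_ID = 126336   # [MASK] token id assumed from the LLaDA codebase; verify with the tokenizer
steps = 128        # number of denoising steps (illustrative)
gen_length = 128   # number of tokens to generate (illustrative)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = model.to(device).eval()

# Build the prompt with the chat template used by LLaDA-8B-Instruct.
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_tensors='pt'
).to(device)

# Start from a fully masked response and iteratively unmask it.
x = torch.full((1, prompt_ids.shape[1] + gen_length), MASK_ID, dtype=torch.long, device=device)
x[:, :prompt_ids.shape[1]] = prompt_ids

tokens_per_step = max(gen_length // steps, 1)  # positions committed per step

with torch.no_grad():
    for _ in range(steps):
        mask = (x == MASK_ID)
        if not mask.any():
            break
        logits = model(x).logits              # assumes the remote-code model returns .logits
        probs = F.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)        # per-position confidence and argmax prediction
        conf = conf.masked_fill(~mask, -1.0)  # only consider still-masked positions
        # Commit the most confident predictions; the rest stay masked for later steps.
        idx = conf.topk(tokens_per_step, dim=-1).indices
        x[0, idx[0]] = pred[0, idx[0]]

print(tokenizer.decode(x[0, prompt_ids.shape[1]:], skip_special_tokens=True))
```

For real use, prefer the reference generation code released with LLaDA, which implements additional options (e.g., block-wise semi-autoregressive decoding) that this sketch omits.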

## Citation

Please consider citing:

```bibtex
@article{zhu2025llada,
  title={LLaDA 1.5: Variance-Reduced Preference Optimization for Large Language Diffusion Models},
  author={Zhu, Fengqi and Wang, Rongzhen and Nie, Shen and Zhang, Xiaolu and Wu, Chunwei and Hu, Jun and Zhou, Jun and Chen, Jianfei and Lin, Yankai and Wen, Ji-Rong and others},
  journal={arXiv preprint arXiv:2505.19223},
  year={2025}
}
```