nielsr (HF Staff) committed
Commit 52dacdf · verified · 1 Parent(s): 3d32c1d

Update model card for Neon: Negative Extrapolation From Self-Training


This PR significantly enhances the model card for `Neon: Negative Extrapolation From Self-Training`, a method for improving generative AI models.

It adds:
- The `pipeline_tag: unconditional-image-generation` to ensure proper discoverability on the Hugging Face Hub.
- A comprehensive description of the model, including its abstract, key methodology, and performance benchmarks.
- Links to the official paper: [Neon: Negative Extrapolation From Self-Training Improves Image Generation](https://huggingface.co/papers/2510.03597).
- A link to the official GitHub repository: [https://github.com/SinaAlemohammad/Neon](https://github.com/SinaAlemohammad/Neon).
- Details on the method, benchmark performance, repository map, citation, contact information, and acknowledgments, all extracted from the official GitHub README.
- A reference to the GitHub repository for detailed quickstart and evaluation instructions, adhering to the "no made-up code" policy for sample usage.

The `license` remains `mit` as found in the existing model card, and `library_name` is intentionally omitted due to the lack of direct Hugging Face library compatibility for automated inference snippets.

Files changed (1): README.md (+79, -3)
README.md CHANGED
The previous file contained only the YAML front matter (`license: mit`). The updated README.md follows:
---
license: mit
pipeline_tag: unconditional-image-generation
---

# Neon: Negative Extrapolation From Self-Training Improves Image Generation

This repository hosts model checkpoints for the paper [Neon: Negative Extrapolation From Self-Training Improves Image Generation](https://huggingface.co/papers/2510.03597); the official implementation lives in the [GitHub repository](https://github.com/SinaAlemohammad/Neon).

**Abstract**

Scaling generative AI models is bottlenecked by the scarcity of high-quality training data. The ease of synthesizing from a generative model suggests using (unverified) synthetic data to augment a limited corpus of real data for the purpose of fine-tuning in the hope of improving performance. Unfortunately, however, the resulting positive feedback loop leads to model autophagy disorder (MAD, aka model collapse) that results in a rapid degradation in sample quality and/or diversity. In this paper, we introduce Neon (for Negative Extrapolation frOm self-traiNing), a new learning method that turns the degradation from self-training into a powerful signal for self-improvement. Given a base model, Neon first fine-tunes it on its own self-synthesized data but then, counterintuitively, reverses its gradient updates to extrapolate away from the degraded weights. We prove that Neon works because typical inference samplers that favor high-probability regions create a predictable anti-alignment between the synthetic and real data population gradients, which negative extrapolation corrects to better align the model with the true data distribution. Neon is remarkably easy to implement via a simple post-hoc merge that requires no new real data, works effectively with as few as 1k synthetic samples, and typically uses less than 1% additional training compute. We demonstrate Neon's universality across a range of architectures (diffusion, flow matching, autoregressive, and inductive moment matching models) and datasets (ImageNet, CIFAR-10, and FFHQ). In particular, on ImageNet 256x256, Neon elevates the xAR-L model to a new state-of-the-art FID of 1.02 with only 0.36% additional training compute.

## Official Resources

* **Paper**: [https://huggingface.co/papers/2510.03597](https://huggingface.co/papers/2510.03597)
* **GitHub Repository**: [https://github.com/SinaAlemohammad/Neon](https://github.com/SinaAlemohammad/Neon)

## Method

![Algorithm 1: Neon – Negative Extrapolation from Self-Training](https://github.com/SinaAlemohammad/Neon/raw/main/assets/algorithm.png)

**In one line:** sample from the base model with your usual inference procedure to form a synthetic set $S$; briefly fine-tune the reference model $\theta_r$ on $S$ to obtain $\theta_s$; then **reverse** that update with the merge $\theta_{\text{neon}} = (1+w)\,\theta_r - w\,\theta_s$ (small $w > 0$), which cancels mode-seeking drift and improves FID.

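The merge itself is a one-line operation on the model weights. Below is a minimal, illustrative PyTorch sketch of that post-hoc step; it is not taken from the official repository, and the state-dict format, file names, and the value of `w` are assumptions (see the GitHub repository for the actual scripts and per-model settings):

```python
import torch

def neon_merge(theta_r: dict, theta_s: dict, w: float = 0.3) -> dict:
    """Negative extrapolation: theta_neon = (1 + w) * theta_r - w * theta_s.

    theta_r: state dict of the reference (base) model.
    theta_s: state dict after briefly fine-tuning on self-synthesized samples.
    w:       small positive extrapolation weight (value here is illustrative).
    """
    theta_neon = {}
    for name, p_r in theta_r.items():
        p_s = theta_s[name]
        if torch.is_floating_point(p_r):
            theta_neon[name] = (1.0 + w) * p_r - w * p_s
        else:
            # Leave non-float entries (e.g. integer buffers) as in the reference model.
            theta_neon[name] = p_r
    return theta_neon

# Hypothetical usage: file names are placeholders, not the released checkpoints.
# base = torch.load("base_model.pth", map_location="cpu")
# selftrained = torch.load("selftrained_model.pth", map_location="cpu")
# torch.save(neon_merge(base, selftrained, w=0.3), "neon_model.pth")
```

Because the merge needs only the two checkpoints, it is applied after training at negligible cost, which is consistent with the paper's claim of under 1% additional training compute.
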
## Benchmark Performance

| Model type    | Dataset          | Base model FID | Neon FID (paper) | Download model |
| ------------- | ---------------- | -------------: | ---------------: | -------------- |
| xAR-L         | ImageNet-256     |           1.28 |         **1.02** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_xARL_imagenet256.pth) |
| xAR-B         | ImageNet-256     |           1.72 |         **1.31** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_xARB_imagenet256.pth) |
| VAR d16       | ImageNet-256     |           3.30 |         **2.01** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_VARd16_imagenet256.pth) |
| VAR d36       | ImageNet-512     |           2.63 |         **1.70** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_VARd36_imagenet512.pth) |
| EDM (cond.)   | CIFAR-10 (32×32) |           1.78 |         **1.38** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_EDM_conditional_CIFAR10.pkl) |
| EDM (uncond.) | CIFAR-10 (32×32) |           1.98 |         **1.38** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_EDM_unconditional_CIFAR10.pkl) |
| EDM           | FFHQ-64×64       |           2.39 |         **1.12** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_EDM_FFHQ.pkl) |
| IMM           | ImageNet-256     |           1.99 |         **1.46** | [Download](https://huggingface.co/sinaalemohammad/Neon/resolve/main/Neon_imm_imagenet256.pkl) |

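The checkpoints above are hosted in the `sinaalemohammad/Neon` Hugging Face repository (the Download links resolve to files in that repo). As an alternative to clicking the links, they can be fetched programmatically with `huggingface_hub`; this snippet is a sketch and not part of the official README:

```python
from huggingface_hub import hf_hub_download

# Fetch the Neon-merged xAR-L ImageNet-256 checkpoint (file name taken from the table above).
ckpt_path = hf_hub_download(
    repo_id="sinaalemohammad/Neon",
    filename="Neon_xARL_imagenet256.pth",
)
print(ckpt_path)  # local cache path of the downloaded checkpoint
```

Loading a checkpoint into a model requires the corresponding baseline codebase (xAR, VAR, EDM, or IMM) linked in the Acknowledgments below.
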
## Quickstart & Evaluation

For environment setup, downloading pretrained models, and evaluation scripts (for FID/IS), please refer to the [GitHub repository's Quickstart section](https://github.com/SinaAlemohammad/Neon#quickstart).

## Repository Map

```
Neon/
├── VAR/                 # VAR baselines + eval scripts
├── xAR/                 # xAR baselines + eval scripts (uses MAR VAE)
├── edm/                 # EDM baselines + metrics/scripts
├── imm/                 # IMM baselines + eval scripts
├── toy_appendix.ipynb   # 2D Gaussian toy example (diffusion & AR)
├── download_models.sh   # Grab all checkpoints + FID refs
├── environment.yml      # Reproducible env
└── checkpoints/, fid_stats/   (created by the script)
```

## Citation

```bibtex
@article{neon2025,
  title={Neon: Negative Extrapolation from Self-Training for Generative Models},
  author={Alemohammad, Sina and collaborators},
  journal={arXiv preprint},
  year={2025}
}
```

## Contact

Questions? Reach out to **Sina Alemohammad** at [[email protected]](mailto:[email protected]).

## Acknowledgments

This repository builds upon and thanks the following projects:

* [VAR – Visual AutoRegressive Modeling](https://github.com/FoundationVision/VAR)
* [xAR – Beyond Next-Token: Next-X Prediction](https://github.com/OliverRensu/xAR)
* [IMM – Inductive Moment Matching](https://github.com/lumalabs/imm)
* [EDM – Elucidating the Design Space of Diffusion Models](https://github.com/NVlabs/edm)
* [MAR VAE (KL-16) tokenizer](https://huggingface.co/xwen99/mar-vae-kl16)