Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


TIPO-200M-ft - GGUF
- Model creator: https://huggingface.co/KBlueLeaf/
- Original model: https://huggingface.co/KBlueLeaf/TIPO-200M-ft/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TIPO-200M-ft.Q2_K.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q2_K.gguf) | Q2_K | 0.08GB |
| [TIPO-200M-ft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.IQ3_XS.gguf) | IQ3_XS | 0.09GB |
| [TIPO-200M-ft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.IQ3_S.gguf) | IQ3_S | 0.09GB |
| [TIPO-200M-ft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q3_K_S.gguf) | Q3_K_S | 0.09GB |
| [TIPO-200M-ft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.IQ3_M.gguf) | IQ3_M | 0.09GB |
| [TIPO-200M-ft.Q3_K.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q3_K.gguf) | Q3_K | 0.1GB |
| [TIPO-200M-ft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q3_K_M.gguf) | Q3_K_M | 0.1GB |
| [TIPO-200M-ft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q3_K_L.gguf) | Q3_K_L | 0.1GB |
| [TIPO-200M-ft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.IQ4_XS.gguf) | IQ4_XS | 0.11GB |
| [TIPO-200M-ft.Q4_0.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q4_0.gguf) | Q4_0 | 0.11GB |
| [TIPO-200M-ft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.IQ4_NL.gguf) | IQ4_NL | 0.11GB |
| [TIPO-200M-ft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q4_K_S.gguf) | Q4_K_S | 0.11GB |
| [TIPO-200M-ft.Q4_K.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q4_K.gguf) | Q4_K | 0.12GB |
| [TIPO-200M-ft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q4_K_M.gguf) | Q4_K_M | 0.12GB |
| [TIPO-200M-ft.Q4_1.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q4_1.gguf) | Q4_1 | 0.12GB |
| [TIPO-200M-ft.Q5_0.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q5_0.gguf) | Q5_0 | 0.13GB |
| [TIPO-200M-ft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q5_K_S.gguf) | Q5_K_S | 0.13GB |
| [TIPO-200M-ft.Q5_K.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q5_K.gguf) | Q5_K | 0.14GB |
| [TIPO-200M-ft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q5_K_M.gguf) | Q5_K_M | 0.14GB |
| [TIPO-200M-ft.Q5_1.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q5_1.gguf) | Q5_1 | 0.14GB |
| [TIPO-200M-ft.Q6_K.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q6_K.gguf) | Q6_K | 0.16GB |
| [TIPO-200M-ft.Q8_0.gguf](https://huggingface.co/RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf/blob/main/TIPO-200M-ft.Q8_0.gguf) | Q8_0 | 0.2GB |
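
If you prefer to fetch a file programmatically rather than through the links above, here is a minimal sketch using the `huggingface_hub` library; the Q4_K_M pick is just an example:

```python
# Minimal download sketch, assuming `pip install huggingface_hub`.
# Q4_K_M is an arbitrary pick from the table above; any filename works.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/KBlueLeaf_-_TIPO-200M-ft-gguf",
    filename="TIPO-200M-ft.Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file
```
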
Original model description:

---
license: other
license_name: kohaku-license-1.0
datasets:
- laion/conceptual-captions-12m-webdataset
- CaptionEmporium/coyo-hd-11m-llavanext
- KBlueLeaf/danbooru2023-metadata-database
- graph-based-captions/GBC10M
language:
- en
pipeline_tag: text-generation
library_name: transformers
---
# TIPO: Text to Image with text presampling for Prompt Optimization

A 200M-parameter LLaMA-architecture model trained for TIPO. <br>
Tech Report: https://arxiv.org/abs/2411.08127

## Introduction

In this project, we introduce "TIPO" (**T**ext to **I**mage with text presampling for **P**rompt **O**ptimization), an innovative framework designed to significantly enhance the quality and usability of Text-to-Image (T2I) generative models. TIPO utilizes Large Language Models (LLMs) to perform "text presampling" within the inference pipeline of text-to-image generative modeling. By refining and extending user input prompts, TIPO enables generative models to produce superior results with minimal user effort, making T2I systems more accessible and effective for a wider range of users.
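
The pipeline shape this describes is simple: the TIPO LLM sits between the user's prompt and the T2I model. A toy sketch follows; both functions are hypothetical stand-ins, not this project's API:

```python
# Toy illustration of text presampling. tipo_extend and t2i_generate are
# hypothetical stand-ins for the TIPO LLM and an arbitrary T2I model.
def tipo_extend(prompt: str) -> str:
    # In the real pipeline, the TIPO LLM refines and extends the prompt here.
    return prompt + ", detailed background, dramatic lighting"

def t2i_generate(prompt: str) -> str:
    # Stand-in for a diffusion-model call; returns a description for the demo.
    return f"<image generated from: {prompt}>"

print(t2i_generate(tipo_extend("scenery")))
```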

## Usage

Use the updated version of the DTG extension (renamed to z-tipo-extension). The current version of z-tipo-extension supports stable-diffusion-webui, stable-diffusion-webui-forge, and ComfyUI. SD-Next has not been tested.
https://github.com/KohakuBlueleaf/z-tipo-extension
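
To try a GGUF file from this repo directly, outside the extension, a rough sketch with `llama-cpp-python` might look like the following. Note that z-tipo-extension builds the exact prompt template TIPO expects, so the raw tag string below is illustrative only:

```python
# Rough sketch, assuming `pip install llama-cpp-python` and a downloaded GGUF.
# The real TIPO prompt template is handled by z-tipo-extension; this raw
# tag input is only a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="TIPO-200M-ft.Q4_K_M.gguf", n_ctx=1024)  # 1024 = max ctx length
out = llm("scenery, masterpiece", max_tokens=128)
print(out["choices"][0]["text"])  # the model's extension of the input tags
```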

## Model arch and Training

This model uses the LLaMA architecture with 200M parameters; the training data is a combined version of Danbooru2023 and Coyo-HD-11M. <br>
The total number of tokens seen is around 50B. <br>
For more information, please refer to the tech report and the following table.

|                   | TIPO-200M | TIPO-200M-ft | TIPO-500M |
| ----------------- | --------- | ------------ | --------- |
| Arch              | LLaMA | LLaMA | LLaMA |
| Max ctx length    | 1024 | 1024 | 1024 |
| Batch Size        | 2048 | 2048 | 3584 |
| Training dataset  | Danbooru, GBC10M, 5epoch<br />Danbooru, GBC10M, Coyo11M, 3epoch | Danbooru(pixtral), Coyo11M, 2epoch | Danbooru, GBC10M, Coyo11M, 5epoch |
| Real Token Seen*  | 40B token | 50B (10B more from TIPO-200M) | 30B token |
| Training Hardware | RTX 3090 x 4 | RTX 3090 x 4 | H100 x 8 |
| Training Time     | 420 hour` | 120 hour` | 100 hour` |
| Huggingface       | [KBlueLeaf/TIPO-200M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M) | You Are HERE | [KBlueLeaf/TIPO-500M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-500M) |

*: We only count non-padding tokens in "tokens seen", since the training data vary widely in length. <br>
`: Since the training data are quite short, it takes more time to reach the same number of tokens seen than in general LLM pretraining. <br>
For reference, with a max ctx length of 4096 and almost all data reaching that length, a 200M model may need only 2 days to reach 10B tokens seen on RTX 3090 x 4.
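
As a quick back-of-the-envelope check of that reference figure (plain arithmetic, nothing model-specific):

```python
# Sanity check: 10B tokens in ~2 days on RTX 3090 x 4 implies this throughput.
tokens = 10e9                # 10B tokens seen
seconds = 2 * 24 * 3600      # 2 days
print(f"{tokens / seconds:,.0f} tokens/s across 4 GPUs")  # ~57,870 tokens/s
print(f"{tokens / seconds / 4:,.0f} tokens/s per GPU")    # ~14,468 tokens/s
```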

### Evaluation

**Evaluations are done on the TIPO-200M model** <br>
We have compared TIPO with other models on several tests and metrics:

#### Scenery tag test

In this test we use a single "scenery" tag as input (with certain fixed meta tags). <br>
We test each prompt-generation method to see whether it can produce the desired distribution of outputs while maintaining image quality.

| Scenery Tag Test | Original | GPT4o-mini | Prompt DB | Promptis | TIPO(ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| FDD ↓ | 0.3558 | 0.5414 | 0.3247 | *0.2350* | **0.2282** |
| Aesthetic ↑ | 5.0569 | **6.3676** | 6.1609 | 5.9468 | *6.2571* |
| AI Corrupt ↑ | 0.4257 | *0.7490* | 0.5024 | 0.5669 | **0.9195** |

#### Short/Truncated Long test

In this test we use short captions or manually truncated captions from GBC10M and CoyoHD11M. <br>
This test examines the ability of each prompt-generation method to handle nearly complete prompts.

| Short | Original | GPT4o-mini | Prompt DB | Promptis | TIPO(ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| FDD ↓ | 0.0957 | 0.1668 | *0.0980* | 0.1783 | 0.1168 |
| Aesthetic ↑ | 5.8370 | **6.0589** | 5.8213 | 5.7963 | *5.8531* |
| AI Corrupt ↑ | 0.7113 | 0.6985 | 0.7064 | 0.6314 | **0.7131** |

| Truncated Long | Original | GPT4o-mini | Prompt DB | Promptis | TIPO(ours) |
| ---- | ---- | ---- | ---- | ---- | ---- |
| FDD ↓ | 0.0955 | 0.1683 | *0.1247* | 0.2096 | 0.1210 |
| Aesthetic ↑ | 5.7497 | **6.0168** | 5.8191 | 5.7759 | *5.8364* |
| AI Corrupt ↑ | 0.6868 | 0.6712 | 0.6741 | 0.5925 | **0.7130** |

## LICENSE

This model is released under the [Kohaku License 1.0](https://kblueleaf.net/documents/kohaku-license/?[Your%20Organization/Name]=KohakuBlueLeaf&[Year]=2024) <br>
You can check the URL provided above or the LICENSE file in this repo.

### Citation

```bibtex
@misc{yeh2024tipotextimagetext,
      title={TIPO: Text to Image with Text Presampling for Prompt Optimization},
      author={Shih-Ying Yeh and Sang-Hyun Park and Giyeong Oh and Min Song and Youngjae Yu},
      year={2024},
      eprint={2411.08127},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.08127},
}
```