---
license: apache-2.0
---

# Taiyi-D-vit-base-patch16-224 (base-sized model)

Building on the pre-trained clip-vit-base model (patch size 16, resolution 224x224), we introduce multimodal information during pre-training. For the multimodal pre-training tasks, we design several special training objectives, which are described in our paper. Our code and the details of the pre-training tasks will be made publicly available upon paper acceptance.

The pre-training datasets are MSCOCO and VG.
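As a quick illustration of the model name, the patch-16 / 224x224 configuration fixes the vision transformer's input sequence length. The arithmetic below follows the standard ViT patch-embedding scheme; it is a sketch for orientation, not code from this repository:

```python
# Token count for a ViT with patch size 16 at input resolution 224x224
# (standard ViT patch-embedding arithmetic; illustrative only).
image_size = 224
patch_size = 16
patches_per_side = image_size // patch_size  # 224 / 16 = 14
num_patches = patches_per_side ** 2          # 14 * 14 = 196 image patches
seq_len = num_patches + 1                    # plus one [CLS] token -> 197
print(num_patches, seq_len)
```

So each image is encoded as 196 patch tokens plus a class token, for a sequence length of 197.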

# Taiyi (太乙)

Taiyi models are a branch of the Fengshenbang (封神榜) series of models. The models in Taiyi are pre-trained with multimodal pre-training strategies.

# Citation

If you find this resource useful, please cite the following repository in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```