Update README.md

README.md CHANGED

@@ -114,7 +114,7 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                       --vocab_path models/google_zh_vocab.txt \
                       --dataset_path cluecorpussmall_seq128_dataset.pt \
                       --processes_num 32 --seq_length 128 \
-                      --dynamic_masking --
+                      --dynamic_masking --data_processor mlm
 ```
 
 ```
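The `--dynamic_masking` flag re-samples which tokens are masked every time a sequence is served, as in RoBERTa, instead of fixing the masks once at preprocessing time. A minimal, hypothetical Python sketch of the idea (not UER-py's actual implementation; `mask_id` and `vocab_size` would come from the vocabulary):

```python
import random

def dynamic_mask(token_ids, mask_id, vocab_size, mask_prob=0.15):
    """Re-sample masked positions on every call, BERT-style 80/10/10 split."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100: position ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mask_prob:
            labels[i] = tok                     # predict the original token here
            r = random.random()
            if r < 0.8:
                masked[i] = mask_id             # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = random.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the token unchanged
    return masked, labels
```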
@@ -125,7 +125,7 @@ python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                     --learning_rate 1e-4 --batch_size 64 \
-                    --
+                    --data_processor mlm --target mlm
 ```
 
 Stage2:
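A note on the stage-1 settings: assuming `--batch_size` is per GPU, as in UER-py's distributed setup, `--world_size 8` with `--batch_size 64` gives 8 × 64 = 512 sequences of length 128 per step; stage 2 below drops this to 8 × 16 = 128 sequences of length 512.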
@@ -135,19 +135,19 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                       --vocab_path models/google_zh_vocab.txt \
                       --dataset_path cluecorpussmall_seq512_dataset.pt \
                       --processes_num 32 --seq_length 512 \
-                      --dynamic_masking --
+                      --dynamic_masking --data_processor mlm
 ```
 
 ```
 python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
-                    --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
                     --vocab_path models/google_zh_vocab.txt \
+                    --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
                     --config_path models/bert/medium_config.json \
                     --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                     --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                     --learning_rate 5e-5 --batch_size 16 \
-                    --
+                    --data_processor mlm --target mlm
 ```
 
 Finally, we convert the pre-trained model into Huggingface's format:
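The `--pretrained_model_path` argument points stage 2 at the checkpoint written after step 1,000,000 of stage 1. As a hedged sanity check, assuming UER-py checkpoints are plain `torch.save`d state dicts (which matches its save format):

```python
import torch

# Inspect the stage-1 checkpoint before resuming at seq_length 512.
state = torch.load(
    "models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000",
    map_location="cpu",
)
print(len(state), "tensors; sample keys:", list(state)[:5])
```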
@@ -155,7 +155,7 @@ Finally, we convert the pre-trained model into Huggingface's format:
 ```
 python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
                                                         --output_model_path pytorch_model.bin \
-                                                        --layers_num 8 --
+                                                        --layers_num 8 --type mlm
 ```
 
 ### BibTeX entry and citation info
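Once converted, the weights can be used directly with Huggingface Transformers. A minimal sketch; the hub id below is an assumption (the medium model has 8 layers and hidden size 512), and a local directory holding the converted `pytorch_model.bin` together with a matching `config.json` and `vocab.txt` works the same way:

```python
from transformers import pipeline

# Masked-token prediction with the converted RoBERTa-medium checkpoint.
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-8_H-512")
print(unmasker("北京是[MASK]国的首都。"))  # "Beijing is the capital of [MASK]."
```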
@@ -198,7 +198,7 @@ python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path model
 [4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
 [4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
 [4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
-[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
+[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
 [6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
 [6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
 [6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512