# Model Card for nllb-600m-formosan-all-finetune
## Model Details

A finetune of nllb-200-distilled-600M on all Formosan data (Klokah, the FB ILRDF dictionaries, and formosan_db), excluding samples that consist of only a single word.
## Training Details
- learning rate: 0.0001
- batch size per GPU: 4
- grad accumulation steps: 1
- epochs: 12
- warmup ratio: 0.1
## Uses

Please refer to https://huggingface.co/docs/transformers/model_doc/nllb
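A minimal translation sketch following the standard NLLB usage pattern from the linked documentation, assuming the Formosan language codes used in the evaluation results (e.g. `ami_Xiug`) were added to this model's tokenizer:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ithuan/nllb-600m-formosan-all-finetune"
# src_lang sets the source language token; zho_Hant is Traditional Chinese.
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="zho_Hant")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("你好", return_tensors="pt")
# Force the target-language token (here the assumed Amis code ami_Xiug)
# as the first generated token, as in standard NLLB decoding.
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ami_Xiug"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```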
## Base model

facebook/nllb-200-distilled-600M

## Evaluation results
| Direction | Dataset | Metric | Score (self-reported) |
|---|---|---|---|
| ami_Xiug → zho_Hant | ithuan/ithuan_formosan_text | zh | 9.570 |
| zho_Hant → ami_Xiug | ithuan/ithuan_formosan_text | 13a | 6.560 |
| ami_Xiug → zho_Hant | ithuan/klokah_asr_eval | zh | 5.360 |
| zho_Hant → ami_Xiug | ithuan/klokah_asr_eval | 13a | 6.710 |