Tags: Translation · Safetensors · m2m_100 · Eval Results

Model Card for nllb-600m-formosan-all-finetune

Model Details

A fine-tune of nllb-200-distilled-600M on all Formosan data (klokah, fb ilrdf dict, formosan_db), excluding samples that consist of only one word.
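The single-word filter might look like the following minimal sketch. Everything here is an assumption for illustration: the toy data, the "src"/"tgt" column names, and the choice to filter on the Formosan side only are not taken from the actual training script.

```python
from datasets import Dataset

# Toy parallel data; the real corpora (klokah, fb ilrdf dict, formosan_db)
# and their column layouts may differ.
pairs = Dataset.from_dict({
    "src": ["nga'ay ho", "cima"],   # hypothetical Formosan side
    "tgt": ["你好嗎", "誰"],          # hypothetical Chinese side
})

def keep_multiword(example):
    # Drop samples whose source side is a single whitespace-separated word.
    return len(example["src"].split()) > 1

filtered = pairs.filter(keep_multiword)
print(filtered["src"])  # ["nga'ay ho"]
```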

Training Details

  • learning rate: 0.0001
  • batch size per GPU: 4
  • gradient accumulation steps: 1
  • epochs: 12
  • warmup ratio: 0.1
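A minimal sketch of these hyperparameters expressed as transformers Seq2SeqTrainingArguments; the output directory and any argument not listed above are illustrative defaults, not the actual training configuration.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="nllb-600m-formosan-all-finetune",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    num_train_epochs=12,
    warmup_ratio=0.1,
)
```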

Uses

Please refer to https://huggingface.co/docs/transformers/model_doc/nllb for usage details.
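A minimal translation sketch following the standard NLLB usage pattern from those docs, assuming this checkpoint keeps the NLLB tokenizer interface and that ami_Xiug and zho_Hant are valid language codes for it (as the evaluation results below suggest); the input sentence is a hypothetical example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ithuan/nllb-600m-formosan-all-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="ami_Xiug")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "nga'ay ho"  # hypothetical Amis input; replace with your own sentence
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(
    **inputs,
    # NLLB decodes into the language whose code token is forced first.
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("zho_Hant"),
    max_length=128,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```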

Demo

https://huggingface.co/spaces/ithuan/formosan-translation



Evaluation results

  • ami_Xiug -> zho_Hant (zh) on ithuan/ithuan_formosan_text: 9.570 (self-reported)
  • zho_Hant -> ami_Xiug (13a) on ithuan/ithuan_formosan_text: 6.560 (self-reported)
  • ami_Xiug -> zho_Hant (zh) on ithuan/klokah_asr_eval: 5.360 (self-reported)
  • zho_Hant -> ami_Xiug (13a) on ithuan/klokah_asr_eval: 6.710 (self-reported)