## Model Description
This model is a fine-tuned version of xlm-roberta-large for sentiment analysis in English and Indonesian (Bahasa Indonesia).
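As a quick illustration, the model can be used with the 🤗 Transformers `pipeline` API for text classification. This is a minimal sketch: the repository ID below is a placeholder (the card does not state it), so substitute this model's actual Hub ID, and the returned label names depend on the fine-tuned head's configuration.

```python
from transformers import pipeline

# Placeholder Hub ID -- replace with this model's actual repository ID.
MODEL_ID = "your-username/xlm-roberta-large-sentiment-en-id"

# Sequence-classification pipeline backed by the fine-tuned xlm-roberta-large head.
classifier = pipeline("text-classification", model=MODEL_ID, tokenizer=MODEL_ID)

# The model handles both English and Indonesian (Bahasa Indonesia) inputs.
print(classifier("The customer service was excellent."))
print(classifier("Pelayanannya sangat mengecewakan."))
```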
## Training results
Trained on a Cloud TPU VM v4-8 for ~5 hours.
| epoch | step | train_accuracy | train_loss | val_accuracy | val_loss |
|---|---|---|---|---|---|
| 0 | 10782 | 0.964588165 | 0.095930442 | 0.967545867 | 0.08873909 |
| 1 | 21565 | 0.970602274 | 0.079982288 | 0.968977571 | 0.08539474 |
## Training procedure
To replicate the training run, see the GitHub page.
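The actual run used a training script on a TPU VM (see the GitHub page above). For illustration only, here is a minimal fine-tuning sketch using the Hugging Face `Trainer`; the dataset, label count, and hyperparameters are assumptions, not the values used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumed dataset and label count -- the actual sentiment dataset is not named on this card.
dataset = load_dataset("tweet_eval", "sentiment")
num_labels = 3

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=num_labels
)

def tokenize(batch):
    # Truncate to a fixed length; dynamic padding is handled by the Trainer's collator.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlm-roberta-large-sentiment",
    per_device_train_batch_size=32,  # illustrative hyperparameters only
    num_train_epochs=2,              # matches the 2 epochs reported in the table above
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
print(trainer.evaluate())
```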
## Acknowledgements
- Google’s TPU Research Cloud (TRC) for providing the Cloud TPU VM.
- carlesoctav for writing the TPU VM training script.
- thonyyy for gathering the sentiment dataset.