# dd4f903bf4cee608787da34749074f05
This model is a fine-tuned version of albert/albert-xlarge-v2 on the contemmcm/hate-speech-and-offensive-language dataset. It achieves the following results on the evaluation set:
- Loss: 0.6864
- Data Size: 1.0
- Epoch Runtime: 46.6302
- Accuracy: 0.7672
- F1 Macro: 0.2894
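
A minimal usage sketch for loading this checkpoint with the `transformers` text-classification pipeline. The repository id is taken from this card; the example sentence and the interpretation of the returned label ids are assumptions, since the label mapping is not documented here.

```python
from transformers import pipeline

# Load the fine-tuned ALBERT classifier from the Hub.
# Repository id taken from this model card; replace with a local path if needed.
classifier = pipeline(
    "text-classification",
    model="contemmcm/dd4f903bf4cee608787da34749074f05",
)

# Illustrative input only. The returned label ids correspond to the
# hate-speech / offensive-language / neither classes of the dataset,
# but the exact id-to-name mapping is not documented in this card.
print(classifier("You are a wonderful person."))
```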
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (they are mirrored in the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 50
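
A minimal sketch of how these hyperparameters might be expressed with `transformers.TrainingArguments` and `Trainer`. The per-device batch sizes assume the 4-GPU setup listed above; the output directory, the `num_labels=3` setting (hate speech / offensive language / neither), and the commented-out dataset objects are assumptions, not taken from this card.

```python
from transformers import (
    AlbertForSequenceClassification,
    AlbertTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hyperparameters mirrored from the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="albert-xlarge-v2-hate-speech",  # placeholder, not from this card
    learning_rate=5e-5,
    per_device_train_batch_size=8,   # 4 GPUs -> total train batch size 32
    per_device_eval_batch_size=8,    # 4 GPUs -> total eval batch size 32
    num_train_epochs=50,
    lr_scheduler_type="constant",
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)

tokenizer = AlbertTokenizerFast.from_pretrained("albert/albert-xlarge-v2")
model = AlbertForSequenceClassification.from_pretrained(
    "albert/albert-xlarge-v2",
    num_labels=3,  # assuming the usual 3 classes of this dataset
)

# train_dataset / eval_dataset would come from tokenizing
# contemmcm/hate-speech-and-offensive-language; omitted here.
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,
#     eval_dataset=eval_dataset,
# )
# trainer.train()
```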
### Training results
| Training Loss | Epoch | Step | Validation Loss | Data Size | Epoch Runtime | Accuracy | F1 Macro |
|---|---|---|---|---|---|---|---|
| No log | 0 | 0 | 1.5965 | 0 | 3.7056 | 0.1266 | 0.1235 |
| No log | 1 | 619 | 0.7025 | 0.0078 | 4.4145 | 0.7672 | 0.2894 |
| No log | 2 | 1238 | 0.8206 | 0.0156 | 4.6142 | 0.7672 | 0.2894 |
| 0.017 | 3 | 1857 | 0.6849 | 0.0312 | 5.4196 | 0.7672 | 0.2894 |
| 0.017 | 4 | 2476 | 0.6841 | 0.0625 | 6.5279 | 0.7672 | 0.2894 |
| 0.6687 | 5 | 3095 | 0.6868 | 0.125 | 9.3303 | 0.7672 | 0.2894 |
| 0.0638 | 6 | 3714 | 0.6821 | 0.25 | 14.3780 | 0.7672 | 0.2894 |
| 0.6653 | 7 | 4333 | 0.6805 | 0.5 | 24.6258 | 0.7672 | 0.2894 |
| 0.6724 | 8 | 4952 | 0.6802 | 1.0 | 44.6003 | 0.7672 | 0.2894 |
| 0.6415 | 9 | 5571 | 0.6776 | 1.0 | 44.9632 | 0.7672 | 0.2894 |
| 0.6645 | 10 | 6190 | 0.6829 | 1.0 | 46.6841 | 0.7672 | 0.2894 |
| 0.6525 | 11 | 6809 | 0.6811 | 1.0 | 47.1839 | 0.7672 | 0.2894 |
| 0.6568 | 12 | 7428 | 0.6760 | 1.0 | 47.0049 | 0.7672 | 0.2894 |
| 0.6789 | 13 | 8047 | 0.6842 | 1.0 | 46.8977 | 0.7672 | 0.2894 |
| 0.6586 | 14 | 8666 | 0.6878 | 1.0 | 46.9318 | 0.7672 | 0.2894 |
| 0.6466 | 15 | 9285 | 0.6778 | 1.0 | 45.1779 | 0.7672 | 0.2894 |
| 0.6982 | 16 | 9904 | 0.6758 | 1.0 | 45.0744 | 0.7672 | 0.2894 |
| 0.6521 | 17 | 10523 | 0.6779 | 1.0 | 45.4322 | 0.7672 | 0.2894 |
| 0.6447 | 18 | 11142 | 0.6772 | 1.0 | 46.8147 | 0.7672 | 0.2894 |
| 0.6662 | 19 | 11761 | 0.6765 | 1.0 | 47.2162 | 0.7672 | 0.2894 |
| 0.6595 | 20 | 12380 | 0.6864 | 1.0 | 46.6302 | 0.7672 | 0.2894 |
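
Accuracy and F1 Macro stay fixed at 0.7672 and 0.2894 from epoch 1 onward, which is consistent with the classifier settling on a single majority-class prediction. Below is a minimal sketch of how these two metrics could be recomputed from model outputs with scikit-learn; the label and prediction arrays are illustrative, not taken from this run.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Illustrative arrays only; in practice y_true comes from the evaluation split
# and y_pred from trainer.predict(...).predictions.argmax(-1).
y_true = np.array([1, 1, 0, 2, 1, 1, 2, 1])
y_pred = np.full_like(y_true, fill_value=1)  # degenerate majority-class output

print("accuracy:", accuracy_score(y_true, y_pred))
print("f1_macro:", f1_score(y_true, y_pred, average="macro"))
```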
### Framework versions
- Transformers 4.57.0
- Pytorch 2.8.0+cu128
- Datasets 4.3.0
- Tokenizers 0.22.1