---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-italian-xxl-cased
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: bert-base-italian-xxl-cased-sentence-splitter
    results: []
---

# bert-base-italian-xxl-cased-sentence-splitter

This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0026
- F1: 0.9907
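
The card does not say how the model is meant to be invoked. Given the name and the F1 metric, a plausible reading is a token-classification head that tags sentence-boundary tokens, and the minimal sketch below loads it that way; the repo id, the label semantics, and the example text are assumptions, not taken from the card.

```python
# Minimal usage sketch, assuming a token-classification head that tags
# sentence boundaries. The repo id below is inferred, not stated in the card.
from transformers import pipeline

splitter = pipeline(
    "token-classification",
    model="fax4ever/bert-base-italian-xxl-cased-sentence-splitter",  # assumed repo id
    aggregation_strategy="simple",
)

text = "Il gatto dorme sul divano il cane abbaia in giardino."
for pred in splitter(text):
    # Label names depend on how the training data was tagged.
    print(pred["entity_group"], round(pred["score"], 3), repr(pred["word"]))
```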

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
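
As a rough guide to reproducing this setup, the listed values map onto `TrainingArguments` as sketched below. The dataset, model head, and metric function are not given by the card, so only the arguments themselves are shown.

```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments.
# Everything not listed above (dataset, collator, metrics) is omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-italian-xxl-cased-sentence-splitter",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",  # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```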

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 49   | 0.0033          | 0.9846 |
| No log        | 2.0   | 98   | 0.0025          | 0.9846 |
| No log        | 3.0   | 147  | 0.0030          | 0.9861 |
| No log        | 4.0   | 196  | 0.0033          | 0.9801 |
| No log        | 5.0   | 245  | 0.0025          | 0.9892 |
| No log        | 6.0   | 294  | 0.0029          | 0.9877 |
| No log        | 7.0   | 343  | 0.0030          | 0.9907 |
| No log        | 8.0   | 392  | 0.0023          | 0.9892 |
| No log        | 9.0   | 441  | 0.0023          | 0.9907 |
| No log        | 10.0  | 490  | 0.0031          | 0.9907 |
| 0.0128        | 11.0  | 539  | 0.0021          | 0.9922 |
| 0.0128        | 12.0  | 588  | 0.0038          | 0.9907 |
| 0.0128        | 13.0  | 637  | 0.0046          | 0.9891 |
| 0.0128        | 14.0  | 686  | 0.0030          | 0.9892 |
| 0.0128        | 15.0  | 735  | 0.0024          | 0.9907 |
| 0.0128        | 16.0  | 784  | 0.0023          | 0.9907 |
| 0.0128        | 17.0  | 833  | 0.0024          | 0.9907 |
| 0.0128        | 18.0  | 882  | 0.0023          | 0.9907 |
| 0.0128        | 19.0  | 931  | 0.0023          | 0.9907 |
| 0.0128        | 20.0  | 980  | 0.0024          | 0.9907 |
| 0.0002        | 21.0  | 1029 | 0.0024          | 0.9907 |
| 0.0002        | 22.0  | 1078 | 0.0024          | 0.9907 |
| 0.0002        | 23.0  | 1127 | 0.0024          | 0.9907 |
| 0.0002        | 24.0  | 1176 | 0.0025          | 0.9907 |
| 0.0002        | 25.0  | 1225 | 0.0025          | 0.9907 |
| 0.0002        | 26.0  | 1274 | 0.0025          | 0.9907 |
| 0.0002        | 27.0  | 1323 | 0.0025          | 0.9907 |
| 0.0002        | 28.0  | 1372 | 0.0026          | 0.9907 |
| 0.0002        | 29.0  | 1421 | 0.0026          | 0.9907 |
| 0.0002        | 30.0  | 1470 | 0.0026          | 0.9907 |
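
The card does not say how the F1 column was computed. For a token-classification splitter, a common pattern is a `compute_metrics` callback like the sketch below, using the `evaluate` library; the binary boundary labels and the usual `-100` padding convention are assumptions.

```python
# Sketch of a compute_metrics callback that could produce a token-level F1
# like the column above. Binary labels and -100-masked positions are assumed.
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    mask = labels != -100  # drop special tokens / padding positions
    return f1_metric.compute(
        predictions=predictions[mask],
        references=labels[mask],
    )
```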

### Framework versions

- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4