---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-italian-xxl-cased
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: bert-base-italian-xxl-cased-sentence-splitter
    results: []
---

# bert-base-italian-xxl-cased-sentence-splitter

This model is a fine-tuned version of [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the figures):

- Loss: 0.0021
- F1: 0.9938
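
A minimal usage sketch, not from the original card: it assumes the checkpoint lives at `fax4ever/bert-base-italian-xxl-cased-sentence-splitter` and exposes a token-classification head that tags sentence-boundary tokens, so the standard `token-classification` pipeline applies.

```python
from transformers import pipeline

# Hypothetical repo id; adjust to the actual checkpoint location.
splitter = pipeline(
    "token-classification",
    model="fax4ever/bert-base-italian-xxl-cased-sentence-splitter",
    aggregation_strategy="simple",  # merge sub-word pieces into whole words
)

text = "Il gatto dorme sul divano il cane abbaia in giardino."
# Each prediction marks a token the model tags as a sentence boundary.
for pred in splitter(text):
    print(pred["word"], pred["entity_group"], round(pred["score"], 3))
```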

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (fused torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
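
As a sketch of how these values map onto the `transformers` Trainer API; the output directory is a placeholder, not from the original card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-italian-xxl-cased-sentence-splitter",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch_fused",    # AdamW, fused torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    eval_strategy="epoch",        # matches the per-epoch results below
)
```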

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 49   | 0.0036          | 0.9831 |
| No log        | 2.0   | 98   | 0.0021          | 0.9907 |
| No log        | 3.0   | 147  | 0.0030          | 0.9861 |
| No log        | 4.0   | 196  | 0.0026          | 0.9907 |
| No log        | 5.0   | 245  | 0.0018          | 0.9938 |
| No log        | 6.0   | 294  | 0.0020          | 0.9938 |
| No log        | 7.0   | 343  | 0.0040          | 0.9861 |
| No log        | 8.0   | 392  | 0.0023          | 0.9922 |
| No log        | 9.0   | 441  | 0.0024          | 0.9922 |
| No log        | 10.0  | 490  | 0.0062          | 0.9922 |
| 0.0139        | 11.0  | 539  | 0.0045          | 0.9891 |
| 0.0139        | 12.0  | 588  | 0.0019          | 0.9922 |
| 0.0139        | 13.0  | 637  | 0.0021          | 0.9938 |
| 0.0139        | 14.0  | 686  | 0.0024          | 0.9938 |
| 0.0139        | 15.0  | 735  | 0.0120          | 0.9891 |
| 0.0139        | 16.0  | 784  | 0.0074          | 0.9907 |
| 0.0139        | 17.0  | 833  | 0.0019          | 0.9938 |
| 0.0139        | 18.0  | 882  | 0.0019          | 0.9938 |
| 0.0139        | 19.0  | 931  | 0.0024          | 0.9922 |
| 0.0139        | 20.0  | 980  | 0.0021          | 0.9922 |
| 0.0002        | 21.0  | 1029 | 0.0021          | 0.9922 |
| 0.0002        | 22.0  | 1078 | 0.0021          | 0.9938 |
| 0.0002        | 23.0  | 1127 | 0.0022          | 0.9922 |
| 0.0002        | 24.0  | 1176 | 0.0020          | 0.9938 |
| 0.0002        | 25.0  | 1225 | 0.0022          | 0.9938 |
| 0.0002        | 26.0  | 1274 | 0.0021          | 0.9938 |
| 0.0002        | 27.0  | 1323 | 0.0022          | 0.9938 |
| 0.0002        | 28.0  | 1372 | 0.0021          | 0.9938 |
| 0.0002        | 29.0  | 1421 | 0.0021          | 0.9938 |
| 0.0002        | 30.0  | 1470 | 0.0021          | 0.9938 |
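
The validation F1 plateaus at 0.9938 from around epoch 5 onward. The card does not specify how F1 was computed; a hypothetical sketch of a Trainer `compute_metrics` callback producing a token-level binary F1, assuming labels 1 = sentence boundary and 0 = otherwise, with padding labelled -100:

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    """Token-level binary F1, skipping positions labelled -100 (padding/special tokens)."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    mask = labels != -100          # keep only real, labelled tokens
    return {"f1": f1_score(labels[mask], preds[mask])}
```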

### Framework versions

- Transformers 4.55.2
- Pytorch 2.8.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.4