[2025-07-10 15:53:06,842][__main__][INFO] - cache_dir: /tmp/
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
bootstrap:
  enabled: true
  n_bootstrap: 10000
  bootstrap_seed: 42
  metrics:
  - QWK
  - Macro_F1
  - Weighted_F1
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
experiments:
  model:
    name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
    type: encoder_classification
    num_labels: 6
    output_dir: ./results/
    logging_dir: ./logs/
    best_model_dir: ./results/best_model
  tokenizer:
    name: PORTULAN/albertina-1b5-portuguese-ptbr-encoder
  dataset:
    grade_index: 0
    use_full_context: false
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 4
    eval_batch_size: 4
    gradient_accumulation_steps: 4
    gradient_checkpointing: false
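The experiments.training_params block above maps almost one-to-one onto Hugging Face `TrainingArguments`. A minimal sketch of that mapping follows; it is illustrative rather than the repository's actual code, and `eval_strategy`/`save_strategy` and `save_total_limit=1` are assumptions inferred from the per-epoch evaluations and checkpoint deletions later in this log.

```python
# Hedged sketch: how the config above could be wired into TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results/",
    logging_dir="./logs/",
    num_train_epochs=20,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # effective train batch: 4 * 4 = 16
    learning_rate=5e-5,
    weight_decay=0.01,
    warmup_ratio=0.1,
    bf16=True,
    seed=42,
    logging_steps=100,
    eval_strategy="epoch",           # assumption: the log evaluates once per epoch
    save_strategy="epoch",           # assumption: checkpoints appear once per epoch
    save_total_limit=1,              # assumption: inferred from the checkpoint deletions below
    load_best_model_at_end=True,
    metric_for_best_model="QWK",     # Trainer will look up "eval_QWK"
    greater_is_better=True,
)
```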
[2025-07-10 15:53:10,740][__main__][INFO] - GPU 0: NVIDIA RTX A6000 | TDP 300 W
[2025-07-10 15:53:10,740][__main__][INFO] - Starting the fine-tuning process.
[2025-07-10 15:53:15,881][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
[2025-07-10 15:53:15,884][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
"architectures": [
"DebertaV2ForMaskedLM"
],
"attention_head_size": 64,
"attention_probs_dropout_prob": 0.1,
"conv_act": "gelu",
"conv_kernel_size": 3,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1536,
"initializer_range": 0.02,
"intermediate_size": 6144,
"layer_norm_eps": 1e-07,
"legacy": true,
"max_position_embeddings": 512,
"max_relative_positions": -1,
"model_type": "deberta-v2",
"norm_rel_ebd": "layer_norm",
"num_attention_heads": 24,
"num_hidden_layers": 48,
"pad_token_id": 0,
"pooler_dropout": 0,
"pooler_hidden_act": "gelu",
"pooler_hidden_size": 1536,
"pos_att_type": [
"p2c",
"c2p"
],
"position_biased_input": false,
"position_buckets": 256,
"relative_attention": true,
"share_att_key": true,
"torch_dtype": "bfloat16",
"transformers_version": "4.53.1",
"type_vocab_size": 0,
"vocab_size": 128100
}
[2025-07-10 15:53:16,306][transformers.tokenization_utils_base][INFO] - loading file spm.model from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/spm.model
[2025-07-10 15:53:16,307][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer.json
[2025-07-10 15:53:16,307][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/added_tokens.json
[2025-07-10 15:53:16,307][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/special_tokens_map.json
[2025-07-10 15:53:16,307][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/tokenizer_config.json
[2025-07-10 15:53:16,307][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-07-10 15:53:16,570][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: True; Use Full Context: False
[2025-07-10 15:53:16,972][__main__][INFO] -
Token statistics for 'train' split:
[2025-07-10 15:53:16,972][__main__][INFO] - Total examples: 500
[2025-07-10 15:53:16,972][__main__][INFO] - Min tokens: 512
[2025-07-10 15:53:16,972][__main__][INFO] - Max tokens: 512
[2025-07-10 15:53:16,972][__main__][INFO] - Avg tokens: 512.00
[2025-07-10 15:53:16,972][__main__][INFO] - Std tokens: 0.00
[2025-07-10 15:53:17,067][__main__][INFO] -
Token statistics for 'validation' split:
[2025-07-10 15:53:17,067][__main__][INFO] - Total examples: 132
[2025-07-10 15:53:17,067][__main__][INFO] - Min tokens: 512
[2025-07-10 15:53:17,067][__main__][INFO] - Max tokens: 512
[2025-07-10 15:53:17,067][__main__][INFO] - Avg tokens: 512.00
[2025-07-10 15:53:17,067][__main__][INFO] - Std tokens: 0.00
[2025-07-10 15:53:17,166][__main__][INFO] -
Token statistics for 'test' split:
[2025-07-10 15:53:17,166][__main__][INFO] - Total examples: 138
[2025-07-10 15:53:17,167][__main__][INFO] - Min tokens: 512
[2025-07-10 15:53:17,167][__main__][INFO] - Max tokens: 512
[2025-07-10 15:53:17,167][__main__][INFO] - Avg tokens: 512.00
[2025-07-10 15:53:17,167][__main__][INFO] - Std tokens: 0.00
[2025-07-10 15:53:17,167][__main__][INFO] - If the token statistics are identical (min, max, avg), keep in mind that this is due to batched tokenization and padding.
[2025-07-10 15:53:17,167][__main__][INFO] - Model max length: 512. If it matches these statistics, there is a high chance that sequences are being truncated.
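The per-split statistics above follow directly from the tokenizer settings logged earlier (padding: longest, truncation: True). A hedged sketch of how they could be reproduced; the helper name and printing format are illustrative, not the repository's code.

```python
# Sketch: recompute min/max/avg/std token counts for a list of essay texts.
import numpy as np
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "PORTULAN/albertina-1b5-portuguese-ptbr-encoder", cache_dir="/tmp/"
)

def token_stats(texts):
    # padding="longest" pads every example to the longest sequence in the
    # batch; with truncation at the 512-token model max, a batch of long
    # essays collapses to min == max == avg == 512, exactly as in the log.
    enc = tokenizer(texts, padding="longest", truncation=True, max_length=512)
    lengths = np.array([len(ids) for ids in enc["input_ids"]])
    print(f"Total examples: {len(lengths)}")
    print(f"Min tokens: {lengths.min()}  Max tokens: {lengths.max()}")
    print(f"Avg tokens: {lengths.mean():.2f}  Std tokens: {lengths.std():.2f}")
```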
[2025-07-10 15:53:17,431][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/config.json
[2025-07-10 15:53:17,432][transformers.configuration_utils][INFO] - Model config DebertaV2Config {
"architectures": [
"DebertaV2ForMaskedLM"
],
"attention_head_size": 64,
"attention_probs_dropout_prob": 0.1,
"conv_act": "gelu",
"conv_kernel_size": 3,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1536,
"id2label": {
"0": 0,
"1": 40,
"2": 80,
"3": 120,
"4": 160,
"5": 200
},
"initializer_range": 0.02,
"intermediate_size": 6144,
"label2id": {
"0": 0,
"40": 1,
"80": 2,
"120": 3,
"160": 4,
"200": 5
},
"layer_norm_eps": 1e-07,
"legacy": true,
"max_position_embeddings": 512,
"max_relative_positions": -1,
"model_type": "deberta-v2",
"norm_rel_ebd": "layer_norm",
"num_attention_heads": 24,
"num_hidden_layers": 48,
"pad_token_id": 0,
"pooler_dropout": 0,
"pooler_hidden_act": "gelu",
"pooler_hidden_size": 1536,
"pos_att_type": [
"p2c",
"c2p"
],
"position_biased_input": false,
"position_buckets": 256,
"relative_attention": true,
"share_att_key": true,
"torch_dtype": "bfloat16",
"transformers_version": "4.53.1",
"type_vocab_size": 0,
"vocab_size": 128100
}
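Unlike the first config dump, this one carries id2label/label2id maps: the six classes correspond to the ENEM competence scores {0, 40, 80, 120, 160, 200}. A sketch of how such a classification head could be constructed (an assumption about the wiring, not the repository's code):

```python
# Sketch: attach a 6-way head whose labels are the ENEM grade values.
from transformers import AutoModelForSequenceClassification

grades = [0, 40, 80, 120, 160, 200]
model = AutoModelForSequenceClassification.from_pretrained(
    "PORTULAN/albertina-1b5-portuguese-ptbr-encoder",
    num_labels=len(grades),                        # matches num_labels: 6 in the config
    id2label={i: g for i, g in enumerate(grades)}, # class index -> grade
    label2id={g: i for i, g in enumerate(grades)}, # grade -> class index
    cache_dir="/tmp/",
)
```

This also explains the warning below about `classifier.*` and `pooler.dense.*` being newly initialized: the checkpoint was trained for masked LM, so the sequence-classification head has no pretrained weights.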
[2025-07-10 15:53:17,820][transformers.modeling_utils][INFO] - loading weights file pytorch_model.bin from cache at /tmp/models--PORTULAN--albertina-1b5-portuguese-ptbr-encoder/snapshots/b22008e5096af9c398b75762d4e28e5008762916/pytorch_model.bin
[2025-07-10 15:53:17,821][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-07-10 15:53:17,821][transformers.modeling_utils][INFO] - Instantiating DebertaV2ForSequenceClassification model under default dtype torch.bfloat16.
[2025-07-10 15:53:18,201][transformers.safetensors_conversion][INFO] - Attempting to create safetensors variant
[2025-07-10 15:53:18,665][transformers.safetensors_conversion][INFO] - Attempting to convert .bin model on the fly to safetensors.
[2025-07-10 16:02:29,595][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at PORTULAN/albertina-1b5-portuguese-ptbr-encoder were not used when initializing DebertaV2ForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.decoder.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaV2ForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-07-10 16:02:29,595][transformers.modeling_utils][WARNING] - Some weights of DebertaV2ForSequenceClassification were not initialized from the model checkpoint at PORTULAN/albertina-1b5-portuguese-ptbr-encoder and are newly initialized: ['classifier.bias', 'classifier.weight', 'pooler.dense.bias', 'pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-07-10 16:02:29,613][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-07-10 16:02:29,639][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
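A quick arithmetic check on these counts (under the assumption that the script floors its steps-per-epoch estimate, while the Trainer, as its own log below shows, uses the ceiling):

```python
import math

examples, per_device_batch, accum, epochs = 500, 4, 4, 20
effective_batch = per_device_batch * accum                    # 4 * 4 = 16
floor_steps = (examples // effective_batch) * epochs          # 31 * 20 = 620 (the script's "Total steps")
ceil_steps = math.ceil(examples / effective_batch) * epochs   # 32 * 20 = 640 (the Trainer's count below)
warmup_steps = int(0.1 * floor_steps)                         # warmup_ratio 0.1 -> 62
```

The per-epoch checkpoint names below (checkpoint-32, -64, -96, ...) confirm the Trainer's 32 optimization steps per epoch.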
[2025-07-10 16:02:29,649][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
[2025-07-10 16:02:29,676][transformers.trainer][INFO] - Using auto half precision backend
[2025-07-10 16:02:29,678][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:02:29,689][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:02:29,689][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:02:29,690][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:02:38,845][transformers.trainer][INFO] - The following columns in the Training set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - ***** Running training *****
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Num examples = 500
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Num Epochs = 20
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Instantaneous batch size per device = 4
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Gradient Accumulation steps = 4
[2025-07-10 16:02:38,876][transformers.trainer][INFO] - Total optimization steps = 640
[2025-07-10 16:02:38,878][transformers.trainer][INFO] - Number of trainable parameters = 1,566,919,686
[2025-07-10 16:09:32,605][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:09:32,609][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:09:32,609][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:09:32,609][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:09:41,083][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-32
[2025-07-10 16:09:41,085][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-32/config.json
[2025-07-10 16:09:46,899][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-32/model.safetensors
[2025-07-10 16:16:50,105][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:16:50,109][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:16:50,110][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:16:50,110][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:16:58,598][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-64
[2025-07-10 16:16:58,601][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-64/config.json
[2025-07-10 16:17:05,052][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-64/model.safetensors
[2025-07-10 16:17:13,233][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-32] due to args.save_total_limit
[2025-07-10 16:24:09,684][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:24:09,688][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:24:09,688][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:24:09,688][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:24:18,146][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-96
[2025-07-10 16:24:18,148][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-96/config.json
[2025-07-10 16:24:23,608][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-96/model.safetensors
[2025-07-10 16:24:30,971][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-64] due to args.save_total_limit
[2025-07-10 16:31:27,837][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:31:27,841][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:31:27,842][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:31:27,842][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:31:36,301][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-128
[2025-07-10 16:31:36,304][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-128/config.json
[2025-07-10 16:31:43,337][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-128/model.safetensors
[2025-07-10 16:31:49,743][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-96] due to args.save_total_limit
[2025-07-10 16:38:52,376][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:38:52,380][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:38:52,380][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:38:52,380][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:39:00,857][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-160
[2025-07-10 16:39:00,862][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-160/config.json
[2025-07-10 16:39:10,224][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-160/model.safetensors
[2025-07-10 16:46:16,187][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:46:16,213][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:46:16,214][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:46:16,214][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:46:24,732][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-192
[2025-07-10 16:46:24,736][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-192/config.json
[2025-07-10 16:46:31,956][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-192/model.safetensors
[2025-07-10 16:46:38,914][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-160] due to args.save_total_limit
[2025-07-10 16:53:35,545][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 16:53:35,550][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 16:53:35,550][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 16:53:35,550][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 16:53:44,013][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-224
[2025-07-10 16:53:44,015][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-224/config.json
[2025-07-10 16:53:48,622][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-224/model.safetensors
[2025-07-10 16:53:55,055][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-192] due to args.save_total_limit
[2025-07-10 17:00:51,991][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 17:00:51,995][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 17:00:51,995][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 17:00:51,995][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 17:01:00,444][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-256
[2025-07-10 17:01:00,447][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-256/config.json
[2025-07-10 17:01:05,603][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-256/model.safetensors
[2025-07-10 17:01:12,787][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-224] due to args.save_total_limit
[2025-07-10 17:08:09,449][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 17:08:09,456][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 17:08:09,456][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 17:08:09,456][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 17:08:17,939][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-288
[2025-07-10 17:08:17,941][transformers.configuration_utils][INFO] - Configuration saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-288/config.json
[2025-07-10 17:08:24,232][transformers.modeling_utils][INFO] - Model weights saved in /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-288/model.safetensors
[2025-07-10 17:08:33,143][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-256] due to args.save_total_limit
[2025-07-10 17:08:33,643][transformers.trainer][INFO] -
Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-07-10 17:08:33,644][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-128 (score: 0.6863342898134864).
[2025-07-10 17:08:36,030][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-10/15-53-06/results/checkpoint-288] due to args.save_total_limit
[2025-07-10 17:08:36,692][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 17:08:36,696][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 17:08:36,696][transformers.trainer][INFO] - Num examples = 132
[2025-07-10 17:08:36,696][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 17:08:45,160][__main__][INFO] - Training completed successfully.
[2025-07-10 17:08:45,160][__main__][INFO] - Running on Test
[2025-07-10 17:08:45,161][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `DebertaV2ForSequenceClassification.forward` and have been ignored: prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades. If prompt, id, essay_year, id_prompt, supporting_text, reference, essay_text, grades are not expected by `DebertaV2ForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-10 17:08:45,164][transformers.trainer][INFO] -
***** Running Evaluation *****
[2025-07-10 17:08:45,164][transformers.trainer][INFO] - Num examples = 138
[2025-07-10 17:08:45,164][transformers.trainer][INFO] - Batch size = 4
[2025-07-10 17:08:53,928][__main__][INFO] - Test metrics: {'eval_loss': 0.9303792119026184, 'eval_model_preparation_time': 0.0074, 'eval_accuracy': 0.7028985507246377, 'eval_RMSE': 25.931906372573962, 'eval_QWK': 0.6826328310864394, 'eval_HDIV': 0.007246376811594235, 'eval_Macro_F1': 0.49688027796692447, 'eval_Micro_F1': 0.7028985507246377, 'eval_Weighted_F1': 0.7116672747290037, 'eval_TP_0': 0, 'eval_TN_0': 137, 'eval_FP_0': 0, 'eval_FN_0': 1, 'eval_TP_1': 0, 'eval_TN_1': 138, 'eval_FP_1': 0, 'eval_FN_1': 0, 'eval_TP_2': 6, 'eval_TN_2': 122, 'eval_FP_2': 6, 'eval_FN_2': 4, 'eval_TP_3': 45, 'eval_TN_3': 65, 'eval_FP_3': 7, 'eval_FN_3': 21, 'eval_TP_4': 40, 'eval_TN_4': 71, 'eval_FP_4': 16, 'eval_FN_4': 11, 'eval_TP_5': 6, 'eval_TN_5': 116, 'eval_FP_5': 12, 'eval_FN_5': 4, 'eval_runtime': 8.7537, 'eval_samples_per_second': 15.765, 'eval_steps_per_second': 3.998, 'epoch': 9.0}
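The headline test QWK of 0.683, combined with the bootstrap block in the config (n_bootstrap: 10000, bootstrap_seed: 42), suggests confidence intervals obtained by resampling the test predictions. A hedged sketch using scikit-learn's quadratically weighted kappa; the function and variable names are illustrative, not the repository's:

```python
# Sketch: percentile-bootstrap confidence interval for QWK.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_qwk(y_true, y_pred, n_bootstrap=10_000, seed=42):
    """y_true / y_pred: numpy arrays of class indices (0..5 here)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = np.empty(n_bootstrap)
    for i in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)  # resample (true, pred) pairs with replacement
        scores[i] = cohen_kappa_score(y_true[idx], y_pred[idx], weights="quadratic")
    point = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    low, high = np.percentile(scores, [2.5, 97.5])  # 95% percentile interval
    return point, (low, high)
```

Note the 'epoch': 9.0 in the metrics dict: of the 20 configured epochs, training stopped after 9 (checkpoint-288), with the best validation QWK reached at checkpoint-128 (epoch 4).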
[2025-07-10 17:08:53,929][transformers.trainer][INFO] - Saving model checkpoint to ./results/best_model
[2025-07-10 17:08:53,931][transformers.configuration_utils][INFO] - Configuration saved in ./results/best_model/config.json
[2025-07-10 17:08:58,786][transformers.modeling_utils][INFO] - Model weights saved in ./results/best_model/model.safetensors
[2025-07-10 17:08:58,789][transformers.tokenization_utils_base][INFO] - tokenizer config file saved in ./results/best_model/tokenizer_config.json
[2025-07-10 17:08:58,789][transformers.tokenization_utils_base][INFO] - Special tokens file saved in ./results/best_model/special_tokens_map.json
[2025-07-10 17:08:58,816][__main__][INFO] - Model and tokenizer saved to ./results/best_model
[2025-07-10 17:08:58,940][__main__][INFO] - Fine-tuning finished.
[2025-07-10 17:08:59,455][__main__][INFO] - Total emissions: 0.1036 kg CO2eq
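The final emissions line is consistent with an energy tracker such as codecarbon wrapped around the run; the specific tool is an assumption, since the log only records the GPU's TDP (300 W) and the total. A minimal sketch:

```python
# Sketch (assumed tooling): measuring training emissions with codecarbon.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(output_dir="./logs/")  # also writes an emissions.csv
tracker.start()
try:
    trainer.train()  # the fine-tuning run above (illustrative placeholder)
finally:
    emissions_kg = tracker.stop()  # returns total kg CO2eq
print(f"Total emissions: {emissions_kg:.4f} kg CO2eq")
```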