[2025-06-30 02:43:10,543][__main__][INFO] - cache_dir: /tmp/
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
bootstrap:
  enabled: true
  n_bootstrap: 10000
  bootstrap_seed: 42
  metrics:
  - QWK
  - Macro_F1
  - Weighted_F1
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
experiments:
  model:
    name: meta-llama/Llama-3.1-8B
    type: llama31_classification_lora
    num_labels: 6
    output_dir: ./results/
    logging_dir: ./logs/
    best_model_dir: ./results/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
    checkpoint_path: ''
  tokenizer:
    name: meta-llama/Llama-3.1-8B
  dataset:
    grade_index: 2
    use_full_context: true
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 1
    eval_batch_size: 16
    gradient_accumulation_steps: 16
    gradient_checkpointing: false

[2025-06-30 02:43:14,579][__main__][INFO] - GPU 0: NVIDIA H200 | TDP ≈ 700 W
[2025-06-30 02:43:14,579][__main__][INFO] - Starting the Fine Tuning training process.
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file tokenizer.model from cache at None
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[2025-06-30 02:43:20,425][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-06-30 02:43:20,706][transformers.tokenization_utils_base][INFO] - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
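The experiments block above carries all of the adapter hyperparameters. A minimal sketch of how they would map onto a peft.LoraConfig, assuming the jbcs2025 code builds its adapter directly from these fields (the actual wiring is not shown in this log):

```python
from peft import LoraConfig, TaskType

# Sketch reconstructed from the logged config above; names and wiring
# are illustrative, not the repo's confirmed implementation.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,    # classification head over num_labels = 6
    r=8,                           # lora_r
    lora_alpha=16,                 # lora_alpha
    lora_dropout=0.05,             # lora_dropout
    target_modules="all-linear",   # adapt every linear projection
)
```

With r=8 over all linear projections of a Llama-3.1-8B layer (q/k/v/o plus gate/up/down), each layer contributes 655,360 adapter weights, so 32 layers give 32 × 655,360 = 20,971,520; adding the newly initialized 6 × 4096 = 24,576-parameter score head yields exactly the 20,996,096 trainable parameters reported below.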
[2025-06-30 02:43:20,712][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: False; Use Full Context: True
[2025-06-30 02:43:22,365][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 02:43:22,367][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.0",
  "use_cache": true,
  "vocab_size": 128256
}

[2025-06-30 02:43:22,507][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/model.safetensors.index.json
[2025-06-30 02:43:22,507][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-06-30 02:43:22,508][transformers.modeling_utils][INFO] - Instantiating LlamaForSequenceClassification model under default dtype torch.bfloat16.
[2025-06-30 02:43:28,442][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at meta-llama/Llama-3.1-8B were not used when initializing LlamaForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-06-30 02:43:28,443][transformers.modeling_utils][WARNING] - Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.1-8B and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-06-30 02:43:29,505][__main__][INFO] - Initialized new PEFT model for ce loss
[2025-06-30 02:43:29,508][__main__][INFO] - None
[2025-06-30 02:43:29,509][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-06-30 02:43:29,540][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
[2025-06-30 02:43:29,546][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
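The step count logged by __main__ here (620) and the Trainer's total optimization steps reported just below (640) disagree. A plausible reconstruction, assuming the script floors steps-per-epoch where the Trainer rounds up:

```python
import math

# Inputs taken from the config and training logs above.
num_examples, grad_accum, epochs, warmup_ratio = 500, 16, 20, 0.1

floor_steps = (num_examples // grad_accum) * epochs         # 31 * 20 = 620 (__main__)
ceil_steps = math.ceil(num_examples / grad_accum) * epochs  # 32 * 20 = 640 (Trainer)
warmup_steps = int(warmup_ratio * floor_steps)              # 62, as logged
```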
[2025-06-30 02:43:29,571][transformers.trainer][INFO] - Using auto half precision backend
[2025-06-30 02:43:29,571][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-06-30 02:43:29,574][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_text, reference, id, supporting_text, prompt, id_prompt, essay_year, grades. If essay_text, reference, id, supporting_text, prompt, id_prompt, essay_year, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-06-30 02:43:29,590][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 02:43:29,590][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 02:43:29,590][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 02:43:52,158][transformers.trainer][INFO] - The following columns in the Training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: essay_text, reference, id, supporting_text, prompt, id_prompt, essay_year, grades. If essay_text, reference, id, supporting_text, prompt, id_prompt, essay_year, grades are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - ***** Running training *****
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Num examples = 500
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Num Epochs = 20
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Instantaneous batch size per device = 1
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Gradient Accumulation steps = 16
[2025-06-30 02:43:52,211][transformers.trainer][INFO] - Total optimization steps = 640
[2025-06-30 02:43:52,214][transformers.trainer][INFO] - Number of trainable parameters = 20,996,096
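The label_names warning above is harmless here, but it can be silenced by naming the label column explicitly. A sketch of the usual fix, assuming the column is called "labels" (this log does not confirm the name):

```python
from transformers import TrainingArguments

# PEFT wraps the base model's forward signature, so Trainer cannot infer
# the label column on its own and must be told explicitly.
args = TrainingArguments(
    output_dir="./results/",
    label_names=["labels"],  # assumed column name, not confirmed by this log
)
```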
[2025-06-30 02:47:44,875][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 02:47:44,875][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 02:47:44,876][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 02:48:07,149][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-32
[2025-06-30 02:48:07,948][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 02:48:07,948][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-06-30 02:52:00,708][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 02:52:00,708][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 02:52:00,708][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 02:52:23,011][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-64
[2025-06-30 02:52:23,642][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 02:52:23,993][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-32] due to args.save_total_limit
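The evaluate/save/delete cadence that repeats from here on (one cycle every 32 optimization steps, i.e. once per epoch) is what Trainer produces when a save_total_limit is set alongside best-model tracking. A sketch of settings consistent with this log; the strategies are inferred from the cadence, not stated anywhere in the log:

```python
from transformers import TrainingArguments

# save_total_limit is implied by the deletion messages; the epoch-level
# strategies are an inference from the once-per-epoch checkpoint rhythm.
args = TrainingArguments(
    output_dir="./results/",
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=1,            # keep only the newest (plus the best) checkpoint
    load_best_model_at_end=True,
    metric_for_best_model="QWK",
    greater_is_better=True,
)
```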
[2025-06-30 02:56:16,738][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 02:56:16,738][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 02:56:16,738][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 02:56:39,008][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-96
[2025-06-30 02:56:39,471][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 02:56:39,870][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-64] due to args.save_total_limit
[2025-06-30 03:00:32,712][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:00:32,713][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:00:32,713][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:00:54,997][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-128
[2025-06-30 03:00:55,521][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:04:48,385][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:04:48,385][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:04:48,385][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:05:10,656][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-160
[2025-06-30 03:05:11,129][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:05:11,486][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-128] due to args.save_total_limit
[2025-06-30 03:09:04,260][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:09:04,261][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:09:04,261][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:09:26,513][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-192
[2025-06-30 03:09:27,003][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:09:27,384][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-96] due to args.save_total_limit
[2025-06-30 03:09:27,400][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-160] due to args.save_total_limit
[2025-06-30 03:13:20,049][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:13:20,049][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:13:20,050][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:13:42,319][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-224
[2025-06-30 03:13:42,771][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:17:36,079][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:17:36,079][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:17:36,079][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:17:58,353][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-256
[2025-06-30 03:17:59,135][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:17:59,485][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-224] due to args.save_total_limit
[2025-06-30 03:21:51,780][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:21:51,780][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:21:51,780][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:22:14,029][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-288
[2025-06-30 03:22:14,546][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:22:14,974][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-256] due to args.save_total_limit
[2025-06-30 03:26:08,007][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:26:08,007][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:26:08,007][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:26:30,298][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-320
[2025-06-30 03:26:30,783][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:26:31,193][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-192] due to args.save_total_limit
[2025-06-30 03:26:31,213][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-288] due to args.save_total_limit
[2025-06-30 03:30:24,256][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:30:24,256][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:30:24,256][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:30:46,525][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-352
[2025-06-30 03:30:47,023][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:34:40,150][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:34:40,151][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:34:40,151][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:35:02,420][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-384
[2025-06-30 03:35:02,901][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:35:03,281][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-352] due to args.save_total_limit
[2025-06-30 03:38:56,304][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:38:56,304][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:38:56,304][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:39:18,569][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-416
[2025-06-30 03:39:19,070][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:39:19,487][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-384] due to args.save_total_limit
[2025-06-30 03:43:12,729][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:43:12,729][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:43:12,729][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:43:35,008][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-448
[2025-06-30 03:43:35,510][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:43:35,875][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-416] due to args.save_total_limit
[2025-06-30 03:47:28,750][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:47:28,750][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:47:28,750][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:47:51,014][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-480
[2025-06-30 03:47:51,480][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:47:51,889][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-448] due to args.save_total_limit
[2025-06-30 03:47:51,903][transformers.trainer][INFO] - Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-06-30 03:47:51,903][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-320 (score: 0.4302446642373763).
[2025-06-30 03:47:52,000][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-06-30/02-43-10/results/checkpoint-480] due to args.save_total_limit
[2025-06-30 03:47:52,024][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:47:52,024][transformers.trainer][INFO] - Num examples = 132
[2025-06-30 03:47:52,024][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:48:14,279][__main__][INFO] - Training completed successfully.
[2025-06-30 03:48:14,279][__main__][INFO] - Running on Test
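Two details worth noting above: the best model is checkpoint-320 with QWK ≈ 0.430, and training stops after checkpoint-480 (epoch 15 of a configured 20), which is consistent with an early-stopping callback even though none is named in the log. QWK itself is conventionally the quadratically weighted Cohen's kappa; a sketch (the repo's own implementation is not shown here):

```python
from sklearn.metrics import cohen_kappa_score

def qwk(y_true, y_pred):
    # Quadratic weighted kappa: the eval_QWK / metric_for_best_model above.
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")
```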
[2025-06-30 03:48:14,283][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-06-30 03:48:14,283][transformers.trainer][INFO] - Num examples = 138
[2025-06-30 03:48:14,283][transformers.trainer][INFO] - Batch size = 16
[2025-06-30 03:48:37,758][__main__][INFO] - Test metrics: {'eval_loss': 1.99528968334198, 'eval_model_preparation_time': 0.0126, 'eval_accuracy': 0.35507246376811596, 'eval_RMSE': 55.638585297552666, 'eval_QWK': 0.33732599546778896, 'eval_HDIV': 0.10144927536231885, 'eval_Macro_F1': 0.24308878872131887, 'eval_Micro_F1': 0.35507246376811596, 'eval_Weighted_F1': 0.34109319613510286, 'eval_TP_0': 0, 'eval_TN_0': 137, 'eval_FP_0': 0, 'eval_FN_0': 1, 'eval_TP_1': 11, 'eval_TN_1': 84, 'eval_FP_1': 25, 'eval_FN_1': 18, 'eval_TP_2': 6, 'eval_TN_2': 112, 'eval_FP_2': 8, 'eval_FN_2': 12, 'eval_TP_3': 13, 'eval_TN_3': 68, 'eval_FP_3': 25, 'eval_FN_3': 32, 'eval_TP_4': 19, 'eval_TN_4': 69, 'eval_FP_4': 31, 'eval_FN_4': 19, 'eval_TP_5': 0, 'eval_TN_5': 131, 'eval_FP_5': 0, 'eval_FN_5': 7, 'eval_runtime': 23.4624, 'eval_samples_per_second': 5.882, 'eval_steps_per_second': 0.384, 'epoch': 15.0}
[2025-06-30 03:48:37,758][transformers.trainer][INFO] - Saving model checkpoint to ./results/best_model
[2025-06-30 03:48:38,192][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-06-30 03:48:38,406][transformers.tokenization_utils_base][INFO] - tokenizer config file saved in ./results/best_model/tokenizer_config.json
[2025-06-30 03:48:38,406][transformers.tokenization_utils_base][INFO] - Special tokens file saved in ./results/best_model/special_tokens_map.json
[2025-06-30 03:48:38,523][__main__][INFO] - Model and tokenizer saved to ./results/best_model
[2025-06-30 03:48:38,548][__main__][INFO] - Fine Tuning Finished.
[2025-06-30 03:48:39,055][__main__][INFO] - Total emissions: 0.3845 kg CO2eq
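The bootstrap block in the config (n_bootstrap: 10000, bootstrap_seed: 42, over QWK, Macro_F1 and Weighted_F1) implies confidence intervals are computed from these test predictions in a later step. A sketch of a percentile bootstrap for QWK under those settings; the function name and interface are illustrative, not the repo's API:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_qwk_ci(y_true, y_pred, n_bootstrap=10_000, seed=42):
    """Percentile-bootstrap 95% CI for QWK; illustrative sketch only."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = np.empty(n_bootstrap)
    for i in range(n_bootstrap):
        # Resample test essays with replacement and rescore each replicate.
        idx = rng.integers(0, len(y_true), size=len(y_true))
        scores[i] = cohen_kappa_score(y_true[idx], y_pred[idx], weights="quadratic")
    return np.percentile(scores, [2.5, 97.5])
```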