[2025-07-04 04:55:28,822][__main__][INFO] - cache_dir: /tmp/
dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
training_params:
  seed: 42
  num_train_epochs: 20
  logging_steps: 100
  metric_for_best_model: QWK
  bf16: true
bootstrap:
  enabled: true
  n_bootstrap: 10000
  bootstrap_seed: 42
  metrics:
  - QWK
  - Macro_F1
  - Weighted_F1
post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
experiments:
  model:
    name: meta-llama/Llama-3.1-8B
    type: llama31_classification_lora
    num_labels: 6
    output_dir: ./results/phi4-balanced/C2
    logging_dir: ./logs/phi4-balanced/C2
    best_model_dir: ./results/phi4-balanced/C2/best_model
    lora_r: 8
    lora_dropout: 0.05
    lora_alpha: 16
    lora_target_modules: all-linear
  tokenizer:
    name: meta-llama/Llama-3.1-8B
  dataset:
    grade_index: 1
    use_full_context: true
  training_params:
    weight_decay: 0.01
    warmup_ratio: 0.1
    learning_rate: 5.0e-05
    train_batch_size: 8
    eval_batch_size: 4
    gradient_accumulation_steps: 2
    gradient_checkpointing: true

[2025-07-04 04:55:32,725][__main__][INFO] - GPU 0: NVIDIA H200 | TDP ≈ 700 W
[2025-07-04 04:55:32,725][__main__][INFO] - Starting the Fine Tuning training process.
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer.json
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file tokenizer.model from cache at None
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/special_tokens_map.json
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/tokenizer_config.json
[2025-07-04 04:55:36,414][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
[2025-07-04 04:55:36,688][transformers.tokenization_utils_base][INFO] - Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
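The config above describes a LoRA fine-tuning run: Llama-3.1-8B used as a 6-label classifier with rank-8 adapters on all linear layers. As a point of reference, here is a minimal sketch of how these fields could map onto a `peft.LoraConfig`; the mapping itself is an assumption, since the project's actual code does not appear in this log.

```python
# Hypothetical mapping of the logged hyperparameters onto peft.LoraConfig;
# the values are taken verbatim from the config dump above.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,    # sequence classification head (num_labels: 6)
    r=8,                           # lora_r
    lora_alpha=16,                 # lora_alpha
    lora_dropout=0.05,             # lora_dropout
    target_modules="all-linear",   # lora_target_modules
)
```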
[2025-07-04 04:55:36,694][__main__][INFO] - Tokenizer function parameters - Padding: longest; Truncation: False; Use Full Context: True
[2025-07-04 04:55:38,465][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 04:55:38,467][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5"
  },
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5
  },
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.0",
  "use_cache": true,
  "vocab_size": 128256
}

[2025-07-04 04:55:38,601][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/model.safetensors.index.json
[2025-07-04 04:55:38,601][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.bfloat16 as defined in model's config object
[2025-07-04 04:55:38,601][transformers.modeling_utils][INFO] - Instantiating LlamaForSequenceClassification model under default dtype torch.bfloat16.
[2025-07-04 04:55:50,327][transformers.modeling_utils][INFO] - Some weights of the model checkpoint at meta-llama/Llama-3.1-8B were not used when initializing LlamaForSequenceClassification: ['lm_head.weight']
- This IS expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LlamaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[2025-07-04 04:55:50,328][transformers.modeling_utils][WARNING] - Some weights of LlamaForSequenceClassification were not initialized from the model checkpoint at meta-llama/Llama-3.1-8B and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
[2025-07-04 04:55:51,766][__main__][INFO] - Initialized new PEFT model for ce loss
[2025-07-04 04:55:51,769][__main__][INFO] - None
[2025-07-04 04:55:51,771][transformers.training_args][INFO] - PyTorch: setting up devices
[2025-07-04 04:55:51,807][__main__][INFO] - Total steps: 620. Number of warmup steps: 62
[2025-07-04 04:55:51,816][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
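The two warnings above are the expected signature of reusing a causal-LM checkpoint as a classifier: `lm_head.weight` is discarded and a fresh `score` head is initialized, which is then trained along with the LoRA adapters. A sketch of this load-and-wrap step, assumed rather than taken from the project's code:

```python
# A minimal sketch (assumption, not the project's actual code) of the step the
# log records: the causal-LM checkpoint is reused as a 6-way classifier, which
# drops lm_head.weight and freshly initializes score.weight, then LoRA adapters
# are added on top.
import torch
from transformers import AutoModelForSequenceClassification
from peft import get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.1-8B",
    num_labels=6,
    torch_dtype=torch.bfloat16,             # matches "default dtype torch.bfloat16"
)
model = get_peft_model(model, lora_config)  # lora_config as sketched above
model.print_trainable_parameters()          # ~21.0M trainable, as the trainer logs
```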
[2025-07-04 04:55:51,857][transformers.trainer][INFO] - Using auto half precision backend
[2025-07-04 04:55:51,858][transformers.trainer][WARNING] - No label_names provided for model class `PeftModelForSequenceClassification`. Since `PeftModel` hides base models input arguments, if label_names is not given, label_names can't be set automatically within `Trainer`. Note that empty label_names list will be used instead.
[2025-07-04 04:55:51,860][transformers.trainer][INFO] - The following columns in the Evaluation set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, supporting_text, essay_year, id, grades, reference, essay_text, prompt. If id_prompt, supporting_text, essay_year, id, grades, reference, essay_text, prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:55:51,876][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 04:55:51,876][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 04:55:51,876][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 04:56:19,539][transformers.trainer][INFO] - The following columns in the Training set don't have a corresponding argument in `PeftModelForSequenceClassification.forward` and have been ignored: id_prompt, supporting_text, essay_year, id, grades, reference, essay_text, prompt. If id_prompt, supporting_text, essay_year, id, grades, reference, essay_text, prompt are not expected by `PeftModelForSequenceClassification.forward`, you can safely ignore this message.
[2025-07-04 04:56:19,590][transformers.trainer][INFO] - ***** Running training *****
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Num examples = 500
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Num Epochs = 20
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Instantaneous batch size per device = 8
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Total train batch size (w. parallel, distributed & accumulation) = 16
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Gradient Accumulation steps = 2
[2025-07-04 04:56:19,590][transformers.trainer][INFO] -   Total optimization steps = 640
[2025-07-04 04:56:19,593][transformers.trainer][INFO] -   Number of trainable parameters = 20,996,096
[2025-07-04 04:56:19,735][transformers.models.llama.modeling_llama][WARNING] - `use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`.
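The training header can be sanity-checked: 500 examples with a per-device batch of 8 and 2 accumulation steps give an effective batch of 16, so 32 optimizer steps per epoch and 640 total steps over 20 epochs, exactly as the trainer logs. The script's own earlier estimate of 620 steps (and hence 62 warmup steps, warmup_ratio 0.1) evidently rounds steps per epoch down instead of up. A worked check:

```python
# Worked check of the step counts in the log. The trainer rounds steps per
# epoch up (the last, partial batch still counts as a step); the script's
# earlier 620-step estimate apparently rounds down.
import math

examples, per_device_bs, grad_accum, epochs = 500, 8, 2, 20
effective_bs = per_device_bs * grad_accum            # 16 ("Total train batch size")
print(math.ceil(examples / effective_bs) * epochs)   # 640 ("Total optimization steps")
print(math.floor(examples / effective_bs) * epochs)  # 620 (script estimate; warmup = 62)
```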
[2025-07-04 05:02:06,808][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:02:06,808][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:02:06,808][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:02:34,140][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-32
[2025-07-04 05:02:34,640][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:02:34,640][transformers.configuration_utils][INFO] - Model config LlamaConfig {
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.53.0",
  "use_cache": true,
  "vocab_size": 128256
}
[2025-07-04 05:08:22,346][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:08:22,346][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:08:22,346][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:08:49,647][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-64
[2025-07-04 05:08:49,987][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:08:50,357][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-32] due to args.save_total_limit
[2025-07-04 05:14:37,615][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:14:37,615][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:14:37,615][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:15:04,941][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-96
[2025-07-04 05:15:05,300][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:15:05,671][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-64] due to args.save_total_limit
[2025-07-04 05:20:52,752][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:20:52,752][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:20:52,752][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:21:20,058][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-128
[2025-07-04 05:21:20,368][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:27:07,725][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:27:07,725][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:27:07,725][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:27:35,007][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-160
[2025-07-04 05:27:35,338][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:27:35,833][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-96] due to args.save_total_limit
[2025-07-04 05:27:35,848][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-128] due to args.save_total_limit
[2025-07-04 05:33:22,645][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:33:22,645][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:33:22,645][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:33:49,963][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-192
[2025-07-04 05:33:50,299][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:33:50,767][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-160] due to args.save_total_limit
[2025-07-04 05:39:37,401][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:39:37,401][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:39:37,401][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:40:04,738][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-224
[2025-07-04 05:40:05,068][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:45:52,585][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:45:52,585][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:45:52,585][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:46:19,889][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-256
[2025-07-04 05:46:20,208][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:46:20,644][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-224] due to args.save_total_limit
[2025-07-04 05:52:07,440][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:52:07,440][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:52:07,440][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:52:34,762][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-288
[2025-07-04 05:52:35,094][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:52:35,478][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-256] due to args.save_total_limit
[2025-07-04 05:58:22,381][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 05:58:22,382][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 05:58:22,382][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 05:58:49,687][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-320
[2025-07-04 05:58:50,029][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 05:58:50,481][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-288] due to args.save_total_limit
[2025-07-04 06:04:37,569][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 06:04:37,569][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 06:04:37,569][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 06:05:04,864][transformers.trainer][INFO] - Saving model checkpoint to /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-352
[2025-07-04 06:05:05,286][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 06:05:05,715][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-320] due to args.save_total_limit
[2025-07-04 06:05:05,729][transformers.trainer][INFO] - Training completed. Do not forget to share your model on huggingface.co/models =)
[2025-07-04 06:05:05,729][transformers.trainer][INFO] - Loading best model from /workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-192 (score: 0.2810513257869438).
[2025-07-04 06:05:05,962][transformers.trainer][INFO] - Deleting older checkpoint [/workspace/jbcs2025/outputs/2025-07-04/04-55-28/results/phi4-balanced/C2/checkpoint-352] due to args.save_total_limit
[2025-07-04 06:05:05,982][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 06:05:05,982][transformers.trainer][INFO] -   Num examples = 132
[2025-07-04 06:05:05,982][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 06:05:33,239][__main__][INFO] - Training completed successfully.
[2025-07-04 06:05:33,239][__main__][INFO] - Running on Test
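With 32 steps per epoch, checkpoints land once per epoch, and training stopped at step 352 (epoch 11) rather than the planned 20 epochs, consistent with an early-stopping callback: the best checkpoint is step 192 (epoch 6), selected by eval QWK (metric_for_best_model: QWK, score 0.2810...). A hedged sketch of a `compute_metrics` that yields this selection metric; scikit-learn's `cohen_kappa_score` with quadratic weights is the standard QWK, though the project's own implementation may differ.

```python
# Sketch of a compute_metrics producing the selection metric (QWK) plus the F1
# variants listed in the config; assumes scikit-learn's standard definitions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "QWK": cohen_kappa_score(labels, preds, weights="quadratic"),
        "Macro_F1": f1_score(labels, preds, average="macro"),
        "Weighted_F1": f1_score(labels, preds, average="weighted"),
    }
```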
[2025-07-04 06:05:33,243][transformers.trainer][INFO] - ***** Running Evaluation *****
[2025-07-04 06:05:33,243][transformers.trainer][INFO] -   Num examples = 138
[2025-07-04 06:05:33,243][transformers.trainer][INFO] -   Batch size = 4
[2025-07-04 06:06:01,557][__main__][INFO] - Test metrics: {'eval_loss': 1.812556505203247, 'eval_model_preparation_time': 0.0118, 'eval_accuracy': 0.26811594202898553, 'eval_RMSE': 67.07123619434549, 'eval_QWK': 0.24869506650951334, 'eval_HDIV': 0.18840579710144922, 'eval_Macro_F1': 0.20949693252092624, 'eval_Micro_F1': 0.26811594202898553, 'eval_Weighted_F1': 0.26213065105985056, 'eval_TP_0': 0, 'eval_TN_0': 137, 'eval_FP_0': 0, 'eval_FN_0': 1, 'eval_TP_1': 10, 'eval_TN_1': 87, 'eval_FP_1': 16, 'eval_FN_1': 25, 'eval_TP_2': 2, 'eval_TN_2': 109, 'eval_FP_2': 24, 'eval_FN_2': 3, 'eval_TP_3': 6, 'eval_TN_3': 83, 'eval_FP_3': 4, 'eval_FN_3': 45, 'eval_TP_4': 14, 'eval_TN_4': 66, 'eval_FP_4': 46, 'eval_FN_4': 12, 'eval_TP_5': 5, 'eval_TN_5': 107, 'eval_FP_5': 11, 'eval_FN_5': 15, 'eval_runtime': 28.3013, 'eval_samples_per_second': 4.876, 'eval_steps_per_second': 1.237, 'epoch': 11.0}
[2025-07-04 06:06:01,557][transformers.trainer][INFO] - Saving model checkpoint to ./results/phi4-balanced/C2/best_model
[2025-07-04 06:06:01,914][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /workspace/.hf_home/hub/models--meta-llama--Llama-3.1-8B/snapshots/d04e592bb4f6aa9cfee91e2e20afa771667e1d4b/config.json
[2025-07-04 06:06:02,183][transformers.tokenization_utils_base][INFO] - tokenizer config file saved in ./results/phi4-balanced/C2/best_model/tokenizer_config.json
[2025-07-04 06:06:02,183][transformers.tokenization_utils_base][INFO] - Special tokens file saved in ./results/phi4-balanced/C2/best_model/special_tokens_map.json
[2025-07-04 06:06:02,313][__main__][INFO] - Model and tokenizer saved to ./results/phi4-balanced/C2/best_model
[2025-07-04 06:06:02,355][__main__][INFO] - Fine Tuning Finished.
[2025-07-04 06:06:02,865][__main__][INFO] - Total emissions: 0.0539 kg CO2eq
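On the 138-essay test set the best checkpoint reaches QWK 0.249 and accuracy 0.268. The config enables a 10,000-resample bootstrap (seed 42) over QWK, Macro F1, and Weighted F1; a sketch of a percentile-bootstrap confidence interval under these parameters, with the resampling scheme itself an assumption rather than the project's documented procedure:

```python
# Percentile bootstrap CI sketch using the config's parameters
# (n_bootstrap: 10000, bootstrap_seed: 42). y_true and y_pred are
# 1-D numpy arrays of gold and predicted labels on the test set.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bootstrap_ci(y_true, y_pred, metric, n_bootstrap=10_000, seed=42, alpha=0.05):
    rng = np.random.default_rng(seed)
    # resample the test set with replacement, n_bootstrap times
    idx = rng.integers(0, len(y_true), size=(n_bootstrap, len(y_true)))
    scores = np.array([metric(y_true[i], y_pred[i]) for i in idx])
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])

# e.g., a 95% CI for QWK on the 138 test predictions:
# lo, hi = bootstrap_ci(y_true, y_pred,
#                       lambda t, p: cohen_kappa_score(t, p, weights="quadratic"))
```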