2025-10-04 21:25:18,508 - INFO - Initialising CheckpointManager with config: CheckpointConfig(model_name_or_path='allenai/tulu-2-7b', max_seq_length=8192, learning_rate=2e-05, num_train_epochs=3, per_device_train_batch_size=16, output_base_dir='model_checkpoints', save_total_limit=1, logging_steps=1, seed=42, fp16=True, bf16=None, warmup_steps=0, warmup_ratio=0.0, gradient_accumulation_steps=1, gradient_checkpointing=True, resume_from_checkpoint=True, checkpoint_name='aime-paraphrased-sft-120-s42-m4.5-e3-lr2e-5', use_trl=True, use_lora=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, apply_chat_template=False, add_generation_prompt=False, packing=False, remove_unused_columns=False)
2025-10-04 21:25:28,031 - INFO - Fine-tuning with TRL on 120 raw text samples
2025-10-04 21:31:13,537 - INFO - Training complete. Saving model to model_checkpoints/tulu-2-7b_20251004_aime-paraphrased-sft-120-s42-m4.5-e3-lr2e-5
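
For context, below is a minimal sketch of how the hyperparameters logged above could map onto a TRL supervised fine-tuning run. This is not the project's actual training code: CheckpointManager and CheckpointConfig are the project's own wrappers, and the dataset construction here is a hypothetical stand-in for the 120 raw text samples mentioned in the log. It assumes a TRL version in which SFTConfig exposes max_seq_length, packing, and dataset_text_field.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical stand-in for the 120 raw text samples referenced in the log.
dataset = Dataset.from_dict({"text": ["..."] * 120})

args = SFTConfig(
    # Matches the checkpoint directory reported at the end of the log.
    output_dir="model_checkpoints/tulu-2-7b_20251004_aime-paraphrased-sft-120-s42-m4.5-e3-lr2e-5",
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    warmup_steps=0,
    warmup_ratio=0.0,
    logging_steps=1,
    save_total_limit=1,
    seed=42,
    fp16=True,
    remove_unused_columns=False,
    # SFT-specific fields from the logged config.
    max_seq_length=8192,
    packing=False,            # one sample per sequence, matching packing=False
    dataset_text_field="text",
)

trainer = SFTTrainer(
    model="allenai/tulu-2-7b",  # SFTTrainer accepts a model id and loads it
    args=args,
    train_dataset=dataset,
)
trainer.train()  # resume_from_checkpoint=True could be passed here on restart
trainer.save_model(args.output_dir)
```

The log shows use_lora=False, so no PEFT adapter config is passed; with use_lora=True, the logged lora_r/lora_alpha/lora_dropout values would presumably feed a peft.LoraConfig handed to SFTTrainer via its peft_config argument.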