YuchenLi01 committed
Commit dd0686b · verified · 1 Parent(s): 02aedea

Model save

Files changed (4):
  1. README.md +1 -1
  2. all_results.json +4 -4
  3. train_results.json +4 -4
  4. trainer_state.json +0 -0
README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr5e-06_0try1Np798OezumxB5XeTF3RtHQdWE8kF58TTgI9hJ74c9OvNz)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr5e-06_0try1BkYpU9eF8GSJTrDxIJzrzwSUTgp6NKZMTBRNCcZDnwFFf)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
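For context on the method the README names: DPO optimizes a pairwise logistic loss over preferred/rejected completions against a frozen reference model. Below is a minimal PyTorch sketch of the published loss, not this run's training code; the function name, the `beta=0.1` default, and the assumption that log-probabilities arrive pre-summed per sequence are all illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Pairwise DPO loss; inputs are per-sequence summed log-probs."""
    # Implicit reward: beta-scaled log-ratio of policy vs. frozen reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen completion's implicit reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```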
all_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 0.9971988795518207,
     "total_flos": 0.0,
-    "train_loss": 0.44990385248419945,
-    "train_runtime": 40900.2747,
+    "train_loss": 0.5297158916344803,
+    "train_runtime": 36666.0128,
     "train_samples": 45608,
-    "train_samples_per_second": 1.115,
-    "train_steps_per_second": 0.004
+    "train_samples_per_second": 1.244,
+    "train_steps_per_second": 0.005
 }
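The updated throughput figures are self-consistent. A quick check, assuming the Hugging Face Trainer convention of total training samples divided by wall-clock runtime, and reading the effective batch size of 256 off the run name's `ebs256` (variable names below are illustrative):

```python
# Figures from the new all_results.json above.
train_samples = 45608
train_runtime = 36666.0128  # seconds

print(round(train_samples / train_runtime, 3))        # 1.244 samples/s, as reported
# Assuming an effective batch size of 256 (per the run name's "ebs256"):
print(round(train_samples / 256 / train_runtime, 3))  # 0.005 steps/s, as reported
```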
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 0.9971988795518207,
     "total_flos": 0.0,
-    "train_loss": 0.44990385248419945,
-    "train_runtime": 40900.2747,
+    "train_loss": 0.5297158916344803,
+    "train_runtime": 36666.0128,
     "train_samples": 45608,
-    "train_samples_per_second": 1.115,
-    "train_steps_per_second": 0.004
+    "train_samples_per_second": 1.244,
+    "train_steps_per_second": 0.005
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff