YuchenLi01 committed
Commit 21d4f75 · verified · 1 parent: 0c32018

Model save
Files changed (4):
  1. README.md (+1 -1)
  2. all_results.json (+3 -3)
  3. train_results.json (+3 -3)
  4. trainer_state.json (+0 -0)
README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0try1BF7hJnWeg1ItSgrW3LbLmmRFdnX8AvASfq3xa15Ku4Fxb)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0try1mH6fAsH07Skaih7aBeXOQGSzIYsG1Ksymn54NyLSigKbX)
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
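The DPO method referenced in the README fits a logistic loss on the log-probability margin between the chosen and rejected response. The sketch below is not this repo's training code; it is a minimal per-pair version of the DPO loss, with an illustrative `beta` value. One useful sanity check it gives: at zero margin (the start of training) the loss equals ln 2 ≈ 0.693, which is the ballpark of the `train_loss` values saved in this commit.

```python
import math

def dpo_loss(margin: float, beta: float = 0.1) -> float:
    """Per-pair DPO loss: -log sigmoid(beta * margin).

    margin = (log pi(chosen)   - log pi_ref(chosen))
           - (log pi(rejected) - log pi_ref(rejected))
    """
    # Numerically stable form of -log(sigmoid(x)) = log(1 + exp(-x)).
    x = beta * margin
    return math.log1p(math.exp(-x))

# Zero margin -> ln 2, the loss of an untrained preference model.
print(round(dpo_loss(0.0), 4))
```

As training drives the margin up, the loss decreases toward zero, so a mean loss somewhat below 0.693 is consistent with modest preference separation.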
all_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 0.9971988795518207,
     "total_flos": 0.0,
-    "train_loss": 0.5609350311622191,
-    "train_runtime": 40948.0878,
+    "train_loss": 0.6403945751404494,
+    "train_runtime": 41762.485,
     "train_samples": 45608,
-    "train_samples_per_second": 1.114,
+    "train_samples_per_second": 1.092,
     "train_steps_per_second": 0.004
 }
train_results.json CHANGED
@@ -1,9 +1,9 @@
 {
     "epoch": 0.9971988795518207,
     "total_flos": 0.0,
-    "train_loss": 0.5609350311622191,
-    "train_runtime": 40948.0878,
+    "train_loss": 0.6403945751404494,
+    "train_runtime": 41762.485,
     "train_samples": 45608,
-    "train_samples_per_second": 1.114,
+    "train_samples_per_second": 1.092,
     "train_steps_per_second": 0.004
 }
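The metric fields in the two JSON files above are mutually consistent: throughput is roughly samples divided by runtime. A minimal sketch checking this, using the updated values from the diff (the dict literal here is just those values copied out, not code from the repo):

```python
# Values from the "+" side of the train_results.json diff above.
new = {
    "train_runtime": 41762.485,          # seconds
    "train_samples": 45608,
    "train_samples_per_second": 1.092,   # as reported by the trainer
}

# Recompute throughput from the raw fields.
throughput = new["train_samples"] / new["train_runtime"]
print(round(throughput, 3))  # close to the reported 1.092
```

The same check applies to the removed ("-") side: 45608 / 40948.0878 is close to the old reported 1.114, so the longer runtime in this commit fully accounts for the lower throughput.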
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff