YuchenLi01 committed
Commit f78945d · verified · 1 Parent(s): 522c1b2

Model save
README.md ADDED
@@ -0,0 +1,68 @@
---
base_model: alignment-handbook/zephyr-7b-sft-full
library_name: transformers
model_name: ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---

# Model Card for ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Build a chat-style text-generation pipeline backed by this checkpoint.
generator = pipeline("text-generation", model="YuchenLi01/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0", device="cuda")

# Pass the prompt as a chat message; return_full_text=False keeps only the newly generated tokens.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yuchenl4/lmpref/runs/olden_king_smkymkmb3c_0)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
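
The training script itself is not part of this commit; the sketch below only illustrates what a TRL `DPOTrainer` setup of this kind typically looks like. The dataset name, output path, batch-size and `beta` values are placeholders (only the 1e-07 learning rate appears in the model name), so treat it as a minimal sketch rather than the exact recipe used for this run.

```python
# Minimal DPO fine-tuning sketch with TRL -- illustrative, not the exact recipe for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/your-preference-dataset", split="train")

args = DPOConfig(
    output_dir="zephyr-7b-sft-full-dpo",  # placeholder output path
    learning_rate=1e-07,                  # matches the lr suffix in the model name
    beta=0.1,                             # DPO temperature; the value used for this run is not stated
    per_device_train_batch_size=2,        # placeholder; the name suggests an effective batch size of 256
    gradient_accumulation_steps=16,       # placeholder
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds a frozen reference copy of the policy when None
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```

The effective batch size is the product of `per_device_train_batch_size`, `gradient_accumulation_steps`, and the number of GPUs, so the placeholder values above would need to be scaled to the hardware actually used.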

### Framework versions

- TRL: 0.12.0
- Transformers: 4.46.2
- Pytorch: 2.3.0
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
all_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9971988795518207,
    "total_flos": 0.0,
    "train_loss": 0.6403918319873596,
    "train_runtime": 36690.4382,
    "train_samples": 45608,
    "train_samples_per_second": 1.243,
    "train_steps_per_second": 0.005
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "transformers_version": "4.46.2"
}
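
These defaults are attached to the model as `model.generation_config` when the checkpoint is loaded. As a quick check, the file can also be read back from the Hub directly; a minimal sketch, assuming the repository id from the model card above:

```python
from transformers import GenerationConfig

# Fetches generation_config.json from the Hub repository.
gen_cfg = GenerationConfig.from_pretrained(
    "YuchenLi01/ultrafeedbackSkyworkAgree_alignmentZephyr7BSftFull_sdpo_score_ebs256_lr1e-07_0"
)
print(gen_cfg.bos_token_id, gen_cfg.eos_token_id)  # expected: 1 2
```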
train_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9971988795518207,
    "total_flos": 0.0,
    "train_loss": 0.6403918319873596,
    "train_runtime": 36690.4382,
    "train_samples": 45608,
    "train_samples_per_second": 1.243,
    "train_steps_per_second": 0.005
}
trainer_state.json ADDED
@@ -0,0 +1,1736 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9971988795518207,
5
+ "eval_steps": 2,
6
+ "global_step": 178,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0056022408963585435,
13
+ "grad_norm": 7.67260987802389,
14
+ "learning_rate": 5.555555555555555e-09,
15
+ "logits/chosen": -3.109375,
16
+ "logits/rejected": -3.1875,
17
+ "logps/chosen": -302.0,
18
+ "logps/rejected": -274.0,
19
+ "loss": 0.6914,
20
+ "rewards/accuracies": 0.0,
21
+ "rewards/chosen": 0.0,
22
+ "rewards/margins": 0.0,
23
+ "rewards/rejected": 0.0,
24
+ "step": 1
25
+ },
26
+ {
27
+ "epoch": 0.011204481792717087,
28
+ "eval_logits/chosen": -3.171875,
29
+ "eval_logits/rejected": -3.21875,
30
+ "eval_logps/chosen": -342.0,
31
+ "eval_logps/rejected": -284.0,
32
+ "eval_loss": 0.6922929883003235,
33
+ "eval_rewards/accuracies": 0.1927083283662796,
34
+ "eval_rewards/chosen": 5.5789947509765625e-05,
35
+ "eval_rewards/margins": -0.0003662109375,
36
+ "eval_rewards/rejected": 0.00041961669921875,
37
+ "eval_runtime": 133.6177,
38
+ "eval_samples_per_second": 11.129,
39
+ "eval_steps_per_second": 0.18,
40
+ "step": 2
41
+ },
42
+ {
43
+ "epoch": 0.022408963585434174,
44
+ "eval_logits/chosen": -3.171875,
45
+ "eval_logits/rejected": -3.21875,
46
+ "eval_logps/chosen": -342.0,
47
+ "eval_logps/rejected": -284.0,
48
+ "eval_loss": 0.6920216083526611,
49
+ "eval_rewards/accuracies": 0.2083333283662796,
50
+ "eval_rewards/chosen": 0.0003509521484375,
51
+ "eval_rewards/margins": -0.000396728515625,
52
+ "eval_rewards/rejected": 0.0007476806640625,
53
+ "eval_runtime": 134.3568,
54
+ "eval_samples_per_second": 11.068,
55
+ "eval_steps_per_second": 0.179,
56
+ "step": 4
57
+ },
58
+ {
59
+ "epoch": 0.03361344537815126,
60
+ "eval_logits/chosen": -3.171875,
61
+ "eval_logits/rejected": -3.21875,
62
+ "eval_logps/chosen": -342.0,
63
+ "eval_logps/rejected": -284.0,
64
+ "eval_loss": 0.6921105980873108,
65
+ "eval_rewards/accuracies": 0.2057291716337204,
66
+ "eval_rewards/chosen": 0.0001201629638671875,
67
+ "eval_rewards/margins": 0.000484466552734375,
68
+ "eval_rewards/rejected": -0.0003643035888671875,
69
+ "eval_runtime": 134.2477,
70
+ "eval_samples_per_second": 11.077,
71
+ "eval_steps_per_second": 0.179,
72
+ "step": 6
73
+ },
74
+ {
75
+ "epoch": 0.04481792717086835,
76
+ "eval_logits/chosen": -3.171875,
77
+ "eval_logits/rejected": -3.21875,
78
+ "eval_logps/chosen": -342.0,
79
+ "eval_logps/rejected": -284.0,
80
+ "eval_loss": 0.6920554041862488,
81
+ "eval_rewards/accuracies": 0.21875,
82
+ "eval_rewards/chosen": -9.679794311523438e-05,
83
+ "eval_rewards/margins": -0.00018310546875,
84
+ "eval_rewards/rejected": 8.487701416015625e-05,
85
+ "eval_runtime": 134.4853,
86
+ "eval_samples_per_second": 11.057,
87
+ "eval_steps_per_second": 0.178,
88
+ "step": 8
89
+ },
90
+ {
91
+ "epoch": 0.056022408963585436,
92
+ "grad_norm": 7.0192772162926635,
93
+ "learning_rate": 5.5555555555555555e-08,
94
+ "logits/chosen": -3.1875,
95
+ "logits/rejected": -3.203125,
96
+ "logps/chosen": -304.0,
97
+ "logps/rejected": -278.0,
98
+ "loss": 0.6917,
99
+ "rewards/accuracies": 0.1996527761220932,
100
+ "rewards/chosen": 0.000156402587890625,
101
+ "rewards/margins": 4.744529724121094e-05,
102
+ "rewards/rejected": 0.00010824203491210938,
103
+ "step": 10
104
+ },
105
+ {
106
+ "epoch": 0.056022408963585436,
107
+ "eval_logits/chosen": -3.171875,
108
+ "eval_logits/rejected": -3.21875,
109
+ "eval_logps/chosen": -342.0,
110
+ "eval_logps/rejected": -284.0,
111
+ "eval_loss": 0.692084014415741,
112
+ "eval_rewards/accuracies": 0.2395833283662796,
113
+ "eval_rewards/chosen": -0.0001068115234375,
114
+ "eval_rewards/margins": 0.000270843505859375,
115
+ "eval_rewards/rejected": -0.000377655029296875,
116
+ "eval_runtime": 134.2041,
117
+ "eval_samples_per_second": 11.08,
118
+ "eval_steps_per_second": 0.179,
119
+ "step": 10
120
+ },
121
+ {
122
+ "epoch": 0.06722689075630252,
123
+ "eval_logits/chosen": -3.171875,
124
+ "eval_logits/rejected": -3.21875,
125
+ "eval_logps/chosen": -342.0,
126
+ "eval_logps/rejected": -284.0,
127
+ "eval_loss": 0.6917845010757446,
128
+ "eval_rewards/accuracies": 0.203125,
129
+ "eval_rewards/chosen": 0.00049591064453125,
130
+ "eval_rewards/margins": 0.000331878662109375,
131
+ "eval_rewards/rejected": 0.00016117095947265625,
132
+ "eval_runtime": 134.0059,
133
+ "eval_samples_per_second": 11.097,
134
+ "eval_steps_per_second": 0.179,
135
+ "step": 12
136
+ },
137
+ {
138
+ "epoch": 0.0784313725490196,
139
+ "eval_logits/chosen": -3.171875,
140
+ "eval_logits/rejected": -3.21875,
141
+ "eval_logps/chosen": -342.0,
142
+ "eval_logps/rejected": -284.0,
143
+ "eval_loss": 0.6914895176887512,
144
+ "eval_rewards/accuracies": 0.2317708283662796,
145
+ "eval_rewards/chosen": 0.000972747802734375,
146
+ "eval_rewards/margins": 0.000957489013671875,
147
+ "eval_rewards/rejected": 1.621246337890625e-05,
148
+ "eval_runtime": 134.116,
149
+ "eval_samples_per_second": 11.087,
150
+ "eval_steps_per_second": 0.179,
151
+ "step": 14
152
+ },
153
+ {
154
+ "epoch": 0.0896358543417367,
155
+ "eval_logits/chosen": -3.171875,
156
+ "eval_logits/rejected": -3.21875,
157
+ "eval_logps/chosen": -342.0,
158
+ "eval_logps/rejected": -284.0,
159
+ "eval_loss": 0.6913825869560242,
160
+ "eval_rewards/accuracies": 0.2890625,
161
+ "eval_rewards/chosen": 0.000621795654296875,
162
+ "eval_rewards/margins": 0.00136566162109375,
163
+ "eval_rewards/rejected": -0.000743865966796875,
164
+ "eval_runtime": 133.0747,
165
+ "eval_samples_per_second": 11.174,
166
+ "eval_steps_per_second": 0.18,
167
+ "step": 16
168
+ },
169
+ {
170
+ "epoch": 0.10084033613445378,
171
+ "eval_logits/chosen": -3.15625,
172
+ "eval_logits/rejected": -3.21875,
173
+ "eval_logps/chosen": -342.0,
174
+ "eval_logps/rejected": -284.0,
175
+ "eval_loss": 0.6909087896347046,
176
+ "eval_rewards/accuracies": 0.328125,
177
+ "eval_rewards/chosen": 0.00168609619140625,
178
+ "eval_rewards/margins": 0.002593994140625,
179
+ "eval_rewards/rejected": -0.00091552734375,
180
+ "eval_runtime": 133.8874,
181
+ "eval_samples_per_second": 11.106,
182
+ "eval_steps_per_second": 0.179,
183
+ "step": 18
184
+ },
185
+ {
186
+ "epoch": 0.11204481792717087,
187
+ "grad_norm": 7.546692092094965,
188
+ "learning_rate": 9.996145181203615e-08,
189
+ "logits/chosen": -3.15625,
190
+ "logits/rejected": -3.1875,
191
+ "logps/chosen": -324.0,
192
+ "logps/rejected": -278.0,
193
+ "loss": 0.6915,
194
+ "rewards/accuracies": 0.2593750059604645,
195
+ "rewards/chosen": 0.000823974609375,
196
+ "rewards/margins": 0.0010986328125,
197
+ "rewards/rejected": -0.000274658203125,
198
+ "step": 20
199
+ },
200
+ {
201
+ "epoch": 0.11204481792717087,
202
+ "eval_logits/chosen": -3.15625,
203
+ "eval_logits/rejected": -3.203125,
204
+ "eval_logps/chosen": -342.0,
205
+ "eval_logps/rejected": -284.0,
206
+ "eval_loss": 0.6904590129852295,
207
+ "eval_rewards/accuracies": 0.4270833432674408,
208
+ "eval_rewards/chosen": 0.0024261474609375,
209
+ "eval_rewards/margins": 0.00421142578125,
210
+ "eval_rewards/rejected": -0.0017852783203125,
211
+ "eval_runtime": 134.0613,
212
+ "eval_samples_per_second": 11.092,
213
+ "eval_steps_per_second": 0.179,
214
+ "step": 20
215
+ },
216
+ {
217
+ "epoch": 0.12324929971988796,
218
+ "eval_logits/chosen": -3.15625,
219
+ "eval_logits/rejected": -3.203125,
220
+ "eval_logps/chosen": -342.0,
221
+ "eval_logps/rejected": -284.0,
222
+ "eval_loss": 0.6897143125534058,
223
+ "eval_rewards/accuracies": 0.4947916567325592,
224
+ "eval_rewards/chosen": 0.003570556640625,
225
+ "eval_rewards/margins": 0.00640869140625,
226
+ "eval_rewards/rejected": -0.002838134765625,
227
+ "eval_runtime": 134.4842,
228
+ "eval_samples_per_second": 11.057,
229
+ "eval_steps_per_second": 0.178,
230
+ "step": 22
231
+ },
232
+ {
233
+ "epoch": 0.13445378151260504,
234
+ "eval_logits/chosen": -3.15625,
235
+ "eval_logits/rejected": -3.203125,
236
+ "eval_logps/chosen": -342.0,
237
+ "eval_logps/rejected": -284.0,
238
+ "eval_loss": 0.6890442371368408,
239
+ "eval_rewards/accuracies": 0.5364583134651184,
240
+ "eval_rewards/chosen": 0.0052490234375,
241
+ "eval_rewards/margins": 0.00836181640625,
242
+ "eval_rewards/rejected": -0.0031280517578125,
243
+ "eval_runtime": 133.679,
244
+ "eval_samples_per_second": 11.124,
245
+ "eval_steps_per_second": 0.18,
246
+ "step": 24
247
+ },
248
+ {
249
+ "epoch": 0.14565826330532214,
250
+ "eval_logits/chosen": -3.15625,
251
+ "eval_logits/rejected": -3.203125,
252
+ "eval_logps/chosen": -342.0,
253
+ "eval_logps/rejected": -284.0,
254
+ "eval_loss": 0.6882352232933044,
255
+ "eval_rewards/accuracies": 0.5625,
256
+ "eval_rewards/chosen": 0.006683349609375,
257
+ "eval_rewards/margins": 0.01025390625,
258
+ "eval_rewards/rejected": -0.0035552978515625,
259
+ "eval_runtime": 134.2487,
260
+ "eval_samples_per_second": 11.076,
261
+ "eval_steps_per_second": 0.179,
262
+ "step": 26
263
+ },
264
+ {
265
+ "epoch": 0.1568627450980392,
266
+ "eval_logits/chosen": -3.15625,
267
+ "eval_logits/rejected": -3.203125,
268
+ "eval_logps/chosen": -342.0,
269
+ "eval_logps/rejected": -284.0,
270
+ "eval_loss": 0.6876863241195679,
271
+ "eval_rewards/accuracies": 0.6015625,
272
+ "eval_rewards/chosen": 0.006866455078125,
273
+ "eval_rewards/margins": 0.0118408203125,
274
+ "eval_rewards/rejected": -0.004974365234375,
275
+ "eval_runtime": 134.3926,
276
+ "eval_samples_per_second": 11.065,
277
+ "eval_steps_per_second": 0.179,
278
+ "step": 28
279
+ },
280
+ {
281
+ "epoch": 0.16806722689075632,
282
+ "grad_norm": 6.890557992493745,
283
+ "learning_rate": 9.861849601988382e-08,
284
+ "logits/chosen": -3.171875,
285
+ "logits/rejected": -3.234375,
286
+ "logps/chosen": -340.0,
287
+ "logps/rejected": -266.0,
288
+ "loss": 0.6887,
289
+ "rewards/accuracies": 0.565625011920929,
290
+ "rewards/chosen": 0.005950927734375,
291
+ "rewards/margins": 0.00897216796875,
292
+ "rewards/rejected": -0.0030517578125,
293
+ "step": 30
294
+ },
295
+ {
296
+ "epoch": 0.16806722689075632,
297
+ "eval_logits/chosen": -3.15625,
298
+ "eval_logits/rejected": -3.203125,
299
+ "eval_logps/chosen": -342.0,
300
+ "eval_logps/rejected": -284.0,
301
+ "eval_loss": 0.6867438554763794,
302
+ "eval_rewards/accuracies": 0.6145833134651184,
303
+ "eval_rewards/chosen": 0.007720947265625,
304
+ "eval_rewards/margins": 0.01300048828125,
305
+ "eval_rewards/rejected": -0.005279541015625,
306
+ "eval_runtime": 134.4984,
307
+ "eval_samples_per_second": 11.056,
308
+ "eval_steps_per_second": 0.178,
309
+ "step": 30
310
+ },
311
+ {
312
+ "epoch": 0.1792717086834734,
313
+ "eval_logits/chosen": -3.15625,
314
+ "eval_logits/rejected": -3.203125,
315
+ "eval_logps/chosen": -340.0,
316
+ "eval_logps/rejected": -284.0,
317
+ "eval_loss": 0.6854631304740906,
318
+ "eval_rewards/accuracies": 0.6588541865348816,
319
+ "eval_rewards/chosen": 0.00958251953125,
320
+ "eval_rewards/margins": 0.0169677734375,
321
+ "eval_rewards/rejected": -0.007415771484375,
322
+ "eval_runtime": 133.0213,
323
+ "eval_samples_per_second": 11.179,
324
+ "eval_steps_per_second": 0.18,
325
+ "step": 32
326
+ },
327
+ {
328
+ "epoch": 0.19047619047619047,
329
+ "eval_logits/chosen": -3.15625,
330
+ "eval_logits/rejected": -3.203125,
331
+ "eval_logps/chosen": -340.0,
332
+ "eval_logps/rejected": -284.0,
333
+ "eval_loss": 0.6838385462760925,
334
+ "eval_rewards/accuracies": 0.6770833134651184,
335
+ "eval_rewards/chosen": 0.01116943359375,
336
+ "eval_rewards/margins": 0.0201416015625,
337
+ "eval_rewards/rejected": -0.009033203125,
338
+ "eval_runtime": 135.0159,
339
+ "eval_samples_per_second": 11.014,
340
+ "eval_steps_per_second": 0.178,
341
+ "step": 34
342
+ },
343
+ {
344
+ "epoch": 0.20168067226890757,
345
+ "eval_logits/chosen": -3.140625,
346
+ "eval_logits/rejected": -3.1875,
347
+ "eval_logps/chosen": -340.0,
348
+ "eval_logps/rejected": -284.0,
349
+ "eval_loss": 0.6825665235519409,
350
+ "eval_rewards/accuracies": 0.6796875,
351
+ "eval_rewards/chosen": 0.01300048828125,
352
+ "eval_rewards/margins": 0.023193359375,
353
+ "eval_rewards/rejected": -0.01019287109375,
354
+ "eval_runtime": 134.4265,
355
+ "eval_samples_per_second": 11.062,
356
+ "eval_steps_per_second": 0.179,
357
+ "step": 36
358
+ },
359
+ {
360
+ "epoch": 0.21288515406162464,
361
+ "eval_logits/chosen": -3.140625,
362
+ "eval_logits/rejected": -3.1875,
363
+ "eval_logps/chosen": -340.0,
364
+ "eval_logps/rejected": -284.0,
365
+ "eval_loss": 0.6815204620361328,
366
+ "eval_rewards/accuracies": 0.6875,
367
+ "eval_rewards/chosen": 0.0137939453125,
368
+ "eval_rewards/margins": 0.025634765625,
369
+ "eval_rewards/rejected": -0.01177978515625,
370
+ "eval_runtime": 134.3747,
371
+ "eval_samples_per_second": 11.066,
372
+ "eval_steps_per_second": 0.179,
373
+ "step": 38
374
+ },
375
+ {
376
+ "epoch": 0.22408963585434175,
377
+ "grad_norm": 7.330771538752886,
378
+ "learning_rate": 9.540715869125407e-08,
379
+ "logits/chosen": -3.125,
380
+ "logits/rejected": -3.125,
381
+ "logps/chosen": -308.0,
382
+ "logps/rejected": -276.0,
383
+ "loss": 0.6837,
384
+ "rewards/accuracies": 0.645312488079071,
385
+ "rewards/chosen": 0.01116943359375,
386
+ "rewards/margins": 0.0194091796875,
387
+ "rewards/rejected": -0.00823974609375,
388
+ "step": 40
389
+ },
390
+ {
391
+ "epoch": 0.22408963585434175,
392
+ "eval_logits/chosen": -3.140625,
393
+ "eval_logits/rejected": -3.1875,
394
+ "eval_logps/chosen": -340.0,
395
+ "eval_logps/rejected": -284.0,
396
+ "eval_loss": 0.6800552010536194,
397
+ "eval_rewards/accuracies": 0.6979166865348816,
398
+ "eval_rewards/chosen": 0.01556396484375,
399
+ "eval_rewards/margins": 0.029296875,
400
+ "eval_rewards/rejected": -0.0137939453125,
401
+ "eval_runtime": 134.815,
402
+ "eval_samples_per_second": 11.03,
403
+ "eval_steps_per_second": 0.178,
404
+ "step": 40
405
+ },
406
+ {
407
+ "epoch": 0.23529411764705882,
408
+ "eval_logits/chosen": -3.140625,
409
+ "eval_logits/rejected": -3.1875,
410
+ "eval_logps/chosen": -340.0,
411
+ "eval_logps/rejected": -284.0,
412
+ "eval_loss": 0.6788628101348877,
413
+ "eval_rewards/accuracies": 0.6979166865348816,
414
+ "eval_rewards/chosen": 0.01507568359375,
415
+ "eval_rewards/margins": 0.0322265625,
416
+ "eval_rewards/rejected": -0.01708984375,
417
+ "eval_runtime": 133.9485,
418
+ "eval_samples_per_second": 11.101,
419
+ "eval_steps_per_second": 0.179,
420
+ "step": 42
421
+ },
422
+ {
423
+ "epoch": 0.24649859943977592,
424
+ "eval_logits/chosen": -3.140625,
425
+ "eval_logits/rejected": -3.1875,
426
+ "eval_logps/chosen": -340.0,
427
+ "eval_logps/rejected": -286.0,
428
+ "eval_loss": 0.677721381187439,
429
+ "eval_rewards/accuracies": 0.7135416865348816,
430
+ "eval_rewards/chosen": 0.0152587890625,
431
+ "eval_rewards/margins": 0.03515625,
432
+ "eval_rewards/rejected": -0.0198974609375,
433
+ "eval_runtime": 134.206,
434
+ "eval_samples_per_second": 11.08,
435
+ "eval_steps_per_second": 0.179,
436
+ "step": 44
437
+ },
438
+ {
439
+ "epoch": 0.25770308123249297,
440
+ "eval_logits/chosen": -3.125,
441
+ "eval_logits/rejected": -3.171875,
442
+ "eval_logps/chosen": -340.0,
443
+ "eval_logps/rejected": -286.0,
444
+ "eval_loss": 0.6766650080680847,
445
+ "eval_rewards/accuracies": 0.7265625,
446
+ "eval_rewards/chosen": 0.01513671875,
447
+ "eval_rewards/margins": 0.03759765625,
448
+ "eval_rewards/rejected": -0.0224609375,
449
+ "eval_runtime": 134.1986,
450
+ "eval_samples_per_second": 11.081,
451
+ "eval_steps_per_second": 0.179,
452
+ "step": 46
453
+ },
454
+ {
455
+ "epoch": 0.2689075630252101,
456
+ "eval_logits/chosen": -3.125,
457
+ "eval_logits/rejected": -3.171875,
458
+ "eval_logps/chosen": -340.0,
459
+ "eval_logps/rejected": -286.0,
460
+ "eval_loss": 0.6755136847496033,
461
+ "eval_rewards/accuracies": 0.7161458134651184,
462
+ "eval_rewards/chosen": 0.0135498046875,
463
+ "eval_rewards/margins": 0.039794921875,
464
+ "eval_rewards/rejected": -0.0262451171875,
465
+ "eval_runtime": 133.4992,
466
+ "eval_samples_per_second": 11.139,
467
+ "eval_steps_per_second": 0.18,
468
+ "step": 48
469
+ },
470
+ {
471
+ "epoch": 0.2801120448179272,
472
+ "grad_norm": 6.881196622326406,
473
+ "learning_rate": 9.045084971874737e-08,
474
+ "logits/chosen": -3.125,
475
+ "logits/rejected": -3.1875,
476
+ "logps/chosen": -330.0,
477
+ "logps/rejected": -286.0,
478
+ "loss": 0.6768,
479
+ "rewards/accuracies": 0.739062488079071,
480
+ "rewards/chosen": 0.01806640625,
481
+ "rewards/margins": 0.0400390625,
482
+ "rewards/rejected": -0.02197265625,
483
+ "step": 50
484
+ },
485
+ {
486
+ "epoch": 0.2801120448179272,
487
+ "eval_logits/chosen": -3.125,
488
+ "eval_logits/rejected": -3.171875,
489
+ "eval_logps/chosen": -340.0,
490
+ "eval_logps/rejected": -286.0,
491
+ "eval_loss": 0.6741231679916382,
492
+ "eval_rewards/accuracies": 0.7317708134651184,
493
+ "eval_rewards/chosen": 0.0123291015625,
494
+ "eval_rewards/margins": 0.043701171875,
495
+ "eval_rewards/rejected": -0.03125,
496
+ "eval_runtime": 134.2614,
497
+ "eval_samples_per_second": 11.075,
498
+ "eval_steps_per_second": 0.179,
499
+ "step": 50
500
+ },
501
+ {
502
+ "epoch": 0.2913165266106443,
503
+ "eval_logits/chosen": -3.125,
504
+ "eval_logits/rejected": -3.171875,
505
+ "eval_logps/chosen": -340.0,
506
+ "eval_logps/rejected": -286.0,
507
+ "eval_loss": 0.672571063041687,
508
+ "eval_rewards/accuracies": 0.7473958134651184,
509
+ "eval_rewards/chosen": 0.0118408203125,
510
+ "eval_rewards/margins": 0.04833984375,
511
+ "eval_rewards/rejected": -0.036376953125,
512
+ "eval_runtime": 134.8474,
513
+ "eval_samples_per_second": 11.027,
514
+ "eval_steps_per_second": 0.178,
515
+ "step": 52
516
+ },
517
+ {
518
+ "epoch": 0.3025210084033613,
519
+ "eval_logits/chosen": -3.125,
520
+ "eval_logits/rejected": -3.171875,
521
+ "eval_logps/chosen": -340.0,
522
+ "eval_logps/rejected": -288.0,
523
+ "eval_loss": 0.6705121397972107,
524
+ "eval_rewards/accuracies": 0.7395833134651184,
525
+ "eval_rewards/chosen": 0.009765625,
526
+ "eval_rewards/margins": 0.052490234375,
527
+ "eval_rewards/rejected": -0.042724609375,
528
+ "eval_runtime": 133.4771,
529
+ "eval_samples_per_second": 11.14,
530
+ "eval_steps_per_second": 0.18,
531
+ "step": 54
532
+ },
533
+ {
534
+ "epoch": 0.3137254901960784,
535
+ "eval_logits/chosen": -3.125,
536
+ "eval_logits/rejected": -3.171875,
537
+ "eval_logps/chosen": -340.0,
538
+ "eval_logps/rejected": -288.0,
539
+ "eval_loss": 0.6688975095748901,
540
+ "eval_rewards/accuracies": 0.7760416865348816,
541
+ "eval_rewards/chosen": 0.00811767578125,
542
+ "eval_rewards/margins": 0.05712890625,
543
+ "eval_rewards/rejected": -0.049072265625,
544
+ "eval_runtime": 133.9848,
545
+ "eval_samples_per_second": 11.098,
546
+ "eval_steps_per_second": 0.179,
547
+ "step": 56
548
+ },
549
+ {
550
+ "epoch": 0.32492997198879553,
551
+ "eval_logits/chosen": -3.125,
552
+ "eval_logits/rejected": -3.171875,
553
+ "eval_logps/chosen": -342.0,
554
+ "eval_logps/rejected": -288.0,
555
+ "eval_loss": 0.6666556000709534,
556
+ "eval_rewards/accuracies": 0.7604166865348816,
557
+ "eval_rewards/chosen": 0.005767822265625,
558
+ "eval_rewards/margins": 0.0615234375,
559
+ "eval_rewards/rejected": -0.0556640625,
560
+ "eval_runtime": 132.6823,
561
+ "eval_samples_per_second": 11.207,
562
+ "eval_steps_per_second": 0.181,
563
+ "step": 58
564
+ },
565
+ {
566
+ "epoch": 0.33613445378151263,
567
+ "grad_norm": 7.102237419157038,
568
+ "learning_rate": 8.394003727664709e-08,
569
+ "logits/chosen": -3.171875,
570
+ "logits/rejected": -3.21875,
571
+ "logps/chosen": -324.0,
572
+ "logps/rejected": -270.0,
573
+ "loss": 0.6683,
574
+ "rewards/accuracies": 0.765625,
575
+ "rewards/chosen": 0.01373291015625,
576
+ "rewards/margins": 0.05859375,
577
+ "rewards/rejected": -0.044921875,
578
+ "step": 60
579
+ },
580
+ {
581
+ "epoch": 0.33613445378151263,
582
+ "eval_logits/chosen": -3.109375,
583
+ "eval_logits/rejected": -3.15625,
584
+ "eval_logps/chosen": -342.0,
585
+ "eval_logps/rejected": -290.0,
586
+ "eval_loss": 0.6645676493644714,
587
+ "eval_rewards/accuracies": 0.7734375,
588
+ "eval_rewards/chosen": 0.004119873046875,
589
+ "eval_rewards/margins": 0.06787109375,
590
+ "eval_rewards/rejected": -0.0634765625,
591
+ "eval_runtime": 134.253,
592
+ "eval_samples_per_second": 11.076,
593
+ "eval_steps_per_second": 0.179,
594
+ "step": 60
595
+ },
596
+ {
597
+ "epoch": 0.3473389355742297,
598
+ "eval_logits/chosen": -3.109375,
599
+ "eval_logits/rejected": -3.15625,
600
+ "eval_logps/chosen": -342.0,
601
+ "eval_logps/rejected": -290.0,
602
+ "eval_loss": 0.6622598767280579,
603
+ "eval_rewards/accuracies": 0.7760416865348816,
604
+ "eval_rewards/chosen": 0.00191497802734375,
605
+ "eval_rewards/margins": 0.07373046875,
606
+ "eval_rewards/rejected": -0.07177734375,
607
+ "eval_runtime": 134.3595,
608
+ "eval_samples_per_second": 11.067,
609
+ "eval_steps_per_second": 0.179,
610
+ "step": 62
611
+ },
612
+ {
613
+ "epoch": 0.3585434173669468,
614
+ "eval_logits/chosen": -3.109375,
615
+ "eval_logits/rejected": -3.15625,
616
+ "eval_logps/chosen": -342.0,
617
+ "eval_logps/rejected": -292.0,
618
+ "eval_loss": 0.6603178977966309,
619
+ "eval_rewards/accuracies": 0.7578125,
620
+ "eval_rewards/chosen": -0.0023193359375,
621
+ "eval_rewards/margins": 0.07763671875,
622
+ "eval_rewards/rejected": -0.080078125,
623
+ "eval_runtime": 135.0017,
624
+ "eval_samples_per_second": 11.015,
625
+ "eval_steps_per_second": 0.178,
626
+ "step": 64
627
+ },
628
+ {
629
+ "epoch": 0.3697478991596639,
630
+ "eval_logits/chosen": -3.109375,
631
+ "eval_logits/rejected": -3.15625,
632
+ "eval_logps/chosen": -342.0,
633
+ "eval_logps/rejected": -292.0,
634
+ "eval_loss": 0.6583611369132996,
635
+ "eval_rewards/accuracies": 0.7630208134651184,
636
+ "eval_rewards/chosen": -0.005096435546875,
637
+ "eval_rewards/margins": 0.083984375,
638
+ "eval_rewards/rejected": -0.0888671875,
639
+ "eval_runtime": 134.9993,
640
+ "eval_samples_per_second": 11.015,
641
+ "eval_steps_per_second": 0.178,
642
+ "step": 66
643
+ },
644
+ {
645
+ "epoch": 0.38095238095238093,
646
+ "eval_logits/chosen": -3.109375,
647
+ "eval_logits/rejected": -3.15625,
648
+ "eval_logps/chosen": -344.0,
649
+ "eval_logps/rejected": -294.0,
650
+ "eval_loss": 0.6562416553497314,
651
+ "eval_rewards/accuracies": 0.7708333134651184,
652
+ "eval_rewards/chosen": -0.00860595703125,
653
+ "eval_rewards/margins": 0.08984375,
654
+ "eval_rewards/rejected": -0.0986328125,
655
+ "eval_runtime": 134.0273,
656
+ "eval_samples_per_second": 11.095,
657
+ "eval_steps_per_second": 0.179,
658
+ "step": 68
659
+ },
660
+ {
661
+ "epoch": 0.39215686274509803,
662
+ "grad_norm": 6.194643384872316,
663
+ "learning_rate": 7.612492823579743e-08,
664
+ "logits/chosen": -3.09375,
665
+ "logits/rejected": -3.140625,
666
+ "logps/chosen": -308.0,
667
+ "logps/rejected": -280.0,
668
+ "loss": 0.6581,
669
+ "rewards/accuracies": 0.7640625238418579,
670
+ "rewards/chosen": -0.000598907470703125,
671
+ "rewards/margins": 0.08056640625,
672
+ "rewards/rejected": -0.0810546875,
673
+ "step": 70
674
+ },
675
+ {
676
+ "epoch": 0.39215686274509803,
677
+ "eval_logits/chosen": -3.09375,
678
+ "eval_logits/rejected": -3.140625,
679
+ "eval_logps/chosen": -344.0,
680
+ "eval_logps/rejected": -294.0,
681
+ "eval_loss": 0.6542046666145325,
682
+ "eval_rewards/accuracies": 0.7630208134651184,
683
+ "eval_rewards/chosen": -0.01361083984375,
684
+ "eval_rewards/margins": 0.09521484375,
685
+ "eval_rewards/rejected": -0.10888671875,
686
+ "eval_runtime": 133.8942,
687
+ "eval_samples_per_second": 11.106,
688
+ "eval_steps_per_second": 0.179,
689
+ "step": 70
690
+ },
691
+ {
692
+ "epoch": 0.40336134453781514,
693
+ "eval_logits/chosen": -3.09375,
694
+ "eval_logits/rejected": -3.140625,
695
+ "eval_logps/chosen": -344.0,
696
+ "eval_logps/rejected": -296.0,
697
+ "eval_loss": 0.6521748304367065,
698
+ "eval_rewards/accuracies": 0.7708333134651184,
699
+ "eval_rewards/chosen": -0.01904296875,
700
+ "eval_rewards/margins": 0.1015625,
701
+ "eval_rewards/rejected": -0.12060546875,
702
+ "eval_runtime": 135.2867,
703
+ "eval_samples_per_second": 10.991,
704
+ "eval_steps_per_second": 0.177,
705
+ "step": 72
706
+ },
707
+ {
708
+ "epoch": 0.41456582633053224,
709
+ "eval_logits/chosen": -3.09375,
710
+ "eval_logits/rejected": -3.140625,
711
+ "eval_logps/chosen": -344.0,
712
+ "eval_logps/rejected": -296.0,
713
+ "eval_loss": 0.6502817869186401,
714
+ "eval_rewards/accuracies": 0.7786458134651184,
715
+ "eval_rewards/chosen": -0.0244140625,
716
+ "eval_rewards/margins": 0.10693359375,
717
+ "eval_rewards/rejected": -0.130859375,
718
+ "eval_runtime": 133.7003,
719
+ "eval_samples_per_second": 11.122,
720
+ "eval_steps_per_second": 0.18,
721
+ "step": 74
722
+ },
723
+ {
724
+ "epoch": 0.4257703081232493,
725
+ "eval_logits/chosen": -3.09375,
726
+ "eval_logits/rejected": -3.140625,
727
+ "eval_logps/chosen": -344.0,
728
+ "eval_logps/rejected": -298.0,
729
+ "eval_loss": 0.6483878493309021,
730
+ "eval_rewards/accuracies": 0.7864583134651184,
731
+ "eval_rewards/chosen": -0.0308837890625,
732
+ "eval_rewards/margins": 0.111328125,
733
+ "eval_rewards/rejected": -0.142578125,
734
+ "eval_runtime": 134.1515,
735
+ "eval_samples_per_second": 11.084,
736
+ "eval_steps_per_second": 0.179,
737
+ "step": 76
738
+ },
739
+ {
740
+ "epoch": 0.4369747899159664,
741
+ "eval_logits/chosen": -3.09375,
742
+ "eval_logits/rejected": -3.140625,
743
+ "eval_logps/chosen": -346.0,
744
+ "eval_logps/rejected": -298.0,
745
+ "eval_loss": 0.6462110877037048,
746
+ "eval_rewards/accuracies": 0.7708333134651184,
747
+ "eval_rewards/chosen": -0.037109375,
748
+ "eval_rewards/margins": 0.11767578125,
749
+ "eval_rewards/rejected": -0.154296875,
750
+ "eval_runtime": 133.8558,
751
+ "eval_samples_per_second": 11.109,
752
+ "eval_steps_per_second": 0.179,
753
+ "step": 78
754
+ },
755
+ {
756
+ "epoch": 0.4481792717086835,
757
+ "grad_norm": 6.1230379557304975,
758
+ "learning_rate": 6.730585285387464e-08,
759
+ "logits/chosen": -3.125,
760
+ "logits/rejected": -3.15625,
761
+ "logps/chosen": -340.0,
762
+ "logps/rejected": -290.0,
763
+ "loss": 0.6458,
764
+ "rewards/accuracies": 0.8109375238418579,
765
+ "rewards/chosen": -0.01348876953125,
766
+ "rewards/margins": 0.119140625,
767
+ "rewards/rejected": -0.1328125,
768
+ "step": 80
769
+ },
770
+ {
771
+ "epoch": 0.4481792717086835,
772
+ "eval_logits/chosen": -3.09375,
773
+ "eval_logits/rejected": -3.140625,
774
+ "eval_logps/chosen": -346.0,
775
+ "eval_logps/rejected": -300.0,
776
+ "eval_loss": 0.6442796587944031,
777
+ "eval_rewards/accuracies": 0.7708333134651184,
778
+ "eval_rewards/chosen": -0.044677734375,
779
+ "eval_rewards/margins": 0.12255859375,
780
+ "eval_rewards/rejected": -0.1669921875,
781
+ "eval_runtime": 133.5665,
782
+ "eval_samples_per_second": 11.133,
783
+ "eval_steps_per_second": 0.18,
784
+ "step": 80
785
+ },
786
+ {
787
+ "epoch": 0.45938375350140054,
788
+ "eval_logits/chosen": -3.09375,
789
+ "eval_logits/rejected": -3.140625,
790
+ "eval_logps/chosen": -348.0,
791
+ "eval_logps/rejected": -300.0,
792
+ "eval_loss": 0.6423355937004089,
793
+ "eval_rewards/accuracies": 0.7838541865348816,
794
+ "eval_rewards/chosen": -0.049560546875,
795
+ "eval_rewards/margins": 0.1279296875,
796
+ "eval_rewards/rejected": -0.1767578125,
797
+ "eval_runtime": 133.4892,
798
+ "eval_samples_per_second": 11.139,
799
+ "eval_steps_per_second": 0.18,
800
+ "step": 82
801
+ },
802
+ {
803
+ "epoch": 0.47058823529411764,
804
+ "eval_logits/chosen": -3.09375,
805
+ "eval_logits/rejected": -3.125,
806
+ "eval_logps/chosen": -348.0,
807
+ "eval_logps/rejected": -302.0,
808
+ "eval_loss": 0.6405628323554993,
809
+ "eval_rewards/accuracies": 0.7916666865348816,
810
+ "eval_rewards/chosen": -0.052978515625,
811
+ "eval_rewards/margins": 0.1337890625,
812
+ "eval_rewards/rejected": -0.1865234375,
813
+ "eval_runtime": 133.4399,
814
+ "eval_samples_per_second": 11.144,
815
+ "eval_steps_per_second": 0.18,
816
+ "step": 84
817
+ },
818
+ {
819
+ "epoch": 0.48179271708683474,
820
+ "eval_logits/chosen": -3.09375,
821
+ "eval_logits/rejected": -3.125,
822
+ "eval_logps/chosen": -348.0,
823
+ "eval_logps/rejected": -302.0,
824
+ "eval_loss": 0.6387618184089661,
825
+ "eval_rewards/accuracies": 0.7994791865348816,
826
+ "eval_rewards/chosen": -0.056884765625,
827
+ "eval_rewards/margins": 0.138671875,
828
+ "eval_rewards/rejected": -0.1953125,
829
+ "eval_runtime": 133.8344,
830
+ "eval_samples_per_second": 11.111,
831
+ "eval_steps_per_second": 0.179,
832
+ "step": 86
833
+ },
834
+ {
835
+ "epoch": 0.49299719887955185,
836
+ "eval_logits/chosen": -3.078125,
837
+ "eval_logits/rejected": -3.125,
838
+ "eval_logps/chosen": -348.0,
839
+ "eval_logps/rejected": -304.0,
840
+ "eval_loss": 0.6369534730911255,
841
+ "eval_rewards/accuracies": 0.7994791865348816,
842
+ "eval_rewards/chosen": -0.060302734375,
843
+ "eval_rewards/margins": 0.14453125,
844
+ "eval_rewards/rejected": -0.2041015625,
845
+ "eval_runtime": 134.4799,
846
+ "eval_samples_per_second": 11.057,
847
+ "eval_steps_per_second": 0.178,
848
+ "step": 88
849
+ },
850
+ {
851
+ "epoch": 0.5042016806722689,
852
+ "grad_norm": 7.254285729207196,
853
+ "learning_rate": 5.7821723252011546e-08,
854
+ "logits/chosen": -3.0625,
855
+ "logits/rejected": -3.09375,
856
+ "logps/chosen": -330.0,
857
+ "logps/rejected": -288.0,
858
+ "loss": 0.6397,
859
+ "rewards/accuracies": 0.746874988079071,
860
+ "rewards/chosen": -0.05517578125,
861
+ "rewards/margins": 0.1279296875,
862
+ "rewards/rejected": -0.1826171875,
863
+ "step": 90
864
+ },
865
+ {
866
+ "epoch": 0.5042016806722689,
867
+ "eval_logits/chosen": -3.078125,
868
+ "eval_logits/rejected": -3.125,
869
+ "eval_logps/chosen": -348.0,
870
+ "eval_logps/rejected": -304.0,
871
+ "eval_loss": 0.635140061378479,
872
+ "eval_rewards/accuracies": 0.7916666865348816,
873
+ "eval_rewards/chosen": -0.062255859375,
874
+ "eval_rewards/margins": 0.1484375,
875
+ "eval_rewards/rejected": -0.2109375,
876
+ "eval_runtime": 134.0783,
877
+ "eval_samples_per_second": 11.091,
878
+ "eval_steps_per_second": 0.179,
879
+ "step": 90
880
+ },
881
+ {
882
+ "epoch": 0.5154061624649859,
883
+ "eval_logits/chosen": -3.078125,
884
+ "eval_logits/rejected": -3.125,
885
+ "eval_logps/chosen": -348.0,
886
+ "eval_logps/rejected": -306.0,
887
+ "eval_loss": 0.6333230137825012,
888
+ "eval_rewards/accuracies": 0.7942708134651184,
889
+ "eval_rewards/chosen": -0.0654296875,
890
+ "eval_rewards/margins": 0.154296875,
891
+ "eval_rewards/rejected": -0.2197265625,
892
+ "eval_runtime": 133.8735,
893
+ "eval_samples_per_second": 11.108,
894
+ "eval_steps_per_second": 0.179,
895
+ "step": 92
896
+ },
897
+ {
898
+ "epoch": 0.5266106442577031,
899
+ "eval_logits/chosen": -3.078125,
900
+ "eval_logits/rejected": -3.125,
901
+ "eval_logps/chosen": -348.0,
902
+ "eval_logps/rejected": -306.0,
903
+ "eval_loss": 0.6314958930015564,
904
+ "eval_rewards/accuracies": 0.8046875,
905
+ "eval_rewards/chosen": -0.06494140625,
906
+ "eval_rewards/margins": 0.1611328125,
907
+ "eval_rewards/rejected": -0.2255859375,
908
+ "eval_runtime": 133.4379,
909
+ "eval_samples_per_second": 11.144,
910
+ "eval_steps_per_second": 0.18,
911
+ "step": 94
912
+ },
913
+ {
914
+ "epoch": 0.5378151260504201,
915
+ "eval_logits/chosen": -3.078125,
916
+ "eval_logits/rejected": -3.125,
917
+ "eval_logps/chosen": -348.0,
918
+ "eval_logps/rejected": -306.0,
919
+ "eval_loss": 0.6297190189361572,
920
+ "eval_rewards/accuracies": 0.7994791865348816,
921
+ "eval_rewards/chosen": -0.06494140625,
922
+ "eval_rewards/margins": 0.1650390625,
923
+ "eval_rewards/rejected": -0.2294921875,
924
+ "eval_runtime": 134.0434,
925
+ "eval_samples_per_second": 11.093,
926
+ "eval_steps_per_second": 0.179,
927
+ "step": 96
928
+ },
929
+ {
930
+ "epoch": 0.5490196078431373,
931
+ "eval_logits/chosen": -3.078125,
932
+ "eval_logits/rejected": -3.125,
933
+ "eval_logps/chosen": -348.0,
934
+ "eval_logps/rejected": -306.0,
935
+ "eval_loss": 0.6282486915588379,
936
+ "eval_rewards/accuracies": 0.7942708134651184,
937
+ "eval_rewards/chosen": -0.0654296875,
938
+ "eval_rewards/margins": 0.1689453125,
939
+ "eval_rewards/rejected": -0.234375,
940
+ "eval_runtime": 133.4702,
941
+ "eval_samples_per_second": 11.141,
942
+ "eval_steps_per_second": 0.18,
943
+ "step": 98
944
+ },
945
+ {
946
+ "epoch": 0.5602240896358543,
947
+ "grad_norm": 10.597297662849414,
948
+ "learning_rate": 4.803700921204658e-08,
949
+ "logits/chosen": -3.09375,
950
+ "logits/rejected": -3.109375,
951
+ "logps/chosen": -332.0,
952
+ "logps/rejected": -298.0,
953
+ "loss": 0.6281,
954
+ "rewards/accuracies": 0.7828124761581421,
955
+ "rewards/chosen": -0.06591796875,
956
+ "rewards/margins": 0.158203125,
957
+ "rewards/rejected": -0.2236328125,
958
+ "step": 100
959
+ },
960
+ {
961
+ "epoch": 0.5602240896358543,
962
+ "eval_logits/chosen": -3.078125,
963
+ "eval_logits/rejected": -3.125,
964
+ "eval_logps/chosen": -348.0,
965
+ "eval_logps/rejected": -308.0,
966
+ "eval_loss": 0.6264181137084961,
967
+ "eval_rewards/accuracies": 0.7916666865348816,
968
+ "eval_rewards/chosen": -0.0673828125,
969
+ "eval_rewards/margins": 0.173828125,
970
+ "eval_rewards/rejected": -0.2412109375,
971
+ "eval_runtime": 133.3945,
972
+ "eval_samples_per_second": 11.147,
973
+ "eval_steps_per_second": 0.18,
974
+ "step": 100
975
+ },
976
+ {
977
+ "epoch": 0.5714285714285714,
978
+ "eval_logits/chosen": -3.078125,
979
+ "eval_logits/rejected": -3.125,
980
+ "eval_logps/chosen": -348.0,
981
+ "eval_logps/rejected": -308.0,
982
+ "eval_loss": 0.6247581839561462,
983
+ "eval_rewards/accuracies": 0.7994791865348816,
984
+ "eval_rewards/chosen": -0.06591796875,
985
+ "eval_rewards/margins": 0.1787109375,
986
+ "eval_rewards/rejected": -0.2451171875,
987
+ "eval_runtime": 134.9509,
988
+ "eval_samples_per_second": 11.019,
989
+ "eval_steps_per_second": 0.178,
990
+ "step": 102
991
+ },
992
+ {
993
+ "epoch": 0.5826330532212886,
994
+ "eval_logits/chosen": -3.078125,
995
+ "eval_logits/rejected": -3.125,
996
+ "eval_logps/chosen": -348.0,
997
+ "eval_logps/rejected": -308.0,
998
+ "eval_loss": 0.6230900883674622,
999
+ "eval_rewards/accuracies": 0.8020833134651184,
1000
+ "eval_rewards/chosen": -0.0654296875,
1001
+ "eval_rewards/margins": 0.18359375,
1002
+ "eval_rewards/rejected": -0.25,
1003
+ "eval_runtime": 133.2495,
1004
+ "eval_samples_per_second": 11.16,
1005
+ "eval_steps_per_second": 0.18,
1006
+ "step": 104
1007
+ },
1008
+ {
1009
+ "epoch": 0.5938375350140056,
1010
+ "eval_logits/chosen": -3.078125,
1011
+ "eval_logits/rejected": -3.125,
1012
+ "eval_logps/chosen": -348.0,
1013
+ "eval_logps/rejected": -308.0,
1014
+ "eval_loss": 0.621644914150238,
1015
+ "eval_rewards/accuracies": 0.8020833134651184,
1016
+ "eval_rewards/chosen": -0.0654296875,
1017
+ "eval_rewards/margins": 0.1884765625,
1018
+ "eval_rewards/rejected": -0.25390625,
1019
+ "eval_runtime": 134.3837,
1020
+ "eval_samples_per_second": 11.065,
1021
+ "eval_steps_per_second": 0.179,
1022
+ "step": 106
1023
+ },
1024
+ {
1025
+ "epoch": 0.6050420168067226,
1026
+ "eval_logits/chosen": -3.078125,
1027
+ "eval_logits/rejected": -3.125,
1028
+ "eval_logps/chosen": -348.0,
1029
+ "eval_logps/rejected": -308.0,
1030
+ "eval_loss": 0.6203284859657288,
1031
+ "eval_rewards/accuracies": 0.8072916865348816,
1032
+ "eval_rewards/chosen": -0.064453125,
1033
+ "eval_rewards/margins": 0.19140625,
1034
+ "eval_rewards/rejected": -0.255859375,
1035
+ "eval_runtime": 133.785,
1036
+ "eval_samples_per_second": 11.115,
1037
+ "eval_steps_per_second": 0.179,
1038
+ "step": 108
1039
+ },
1040
+ {
1041
+ "epoch": 0.6162464985994398,
1042
+ "grad_norm": 7.480434655836269,
1043
+ "learning_rate": 3.8327731807204744e-08,
1044
+ "logits/chosen": -3.109375,
1045
+ "logits/rejected": -3.09375,
1046
+ "logps/chosen": -326.0,
1047
+ "logps/rejected": -306.0,
1048
+ "loss": 0.6191,
1049
+ "rewards/accuracies": 0.7734375,
1050
+ "rewards/chosen": -0.0673828125,
1051
+ "rewards/margins": 0.1845703125,
1052
+ "rewards/rejected": -0.251953125,
1053
+ "step": 110
1054
+ },
1055
+ {
1056
+ "epoch": 0.6162464985994398,
1057
+ "eval_logits/chosen": -3.078125,
1058
+ "eval_logits/rejected": -3.125,
1059
+ "eval_logps/chosen": -348.0,
1060
+ "eval_logps/rejected": -310.0,
1061
+ "eval_loss": 0.6189648509025574,
1062
+ "eval_rewards/accuracies": 0.8072916865348816,
1063
+ "eval_rewards/chosen": -0.06494140625,
1064
+ "eval_rewards/margins": 0.1953125,
1065
+ "eval_rewards/rejected": -0.259765625,
1066
+ "eval_runtime": 134.1423,
1067
+ "eval_samples_per_second": 11.085,
1068
+ "eval_steps_per_second": 0.179,
1069
+ "step": 110
1070
+ },
1071
+ {
1072
+ "epoch": 0.6274509803921569,
1073
+ "eval_logits/chosen": -3.078125,
1074
+ "eval_logits/rejected": -3.125,
1075
+ "eval_logps/chosen": -348.0,
1076
+ "eval_logps/rejected": -310.0,
1077
+ "eval_loss": 0.6177744269371033,
1078
+ "eval_rewards/accuracies": 0.8046875,
1079
+ "eval_rewards/chosen": -0.06982421875,
1080
+ "eval_rewards/margins": 0.19921875,
1081
+ "eval_rewards/rejected": -0.26953125,
1082
+ "eval_runtime": 133.5083,
1083
+ "eval_samples_per_second": 11.138,
1084
+ "eval_steps_per_second": 0.18,
1085
+ "step": 112
1086
+ },
1087
+ {
1088
+ "epoch": 0.6386554621848739,
1089
+ "eval_logits/chosen": -3.078125,
1090
+ "eval_logits/rejected": -3.125,
1091
+ "eval_logps/chosen": -350.0,
1092
+ "eval_logps/rejected": -312.0,
1093
+ "eval_loss": 0.616416335105896,
1094
+ "eval_rewards/accuracies": 0.7994791865348816,
1095
+ "eval_rewards/chosen": -0.07470703125,
1096
+ "eval_rewards/margins": 0.205078125,
1097
+ "eval_rewards/rejected": -0.279296875,
1098
+ "eval_runtime": 133.8287,
1099
+ "eval_samples_per_second": 11.111,
1100
+ "eval_steps_per_second": 0.179,
1101
+ "step": 114
1102
+ },
1103
+ {
1104
+ "epoch": 0.6498599439775911,
1105
+ "eval_logits/chosen": -3.078125,
1106
+ "eval_logits/rejected": -3.125,
1107
+ "eval_logps/chosen": -350.0,
1108
+ "eval_logps/rejected": -312.0,
1109
+ "eval_loss": 0.6151247620582581,
1110
+ "eval_rewards/accuracies": 0.796875,
1111
+ "eval_rewards/chosen": -0.0791015625,
1112
+ "eval_rewards/margins": 0.208984375,
1113
+ "eval_rewards/rejected": -0.287109375,
1114
+ "eval_runtime": 133.8728,
1115
+ "eval_samples_per_second": 11.108,
1116
+ "eval_steps_per_second": 0.179,
1117
+ "step": 116
1118
+ },
1119
+ {
1120
+ "epoch": 0.6610644257703081,
1121
+ "eval_logits/chosen": -3.078125,
1122
+ "eval_logits/rejected": -3.125,
1123
+ "eval_logps/chosen": -350.0,
1124
+ "eval_logps/rejected": -312.0,
1125
+ "eval_loss": 0.6137887239456177,
1126
+ "eval_rewards/accuracies": 0.7994791865348816,
1127
+ "eval_rewards/chosen": -0.08349609375,
1128
+ "eval_rewards/margins": 0.212890625,
1129
+ "eval_rewards/rejected": -0.294921875,
1130
+ "eval_runtime": 134.3374,
1131
+ "eval_samples_per_second": 11.069,
1132
+ "eval_steps_per_second": 0.179,
1133
+ "step": 118
1134
+ },
1135
+ {
1136
+ "epoch": 0.6722689075630253,
1137
+ "grad_norm": 6.779083992073795,
1138
+ "learning_rate": 2.906701312312861e-08,
1139
+ "logits/chosen": -3.09375,
1140
+ "logits/rejected": -3.109375,
1141
+ "logps/chosen": -346.0,
1142
+ "logps/rejected": -320.0,
1143
+ "loss": 0.6183,
1144
+ "rewards/accuracies": 0.778124988079071,
1145
+ "rewards/chosen": -0.07373046875,
1146
+ "rewards/margins": 0.2001953125,
1147
+ "rewards/rejected": -0.2734375,
1148
+ "step": 120
1149
+ },
1150
+ {
1151
+ "epoch": 0.6722689075630253,
1152
+ "eval_logits/chosen": -3.078125,
1153
+ "eval_logits/rejected": -3.125,
1154
+ "eval_logps/chosen": -350.0,
1155
+ "eval_logps/rejected": -314.0,
1156
+ "eval_loss": 0.6126104593276978,
1157
+ "eval_rewards/accuracies": 0.7994791865348816,
1158
+ "eval_rewards/chosen": -0.0869140625,
1159
+ "eval_rewards/margins": 0.216796875,
1160
+ "eval_rewards/rejected": -0.3046875,
1161
+ "eval_runtime": 134.5655,
1162
+ "eval_samples_per_second": 11.05,
1163
+ "eval_steps_per_second": 0.178,
1164
+ "step": 120
1165
+ },
1166
+ {
1167
+ "epoch": 0.6834733893557423,
1168
+ "eval_logits/chosen": -3.078125,
1169
+ "eval_logits/rejected": -3.125,
1170
+ "eval_logps/chosen": -350.0,
1171
+ "eval_logps/rejected": -314.0,
1172
+ "eval_loss": 0.6114761233329773,
1173
+ "eval_rewards/accuracies": 0.796875,
1174
+ "eval_rewards/chosen": -0.087890625,
1175
+ "eval_rewards/margins": 0.220703125,
1176
+ "eval_rewards/rejected": -0.30859375,
1177
+ "eval_runtime": 134.1574,
1178
+ "eval_samples_per_second": 11.084,
1179
+ "eval_steps_per_second": 0.179,
1180
+ "step": 122
1181
+ },
1182
+ {
1183
+ "epoch": 0.6946778711484594,
1184
+ "eval_logits/chosen": -3.078125,
1185
+ "eval_logits/rejected": -3.125,
1186
+ "eval_logps/chosen": -350.0,
1187
+ "eval_logps/rejected": -314.0,
1188
+ "eval_loss": 0.6107271313667297,
1189
+ "eval_rewards/accuracies": 0.7994791865348816,
1190
+ "eval_rewards/chosen": -0.08837890625,
1191
+ "eval_rewards/margins": 0.2236328125,
1192
+ "eval_rewards/rejected": -0.3125,
1193
+ "eval_runtime": 134.2971,
1194
+ "eval_samples_per_second": 11.072,
1195
+ "eval_steps_per_second": 0.179,
1196
+ "step": 124
1197
+ },
1198
+ {
1199
+ "epoch": 0.7058823529411765,
1200
+ "eval_logits/chosen": -3.078125,
1201
+ "eval_logits/rejected": -3.125,
1202
+ "eval_logps/chosen": -350.0,
1203
+ "eval_logps/rejected": -314.0,
1204
+ "eval_loss": 0.6096285581588745,
1205
+ "eval_rewards/accuracies": 0.7942708134651184,
1206
+ "eval_rewards/chosen": -0.0869140625,
1207
+ "eval_rewards/margins": 0.2265625,
1208
+ "eval_rewards/rejected": -0.314453125,
1209
+ "eval_runtime": 135.0298,
1210
+ "eval_samples_per_second": 11.012,
1211
+ "eval_steps_per_second": 0.178,
1212
+ "step": 126
1213
+ },
1214
+ {
1215
+ "epoch": 0.7170868347338936,
1216
+ "eval_logits/chosen": -3.078125,
1217
+ "eval_logits/rejected": -3.125,
1218
+ "eval_logps/chosen": -350.0,
1219
+ "eval_logps/rejected": -314.0,
1220
+ "eval_loss": 0.6089996099472046,
1221
+ "eval_rewards/accuracies": 0.7942708134651184,
1222
+ "eval_rewards/chosen": -0.0859375,
1223
+ "eval_rewards/margins": 0.2275390625,
1224
+ "eval_rewards/rejected": -0.314453125,
1225
+ "eval_runtime": 134.1686,
1226
+ "eval_samples_per_second": 11.083,
1227
+ "eval_steps_per_second": 0.179,
1228
+ "step": 128
1229
+ },
1230
+ {
1231
+ "epoch": 0.7282913165266106,
1232
+ "grad_norm": 7.520918890430443,
1233
+ "learning_rate": 2.0610737385376347e-08,
1234
+ "logits/chosen": -3.109375,
1235
+ "logits/rejected": -3.125,
1236
+ "logps/chosen": -328.0,
1237
+ "logps/rejected": -308.0,
1238
+ "loss": 0.606,
1239
+ "rewards/accuracies": 0.7906249761581421,
1240
+ "rewards/chosen": -0.0810546875,
1241
+ "rewards/margins": 0.2265625,
1242
+ "rewards/rejected": -0.30859375,
1243
+ "step": 130
1244
+ },
1245
+ {
1246
+ "epoch": 0.7282913165266106,
1247
+ "eval_logits/chosen": -3.078125,
1248
+ "eval_logits/rejected": -3.125,
1249
+ "eval_logps/chosen": -350.0,
1250
+ "eval_logps/rejected": -314.0,
1251
+ "eval_loss": 0.6081994771957397,
1252
+ "eval_rewards/accuracies": 0.796875,
1253
+ "eval_rewards/chosen": -0.08642578125,
1254
+ "eval_rewards/margins": 0.23046875,
1255
+ "eval_rewards/rejected": -0.31640625,
1256
+ "eval_runtime": 134.4089,
1257
+ "eval_samples_per_second": 11.063,
1258
+ "eval_steps_per_second": 0.179,
1259
+ "step": 130
1260
+ },
1261
+ {
1262
+ "epoch": 0.7394957983193278,
1263
+ "eval_logits/chosen": -3.078125,
1264
+ "eval_logits/rejected": -3.125,
1265
+ "eval_logps/chosen": -350.0,
1266
+ "eval_logps/rejected": -316.0,
1267
+ "eval_loss": 0.6074265241622925,
1268
+ "eval_rewards/accuracies": 0.7942708134651184,
1269
+ "eval_rewards/chosen": -0.08740234375,
1270
+ "eval_rewards/margins": 0.2333984375,
1271
+ "eval_rewards/rejected": -0.3203125,
1272
+ "eval_runtime": 133.9872,
1273
+ "eval_samples_per_second": 11.098,
1274
+ "eval_steps_per_second": 0.179,
1275
+ "step": 132
1276
+ },
1277
+ {
1278
+ "epoch": 0.7507002801120448,
1279
+ "eval_logits/chosen": -3.078125,
1280
+ "eval_logits/rejected": -3.125,
1281
+ "eval_logps/chosen": -350.0,
1282
+ "eval_logps/rejected": -316.0,
1283
+ "eval_loss": 0.6068228483200073,
1284
+ "eval_rewards/accuracies": 0.7994791865348816,
1285
+ "eval_rewards/chosen": -0.08837890625,
1286
+ "eval_rewards/margins": 0.2353515625,
1287
+ "eval_rewards/rejected": -0.322265625,
1288
+ "eval_runtime": 133.6241,
1289
+ "eval_samples_per_second": 11.128,
1290
+ "eval_steps_per_second": 0.18,
1291
+ "step": 134
1292
+ },
1293
+ {
1294
+ "epoch": 0.7619047619047619,
1295
+ "eval_logits/chosen": -3.078125,
1296
+ "eval_logits/rejected": -3.125,
1297
+ "eval_logps/chosen": -350.0,
1298
+ "eval_logps/rejected": -316.0,
1299
+ "eval_loss": 0.6062489748001099,
1300
+ "eval_rewards/accuracies": 0.7890625,
1301
+ "eval_rewards/chosen": -0.087890625,
1302
+ "eval_rewards/margins": 0.2353515625,
1303
+ "eval_rewards/rejected": -0.32421875,
1304
+ "eval_runtime": 133.7733,
1305
+ "eval_samples_per_second": 11.116,
1306
+ "eval_steps_per_second": 0.179,
1307
+ "step": 136
1308
+ },
1309
+ {
1310
+ "epoch": 0.773109243697479,
1311
+ "eval_logits/chosen": -3.078125,
1312
+ "eval_logits/rejected": -3.125,
1313
+ "eval_logps/chosen": -350.0,
1314
+ "eval_logps/rejected": -316.0,
1315
+ "eval_loss": 0.6058236360549927,
1316
+ "eval_rewards/accuracies": 0.7916666865348816,
1317
+ "eval_rewards/chosen": -0.0859375,
1318
+ "eval_rewards/margins": 0.23828125,
1319
+ "eval_rewards/rejected": -0.32421875,
1320
+ "eval_runtime": 132.8269,
1321
+ "eval_samples_per_second": 11.195,
1322
+ "eval_steps_per_second": 0.181,
1323
+ "step": 138
1324
+ },
1325
+ {
1326
+ "epoch": 0.7843137254901961,
1327
+ "grad_norm": 6.700409826395709,
1328
+ "learning_rate": 1.3283874528215733e-08,
1329
+ "logits/chosen": -3.109375,
1330
+ "logits/rejected": -3.09375,
1331
+ "logps/chosen": -346.0,
1332
+ "logps/rejected": -318.0,
1333
+ "loss": 0.6045,
1334
+ "rewards/accuracies": 0.8062499761581421,
1335
+ "rewards/chosen": -0.0751953125,
1336
+ "rewards/margins": 0.251953125,
1337
+ "rewards/rejected": -0.326171875,
1338
+ "step": 140
1339
+ },
1340
+ {
1341
+ "epoch": 0.7843137254901961,
1342
+ "eval_logits/chosen": -3.078125,
1343
+ "eval_logits/rejected": -3.125,
1344
+ "eval_logps/chosen": -350.0,
1345
+ "eval_logps/rejected": -316.0,
1346
+ "eval_loss": 0.6051818132400513,
1347
+ "eval_rewards/accuracies": 0.7890625,
1348
+ "eval_rewards/chosen": -0.0849609375,
1349
+ "eval_rewards/margins": 0.240234375,
1350
+ "eval_rewards/rejected": -0.326171875,
1351
+ "eval_runtime": 134.1619,
1352
+ "eval_samples_per_second": 11.084,
1353
+ "eval_steps_per_second": 0.179,
1354
+ "step": 140
1355
+ },
1356
+ {
1357
+ "epoch": 0.7955182072829131,
1358
+ "eval_logits/chosen": -3.078125,
1359
+ "eval_logits/rejected": -3.125,
1360
+ "eval_logps/chosen": -350.0,
1361
+ "eval_logps/rejected": -316.0,
1362
+ "eval_loss": 0.604813277721405,
1363
+ "eval_rewards/accuracies": 0.7994791865348816,
1364
+ "eval_rewards/chosen": -0.08447265625,
1365
+ "eval_rewards/margins": 0.2412109375,
1366
+ "eval_rewards/rejected": -0.326171875,
1367
+ "eval_runtime": 134.0921,
1368
+ "eval_samples_per_second": 11.089,
1369
+ "eval_steps_per_second": 0.179,
1370
+ "step": 142
1371
+ },
1372
+ {
1373
+ "epoch": 0.8067226890756303,
1374
+ "eval_logits/chosen": -3.078125,
1375
+ "eval_logits/rejected": -3.125,
1376
+ "eval_logps/chosen": -350.0,
1377
+ "eval_logps/rejected": -316.0,
1378
+ "eval_loss": 0.6045451164245605,
1379
+ "eval_rewards/accuracies": 0.796875,
1380
+ "eval_rewards/chosen": -0.08251953125,
1381
+ "eval_rewards/margins": 0.2421875,
1382
+ "eval_rewards/rejected": -0.32421875,
1383
+ "eval_runtime": 133.3791,
1384
+ "eval_samples_per_second": 11.149,
1385
+ "eval_steps_per_second": 0.18,
1386
+ "step": 144
1387
+ },
1388
+ {
1389
+ "epoch": 0.8179271708683473,
1390
+ "eval_logits/chosen": -3.078125,
1391
+ "eval_logits/rejected": -3.125,
1392
+ "eval_logps/chosen": -350.0,
1393
+ "eval_logps/rejected": -316.0,
1394
+ "eval_loss": 0.604152262210846,
1395
+ "eval_rewards/accuracies": 0.796875,
1396
+ "eval_rewards/chosen": -0.0830078125,
1397
+ "eval_rewards/margins": 0.2431640625,
1398
+ "eval_rewards/rejected": -0.326171875,
1399
+ "eval_runtime": 136.215,
1400
+ "eval_samples_per_second": 10.917,
1401
+ "eval_steps_per_second": 0.176,
1402
+ "step": 146
1403
+ },
1404
+ {
1405
+ "epoch": 0.8291316526610645,
1406
+ "eval_logits/chosen": -3.078125,
1407
+ "eval_logits/rejected": -3.125,
1408
+ "eval_logps/chosen": -350.0,
1409
+ "eval_logps/rejected": -316.0,
1410
+ "eval_loss": 0.6041899919509888,
1411
+ "eval_rewards/accuracies": 0.796875,
1412
+ "eval_rewards/chosen": -0.08349609375,
1413
+ "eval_rewards/margins": 0.2421875,
1414
+ "eval_rewards/rejected": -0.326171875,
1415
+ "eval_runtime": 133.2436,
1416
+ "eval_samples_per_second": 11.16,
1417
+ "eval_steps_per_second": 0.18,
1418
+ "step": 148
1419
+ },
1420
+ {
1421
+ "epoch": 0.8403361344537815,
1422
+ "grad_norm": 7.912383511701059,
1423
+ "learning_rate": 7.367991782295391e-09,
1424
+ "logits/chosen": -3.09375,
1425
+ "logits/rejected": -3.0625,
1426
+ "logps/chosen": -342.0,
1427
+ "logps/rejected": -318.0,
1428
+ "loss": 0.602,
1429
+ "rewards/accuracies": 0.7953125238418579,
1430
+ "rewards/chosen": -0.07470703125,
1431
+ "rewards/margins": 0.234375,
1432
+ "rewards/rejected": -0.30859375,
1433
+ "step": 150
1434
+ },
1435
+ {
1436
+ "epoch": 0.8403361344537815,
1437
+ "eval_logits/chosen": -3.078125,
1438
+ "eval_logits/rejected": -3.125,
1439
+ "eval_logps/chosen": -350.0,
1440
+ "eval_logps/rejected": -316.0,
1441
+ "eval_loss": 0.6036604642868042,
1442
+ "eval_rewards/accuracies": 0.796875,
1443
+ "eval_rewards/chosen": -0.08203125,
1444
+ "eval_rewards/margins": 0.2451171875,
1445
+ "eval_rewards/rejected": -0.328125,
1446
+ "eval_runtime": 133.8914,
1447
+ "eval_samples_per_second": 11.106,
1448
+ "eval_steps_per_second": 0.179,
1449
+ "step": 150
1450
+ },
1451
+ {
1452
+ "epoch": 0.8515406162464986,
1453
+ "eval_logits/chosen": -3.078125,
1454
+ "eval_logits/rejected": -3.125,
1455
+ "eval_logps/chosen": -350.0,
1456
+ "eval_logps/rejected": -316.0,
1457
+ "eval_loss": 0.6033771634101868,
1458
+ "eval_rewards/accuracies": 0.7916666865348816,
1459
+ "eval_rewards/chosen": -0.08349609375,
1460
+ "eval_rewards/margins": 0.24609375,
1461
+ "eval_rewards/rejected": -0.328125,
1462
+ "eval_runtime": 133.2707,
1463
+ "eval_samples_per_second": 11.158,
1464
+ "eval_steps_per_second": 0.18,
1465
+ "step": 152
1466
+ },
1467
+ {
1468
+ "epoch": 0.8627450980392157,
1469
+ "eval_logits/chosen": -3.078125,
1470
+ "eval_logits/rejected": -3.125,
1471
+ "eval_logps/chosen": -350.0,
1472
+ "eval_logps/rejected": -316.0,
1473
+ "eval_loss": 0.60316401720047,
1474
+ "eval_rewards/accuracies": 0.7864583134651184,
1475
+ "eval_rewards/chosen": -0.0849609375,
1476
+ "eval_rewards/margins": 0.2470703125,
1477
+ "eval_rewards/rejected": -0.33203125,
1478
+ "eval_runtime": 132.7316,
1479
+ "eval_samples_per_second": 11.203,
1480
+ "eval_steps_per_second": 0.181,
1481
+ "step": 154
1482
+ },
1483
+ {
1484
+ "epoch": 0.8739495798319328,
1485
+ "eval_logits/chosen": -3.078125,
1486
+ "eval_logits/rejected": -3.125,
1487
+ "eval_logps/chosen": -350.0,
1488
+ "eval_logps/rejected": -316.0,
1489
+ "eval_loss": 0.6029848456382751,
1490
+ "eval_rewards/accuracies": 0.7942708134651184,
1491
+ "eval_rewards/chosen": -0.08740234375,
1492
+ "eval_rewards/margins": 0.24609375,
1493
+ "eval_rewards/rejected": -0.333984375,
1494
+ "eval_runtime": 134.4869,
1495
+ "eval_samples_per_second": 11.057,
1496
+ "eval_steps_per_second": 0.178,
1497
+ "step": 156
1498
+ },
1499
+ {
1500
+ "epoch": 0.8851540616246498,
1501
+ "eval_logits/chosen": -3.078125,
1502
+ "eval_logits/rejected": -3.125,
1503
+ "eval_logps/chosen": -350.0,
1504
+ "eval_logps/rejected": -316.0,
1505
+ "eval_loss": 0.6026829481124878,
1506
+ "eval_rewards/accuracies": 0.7942708134651184,
1507
+ "eval_rewards/chosen": -0.08837890625,
1508
+ "eval_rewards/margins": 0.248046875,
1509
+ "eval_rewards/rejected": -0.3359375,
1510
+ "eval_runtime": 134.2812,
1511
+ "eval_samples_per_second": 11.074,
1512
+ "eval_steps_per_second": 0.179,
1513
+ "step": 158
1514
+ },
1515
+ {
1516
+ "epoch": 0.896358543417367,
1517
+ "grad_norm": 7.847752860433035,
1518
+ "learning_rate": 3.0904332038757976e-09,
1519
+ "logits/chosen": -3.109375,
1520
+ "logits/rejected": -3.109375,
1521
+ "logps/chosen": -340.0,
1522
+ "logps/rejected": -308.0,
1523
+ "loss": 0.6021,
1524
+ "rewards/accuracies": 0.760937511920929,
1525
+ "rewards/chosen": -0.08251953125,
1526
+ "rewards/margins": 0.2294921875,
1527
+ "rewards/rejected": -0.3125,
1528
+ "step": 160
1529
+ },
1530
+ {
1531
+ "epoch": 0.896358543417367,
1532
+ "eval_logits/chosen": -3.078125,
1533
+ "eval_logits/rejected": -3.125,
1534
+ "eval_logps/chosen": -350.0,
1535
+ "eval_logps/rejected": -316.0,
1536
+ "eval_loss": 0.6027438640594482,
1537
+ "eval_rewards/accuracies": 0.7942708134651184,
1538
+ "eval_rewards/chosen": -0.087890625,
1539
+ "eval_rewards/margins": 0.2490234375,
1540
+ "eval_rewards/rejected": -0.3359375,
1541
+ "eval_runtime": 133.411,
1542
+ "eval_samples_per_second": 11.146,
1543
+ "eval_steps_per_second": 0.18,
1544
+ "step": 160
1545
+ },
1546
+ {
1547
+ "epoch": 0.907563025210084,
1548
+ "eval_logits/chosen": -3.078125,
1549
+ "eval_logits/rejected": -3.125,
1550
+ "eval_logps/chosen": -350.0,
1551
+ "eval_logps/rejected": -316.0,
1552
+ "eval_loss": 0.6028435826301575,
1553
+ "eval_rewards/accuracies": 0.796875,
1554
+ "eval_rewards/chosen": -0.08837890625,
1555
+ "eval_rewards/margins": 0.248046875,
1556
+ "eval_rewards/rejected": -0.3359375,
1557
+ "eval_runtime": 133.846,
1558
+ "eval_samples_per_second": 11.11,
1559
+ "eval_steps_per_second": 0.179,
1560
+ "step": 162
1561
+ },
1562
+ {
1563
+ "epoch": 0.9187675070028011,
1564
+ "eval_logits/chosen": -3.078125,
1565
+ "eval_logits/rejected": -3.125,
1566
+ "eval_logps/chosen": -350.0,
1567
+ "eval_logps/rejected": -318.0,
1568
+ "eval_loss": 0.6026445627212524,
1569
+ "eval_rewards/accuracies": 0.796875,
1570
+ "eval_rewards/chosen": -0.08837890625,
1571
+ "eval_rewards/margins": 0.2490234375,
1572
+ "eval_rewards/rejected": -0.337890625,
1573
+ "eval_runtime": 133.4677,
1574
+ "eval_samples_per_second": 11.141,
1575
+ "eval_steps_per_second": 0.18,
1576
+ "step": 164
1577
+ },
1578
+ {
1579
+ "epoch": 0.9299719887955182,
1580
+ "eval_logits/chosen": -3.078125,
1581
+ "eval_logits/rejected": -3.125,
1582
+ "eval_logps/chosen": -350.0,
1583
+ "eval_logps/rejected": -316.0,
1584
+ "eval_loss": 0.6026737093925476,
1585
+ "eval_rewards/accuracies": 0.796875,
1586
+ "eval_rewards/chosen": -0.0888671875,
1587
+ "eval_rewards/margins": 0.2490234375,
1588
+ "eval_rewards/rejected": -0.337890625,
1589
+ "eval_runtime": 134.151,
1590
+ "eval_samples_per_second": 11.085,
1591
+ "eval_steps_per_second": 0.179,
1592
+ "step": 166
1593
+ },
1594
+ {
1595
+ "epoch": 0.9411764705882353,
1596
+ "eval_logits/chosen": -3.078125,
1597
+ "eval_logits/rejected": -3.125,
1598
+ "eval_logps/chosen": -350.0,
1599
+ "eval_logps/rejected": -318.0,
1600
+ "eval_loss": 0.6025987267494202,
1601
+ "eval_rewards/accuracies": 0.7942708134651184,
1602
+ "eval_rewards/chosen": -0.0888671875,
1603
+ "eval_rewards/margins": 0.2490234375,
1604
+ "eval_rewards/rejected": -0.337890625,
1605
+ "eval_runtime": 135.6171,
1606
+ "eval_samples_per_second": 10.965,
1607
+ "eval_steps_per_second": 0.177,
1608
+ "step": 168
1609
+ },
1610
+ {
1611
+ "epoch": 0.9523809523809523,
1612
+ "grad_norm": 6.811460693897883,
1613
+ "learning_rate": 6.15582970243117e-10,
1614
+ "logits/chosen": -3.09375,
1615
+ "logits/rejected": -3.125,
1616
+ "logps/chosen": -336.0,
1617
+ "logps/rejected": -312.0,
1618
+ "loss": 0.5974,
1619
+ "rewards/accuracies": 0.807812511920929,
1620
+ "rewards/chosen": -0.083984375,
1621
+ "rewards/margins": 0.26953125,
1622
+ "rewards/rejected": -0.353515625,
1623
+ "step": 170
1624
+ },
1625
+ {
1626
+ "epoch": 0.9523809523809523,
1627
+ "eval_logits/chosen": -3.078125,
1628
+ "eval_logits/rejected": -3.125,
1629
+ "eval_logps/chosen": -350.0,
1630
+ "eval_logps/rejected": -318.0,
1631
+ "eval_loss": 0.6025583148002625,
1632
+ "eval_rewards/accuracies": 0.7916666865348816,
1633
+ "eval_rewards/chosen": -0.08935546875,
1634
+ "eval_rewards/margins": 0.248046875,
1635
+ "eval_rewards/rejected": -0.337890625,
1636
+ "eval_runtime": 133.2423,
1637
+ "eval_samples_per_second": 11.16,
1638
+ "eval_steps_per_second": 0.18,
1639
+ "step": 170
1640
+ },
1641
+ {
1642
+ "epoch": 0.9635854341736695,
1643
+ "eval_logits/chosen": -3.078125,
1644
+ "eval_logits/rejected": -3.125,
1645
+ "eval_logps/chosen": -350.0,
1646
+ "eval_logps/rejected": -318.0,
1647
+ "eval_loss": 0.6024807691574097,
1648
+ "eval_rewards/accuracies": 0.7916666865348816,
1649
+ "eval_rewards/chosen": -0.08837890625,
1650
+ "eval_rewards/margins": 0.25,
1651
+ "eval_rewards/rejected": -0.337890625,
1652
+ "eval_runtime": 134.1414,
1653
+ "eval_samples_per_second": 11.085,
1654
+ "eval_steps_per_second": 0.179,
1655
+ "step": 172
1656
+ },
1657
+ {
1658
+ "epoch": 0.9747899159663865,
1659
+ "eval_logits/chosen": -3.078125,
1660
+ "eval_logits/rejected": -3.125,
1661
+ "eval_logps/chosen": -350.0,
1662
+ "eval_logps/rejected": -318.0,
1663
+ "eval_loss": 0.602593183517456,
1664
+ "eval_rewards/accuracies": 0.7942708134651184,
1665
+ "eval_rewards/chosen": -0.08935546875,
1666
+ "eval_rewards/margins": 0.2490234375,
1667
+ "eval_rewards/rejected": -0.337890625,
1668
+ "eval_runtime": 133.5195,
1669
+ "eval_samples_per_second": 11.137,
1670
+ "eval_steps_per_second": 0.18,
1671
+ "step": 174
1672
+ },
1673
+ {
1674
+ "epoch": 0.9859943977591037,
1675
+ "eval_logits/chosen": -3.078125,
1676
+ "eval_logits/rejected": -3.125,
1677
+ "eval_logps/chosen": -350.0,
1678
+ "eval_logps/rejected": -316.0,
1679
+ "eval_loss": 0.6026169061660767,
1680
+ "eval_rewards/accuracies": 0.796875,
1681
+ "eval_rewards/chosen": -0.08984375,
1682
+ "eval_rewards/margins": 0.248046875,
1683
+ "eval_rewards/rejected": -0.337890625,
1684
+ "eval_runtime": 134.5308,
1685
+ "eval_samples_per_second": 11.053,
1686
+ "eval_steps_per_second": 0.178,
1687
+ "step": 176
1688
+ },
1689
+ {
1690
+ "epoch": 0.9971988795518207,
1691
+ "eval_logits/chosen": -3.078125,
1692
+ "eval_logits/rejected": -3.125,
1693
+ "eval_logps/chosen": -350.0,
1694
+ "eval_logps/rejected": -316.0,
1695
+ "eval_loss": 0.6027360558509827,
1696
+ "eval_rewards/accuracies": 0.7916666865348816,
1697
+ "eval_rewards/chosen": -0.0888671875,
1698
+ "eval_rewards/margins": 0.248046875,
1699
+ "eval_rewards/rejected": -0.337890625,
1700
+ "eval_runtime": 134.9547,
1701
+ "eval_samples_per_second": 11.019,
1702
+ "eval_steps_per_second": 0.178,
1703
+ "step": 178
1704
+ },
1705
+ {
1706
+ "epoch": 0.9971988795518207,
1707
+ "step": 178,
1708
+ "total_flos": 0.0,
1709
+ "train_loss": 0.6403918319873596,
1710
+ "train_runtime": 36690.4382,
1711
+ "train_samples_per_second": 1.243,
1712
+ "train_steps_per_second": 0.005
1713
+ }
1714
+ ],
1715
+ "logging_steps": 10,
1716
+ "max_steps": 178,
1717
+ "num_input_tokens_seen": 0,
1718
+ "num_train_epochs": 1,
1719
+ "save_steps": 2,
1720
+ "stateful_callbacks": {
1721
+ "TrainerControl": {
1722
+ "args": {
1723
+ "should_epoch_stop": false,
1724
+ "should_evaluate": false,
1725
+ "should_log": false,
1726
+ "should_save": true,
1727
+ "should_training_stop": true
1728
+ },
1729
+ "attributes": {}
1730
+ }
1731
+ },
1732
+ "total_flos": 0.0,
1733
+ "train_batch_size": 32,
1734
+ "trial_name": null,
1735
+ "trial_params": null
1736
+ }
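
The entries above are the tail of the Trainer's `trainer_state.json` log history for this run. A minimal sketch for inspecting the eval trajectory instead of scanning the raw JSON (it assumes the file has been downloaded locally; the `trainer_state.json` path and the last-five slice are illustrative, not part of this repository):

```python
import json

# Load the trainer state saved alongside the checkpoint (local path is an assumption).
with open("trainer_state.json") as f:
    state = json.load(f)

# log_history holds both training and evaluation records; keep the eval ones.
eval_logs = [entry for entry in state["log_history"] if "eval_loss" in entry]

# Print the last few evaluations (loss, reward margin, preference accuracy).
for entry in eval_logs[-5:]:
    print(
        f"step {entry['step']:>3} | "
        f"eval_loss {entry['eval_loss']:.4f} | "
        f"reward margin {entry['eval_rewards/margins']:.4f} | "
        f"accuracy {entry['eval_rewards/accuracies']:.3f}"
    )
```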