geoffmunn committed
Commit cf9151a · verified · 1 Parent(s): 7624171

Delete checkpoint-129
checkpoint-129/README.md DELETED
@@ -1,206 +0,0 @@
- ---
- base_model: Qwen/Qwen3-4B
- library_name: peft
- tags:
- - base_model:adapter:Qwen/Qwen3-4B
- - lora
- - transformers
- ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
- ### Framework versions
-
- - PEFT 0.18.0
 
checkpoint-129/adapter_config.json DELETED
@@ -1,49 +0,0 @@
- {
- "alora_invocation_tokens": null,
- "alpha_pattern": {},
- "arrow_config": null,
- "auto_mapping": null,
- "base_model_name_or_path": "Qwen/Qwen3-4B",
- "bias": "none",
- "corda_config": null,
- "ensure_weight_tying": false,
- "eva_config": null,
- "exclude_modules": null,
- "fan_in_fan_out": false,
- "inference_mode": true,
- "init_lora_weights": true,
- "layer_replication": null,
- "layers_pattern": null,
- "layers_to_transform": null,
- "loftq_config": {},
- "lora_alpha": 32,
- "lora_bias": false,
- "lora_dropout": 0.05,
- "megatron_config": null,
- "megatron_core": "megatron.core",
- "modules_to_save": [
- "classifier",
- "score"
- ],
- "peft_type": "LORA",
- "peft_version": "0.18.0",
- "qalora_group_size": 16,
- "r": 16,
- "rank_pattern": {},
- "revision": null,
- "target_modules": [
- "down_proj",
- "gate_proj",
- "o_proj",
- "q_proj",
- "k_proj",
- "up_proj",
- "v_proj"
- ],
- "target_parameters": null,
- "task_type": "SEQ_CLS",
- "trainable_token_indices": null,
- "use_dora": false,
- "use_qalora": false,
- "use_rslora": false
- }
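The deleted adapter_config.json describes a rank-16 LoRA adapter over all attention and MLP projections of Qwen/Qwen3-4B, trained for sequence classification (SEQ_CLS) with the classifier head kept trainable. As a minimal sketch (the helper name and the trimmed config excerpt below are ours, not part of the commit), a few lines of standard-library Python can summarize such a config:

```python
import json

# Trimmed excerpt of the deleted adapter_config.json (illustrative only).
config_text = """
{
  "base_model_name_or_path": "Qwen/Qwen3-4B",
  "peft_type": "LORA",
  "task_type": "SEQ_CLS",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["down_proj", "gate_proj", "o_proj", "q_proj",
                     "k_proj", "up_proj", "v_proj"],
  "modules_to_save": ["classifier", "score"]
}
"""

def summarize_adapter(text: str) -> str:
    """Return a one-line summary of a PEFT LoRA adapter config."""
    cfg = json.loads(text)
    # In LoRA, updates are scaled by alpha / r before being added to the
    # frozen base weights.
    scaling = cfg["lora_alpha"] / cfg["r"]
    return (f"{cfg['peft_type']} r={cfg['r']} alpha={cfg['lora_alpha']} "
            f"(scaling={scaling:g}) on {len(cfg['target_modules'])} module types, "
            f"task={cfg['task_type']}")

print(summarize_adapter(config_text))
# → LORA r=16 alpha=32 (scaling=2) on 7 module types, task=SEQ_CLS
```

Targeting all seven projection types (rather than only q_proj/v_proj) is a common choice when adapting a base LM to a new task head.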
 
checkpoint-129/adapter_model.safetensors DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4bb6a4bc27665aa0065fee12f3889ba47e9242f223a2b57f2b7f2855b271becb
- size 132198232
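Large binaries such as adapter_model.safetensors are stored as Git LFS pointer files: three key/value lines giving the spec version, a sha256 object id, and the object's byte size. A minimal parser (the function name is ours; the pointer text is the one deleted above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:4bb6a4bc27665aa0065fee12f3889ba47e9242f223a2b57f2b7f2855b271becb
size 132198232
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # byte size of the real object (~126 MiB here)
print(info["oid"])   # "sha256:" followed by the content hash
```

Deleting the pointer from the repository (as this commit does) removes the file from the tree; the LFS object itself is garbage-collected separately by the hosting platform.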
 
checkpoint-129/added_tokens.json DELETED
@@ -1,28 +0,0 @@
- {
- "</think>": 151668,
- "</tool_call>": 151658,
- "</tool_response>": 151666,
- "<think>": 151667,
- "<tool_call>": 151657,
- "<tool_response>": 151665,
- "<|box_end|>": 151649,
- "<|box_start|>": 151648,
- "<|endoftext|>": 151643,
- "<|file_sep|>": 151664,
- "<|fim_middle|>": 151660,
- "<|fim_pad|>": 151662,
- "<|fim_prefix|>": 151659,
- "<|fim_suffix|>": 151661,
- "<|im_end|>": 151645,
- "<|im_start|>": 151644,
- "<|image_pad|>": 151655,
- "<|object_ref_end|>": 151647,
- "<|object_ref_start|>": 151646,
- "<|quad_end|>": 151651,
- "<|quad_start|>": 151650,
- "<|repo_name|>": 151663,
- "<|video_pad|>": 151656,
- "<|vision_end|>": 151653,
- "<|vision_pad|>": 151654,
- "<|vision_start|>": 151652
- }
 
checkpoint-129/chat_template.jinja DELETED
@@ -1,89 +0,0 @@
- {%- if tools %}
- {{- '<|im_start|>system\n' }}
- {%- if messages[0].role == 'system' %}
- {{- messages[0].content + '\n\n' }}
- {%- endif %}
- {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
- {%- for tool in tools %}
- {{- "\n" }}
- {{- tool | tojson }}
- {%- endfor %}
- {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
- {%- else %}
- {%- if messages[0].role == 'system' %}
- {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
- {%- for message in messages[::-1] %}
- {%- set index = (messages|length - 1) - loop.index0 %}
- {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
- {%- set ns.multi_step_tool = false %}
- {%- set ns.last_query_index = index %}
- {%- endif %}
- {%- endfor %}
- {%- for message in messages %}
- {%- if message.content is string %}
- {%- set content = message.content %}
- {%- else %}
- {%- set content = '' %}
- {%- endif %}
- {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
- {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
- {%- elif message.role == "assistant" %}
- {%- set reasoning_content = '' %}
- {%- if message.reasoning_content is string %}
- {%- set reasoning_content = message.reasoning_content %}
- {%- else %}
- {%- if '</think>' in content %}
- {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
- {%- set content = content.split('</think>')[-1].lstrip('\n') %}
- {%- endif %}
- {%- endif %}
- {%- if loop.index0 > ns.last_query_index %}
- {%- if loop.last or (not loop.last and reasoning_content) %}
- {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
- {%- else %}
- {{- '<|im_start|>' + message.role + '\n' + content }}
- {%- endif %}
- {%- else %}
- {{- '<|im_start|>' + message.role + '\n' + content }}
- {%- endif %}
- {%- if message.tool_calls %}
- {%- for tool_call in message.tool_calls %}
- {%- if (loop.first and content) or (not loop.first) %}
- {{- '\n' }}
- {%- endif %}
- {%- if tool_call.function %}
- {%- set tool_call = tool_call.function %}
- {%- endif %}
- {{- '<tool_call>\n{"name": "' }}
- {{- tool_call.name }}
- {{- '", "arguments": ' }}
- {%- if tool_call.arguments is string %}
- {{- tool_call.arguments }}
- {%- else %}
- {{- tool_call.arguments | tojson }}
- {%- endif %}
- {{- '}\n</tool_call>' }}
- {%- endfor %}
- {%- endif %}
- {{- '<|im_end|>\n' }}
- {%- elif message.role == "tool" %}
- {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
- {{- '<|im_start|>user' }}
- {%- endif %}
- {{- '\n<tool_response>\n' }}
- {{- content }}
- {{- '\n</tool_response>' }}
- {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
- {{- '<|im_end|>\n' }}
- {%- endif %}
- {%- endif %}
- {%- endfor %}
- {%- if add_generation_prompt %}
- {{- '<|im_start|>assistant\n' }}
- {%- if enable_thinking is defined and enable_thinking is false %}
- {{- '<think>\n\n</think>\n\n' }}
- {%- endif %}
- {%- endif %}
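The chat_template.jinja above renders conversations into Qwen's ChatML-style format: each turn is wrapped in `<|im_start|>role ... <|im_end|>`, and the generation prompt opens a fresh assistant turn. Ignoring the template's tool-calling and `<think>`-block handling, the basic layout can be approximated in plain Python (a simplification of ours, not a substitute for the template itself):

```python
def render_chatml(messages, add_generation_prompt=True):
    """Approximate the template's simple path: wrap each turn in ChatML markers."""
    out = []
    for m in messages:
        # One "<|im_start|>role\ncontent<|im_end|>\n" block per message.
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn for the model to complete.
        out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(prompt)
```

In practice the template is applied via the tokenizer (e.g. `tokenizer.apply_chat_template(...)` in transformers), which also handles tools, reasoning content, and the `enable_thinking` switch shown above.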
 
checkpoint-129/merges.txt DELETED
The diff for this file is too large to render. See raw diff
 
checkpoint-129/optimizer.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d704988cf84fa1c43602c4b034f21ab286ca9cde4dfec38d3be05d2004230dd2
- size 264584341
 
checkpoint-129/rng_state.pth DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5ee9bd0a0d63eb5b2e3466609a02fba78b9ddc61c9a356cde6576ff17019b8d1
- size 14645
 
checkpoint-129/scheduler.pt DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bee08ceab5dda312047ca0dbd0bb800bc7285836c15adbf8c4a5c1a53ca4068d
- size 1465
 
checkpoint-129/special_tokens_map.json DELETED
@@ -1,26 +0,0 @@
- {
- "additional_special_tokens": [
- "<|im_start|>",
- "<|im_end|>",
- "<|object_ref_start|>",
- "<|object_ref_end|>",
- "<|box_start|>",
- "<|box_end|>",
- "<|quad_start|>",
- "<|quad_end|>",
- "<|vision_start|>",
- "<|vision_end|>",
- "<|vision_pad|>",
- "<|image_pad|>",
- "<|video_pad|>"
- ],
- "bos_token": "<|im_end|>",
- "eos_token": {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false
- },
- "pad_token": "<|im_end|>"
- }
 
checkpoint-129/tokenizer.json DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ac6583c532ebcffab265f0693ef8624858bd22dece1754500925f53e5dc5f058
- size 11422929
 
checkpoint-129/tokenizer_config.json DELETED
@@ -1,239 +0,0 @@
- {
- "add_bos_token": false,
- "add_prefix_space": false,
- "added_tokens_decoder": {
- "151643": {
- "content": "<|endoftext|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151644": {
- "content": "<|im_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151645": {
- "content": "<|im_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151646": {
- "content": "<|object_ref_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151647": {
- "content": "<|object_ref_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151648": {
- "content": "<|box_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151649": {
- "content": "<|box_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151650": {
- "content": "<|quad_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151651": {
- "content": "<|quad_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151652": {
- "content": "<|vision_start|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151653": {
- "content": "<|vision_end|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151654": {
- "content": "<|vision_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151655": {
- "content": "<|image_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151656": {
- "content": "<|video_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": true
- },
- "151657": {
- "content": "<tool_call>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151658": {
- "content": "</tool_call>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151659": {
- "content": "<|fim_prefix|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151660": {
- "content": "<|fim_middle|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151661": {
- "content": "<|fim_suffix|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151662": {
- "content": "<|fim_pad|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151663": {
- "content": "<|repo_name|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151664": {
- "content": "<|file_sep|>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151665": {
- "content": "<tool_response>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151666": {
- "content": "</tool_response>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151667": {
- "content": "<think>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- },
- "151668": {
- "content": "</think>",
- "lstrip": false,
- "normalized": false,
- "rstrip": false,
- "single_word": false,
- "special": false
- }
- },
- "additional_special_tokens": [
- "<|im_start|>",
- "<|im_end|>",
- "<|object_ref_start|>",
- "<|object_ref_end|>",
- "<|box_start|>",
- "<|box_end|>",
- "<|quad_start|>",
- "<|quad_end|>",
- "<|vision_start|>",
- "<|vision_end|>",
- "<|vision_pad|>",
- "<|image_pad|>",
- "<|video_pad|>"
- ],
- "bos_token": "<|im_end|>",
- "clean_up_tokenization_spaces": false,
- "eos_token": "<|im_end|>",
- "errors": "replace",
- "extra_special_tokens": {},
- "model_max_length": 131072,
- "pad_token": "<|im_end|>",
- "split_special_tokens": false,
- "tokenizer_class": "Qwen2Tokenizer",
- "unk_token": null
- }
 
checkpoint-129/trainer_state.json DELETED
@@ -1,149 +0,0 @@
- {
- "best_global_step": 129,
- "best_metric": 0.0137786865234375,
- "best_model_checkpoint": "./star_trek_guard_finetuned/checkpoint-129",
- "epoch": 3.0,
- "eval_steps": 500,
- "global_step": 129,
- "is_hyper_param_search": false,
- "is_local_process_zero": true,
- "is_world_process_zero": true,
- "log_history": [
- {
- "epoch": 0.023564064801178203,
- "grad_norm": 1462.7562255859375,
- "learning_rate": 0.0,
- "loss": 33.8799,
- "step": 1
- },
- {
- "epoch": 0.23564064801178203,
- "grad_norm": 122.78912353515625,
- "learning_rate": 0.00013846153846153847,
- "loss": 19.1378,
- "step": 10
- },
- {
- "epoch": 0.47128129602356406,
- "grad_norm": 0.02678239718079567,
- "learning_rate": 0.00019868265225415265,
- "loss": 2.2563,
- "step": 20
- },
- {
- "epoch": 0.7069219440353461,
- "grad_norm": 4.225969314575195e-05,
- "learning_rate": 0.00019075754196709572,
- "loss": 0.1447,
- "step": 30
- },
- {
- "epoch": 0.9425625920471281,
- "grad_norm": 0.0,
- "learning_rate": 0.00017621620551276366,
- "loss": 0.0604,
- "step": 40
- },
- {
- "epoch": 1.0,
- "eval_loss": 0.322265625,
- "eval_runtime": 7.4656,
- "eval_samples_per_second": 20.226,
- "eval_steps_per_second": 5.09,
- "step": 43
- },
- {
- "epoch": 1.1649484536082475,
- "grad_norm": 0.0006452053203247488,
- "learning_rate": 0.00015611870653623825,
- "loss": 0.7274,
- "step": 50
- },
- {
- "epoch": 1.4005891016200294,
- "grad_norm": 0.2795312702655792,
- "learning_rate": 0.000131930153013598,
- "loss": 0.8471,
- "step": 60
- },
- {
- "epoch": 1.6362297496318114,
- "grad_norm": 0.00010770559310913086,
- "learning_rate": 0.00010541389085854176,
- "loss": 0.0002,
- "step": 70
- },
- {
- "epoch": 1.8718703976435935,
- "grad_norm": 0.0002467813901603222,
- "learning_rate": 7.85029559788976e-05,
- "loss": 0.0,
- "step": 80
- },
- {
- "epoch": 2.0,
- "eval_loss": 0.01540374755859375,
- "eval_runtime": 7.4751,
- "eval_samples_per_second": 20.2,
- "eval_steps_per_second": 5.084,
- "step": 86
- },
- {
- "epoch": 2.094256259204713,
- "grad_norm": 0.0,
- "learning_rate": 5.3159155930021e-05,
- "loss": 0.0,
- "step": 90
- },
- {
- "epoch": 2.329896907216495,
- "grad_norm": 0.0003396868414711207,
- "learning_rate": 3.123005411465766e-05,
- "loss": 0.0,
- "step": 100
- },
- {
- "epoch": 2.5655375552282766,
- "grad_norm": 0.0001834749709814787,
- "learning_rate": 1.4314282383241096e-05,
- "loss": 0.0,
- "step": 110
- },
- {
- "epoch": 2.8011782032400587,
- "grad_norm": 0.0002287882671225816,
- "learning_rate": 3.6450007480777093e-06,
- "loss": 0.0,
- "step": 120
- },
- {
- "epoch": 3.0,
- "eval_loss": 0.0137786865234375,
- "eval_runtime": 7.4932,
- "eval_samples_per_second": 20.152,
- "eval_steps_per_second": 5.071,
- "step": 129
- }
- ],
- "logging_steps": 10,
- "max_steps": 129,
- "num_input_tokens_seen": 0,
- "num_train_epochs": 3,
- "save_steps": 500,
- "stateful_callbacks": {
- "TrainerControl": {
- "args": {
- "should_epoch_stop": false,
- "should_evaluate": false,
- "should_log": false,
- "should_save": true,
- "should_training_stop": true
- },
- "attributes": {}
- }
- },
- "total_flos": 4.588810491396096e+16,
- "train_batch_size": 2,
- "trial_name": null,
- "trial_params": null
- }
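The trainer_state.json above records three epochs (129 steps) of training, with eval_loss falling from 0.322 after epoch 1 to 0.0138 after epoch 3; best_metric and best_model_checkpoint mark this checkpoint as the best one. Selecting the best eval entry from log_history needs only the standard library (the JSON below is a trimmed excerpt of the deleted state, reduced to its eval entries for illustration):

```python
import json

# Trimmed excerpt of the deleted trainer_state.json: eval entries only.
state = json.loads("""
{
  "log_history": [
    {"epoch": 1.0, "eval_loss": 0.322265625, "step": 43},
    {"epoch": 2.0, "eval_loss": 0.01540374755859375, "step": 86},
    {"epoch": 3.0, "eval_loss": 0.0137786865234375, "step": 129}
  ]
}
""")

# Eval entries carry "eval_loss"; training entries carry "loss" instead.
evals = [e for e in state["log_history"] if "eval_loss" in e]
best = min(evals, key=lambda e: e["eval_loss"])
print(best["step"], best["eval_loss"])  # the checkpoint this commit deletes
```

This mirrors what the Trainer does internally when `load_best_model_at_end` is enabled: the lowest eval_loss determines best_model_checkpoint.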
 
checkpoint-129/training_args.bin DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:8a28a66d4d37807ab821d3ceae78b0e4267a7cb44904f7001080524544528ffb
- size 5841
 
checkpoint-129/vocab.json DELETED
The diff for this file is too large to render. See raw diff