geoffmunn committed
Commit 1742937 · verified · 1 parent: a186dd5

Upload fine-tuned Qwen3-4B Star Trek Guard model

.gitattributes CHANGED
@@ -36,3 +36,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  checkpoint-129/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  checkpoint-86/tokenizer.json filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-141/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-423/tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -2,232 +2,205 @@
  base_model: Qwen/Qwen3-4B
  library_name: peft
  tags:
- - base_model:adapter:Qwen/Qwen3-4B
- - lora
- - transformers
- - text-classification
- - moderation
- - star-trek
  ---
 
- # Model Card for geoffmunn/Qwen3Guard-StarTrek-Classification-4B
 
- This is a fine-tuned version of **Qwen3-4B** that uses LoRA (Low-Rank Adaptation) to classify whether user-provided text is related to *Star Trek* or not. The model acts as a domain-specific content classifier, returning one of two labels: `"related"` or `"not_related"`. It was developed as part of the Qwen3Guard demonstration project to showcase how large language models can be adapted for custom classification tasks.
 
  ## Model Details
 
  ### Model Description
 
- This model is a binary sequence classifier fine-tuned on a synthetic dataset of Star Trek-related questions and general non-Star-Trek text. Built atop the Qwen3-4B foundation model, it uses parameter-efficient fine-tuning via LoRA to adapt the model for topic detection in conversational or input text. It is designed for use in moderation systems where filtering based on pop-culture topics like *Star Trek* is desired.
 
- - **Developed by:** Geoff Munn ([@geoffmunn](https://github.com/geoffmunn))
- - **Shared by:** Geoff Munn
- - **Model type:** Causal language model with LoRA adapter for sequence classification
- - **Language(s) (NLP):** English
- - **License:** MIT License (see [GitHub repo](https://github.com/geoffmunn/Qwen3Guard))
- - **Finetuned from model:** Qwen/Qwen3-4B
 
- ### Model Sources
 
- - **Repository:** [https://github.com/geoffmunn/Qwen3Guard](https://github.com/geoffmunn/Qwen3Guard)
- - **Paper:** Not applicable
- - **Demo:** Interactive demo available via `star_trek_chat.html` in the repository; requires a local API server
 
  ## Uses
 
  ### Direct Use
 
- The model can directly classify whether a given piece of text is related to *Star Trek*. Example applications include:
- - Filtering fan forum posts
- - Moderating trivia chatbots
- - Enhancing themed AI assistants
- - Educational tools focused on science-fiction media
 
- Input: A string of text
- Output: One of two labels — `"related"` or `"not_related"`
 
- ### Downstream Use
 
- This model can be integrated into larger systems such as:
- - Themed conversational agents (e.g., a *Star Trek*-focused chatbot)
- - Content recommendation engines that route queries based on topic relevance
- - A fine-tuning starter for other sci-fi franchises (e.g., *Star Wars*, *Doctor Who*) using similar methodology
 
  ### Out-of-Scope Use
 
- This model should **not** be used for:
- - General content moderation (toxicity, hate speech, etc.)
- - Medical, legal, or safety-critical decision-making
- - Multilingual classification (trained only on English)
- - Detecting nuanced sentiment or emotion
- - Classifying topics outside entertainment/pop culture without retraining
 
- It may produce inaccurate classifications when presented with ambiguous references, parody content, or highly technical scientific discussions unrelated to *Star Trek* lore.
 
  ## Bias, Risks, and Limitations
 
- The training data consists entirely of synthetically generated questions about *Star Trek*, which introduces several limitations:
- - Potential overfitting to question formats rather than natural-language statements
- - Limited coverage of obscure characters, episodes, or expanded-universe material
- - No representation of non-English *Star Trek* content
- - Bias toward canonical series (TOS, TNG, DS9, etc.) over newer entries
 
- Additionally, because the dataset was auto-generated using prompts, there may be inconsistencies in labeling or artificial phrasing patterns.
 
  ### Recommendations
 
- Users should validate performance on real-world data before deployment. For production use, consider augmenting the dataset with human-labeled examples and testing across diverse inputs. Always pair this model with broader safeguards if it is used in public-facing applications.
 
  ## How to Get Started with the Model
 
- You can load and run inference using Hugging Face Transformers:
 
- ```python
- from transformers import AutoModelForSequenceClassification, AutoTokenizer
-
- model_id = "geoffmunn/Qwen3Guard-StarTrek-Classification-4B"
-
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForSequenceClassification.from_pretrained(model_id)
-
- input_text = "What is the warp core made of in Star Trek?"
- inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)
-
- outputs = model(**inputs)
- predicted_class_id = outputs.logits.argmax().item()
- label = model.config.id2label[predicted_class_id]
-
- print(f"Label: {label}")
- ```
 
- Ensure you have the required libraries installed:
- ```bash
- pip install transformers torch peft
- ```
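-
- Recent `transformers` versions resolve the PEFT adapter automatically when `peft` is installed. If you prefer to attach the adapter explicitly through `peft`, an equivalent route is `AutoPeftModelForSequenceClassification`; the following is a minimal sketch under that assumption:
- ```python
- import torch
- from peft import AutoPeftModelForSequenceClassification
- from transformers import AutoTokenizer
-
- model_id = "geoffmunn/Qwen3Guard-StarTrek-Classification-4B"
-
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- # Loads the base model named in adapter_config.json (Qwen/Qwen3-4B)
- # and applies the LoRA adapter on top of it.
- model = AutoPeftModelForSequenceClassification.from_pretrained(model_id, num_labels=2)
- model.eval()
-
- with torch.no_grad():
-     inputs = tokenizer("Who commanded the USS Enterprise?", return_tensors="pt")
-     predicted = model(**inputs).logits.argmax(-1).item()
- print(model.config.id2label[predicted])
- ```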
 
- ## Training Details
- ### Training Data
- The model was trained on a synthetic JSONL dataset containing 2,500 labeled examples of Star Trek-related questions marked as `"related"` and an equal number of randomly sampled general-knowledge questions labeled `"not_related"`. The dataset was generated using the script `generate_star_trek_questions.py` from the repository.
-
- Dataset format:
- ```json
- {"input": "What planet is Spock from?", "label": "related"}
- {"input": "Who wrote 'Pride and Prejudice'?", "label": "not_related"}
- ```
- Place your dataset at: `finetuning/star_trek/star_trek_guard_dataset.jsonl`
-
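- As a concrete illustration, the JSONL file can be loaded with the `datasets` library. This is a minimal sketch; the path and column names follow the format shown above:
- ```python
- from datasets import load_dataset
-
- # One JSON object per line, with "input" and "label" fields.
- dataset = load_dataset(
-     "json",
-     data_files="finetuning/star_trek/star_trek_guard_dataset.jsonl",
-     split="train",
- )
-
- # Map string labels to integer ids, matching the scheme used in training.
- label2id = {"not_related": 0, "related": 1}
- dataset = dataset.map(lambda ex: {"labels": label2id[ex["label"]]})
- ```
-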
- ## Training Procedure
- ### Preprocessing
- Text inputs were tokenized using the Qwen3 tokenizer with a maximum sequence length of 512 tokens. Inputs longer than this were truncated. Labels were mapped via:
- ```python
- label2id = {"not_related": 0, "related": 1}
- id2label = {0: "not_related", 1: "related"}
- ```
-
- ### Training Hyperparameters
- - **Training regime:** Mixed-precision training (fp16), enabled via Hugging Face Accelerate
- - **Batch size:** 2 (per GPU)
- - **Gradient accumulation steps:** 16 → effective batch size: 32
- - **Number of epochs:** 3
- - **Learning rate:** 2e-4
- - **Optimizer:** AdamW
- - **Max sequence length:** 512
- - **LoRA configuration** (expressed as code in the sketch after this list):
-   - **Rank (r):** 16
-   - **Alpha:** 32
-   - **Dropout:** 0.05
-   - **Target modules:** attention projections (q_proj, k_proj, v_proj, o_proj) and MLP projections (gate_proj, up_proj, down_proj), per `adapter_config.json`
-
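- In code, the adapter setup corresponds to roughly the following `peft` configuration. This is a sketch reconstructed from `adapter_config.json` in this repository, not the original training script:
- ```python
- from peft import LoraConfig, TaskType
-
- lora_config = LoraConfig(
-     task_type=TaskType.SEQ_CLS,  # sequence-classification head
-     r=16,                        # LoRA rank
-     lora_alpha=32,
-     lora_dropout=0.05,
-     target_modules=[
-         "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
-         "gate_proj", "up_proj", "down_proj",     # MLP projections
-     ],
-     modules_to_save=["classifier", "score"],     # classification head trained in full
- )
- ```
-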
- ## Speeds, Sizes, Times
- - **Hardware used:** NVIDIA GPU (assumed: A100 or equivalent)
- - **Training time:** ~2–3 hours depending on hardware
- - **Checkpoint size:** ~132 MB (adapter weights only, PEFT format; see `adapter_model.safetensors`)
- - **Inference memory:** < 10 GB VRAM (further reduction possible with quantization)
 
  ## Evaluation
 
  ### Testing Data, Factors & Metrics
 
  #### Testing Data
- A 10% holdout test set (~500 samples) was used for evaluation, split from the full dataset during training.
 
  #### Factors
- Evaluation focused on accuracy across:
 
- - Canonical vs. obscure Star Trek references
- - Question vs. statement format
- - Length of input text
-
  #### Metrics
- - **Accuracy:** Primary metric
- - **Precision, Recall, F1-score:** Per-class metrics reported during training
- - **Confusion Matrix:** Generated internally during test phase
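-
- To make these metrics concrete, here is a minimal sketch of computing them with scikit-learn; the `y_true`/`y_pred` arrays are illustrative placeholders, not actual evaluation output:
- ```python
- from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
-
- # Placeholder labels/predictions; in practice these come from running the
- # classifier over the held-out test split (0 = not_related, 1 = related).
- y_true = [1, 0, 1, 1, 0]
- y_pred = [1, 0, 1, 0, 0]
-
- print("Accuracy:", accuracy_score(y_true, y_pred))
- print(classification_report(y_true, y_pred, target_names=["not_related", "related"]))
- print(confusion_matrix(y_true, y_pred))
- ```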
 
- #### Results
- During final evaluation, the model achieved:
 
- - **Accuracy:** ~96–98% (on the synthetic test set)
- - Strong precision/recall for the "related" class
- - Minor false positives on space/science topics unrelated to Star Trek
 
- ## Summary
- The model performs well on its intended task within the scope of the training distribution but may degrade on edge cases or metaphorical references.
 
- ## Technical Specifications
 
  ### Model Architecture and Objective
- - **Base architecture:** Qwen3-4B (causal decoder-only LLM)
- - **Adaptation method:** LoRA (PEFT)
- - **Task head:** Sequence classification (single-label)
- - **Objective function:** Cross-entropy loss
-
  ### Compute Infrastructure
 
  #### Hardware
- - **GPU:** NVIDIA A100 / RTX 3090 / L40S or equivalent
- - **RAM:** ≥ 32 GB system memory recommended
 
  #### Software
- - Python 3.10+
- - PyTorch 2.4+ with CUDA 12.1+
- - Transformers 4.40+
- - PEFT 0.18.0
- - Accelerate, Datasets, Tokenizers
-
- ## Citation
- While no formal paper exists, please cite the GitHub repository if this model is used academically.
-
- ### BibTeX:
- ```bibtex
- @software{munn_qwen3guard_2025,
-   author    = {Munn, Geoff},
-   title     = {Qwen3Guard: Demonstration of Qwen3Guard Models for Content Classification},
-   year      = {2025},
-   publisher = {GitHub},
-   journal   = {GitHub repository},
-   url       = {https://github.com/geoffmunn/Qwen3Guard}
- }
- ```
-
- ### APA:
-
- Munn, G. (2025). Qwen3Guard: Demonstration of Qwen3Guard Models for Content Classification [Software]. GitHub. https://github.com/geoffmunn/Qwen3Guard
-
- ## Glossary
- - **LoRA (Low-Rank Adaptation):** A parameter-efficient fine-tuning technique that adds trainable low-rank matrices to pretrained weights.
- - **PEFT:** Parameter-Efficient Fine-Tuning, a Hugging Face library for lightweight adaptation of large models.
- - **GGUF:** Format used for running models in llama.cpp; not supported for the streaming variant here.
- - **JSONL:** JSON Lines format – one JSON object per line.
-
- ## More Information
- For more details, including API server setup and web demos, visit:
- 👉 https://github.com/geoffmunn/Qwen3Guard
-
- Includes:
-
- - Ollama-compatible scripts
- - Flask-based API server (`api_server.py`)
- - HTML chat interface (`star_trek_chat.html`)
- - Dataset generation tools
-
- ## Model Card Authors
- Geoff Munn – Developer and maintainer
 
  ## Model Card Contact
- For questions or feedback, contact the author via GitHub:
- @geoffmunn
 
  base_model: Qwen/Qwen3-4B
  library_name: peft
  tags:
+ - base_model:adapter:Qwen/Qwen3-4B
+ - lora
+ - transformers
  ---
 
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
 
  ## Model Details
 
  ### Model Description
 
+ <!-- Provide a longer summary of what this model is. -->
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
 
+ ### Model Sources [optional]
 
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
 
  ## Uses
 
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
  ### Direct Use
 
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
+ [More Information Needed]
 
+ ### Downstream Use [optional]
 
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
 
  ### Out-of-Scope Use
 
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
+ [More Information Needed]
 
  ## Bias, Risks, and Limitations
 
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
+ [More Information Needed]
 
  ### Recommendations
 
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
  ## How to Get Started with the Model
 
+ Use the code below to get started with the model.
 
+ [More Information Needed]
 
+ ## Training Details
 
+ ### Training Data
 
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
+ [More Information Needed]
 
+ ### Training Procedure
 
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
 
  ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
  ### Testing Data, Factors & Metrics
+
  #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
 
  #### Factors
 
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
  #### Metrics
 
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
 
 
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
  ### Model Architecture and Objective
+
+ [More Information Needed]
+
  ### Compute Infrastructure
+
+ [More Information Needed]
+
  #### Hardware
+
+ [More Information Needed]
+
  #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
 
  ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.0
adapter_config.json CHANGED
@@ -32,13 +32,13 @@
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "down_proj",
  "gate_proj",
  "o_proj",
- "q_proj",
  "k_proj",
- "up_proj",
- "v_proj"
  ],
  "target_parameters": null,
  "task_type": "SEQ_CLS",
 
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
  "gate_proj",
+ "up_proj",
  "o_proj",
  "k_proj",
+ "down_proj",
+ "v_proj",
+ "q_proj"
  ],
  "target_parameters": null,
  "task_type": "SEQ_CLS",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4bb6a4bc27665aa0065fee12f3889ba47e9242f223a2b57f2b7f2855b271becb
  size 132198232
 
  version https://git-lfs.github.com/spec/v1
+ oid sha256:2a6f21c33d3a151dae6a5e77677b9b7eb457714dded31d865baa45f9819aad2f
  size 132198232
checkpoint-141/README.md ADDED
checkpoint-141/adapter_config.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "Qwen/Qwen3-4B",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": [
+     "classifier",
+     "score"
+   ],
+   "peft_type": "LORA",
+   "peft_version": "0.18.0",
+   "qalora_group_size": 16,
+   "r": 16,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "gate_proj",
+     "up_proj",
+     "o_proj",
+     "k_proj",
+     "down_proj",
+     "v_proj",
+     "q_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "SEQ_CLS",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
checkpoint-141/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2a6f21c33d3a151dae6a5e77677b9b7eb457714dded31d865baa45f9819aad2f
+ size 132198232
checkpoint-141/added_tokens.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "</think>": 151668,
+   "</tool_call>": 151658,
+   "</tool_response>": 151666,
+   "<think>": 151667,
+   "<tool_call>": 151657,
+   "<tool_response>": 151665,
+   "<|box_end|>": 151649,
+   "<|box_start|>": 151648,
+   "<|endoftext|>": 151643,
+   "<|file_sep|>": 151664,
+   "<|fim_middle|>": 151660,
+   "<|fim_pad|>": 151662,
+   "<|fim_prefix|>": 151659,
+   "<|fim_suffix|>": 151661,
+   "<|im_end|>": 151645,
+   "<|im_start|>": 151644,
+   "<|image_pad|>": 151655,
+   "<|object_ref_end|>": 151647,
+   "<|object_ref_start|>": 151646,
+   "<|quad_end|>": 151651,
+   "<|quad_start|>": 151650,
+   "<|repo_name|>": 151663,
+   "<|video_pad|>": 151656,
+   "<|vision_end|>": 151653,
+   "<|vision_pad|>": 151654,
+   "<|vision_start|>": 151652
+ }
checkpoint-141/chat_template.jinja ADDED
@@ -0,0 +1,89 @@
+ {%- if tools %}
+ {{- '<|im_start|>system\n' }}
+ {%- if messages[0].role == 'system' %}
+ {{- messages[0].content + '\n\n' }}
+ {%- endif %}
+ {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+ {%- if messages[0].role == 'system' %}
+ {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+ {%- set index = (messages|length - 1) - loop.index0 %}
+ {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
+ {%- set ns.multi_step_tool = false %}
+ {%- set ns.last_query_index = index %}
+ {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+ {%- if message.content is string %}
+ {%- set content = message.content %}
+ {%- else %}
+ {%- set content = '' %}
+ {%- endif %}
+ {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {%- set reasoning_content = '' %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if loop.index0 > ns.last_query_index %}
+ {%- if loop.last or (not loop.last and reasoning_content) %}
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- if message.tool_calls %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if (loop.first and content) or (not loop.first) %}
+ {{- '\n' }}
+ {%- endif %}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<tool_call>\n{"name": "' }}
+ {{- tool_call.name }}
+ {{- '", "arguments": ' }}
+ {%- if tool_call.arguments is string %}
+ {{- tool_call.arguments }}
+ {%- else %}
+ {{- tool_call.arguments | tojson }}
+ {%- endif %}
+ {{- '}\n</tool_call>' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- content }}
+ {{- '\n</tool_response>' }}
+ {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- if enable_thinking is defined and enable_thinking is false %}
+ {{- '<think>\n\n</think>\n\n' }}
+ {%- endif %}
+ {%- endif %}
checkpoint-141/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-141/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f795efd5799e922a19b43ed81a40e25f80de13eb2f2bcce1ddf2ed5431590b3
+ size 264584341
checkpoint-141/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edef2b0ea23f8f2727d08cb80a851bb744450428779ff515b33fc28529f64a4f
+ size 14645
checkpoint-141/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:080d7c5d59e75eed27d8c28ac535b6ea8811e98ee45ecfd0401f2828d734ee59
+ size 1465
checkpoint-141/special_tokens_map.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": "<|im_end|>",
+   "eos_token": {
+     "content": "<|im_end|>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|im_end|>"
+ }
checkpoint-141/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac6583c532ebcffab265f0693ef8624858bd22dece1754500925f53e5dc5f058
+ size 11422929
checkpoint-141/tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
+ {
+   "add_bos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "151643": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151644": {
+       "content": "<|im_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151645": {
+       "content": "<|im_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151646": {
+       "content": "<|object_ref_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151647": {
+       "content": "<|object_ref_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151648": {
+       "content": "<|box_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151649": {
+       "content": "<|box_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151650": {
+       "content": "<|quad_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151651": {
+       "content": "<|quad_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151652": {
+       "content": "<|vision_start|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151653": {
+       "content": "<|vision_end|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151654": {
+       "content": "<|vision_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151655": {
+       "content": "<|image_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151656": {
+       "content": "<|video_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "151657": {
+       "content": "<tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151658": {
+       "content": "</tool_call>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151659": {
+       "content": "<|fim_prefix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151660": {
+       "content": "<|fim_middle|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151661": {
+       "content": "<|fim_suffix|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151662": {
+       "content": "<|fim_pad|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151663": {
+       "content": "<|repo_name|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151664": {
+       "content": "<|file_sep|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151665": {
+       "content": "<tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151666": {
+       "content": "</tool_response>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151667": {
+       "content": "<think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "151668": {
+       "content": "</think>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "additional_special_tokens": [
+     "<|im_start|>",
+     "<|im_end|>",
+     "<|object_ref_start|>",
+     "<|object_ref_end|>",
+     "<|box_start|>",
+     "<|box_end|>",
+     "<|quad_start|>",
+     "<|quad_end|>",
+     "<|vision_start|>",
+     "<|vision_end|>",
+     "<|vision_pad|>",
+     "<|image_pad|>",
+     "<|video_pad|>"
+   ],
+   "bos_token": "<|im_end|>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": {},
+   "model_max_length": 131072,
+   "pad_token": "<|im_end|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
checkpoint-141/trainer_state.json ADDED
@@ -0,0 +1,147 @@
+ {
+   "best_global_step": 141,
+   "best_metric": 2.384185791015625e-07,
+   "best_model_checkpoint": "./star_trek_guard_finetuned/checkpoint-141",
+   "epoch": 1.0,
+   "eval_steps": 500,
+   "global_step": 141,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0071111111111111115,
+       "grad_norm": 863.2054443359375,
+       "learning_rate": 0.0,
+       "loss": 31.4624,
+       "step": 1
+     },
+     {
+       "epoch": 0.07111111111111111,
+       "grad_norm": 975.65576171875,
+       "learning_rate": 4.186046511627907e-05,
+       "loss": 35.8397,
+       "step": 10
+     },
+     {
+       "epoch": 0.14222222222222222,
+       "grad_norm": 198.86065673828125,
+       "learning_rate": 8.837209302325582e-05,
+       "loss": 8.8149,
+       "step": 20
+     },
+     {
+       "epoch": 0.21333333333333335,
+       "grad_norm": 8.080985069274902,
+       "learning_rate": 0.00013488372093023256,
+       "loss": 0.5709,
+       "step": 30
+     },
+     {
+       "epoch": 0.28444444444444444,
+       "grad_norm": 0.00014901161193847656,
+       "learning_rate": 0.0001813953488372093,
+       "loss": 0.1449,
+       "step": 40
+     },
+     {
+       "epoch": 0.35555555555555557,
+       "grad_norm": 231.45864868164062,
+       "learning_rate": 0.00019987699691483048,
+       "loss": 2.3554,
+       "step": 50
+     },
+     {
+       "epoch": 0.4266666666666667,
+       "grad_norm": 0.17169412970542908,
+       "learning_rate": 0.00019912640693269752,
+       "loss": 0.7842,
+       "step": 60
+     },
+     {
+       "epoch": 0.49777777777777776,
+       "grad_norm": 0.0002152260858565569,
+       "learning_rate": 0.00019769868307835994,
+       "loss": 0.3842,
+       "step": 70
+     },
+     {
+       "epoch": 0.5688888888888889,
+       "grad_norm": 0.7667607665061951,
+       "learning_rate": 0.00019560357815343577,
+       "loss": 0.1437,
+       "step": 80
+     },
+     {
+       "epoch": 0.64,
+       "grad_norm": 0.00011813640594482422,
+       "learning_rate": 0.00019285540384897073,
+       "loss": 0.1796,
+       "step": 90
+     },
+     {
+       "epoch": 0.7111111111111111,
+       "grad_norm": 0.014821142889559269,
+       "learning_rate": 0.00018947293298207635,
+       "loss": 0.0,
+       "step": 100
+     },
+     {
+       "epoch": 0.7822222222222223,
+       "grad_norm": 0.0,
+       "learning_rate": 0.0001854792712585539,
+       "loss": 0.0002,
+       "step": 110
+     },
+     {
+       "epoch": 0.8533333333333334,
+       "grad_norm": 0.0,
+       "learning_rate": 0.00018090169943749476,
+       "loss": 0.0,
+       "step": 120
+     },
+     {
+       "epoch": 0.9244444444444444,
+       "grad_norm": 0.0003181487263645977,
+       "learning_rate": 0.0001757714869760335,
+       "loss": 0.4035,
+       "step": 130
+     },
+     {
+       "epoch": 0.9955555555555555,
+       "grad_norm": 0.0,
+       "learning_rate": 0.00017012367842724887,
+       "loss": 0.3667,
+       "step": 140
+     },
+     {
+       "epoch": 1.0,
+       "eval_loss": 2.384185791015625e-07,
+       "eval_runtime": 24.7269,
+       "eval_samples_per_second": 20.221,
+       "eval_steps_per_second": 5.055,
+       "step": 141
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 423,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 3,
+   "save_steps": 500,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 5.068641927168e+16,
+   "train_batch_size": 2,
+   "trial_name": null,
+   "trial_params": null
+ }
checkpoint-141/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f11edac4c464af1d40b4496e89b0bd130f7086055e45713a11fc27241c6b6b5a
+ size 5841
checkpoint-141/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-423/README.md ADDED
checkpoint-423/adapter_config.json ADDED
checkpoint-423/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:65a4af7908a82be2af25b07a5f5f56f578af85a92f9755d42c33183494b45eca
+ size 132198232
checkpoint-423/added_tokens.json ADDED
checkpoint-423/chat_template.jinja ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-423/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6795e1b4c815e9a0dcfe8baf9a443c52cbdb29229b24f0a243849d37ac9b9dc3
+ size 264584853
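The three lines above are a Git LFS pointer, not the optimizer tensors themselves: `oid` is the SHA-256 of the real blob and `size` its byte count (~252 MiB here). The same pattern recurs for every large file in this commit. A small sketch, with a hypothetical helper, for reading such pointers:

```python
# Hypothetical helper: split a Git LFS pointer file into its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:6795e1b4c815e9a0dcfe8baf9a443c52cbdb29229b24f0a243849d37ac9b9dc3\n"
    "size 264584853"
)

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:6795e1b4...
print(int(info["size"]))  # 264584853 bytes (~252 MiB)
```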
checkpoint-423/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:093e8447066d5a04fb32a25cb542c87260d80178404c36c644297264da5291a1
+ size 14645
checkpoint-423/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46dbb23a7f7568c49d5fd63ef60f502240d770b6864697d79e1f163063e33e79
+ size 1465
checkpoint-423/special_tokens_map.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": "<|im_end|>",
+ "eos_token": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": "<|im_end|>"
+ }
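Worth noting: `bos_token`, `eos_token`, and `pad_token` are all set to `<|im_end|>` in this checkpoint. A quick sanity check, assuming this upload's repo id:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("geoffmunn/Qwen3Guard-StarTrek-Classification-4B")

# All three map to the same ChatML terminator (id 151645 per tokenizer_config.json below).
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token)
print(tokenizer.convert_tokens_to_ids("<|im_end|>"))  # 151645
```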
checkpoint-423/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac6583c532ebcffab265f0693ef8624858bd22dece1754500925f53e5dc5f058
+ size 11422929
checkpoint-423/tokenizer_config.json ADDED
@@ -0,0 +1,239 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151665": {
+ "content": "<tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151666": {
+ "content": "</tool_response>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151667": {
+ "content": "<think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151668": {
+ "content": "</think>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": "<|im_end|>",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "extra_special_tokens": {},
+ "model_max_length": 131072,
+ "pad_token": "<|im_end|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
checkpoint-423/trainer_state.json ADDED
@@ -0,0 +1,359 @@
+ {
+ "best_global_step": 141,
+ "best_metric": 2.384185791015625e-07,
+ "best_model_checkpoint": "./star_trek_guard_finetuned/checkpoint-141",
+ "epoch": 3.0,
+ "eval_steps": 500,
+ "global_step": 423,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.0071111111111111115,
+ "grad_norm": 863.2054443359375,
+ "learning_rate": 0.0,
+ "loss": 31.4624,
+ "step": 1
+ },
+ {
+ "epoch": 0.07111111111111111,
+ "grad_norm": 975.65576171875,
+ "learning_rate": 4.186046511627907e-05,
+ "loss": 35.8397,
+ "step": 10
+ },
+ {
+ "epoch": 0.14222222222222222,
+ "grad_norm": 198.86065673828125,
+ "learning_rate": 8.837209302325582e-05,
+ "loss": 8.8149,
+ "step": 20
+ },
+ {
+ "epoch": 0.21333333333333335,
+ "grad_norm": 8.080985069274902,
+ "learning_rate": 0.00013488372093023256,
+ "loss": 0.5709,
+ "step": 30
+ },
+ {
+ "epoch": 0.28444444444444444,
+ "grad_norm": 0.00014901161193847656,
+ "learning_rate": 0.0001813953488372093,
+ "loss": 0.1449,
+ "step": 40
+ },
+ {
+ "epoch": 0.35555555555555557,
+ "grad_norm": 231.45864868164062,
+ "learning_rate": 0.00019987699691483048,
+ "loss": 2.3554,
+ "step": 50
+ },
+ {
+ "epoch": 0.4266666666666667,
+ "grad_norm": 0.17169412970542908,
+ "learning_rate": 0.00019912640693269752,
+ "loss": 0.7842,
+ "step": 60
+ },
+ {
+ "epoch": 0.49777777777777776,
+ "grad_norm": 0.0002152260858565569,
+ "learning_rate": 0.00019769868307835994,
+ "loss": 0.3842,
+ "step": 70
+ },
+ {
+ "epoch": 0.5688888888888889,
+ "grad_norm": 0.7667607665061951,
+ "learning_rate": 0.00019560357815343577,
+ "loss": 0.1437,
+ "step": 80
+ },
+ {
+ "epoch": 0.64,
+ "grad_norm": 0.00011813640594482422,
+ "learning_rate": 0.00019285540384897073,
+ "loss": 0.1796,
+ "step": 90
+ },
+ {
+ "epoch": 0.7111111111111111,
+ "grad_norm": 0.014821142889559269,
+ "learning_rate": 0.00018947293298207635,
+ "loss": 0.0,
+ "step": 100
+ },
+ {
+ "epoch": 0.7822222222222223,
+ "grad_norm": 0.0,
+ "learning_rate": 0.0001854792712585539,
+ "loss": 0.0002,
+ "step": 110
+ },
+ {
+ "epoch": 0.8533333333333334,
+ "grad_norm": 0.0,
+ "learning_rate": 0.00018090169943749476,
+ "loss": 0.0,
+ "step": 120
+ },
+ {
+ "epoch": 0.9244444444444444,
+ "grad_norm": 0.0003181487263645977,
+ "learning_rate": 0.0001757714869760335,
+ "loss": 0.4035,
+ "step": 130
+ },
+ {
+ "epoch": 0.9955555555555555,
+ "grad_norm": 0.0,
+ "learning_rate": 0.00017012367842724887,
+ "loss": 0.3667,
+ "step": 140
+ },
+ {
+ "epoch": 1.0,
+ "eval_loss": 2.384185791015625e-07,
+ "eval_runtime": 24.7269,
+ "eval_samples_per_second": 20.221,
+ "eval_steps_per_second": 5.055,
+ "step": 141
+ },
+ {
+ "epoch": 1.064,
+ "grad_norm": 0.00018907367484644055,
+ "learning_rate": 0.00016399685405033167,
+ "loss": 0.0714,
+ "step": 150
+ },
+ {
+ "epoch": 1.1351111111111112,
+ "grad_norm": 48.62416076660156,
+ "learning_rate": 0.00015743286626829437,
+ "loss": 0.0278,
+ "step": 160
+ },
+ {
+ "epoch": 1.2062222222222223,
+ "grad_norm": 0.006157203111797571,
+ "learning_rate": 0.0001504765537734844,
+ "loss": 0.0,
+ "step": 170
+ },
+ {
+ "epoch": 1.2773333333333334,
+ "grad_norm": 0.016050906851887703,
+ "learning_rate": 0.00014317543523384928,
+ "loss": 0.0,
+ "step": 180
+ },
+ {
+ "epoch": 1.3484444444444446,
+ "grad_norm": 0.0,
+ "learning_rate": 0.00013557938469225167,
+ "loss": 0.0737,
+ "step": 190
+ },
+ {
+ "epoch": 1.4195555555555557,
+ "grad_norm": 0.0,
+ "learning_rate": 0.00012774029087618446,
+ "loss": 0.0,
+ "step": 200
+ },
+ {
+ "epoch": 1.4906666666666666,
+ "grad_norm": 1.4126300811767578e-05,
+ "learning_rate": 0.00011971170274514802,
+ "loss": 0.0,
+ "step": 210
+ },
+ {
+ "epoch": 1.561777777777778,
+ "grad_norm": 0.00016976980259642005,
+ "learning_rate": 0.00011154846369695863,
+ "loss": 0.0,
+ "step": 220
+ },
+ {
+ "epoch": 1.6328888888888888,
+ "grad_norm": 0.00011539459228515625,
+ "learning_rate": 0.00010330633693173082,
+ "loss": 0.0,
+ "step": 230
+ },
+ {
+ "epoch": 1.704,
+ "grad_norm": 0.0006262153037823737,
+ "learning_rate": 9.504162453267777e-05,
+ "loss": 0.0001,
+ "step": 240
+ },
+ {
+ "epoch": 1.775111111111111,
+ "grad_norm": 3.820657730102539e-05,
+ "learning_rate": 8.681078286579311e-05,
+ "loss": 0.0,
+ "step": 250
+ },
+ {
+ "epoch": 1.8462222222222222,
+ "grad_norm": 0.00015928725770208985,
+ "learning_rate": 7.867003692562534e-05,
+ "loss": 0.0,
+ "step": 260
+ },
+ {
+ "epoch": 1.9173333333333333,
+ "grad_norm": 0.0005349747953005135,
+ "learning_rate": 7.067499626155354e-05,
+ "loss": 0.0,
+ "step": 270
+ },
+ {
+ "epoch": 1.9884444444444445,
+ "grad_norm": 0.0,
+ "learning_rate": 6.28802751081779e-05,
+ "loss": 0.0,
+ "step": 280
+ },
+ {
+ "epoch": 2.0,
+ "eval_loss": 3.0994415283203125e-06,
+ "eval_runtime": 24.7326,
+ "eval_samples_per_second": 20.216,
+ "eval_steps_per_second": 5.054,
+ "step": 282
+ },
+ {
+ "epoch": 2.056888888888889,
+ "grad_norm": 1.4424324035644531e-05,
+ "learning_rate": 5.533911931471936e-05,
+ "loss": 0.0,
+ "step": 290
+ },
+ {
+ "epoch": 2.128,
+ "grad_norm": 2.6166439056396484e-05,
+ "learning_rate": 4.810304262187852e-05,
+ "loss": 0.0,
+ "step": 300
+ },
+ {
+ "epoch": 2.1991111111111112,
+ "grad_norm": 9.179115295410156e-05,
+ "learning_rate": 4.12214747707527e-05,
+ "loss": 0.0,
+ "step": 310
+ },
+ {
+ "epoch": 2.2702222222222224,
+ "grad_norm": 0.0,
+ "learning_rate": 3.4741423847583134e-05,
+ "loss": 0.0,
+ "step": 320
+ },
+ {
+ "epoch": 2.3413333333333335,
+ "grad_norm": 2.7120113372802734e-05,
+ "learning_rate": 2.87071551708603e-05,
+ "loss": 0.0,
+ "step": 330
+ },
+ {
+ "epoch": 2.4124444444444446,
+ "grad_norm": 6.794929504394531e-05,
+ "learning_rate": 2.315988891431412e-05,
+ "loss": 0.0,
+ "step": 340
+ },
+ {
+ "epoch": 2.4835555555555557,
+ "grad_norm": 1.519918441772461e-05,
+ "learning_rate": 1.8137518531330767e-05,
+ "loss": 0.0,
+ "step": 350
+ },
+ {
+ "epoch": 2.554666666666667,
+ "grad_norm": 84.58100891113281,
+ "learning_rate": 1.3674351904242611e-05,
+ "loss": 0.0247,
+ "step": 360
+ },
+ {
+ "epoch": 2.6257777777777775,
+ "grad_norm": 0.0,
+ "learning_rate": 9.80087698670411e-06,
+ "loss": 0.0,
+ "step": 370
+ },
+ {
+ "epoch": 2.696888888888889,
+ "grad_norm": 0.0,
+ "learning_rate": 6.543553540053926e-06,
+ "loss": 0.0,
+ "step": 380
+ },
+ {
+ "epoch": 2.768,
+ "grad_norm": 0.0002226971264462918,
+ "learning_rate": 3.924632386315186e-06,
+ "loss": 0.0,
+ "step": 390
+ },
+ {
+ "epoch": 2.8391111111111114,
+ "grad_norm": 0.00040970483678393066,
+ "learning_rate": 1.9620034125190644e-06,
+ "loss": 0.0,
+ "step": 400
+ },
+ {
+ "epoch": 2.910222222222222,
+ "grad_norm": 1.4424324035644531e-05,
+ "learning_rate": 6.690733646361857e-07,
+ "loss": 0.0,
+ "step": 410
+ },
+ {
+ "epoch": 2.981333333333333,
+ "grad_norm": 0.0001682293659541756,
+ "learning_rate": 5.467426590739511e-08,
+ "loss": 0.0,
+ "step": 420
+ },
+ {
+ "epoch": 3.0,
+ "eval_loss": 2.980232238769531e-07,
+ "eval_runtime": 24.7472,
+ "eval_samples_per_second": 20.204,
+ "eval_steps_per_second": 5.051,
+ "step": 423
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 423,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 3,
+ "save_steps": 500,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 1.5205925781504e+17,
+ "train_batch_size": 2,
+ "trial_name": null,
+ "trial_params": null
+ }
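The `log_history` above summarizes the three-epoch LoRA run: training loss collapses from ~31 at step 1 to near zero by step 100, and the best eval loss (2.38e-07) is reached at checkpoint-141 after epoch 1, which is why that checkpoint is recorded as `best_model_checkpoint`. A minimal sketch for pulling the curves out of this file (path taken from the checkpoint layout above):

```python
import json

# Path taken from this upload's checkpoint layout.
with open("checkpoint-423/trainer_state.json") as f:
    state = json.load(f)

# Training entries carry "loss"; eval entries carry "eval_loss" instead.
train_log = [e for e in state["log_history"] if "loss" in e]
eval_log = [e for e in state["log_history"] if "eval_loss" in e]

for entry in train_log:
    print(f"step {entry['step']:>3}  loss {entry['loss']:.4f}")
for entry in eval_log:
    print(f"epoch {entry['epoch']:.0f}  eval_loss {entry['eval_loss']:.3e}")

print("best:", state["best_metric"], "at", state["best_model_checkpoint"])
```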
checkpoint-423/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f11edac4c464af1d40b4496e89b0bd130f7086055e45713a11fc27241c6b6b5a
+ size 5841
checkpoint-423/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8a28a66d4d37807ab821d3ceae78b0e4267a7cb44904f7001080524544528ffb
+ oid sha256:f11edac4c464af1d40b4496e89b0bd130f7086055e45713a11fc27241c6b6b5a
  size 5841