geoffmunn committed on
Commit 0660f29 · verified · 1 Parent(s): 3f990d8

Update README.md

Files changed (1): README.md (+220 -3)
README.md CHANGED

---
license: apache-2.0
task_categories:
- text-classification
- zero-shot-classification
language:
- en
tags:
- star_trek
- qwen
- Qwen3Guard
pretty_name: Star Trek Classifica
size_categories:
- 1K<n<10K
---

# Star Trek Guard Dataset

A binary classification dataset for training guard models to identify whether user inputs are related to Star Trek or not. This dataset is designed for fine-tuning language models to act as content filters, ensuring that only Star Trek-related queries are processed by specialized Star Trek AI assistants.

## Dataset Description

The Star Trek Guard Dataset contains **5,000 examples** of questions and statements labeled as either:
- **`related`**: Inputs that are relevant to Star Trek (characters, ships, episodes, concepts, etc.)
- **`not_related`**: Inputs that are not related to Star Trek (general knowledge, other topics, etc.)

### Dataset Structure

Each example in the dataset follows this JSON format:

```json
{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "What is the capital of France?", "label": "not_related"}
```

### Fields

- **`input`** (string): The text input/question to be classified
- **`label`** (string): The classification label, either `"related"` or `"not_related"`

## Dataset Statistics

- **Total Examples**: 5,000
- **Format**: JSONL (JSON Lines)
- **Task**: Binary Text Classification
- **Labels**:
  - `related`: Star Trek-related content
  - `not_related`: Non-Star Trek content

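To sanity-check these counts against a local copy, here is a minimal sketch (assuming the `star_trek_guard_dataset.jsonl` filename used in the loading example below):

```python
import json
from collections import Counter

# Tally the label distribution in the local JSONL file
counts = Counter()
with open("star_trek_guard_dataset.jsonl", encoding="utf-8") as f:
    for line in f:
        if line.strip():
            counts[json.loads(line)["label"]] += 1

print(sum(counts.values()), dict(counts))  # expected total: 5,000
```
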
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load from Hugging Face Hub
dataset = load_dataset("your-username/star-trek-guard-dataset")

# Or load from local JSONL file
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")
```

### Example Usage in Training

This dataset is designed to be used with the Hugging Face Transformers library for fine-tuning sequence classification models. Here's a basic example:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load dataset
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")["train"]

# Map labels to IDs
LABEL2ID = {"not_related": 0, "related": 1}
ID2LABEL = {0: "not_related", 1: "related"}

dataset = dataset.map(lambda x: {"labels": LABEL2ID[x["label"]]})

# Split into train/test
dataset = dataset.train_test_split(test_size=0.1)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen3-4B",
    num_labels=2,
    id2label=ID2LABEL,
    label2id=LABEL2ID,
    trust_remote_code=True
)

# Some causal-LM checkpoints leave the pad token unset; fall back to EOS and
# make sure the classification head knows which token is padding
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# Tokenize
def tokenize_function(examples):
    return tokenizer(
        examples["input"],
        truncation=True,
        padding="max_length",
        max_length=512,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["input", "label"]
)
```

For a complete training script, see the reference implementation in `train_star_trek_guard.py`.
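
For orientation, the following is a hedged sketch of how training might continue from the snippet above, using the hyperparameters recommended later in this README. The `output_dir` and `logging_steps` values are illustrative, and the LoRA wrapping used by the reference script is sketched separately under Model Training Recommendations:

```python
from transformers import Trainer, TrainingArguments

# Illustrative settings; the reference train_star_trek_guard.py may differ
training_args = TrainingArguments(
    output_dir="star-trek-guard",   # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    logging_steps=10,
)

trainer = Trainer(
    model=model,                               # optionally LoRA-wrapped first (see below)
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)
trainer.train()
trainer.save_model("star-trek-guard")
```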

## Use Cases

### 1. Content Moderation for Star Trek Chatbots

This dataset enables training guard models that can filter user inputs before they reach a Star Trek-specific AI assistant. Only Star Trek-related queries are allowed through, ensuring the assistant stays on-topic.
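
One way this gate might look in code, reusing `tokenizer`, `model`, and `LABEL2ID` from the training example above (the helper name and threshold are illustrative, not part of the reference implementation):

```python
import torch

def is_star_trek_related(text: str, threshold: float = 0.5) -> bool:
    """Return True when the guard model classifies the text as Star Trek-related."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    return probs[LABEL2ID["related"]].item() >= threshold

# Only queries that pass the gate reach the Star Trek assistant
if is_star_trek_related("How does a warp drive work?"):
    print("Forward the query to the Star Trek assistant")
else:
    print("Blocked: off-topic request")
```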

### 2. API-Based Moderation

The fine-tuned model can be deployed as a moderation API endpoint:

```python
# Example API endpoint (see star_trek_api_server.py for full implementation)
from flask import Flask, request, jsonify
import torch

app = Flask(__name__)

@app.route('/api/moderate', methods=['POST'])
def moderate():
    data = request.json
    message = data.get('message', '')

    # Classify the message
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    predicted_label = ID2LABEL[outputs.logits.argmax().item()]

    # Return moderation result
    risk_level = "Safe" if predicted_label == "related" else "Unsafe"
    return jsonify({
        'risk_level': risk_level,
        'predicted_label': predicted_label,
        'confidence': float(torch.softmax(outputs.logits, dim=-1).max())
    })
```

### 3. Real-Time Chat Filtering

The guard model can be integrated into chat interfaces to provide real-time moderation, blocking non-Star Trek queries before they're sent to the LLM. See `star_trek_chat.html` for a complete implementation example.
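
The HTML page performs this check in JavaScript; a rough Python equivalent of the same client-side call might look like this (the `localhost:5000` address is an assumption, while the `/api/moderate` route matches the endpoint sketched above):

```python
import requests

# Hypothetical pre-flight check before a chat message is forwarded to the LLM;
# assumes the moderation server above is running at localhost:5000
resp = requests.post(
    "http://localhost:5000/api/moderate",
    json={"message": "Who portrayed Spock in Star Trek?"},
    timeout=5,
)
result = resp.json()

if result["risk_level"] == "Safe":
    print("Forward the message to the Star Trek assistant")
else:
    print("Please keep questions related to Star Trek.")
```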

## Model Training Recommendations

Based on the reference training script, recommended hyperparameters:

- **Base Model**: Qwen/Qwen3-4B
- **Learning Rate**: 2e-4
- **Batch Size**: 2 (with gradient accumulation of 16)
- **Epochs**: 3
- **Max Length**: 512 tokens
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation), as sketched below
  - `r=16`
  - `lora_alpha=32`
  - `lora_dropout=0.05`
  - Target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`

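A hedged sketch of how these LoRA settings might be expressed with the `peft` library (the actual `train_star_trek_guard.py` may differ in detail):

```python
from peft import LoraConfig, TaskType, get_peft_model

# Wrap the sequence-classification model with LoRA adapters using the values above
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
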
## Dataset Examples

### Related Examples

```json
{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "Who portrayed Spock in Star Trek?", "label": "related"}
{"input": "What is the Prime Directive in Star Trek?", "label": "related"}
{"input": "How does a warp drive work?", "label": "related"}
{"input": "What is the 49th Rule of Acquisition?", "label": "related"}
```

### Not Related Examples

```json
{"input": "What is the capital of France?", "label": "not_related"}
{"input": "What is 2 + 2?", "label": "not_related"}
{"input": "Is the sifaka endangered?", "label": "not_related"}
{"input": "When was baseball first played?", "label": "not_related"}
{"input": "How many employees does Spotify have?", "label": "not_related"}
```

## Label Mapping

The dataset uses the following label mapping for model training:

- `"not_related"` → Class ID `0`
- `"related"` → Class ID `1`

In the context of content moderation:
- **`related`** = **Safe** (Star Trek-related content, allowed)
- **`not_related`** = **Unsafe** (Non-Star Trek content, blocked)

## Citation

If you use this dataset in your research or project, please cite it appropriately:

```bibtex
@dataset{star_trek_guard_dataset,
  title={Star Trek Guard Dataset},
  author={Your Name},
  year={2024},
  url={https://huggingface.co/datasets/your-username/star-trek-guard-dataset}
}
```

## License

Apache 2.0

## Acknowledgments

This dataset was created for training guard models to ensure Star Trek AI assistants remain focused on Star Trek-related content, improving user experience and maintaining topic relevance.

## Related Resources

- **Training Script**: See `train_star_trek_guard.py` for a complete fine-tuning implementation
- **API Server**: See `star_trek_api_server.py` for deployment as a moderation API
- **Chat Interface**: See `star_trek_chat.html` for integration into a web-based chat application