---
license: apache-2.0
task_categories:
- text-classification
- zero-shot-classification
language:
- en
tags:
- star_trek
- qwen
- Qwen3Guard
pretty_name: Star Trek Classification
size_categories:
- n<1K
---
# Star Trek Guard Dataset
A binary classification dataset for training guard models to identify whether user inputs are related to Star Trek or not. This dataset is designed for fine-tuning language models to act as content filters, ensuring that only Star Trek-related queries are processed by specialized Star Trek AI assistants.
## Dataset Description
The Star Trek Guard Dataset contains **5,000 examples** of questions and statements labeled as either:
- **`related`**: Inputs that are relevant to Star Trek (characters, ships, episodes, concepts, etc.)
- **`not_related`**: Inputs that are not related to Star Trek (general knowledge, other topics, etc.)
### Dataset Structure
Each example is a single JSON object on its own line (JSONL):
```json
{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "What is the capital of France?", "label": "not_related"}
```
### Fields
- **`input`** (string): The text input/question to be classified
- **`label`** (string): The classification label, either `"related"` or `"not_related"`
## Dataset Statistics
- **Total Examples**: 5,000
- **Format**: JSONL (JSON Lines)
- **Task**: Binary Text Classification
- **Labels**:
- `related`: Star Trek-related content
- `not_related`: Non-Star Trek content
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load from Hugging Face Hub
dataset = load_dataset("geoffmunn/star-trek-guard-dataset")
# Or load from local JSONL file
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")
```
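As a quick sanity check after loading, you can peek at a record and the label balance (a minimal sketch; the `collections.Counter` tally is only for inspection):

```python
from collections import Counter

from datasets import load_dataset

# All rows of a local JSONL file land in the "train" split by default
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")["train"]

print(dataset[0])                 # e.g. {'input': '...', 'label': 'related'}
print(Counter(dataset["label"]))  # label distribution across the dataset
```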
### Example Usage in Training
This dataset is designed to be used with the Hugging Face Transformers library for fine-tuning sequence classification models. Here's a basic example:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load dataset
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")["train"]

# Map labels to IDs
LABEL2ID = {"not_related": 0, "related": 1}
ID2LABEL = {0: "not_related", 1: "related"}
dataset = dataset.map(lambda x: {"labels": LABEL2ID[x["label"]]})

# Split into train/test
dataset = dataset.train_test_split(test_size=0.1)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen3-4B",
    num_labels=2,
    id2label=ID2LABEL,
    label2id=LABEL2ID,
    trust_remote_code=True,
)
# Qwen configs may leave the padding token unset for the classification head;
# align it with the tokenizer so padded batches work
model.config.pad_token_id = tokenizer.pad_token_id

# Tokenize
def tokenize_function(examples):
    return tokenizer(
        examples["input"],
        truncation=True,
        padding="max_length",
        max_length=512,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["input", "label"],
)
```
For a complete training script, see the reference implementation in `train_star_trek_guard.py`.
## Use Cases
### 1. Content Moderation for Star Trek Chatbots
This dataset enables training guard models that can filter user inputs before they reach a Star Trek-specific AI assistant. Only Star Trek-related queries are allowed through, ensuring the assistant stays on-topic.
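A minimal gating sketch is shown below; `tokenizer`, `model`, and `ID2LABEL` are assumed to be loaded as in the training example, and `star_trek_assistant` is a hypothetical placeholder for the downstream assistant call:

```python
import torch

def classify(message: str) -> str:
    """Classify a message as "related" or "not_related" with the fine-tuned guard model."""
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return ID2LABEL[int(logits.argmax(dim=-1))]

def handle_user_message(message: str) -> str:
    """Only forward Star Trek-related queries to the assistant."""
    if classify(message) == "related":
        return star_trek_assistant(message)  # hypothetical downstream assistant call
    return "Sorry, I can only answer questions about Star Trek."
```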
### 2. API-Based Moderation
The fine-tuned model can be deployed as a moderation API endpoint:
```python
# Example API endpoint (see star_trek_api_server.py for the full implementation)
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
# tokenizer, model, and ID2LABEL are assumed to be loaded as in the training example above

@app.route('/api/moderate', methods=['POST'])
def moderate():
    data = request.json
    message = data.get('message', '')

    # Classify the message
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    predicted_label = ID2LABEL[outputs.logits.argmax().item()]

    # Return the moderation result
    risk_level = "Safe" if predicted_label == "related" else "Unsafe"
    return jsonify({
        'risk_level': risk_level,
        'predicted_label': predicted_label,
        'confidence': float(torch.softmax(outputs.logits, dim=-1).max())
    })
```
### 3. Real-Time Chat Filtering
The guard model can be integrated into chat interfaces to provide real-time moderation, blocking non-Star Trek queries before they're sent to the LLM. See `star_trek_chat.html` for a complete implementation example.
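For a quick test without the web interface, the endpoint can be called directly from Python (a sketch that assumes the API server above is running locally on port 5000; the host and port are assumptions):

```python
import requests

resp = requests.post(
    "http://localhost:5000/api/moderate",  # assumed local host/port
    json={"message": "How does a warp drive work?"},
    timeout=10,
)
result = resp.json()
print(result["risk_level"], result["predicted_label"], result["confidence"])
```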
## Model Training Recommendations
Based on the reference training script, the recommended hyperparameters are listed below (a configuration sketch follows the list):
- **Base Model**: Qwen/Qwen3-4B
- **Learning Rate**: 2e-4
- **Batch Size**: 2 (with gradient accumulation of 16)
- **Epochs**: 3
- **Max Length**: 512 tokens
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- `r=16`
- `lora_alpha=32`
- `lora_dropout=0.05`
- Target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
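These settings translate roughly into the following PEFT/Transformers configuration (a sketch, not a copy of `train_star_trek_guard.py`; `model` and `tokenized_dataset` are assumed to come from the training example above, and the output directory name is an assumption):

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import Trainer, TrainingArguments

# LoRA adapter matching the recommendations above
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="star-trek-guard",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)
trainer.train()
```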
## Dataset Examples
### Related Examples
```json
{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "Who portrayed Spock in Star Trek?", "label": "related"}
{"input": "What is the Prime Directive in Star Trek?", "label": "related"}
{"input": "How does a warp drive work?", "label": "related"}
{"input": "What is the 49th Rule of Acquisition?", "label": "related"}
```
### Not Related Examples
```json
{"input": "What is the capital of France?", "label": "not_related"}
{"input": "What is 2 + 2?", "label": "not_related"}
{"input": "Is the sifaka endangered?", "label": "not_related"}
{"input": "When was baseball first played?", "label": "not_related"}
{"input": "How many employees does Spotify have?", "label": "not_related"}
```
## Label Mapping
The dataset uses the following label mapping for model training:
- `"not_related"` → Class ID `0`
- `"related"` → Class ID `1`
In the context of content moderation (see the sketch after this list):
- **`related`** = **Safe** (Star Trek-related content, allowed)
- **`not_related`** = **Unsafe** (Non-Star Trek content, blocked)
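A minimal sketch of the mapping and the resulting moderation verdict (reusing the dictionaries from the training example):

```python
LABEL2ID = {"not_related": 0, "related": 1}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def to_risk_level(class_id: int) -> str:
    """Map a predicted class ID to the moderation verdict."""
    return "Safe" if ID2LABEL[class_id] == "related" else "Unsafe"

assert to_risk_level(1) == "Safe"    # related     -> allowed
assert to_risk_level(0) == "Unsafe"  # not_related -> blocked
```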
## Citation
If you use this dataset in your research or project, please cite it appropriately:
```bibtex
@dataset{star_trek_guard_dataset,
title={Star Trek Guard Dataset},
author={Geoff Munn},
year={2025},
url={https://huggingface.co/datasets/geoffmunn/star-trek-guard-dataset}
}
```
## License
Apache 2.0
## Acknowledgments
This dataset was created for training guard models to ensure Star Trek AI assistants remain focused on Star Trek-related content, improving user experience and maintaining topic relevance.
## Related Resources
- **Training Script**: See `train_star_trek_guard.py` for a complete fine-tuning implementation
- **API Server**: See `star_trek_api_server.py` for deployment as a moderation API
- **Chat Interface**: See `star_trek_chat.html` for integration into a web-based chat application