---
license: apache-2.0
task_categories:
  - text-classification
  - zero-shot-classification
language:
  - en
tags:
  - star_trek
  - qwen
  - Qwen3Guard
pretty_name: Star Trek Classification
size_categories:
  - 1K<n<10K
---

# Star Trek Guard Dataset

A binary classification dataset for training guard models to identify whether a user input is related to Star Trek. It is designed for fine-tuning language models to act as content filters, ensuring that only Star Trek-related queries are processed by specialized Star Trek AI assistants.

## Dataset Description

The Star Trek Guard Dataset contains 5,000 examples of questions and statements labeled as either:

- **related**: Inputs that are relevant to Star Trek (characters, ships, episodes, concepts, etc.)
- **not_related**: Inputs that are not related to Star Trek (general knowledge, other topics, etc.)

## Dataset Structure

Each example in the dataset follows this JSON format:

{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "What is the capital of France?", "label": "not_related"}

### Fields

- **input** (string): The text input/question to be classified
- **label** (string): The classification label, either `"related"` or `"not_related"`

## Dataset Statistics

- **Total Examples**: 5,000
- **Format**: JSONL (JSON Lines)
- **Task**: Binary Text Classification
- **Labels**:
  - `related`: Star Trek-related content
  - `not_related`: Non-Star Trek content

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load from Hugging Face Hub
dataset = load_dataset("geoffmunn/star-trek-guard-dataset")

# Or load from local JSONL file
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")
```

### Example Usage in Training

This dataset is designed to be used with the Hugging Face Transformers library for fine-tuning sequence classification models. Here's a basic example:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load dataset
dataset = load_dataset("json", data_files="star_trek_guard_dataset.jsonl")["train"]

# Map labels to IDs
LABEL2ID = {"not_related": 0, "related": 1}
ID2LABEL = {0: "not_related", 1: "related"}

dataset = dataset.map(lambda x: {"labels": LABEL2ID[x["label"]]})

# Split into train/test
dataset = dataset.train_test_split(test_size=0.1)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen3-4B",
    num_labels=2,
    id2label=ID2LABEL,
    label2id=LABEL2ID,
    trust_remote_code=True
)

# Tokenize
def tokenize_function(examples):
    return tokenizer(
        examples["input"],
        truncation=True,
        padding="max_length",
        max_length=512,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["input", "label"]
)
```

For a complete training script, see the reference implementation in `train_star_trek_guard.py`.
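
As a rough sketch, the tokenized splits could then be fed to the Hugging Face `Trainer` with the hyperparameters recommended below; `train_star_trek_guard.py` remains the authoritative version, and `output_dir` here is a placeholder:

```python
from transformers import Trainer, TrainingArguments

# Hyperparameters mirror the recommendations in this card
training_args = TrainingArguments(
    output_dir="star-trek-guard",  # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    eval_strategy="epoch",  # named evaluation_strategy in older transformers releases
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)
trainer.train()
```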

## Use Cases

### 1. Content Moderation for Star Trek Chatbots

This dataset enables training guard models that can filter user inputs before they reach a Star Trek-specific AI assistant. Only Star Trek-related queries are allowed through, ensuring the assistant stays on-topic.
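
As an illustration, a checkpoint fine-tuned on this dataset could gate individual messages along these lines (the model path below is hypothetical):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical path to a checkpoint fine-tuned on this dataset
MODEL_PATH = "path/to/star-trek-guard-model"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)
model.eval()

def is_star_trek_related(message: str) -> bool:
    """Return True if the guard model labels the message as 'related'."""
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(-1).item()] == "related"

print(is_star_trek_related("What is the Prime Directive?"))    # expected: True
print(is_star_trek_related("What is the capital of France?"))  # expected: False
```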

### 2. API-Based Moderation

The fine-tuned model can be deployed as a moderation API endpoint:

```python
# Example API endpoint (see star_trek_api_server.py for the full implementation);
# tokenizer, model, and ID2LABEL are assumed to be set up as shown earlier
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/moderate', methods=['POST'])
def moderate():
    data = request.json
    message = data.get('message', '')

    # Classify the message
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    predicted_label = ID2LABEL[outputs.logits.argmax().item()]

    # Return moderation result
    risk_level = "Safe" if predicted_label == "related" else "Unsafe"
    return jsonify({
        'risk_level': risk_level,
        'predicted_label': predicted_label,
        'confidence': float(torch.softmax(outputs.logits, dim=-1).max())
    })
```
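
Once such a server is running, a client can exercise the endpoint with a plain POST request; the localhost URL and port here are assumptions:

```python
import requests

# Assumes the Flask server above is running locally on its default port
resp = requests.post(
    "http://localhost:5000/api/moderate",
    json={"message": "Who portrayed Spock in Star Trek?"},
)
print(resp.json())  # e.g. {'risk_level': 'Safe', 'predicted_label': 'related', ...}
```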

### 3. Real-Time Chat Filtering

The guard model can be integrated into chat interfaces to provide real-time moderation, blocking non-Star Trek queries before they're sent to the LLM. See `star_trek_chat.html` for a complete implementation example.

## Model Training Recommendations

Based on the reference training script, the recommended hyperparameters are as follows (a configuration sketch follows the list):

- **Base Model**: Qwen/Qwen3-4B
- **Learning Rate**: 2e-4
- **Batch Size**: 2 (with gradient accumulation of 16)
- **Epochs**: 3
- **Max Length**: 512 tokens
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
  - `r=16`
  - `lora_alpha=32`
  - `lora_dropout=0.05`
  - Target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
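
With the `peft` library, that LoRA setup might look roughly like this; the use of `peft` and of `task_type="SEQ_CLS"` are assumptions, while the numeric values come from the list above:

```python
from peft import LoraConfig, get_peft_model

# LoRA settings taken from the recommendations above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="SEQ_CLS",  # sequence classification head
)

# Wrap the base model so only the adapter weights are trained
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```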

## Dataset Examples

### Related Examples

{"input": "What is the role of James T. Kirk in Star Trek?", "label": "related"}
{"input": "Who portrayed Spock in Star Trek?", "label": "related"}
{"input": "What is the Prime Directive in Star Trek?", "label": "related"}
{"input": "How does a warp drive work?", "label": "related"}
{"input": "What is the 49th Rule of Acquisition?", "label": "related"}

### Not Related Examples

{"input": "What is the capital of France?", "label": "not_related"}
{"input": "What is 2 + 2?", "label": "not_related"}
{"input": "Is the sifaka endangered?", "label": "not_related"}
{"input": "When was baseball first played?", "label": "not_related"}
{"input": "How many employees does Spotify have?", "label": "not_related"}

## Label Mapping

The dataset uses the following label mapping for model training:

  • "not_related" → Class ID 0
  • "related" → Class ID 1

In the context of content moderation:

- `related` = Safe (Star Trek-related content, allowed)
- `not_related` = Unsafe (non-Star Trek content, blocked)

## Citation

If you use this dataset in your research or project, please cite it appropriately:

```bibtex
@dataset{star_trek_guard_dataset,
  title={Star Trek Guard Dataset},
  author={Geoff Munn},
  year={2025},
  url={https://huggingface.co/datasets/geoffmunn/star-trek-guard-dataset}
}
```

## License

Apache 2.0

## Acknowledgments

This dataset was created for training guard models to ensure Star Trek AI assistants remain focused on Star Trek-related content, improving user experience and maintaining topic relevance.

## Related Resources

- **Training Script**: See `train_star_trek_guard.py` for a complete fine-tuning implementation
- **API Server**: See `star_trek_api_server.py` for deployment as a moderation API
- **Chat Interface**: See `star_trek_chat.html` for integration into a web-based chat application