|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- text-classification |
|
|
- zero-shot-classification |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- new_zealand |
|
|
- qwen |
|
|
- Qwen3Guard |
|
|
pretty_name: New Zealand Classification |
|
|
size_categories: |
|
|
- 1K<n<10K
|
|
--- |
|
|
|
|
|
# New Zealand Guard Dataset |
|
|
|
|
|
A binary classification dataset for training guard models that identify whether a user input is related to New Zealand. It is designed for fine-tuning language models to act as content filters, ensuring that only New Zealand-related queries are processed by specialised New Zealand AI assistants.
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
The New Zealand Guard Dataset contains **5,000 examples** of questions and statements labeled as either: |
|
|
- **`related`**: Inputs that are related to New Zealand (cities, landmarks, famous people, etc.)
|
|
- **`not_related`**: Inputs that are not related to New Zealand (general knowledge, other topics, etc.) |
|
|
|
|
|
### Dataset Structure |
|
|
|
|
|
Each example in the dataset follows this JSON format: |
|
|
|
|
|
```json
{"input": "What did Kupe do?", "label": "related"}
{"input": "What is the capital of France?", "label": "not_related"}
```
|
|
|
|
|
### Fields |
|
|
|
|
|
- **`input`** (string): The text input/question to be classified |
|
|
- **`label`** (string): The classification label, either `"related"` or `"not_related"` |
|
|
|
|
|
## Dataset Statistics |
|
|
|
|
|
- **Total Examples**: 5,000 |
|
|
- **Format**: JSONL (JSON Lines) |
|
|
- **Task**: Binary Text Classification |
|
|
- **Labels**: |
|
|
- `related`: New Zealand-related content |
|
|
- `not_related`: Non-New Zealand content |
|
|
|
|
|
## Usage |
|
|
|
|
|
### Loading the Dataset |
|
|
|
|
|
```python
from datasets import load_dataset

# Load from Hugging Face Hub
dataset = load_dataset("geoffmunn/new-zealand-guard-dataset")

# Or load from local JSONL file
dataset = load_dataset("json", data_files="new_zealand_guard_dataset.jsonl")
```
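Once loaded, a quick sanity check is worthwhile. The snippet below is a minimal sketch using the local JSONL file name from this card; it prints one example and the label distribution. Note that `label` is stored as a plain string rather than a `ClassLabel` feature.

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("json", data_files="new_zealand_guard_dataset.jsonl")["train"]

print(ds[0])        # e.g. {"input": "What did Kupe do?", "label": "related"}
print(ds.features)  # both fields are plain strings

# Label distribution across the dataset
print(Counter(ds["label"]))
```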
|
|
|
|
|
### Example Usage in Training |
|
|
|
|
|
This dataset is designed to be used with the Hugging Face Transformers library for fine-tuning sequence classification models. Here's a basic example: |
|
|
|
|
|
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load dataset
dataset = load_dataset("json", data_files="new_zealand_guard_dataset.jsonl")["train"]

# Map labels to IDs
LABEL2ID = {"not_related": 0, "related": 1}
ID2LABEL = {0: "not_related", 1: "related"}

dataset = dataset.map(lambda x: {"labels": LABEL2ID[x["label"]]})

# Split into train/test
dataset = dataset.train_test_split(test_size=0.1)

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "Qwen/Qwen3-4B",
    num_labels=2,
    id2label=ID2LABEL,
    label2id=LABEL2ID,
    trust_remote_code=True,
)

# Tokenize
def tokenize_function(examples):
    return tokenizer(
        examples["input"],
        truncation=True,
        padding="max_length",
        max_length=512,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["input", "label"],
)
```
|
|
|
|
|
For a complete training script, see the reference implementation in `train_new_zealand_guard.py`. |
|
|
|
|
|
## Use Cases |
|
|
|
|
|
### 1. Content Moderation for New Zealand Chatbots |
|
|
|
|
|
This dataset enables training guard models that can filter user inputs before they reach a New Zealand-specific AI assistant. Only New Zealand-related queries are allowed through, ensuring the assistant stays on-topic. |
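As an illustration, a trained guard can gate queries with a few lines of inference code. The snippet below is a minimal sketch: `./new-zealand-guard` is a hypothetical local path for the fine-tuned classifier, and the ID-to-label mapping comes from this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_DIR = "./new-zealand-guard"  # hypothetical path to the fine-tuned guard model
ID2LABEL = {0: "not_related", 1: "related"}

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def is_new_zealand_related(text: str) -> bool:
    """Return True if the guard classifies the text as New Zealand-related."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return ID2LABEL[logits.argmax(dim=-1).item()] == "related"

query = "What is the largest lake in New Zealand?"
if is_new_zealand_related(query):
    pass  # forward the query to the New Zealand assistant
else:
    pass  # block the query or return an on-topic reminder
```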
|
|
|
|
|
### 2. API-Based Moderation |
|
|
|
|
|
The fine-tuned model can be deployed as a moderation API endpoint: |
|
|
|
|
|
```python
# Example API endpoint (see new_zealand_api_server.py for the full implementation).
# Assumes `app` (a Flask app), `tokenizer`, `model` (the fine-tuned guard), and
# ID2LABEL are initialised elsewhere in the server.
import torch
from flask import request, jsonify

@app.route('/api/moderate', methods=['POST'])
def moderate():
    data = request.json
    message = data.get('message', '')

    # Classify the message
    inputs = tokenizer(message, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    predicted_label = ID2LABEL[outputs.logits.argmax().item()]

    # Return moderation result
    risk_level = "Safe" if predicted_label == "related" else "Unsafe"
    return jsonify({
        'risk_level': risk_level,
        'predicted_label': predicted_label,
        'confidence': float(torch.softmax(outputs.logits, dim=-1).max())
    })
```
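The endpoint can then be exercised with a simple POST request. This client-side sketch is hedged: the host and port are assumptions (the actual values depend on how `new_zealand_api_server.py` is configured), and the response keys match the handler above.

```python
import requests

# Host and port are assumptions; adjust to match your running server.
resp = requests.post(
    "http://localhost:5000/api/moderate",
    json={"message": "What is Kaikoura known for?"},
    timeout=10,
)
result = resp.json()
print(result["risk_level"], result["predicted_label"], result["confidence"])
```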
|
|
|
|
|
### 3. Real-Time Chat Filtering |
|
|
|
|
|
The guard model can be integrated into chat interfaces to provide real-time moderation, blocking non-New Zealand queries before they're sent to the LLM. See `new_zealand_chat.html` for a complete implementation example. |
|
|
|
|
|
## Model Training Recommendations |
|
|
|
|
|
Based on the reference training script, the recommended hyperparameters are listed below; a configuration sketch follows the list.
|
|
|
|
|
- **Base Model**: Qwen/Qwen3-4B |
|
|
- **Learning Rate**: 2e-4 |
|
|
- **Batch Size**: 2 (with 16 gradient accumulation steps)
|
|
- **Epochs**: 3 |
|
|
- **Max Length**: 512 tokens |
|
|
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation) |
|
|
- `r=16` |
|
|
- `lora_alpha=32` |
|
|
- `lora_dropout=0.05` |
|
|
- Target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]` |
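
The sketch below wires these values into `peft` and `transformers` objects. It is a minimal, hedged outline rather than the actual `train_new_zealand_guard.py` script: the `output_dir` and the reuse of `model` and `tokenized_dataset` from the earlier training example are assumptions.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import Trainer, TrainingArguments

# LoRA configuration matching the values listed above
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)  # `model` from the training example above

training_args = TrainingArguments(
    output_dir="new-zealand-guard",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)
trainer.train()
```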
|
|
|
|
|
## Dataset Examples |
|
|
|
|
|
### Related Examples |
|
|
|
|
|
```json
{"input": "What is Kaikoura known for?", "label": "related"}
{"input": "What tourist attractions are in Northland?", "label": "related"}
{"input": "What teams has Brendon McCullum played for?", "label": "related"}
{"input": "What is the largest lake in New Zealand?", "label": "related"}
{"input": "Who is Dame Kiri Te Kanawa?", "label": "related"}
```
|
|
|
|
|
### Not Related Examples |
|
|
|
|
|
```json
{"input": "What is the capital of France?", "label": "not_related"}
{"input": "What is 2 + 2?", "label": "not_related"}
{"input": "Is the sifaka endangered?", "label": "not_related"}
{"input": "When was baseball first played?", "label": "not_related"}
{"input": "How many employees does Spotify have?", "label": "not_related"}
```
|
|
|
|
|
## Label Mapping |
|
|
|
|
|
The dataset uses the following label mapping for model training: |
|
|
|
|
|
- `"not_related"` → Class ID `0` |
|
|
- `"related"` → Class ID `1` |
|
|
|
|
|
In the context of content moderation: |
|
|
- **`related`** = **Safe** (New Zealand-related content, allowed) |
|
|
- **`not_related`** = **Unsafe** (Non-New Zealand content, blocked) |
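
If you prefer to encode these IDs at the dataset level instead of the lambda mapping used in the training example, the string column can be cast to a `ClassLabel`. This is a minimal sketch; the order of `names` fixes the IDs shown above.

```python
from datasets import ClassLabel, load_dataset

ds = load_dataset("json", data_files="new_zealand_guard_dataset.jsonl")["train"]

# Index order fixes the IDs: "not_related" -> 0, "related" -> 1
ds = ds.cast_column("label", ClassLabel(names=["not_related", "related"]))

print(ds.features["label"])  # ClassLabel(names=['not_related', 'related'])
print(ds[0]["label"])        # integer class ID
```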
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this dataset in your research or project, please cite it appropriately: |
|
|
|
|
|
```bibtex |
|
|
@dataset{new_zealand_guard_dataset, |
|
|
title={New Zealand Guard Dataset}, |
|
|
author={Geoff Munn}, |
|
|
year={2025}, |
|
|
url={https://huggingface.co/datasets/geoffmunn/new-zealand-guard-dataset} |
|
|
} |
|
|
``` |
|
|
|
|
|
## License |
|
|
|
|
|
Apache 2.0 |
|
|
|
|
|
## Acknowledgments |
|
|
|
|
|
This dataset was created for training guard models to ensure New Zealand AI assistants remain focused on New Zealand-related content, improving user experience and maintaining topic relevance. |
|
|
|
|
|
## Related Resources |
|
|
|
|
|
- **Training Script**: See `train_new_zealand_guard.py` for a complete fine-tuning implementation |
|
|
- **API Server**: See `new_zealand_api_server.py` for deployment as a moderation API |
|
|
- **Chat Interface**: See `new_zealand_chat.html` for integration into a web-based chat application |