SRA-BERT for Hate Speech Detection
Supervised Rational Attention (SRA) fine-tuned BERT model for explainable hate speech detection.
Abstract
The opaque nature of deep learning models presents significant challenges for the ethical deployment of hate speech detection systems. To address this limitation, we introduce Supervised Rational Attention (SRA), a framework that explicitly aligns model attention with human rationales, improving both interpretability and fairness in hate speech classification. SRA integrates a supervised attention mechanism into transformer-based classifiers, optimizing a joint objective that combines the standard classification loss with an alignment term that penalizes the discrepancy between attention weights and human-annotated rationales. Empirically, SRA achieves 2.4× better explainability than current baselines and produces token-level explanations that are more faithful and human-aligned.
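Schematically (notation introduced here for illustration, not taken from the paper), the joint objective has the form

$$
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{CE}}(\hat{y},\, y) \;+\; \alpha \cdot \mathrm{MSE}\!\left(A_{\ell,h},\, r\right)
$$

where $A_{\ell,h}$ are the supervised attention head's weights, $r$ is the human rationale mask, and $\alpha$ is the alignment weight (see SRA Configuration below).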
📄 Paper: "Aligning Attention with Human Rationales for Self-Explaining Hate Speech Detection"
🎯 Accepted: AAAI-26 AI Alignment Track
🔗 Demo: Live Demo on HuggingFace Spaces
Model Description
This model is a bert-base-uncased classifier fine-tuned on the HateXplain dataset with Supervised Rational Attention (SRA) — a training method that aligns the model's attention weights with human-annotated rationales.
Key Innovation
Standard BERT attention weights don't reliably explain predictions. SRA supervises a specific attention head (Layer 8, Head 7) to attend to the same tokens that human annotators identified as evidence for their labeling decisions.
Result: Attention weights that actually explain the model's decisions, achieving 2.4× better alignment with human rationales while maintaining classification performance.
Labels
| Label | ID | Description |
|---|---|---|
| Normal | 0 | Non-hateful, non-offensive content |
| Offensive | 1 | Offensive but not hate speech |
| Hate Speech | 2 | Hate speech targeting protected groups |
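If you prefer to read the label mapping programmatically rather than hard-coding the table, the standard `transformers` config lookup works; whether the checkpoint's `id2label` stores these exact strings is an assumption to verify:

```python
from transformers import AutoConfig

# Inspect the label mapping stored in the checkpoint (expected to match the table above).
config = AutoConfig.from_pretrained("bragee/sra-hate-speech-bert")
print(config.id2label)  # e.g. {0: "Normal", 1: "Offensive", 2: "Hate Speech"}
```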
Usage
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_name = "bragee/sra-hate-speech-bert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I love spending time with my friends"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

probs = torch.softmax(outputs.logits, dim=1)
pred = torch.argmax(probs, dim=1).item()

labels = ["Normal", "Offensive", "Hate Speech"]
print(f"Prediction: {labels[pred]} ({probs[0][pred]:.1%})")
```
Extracting Attention-Based Explanations
The key feature of this model is interpretable attention. Extract attention from the supervised head:
```python
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Extract attention from Layer 8, Head 7 (the supervised head)
attention = outputs.attentions[8][:, 7, :, :]  # (batch, seq, seq)
attention_weights = attention.mean(dim=1)      # average over query positions

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, weight in zip(tokens, attention_weights[0]):
    if token not in ["[CLS]", "[SEP]", "[PAD]"]:
        print(f"{token}: {weight:.3f}")
```
Training Details
Training Data
- Dataset: HateXplain
- Size: ~20,000 posts from Twitter and Gab
- Annotations: 3-class labels + token-level rationales from multiple annotators
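HateXplain is published on the Hugging Face Hub; the sketch below shows how it can be loaded with the `datasets` library. The dataset id and field names (`post_tokens`, `rationales`, `annotators`) reflect the public dataset card and should be verified against the version you actually download:

```python
from datasets import load_dataset

# Script-based dataset; newer `datasets` versions may require a parquet mirror or trust_remote_code=True.
ds = load_dataset("hatexplain")
example = ds["train"][0]
print(example["post_tokens"])  # word-level tokens of the post
print(example["rationales"])   # one 0/1 token mask per annotator (field names assumed)
```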
Training Procedure
- Base model: `bert-base-uncased`
- Epochs: 5
- Batch size: 16
- Learning rate: 2e-5
- Max sequence length: 128
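A hypothetical reproduction of these hyperparameters with the Hugging Face `Trainer` could start from the arguments below; the output directory is a placeholder, and the SRA attention supervision itself requires a custom loss (sketched in the next section):

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; output_dir is a placeholder, other settings are library defaults.
training_args = TrainingArguments(
    output_dir="sra-bert-hatexplain",
    num_train_epochs=5,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
```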
SRA Configuration
- Supervised attention head: Layer 8, Head 7
- Attention loss weight (α): 10.0
- Loss function: Cross-entropy + α × MSE(attention, rationale)
The MSE loss is only computed for offensive/hate speech examples where human rationales exist.
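A minimal sketch of this combined objective, assuming the supervised head's attention has already been reduced to one weight per token and that `rationale` is a token-level 0/1 mask (tensor names are illustrative, not the paper's code):

```python
import torch
import torch.nn.functional as F

def sra_loss(logits, labels, head_attention, rationale, has_rationale, alpha=10.0):
    """Cross-entropy + alpha * MSE(attention, rationale), with the MSE applied only
    to examples that carry a human rationale (offensive / hate speech).

    logits:         (batch, 3) classifier outputs
    head_attention: (batch, seq) per-token weights from the supervised head
    rationale:      (batch, seq) human rationale mask
    has_rationale:  (batch,) 1.0 where a rationale exists, else 0.0
    """
    ce = F.cross_entropy(logits, labels)

    # Per-example MSE between attention and rationale, zeroed where no rationale exists.
    mse = ((head_attention - rationale) ** 2).mean(dim=-1)
    align = (mse * has_rationale).sum() / has_rationale.sum().clamp(min=1.0)

    return ce + alpha * align
```

The `alpha=10.0` default mirrors the attention loss weight listed above.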
Evaluation
Evaluated on the HateXplain test set:
| Metric | Score |
|---|---|
| Macro F1 | 0.68 |
| Accuracy | 0.70 |
| Attention-Rationale Alignment | 2.4× baseline |
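For reference, macro F1 and accuracy over held-out predictions can be computed with scikit-learn; the arrays below are placeholders, not model outputs:

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder label ids (0 = Normal, 1 = Offensive, 2 = Hate Speech).
y_true = [0, 2, 1, 2]
y_pred = [0, 2, 1, 1]
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
print("Accuracy:", accuracy_score(y_true, y_pred))
```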
Intended Use
- Primary use: Research on explainable AI and hate speech detection
- Demo/educational: Understanding how attention-based explanations work
- Content moderation research: Studying interpretable classifiers
Limitations
- Trained on English Twitter/Gab data — may not generalize to other platforms or languages
- Attention explanations are post-hoc interpretations, not guaranteed causal explanations
- Should not be used as sole arbiter for content moderation decisions
Ethical Considerations
This model is intended for research purposes. Hate speech detection systems can have false positives that disproportionately affect marginalized groups. Human review should always be part of any content moderation pipeline.
Citation
```bibtex
@article{eilertsen2025sra,
  title={Aligning Attention with Human Rationales for Self-Explaining Hate Speech Detection},
  author={Eilertsen, Brage and Bjørgfinsdóttir, Røskva and Vargas, Francielle and Ramezani-Kebrya, Ali},
  journal={arXiv preprint arXiv:2511.07065},
  year={2025},
  note={Accepted at AAAI-26}
}
```
Model Card Contact
For questions about this model, please open an issue on the model repository.