# BERT Fine-Tuned for Mental Health Classification
This model is a fine-tuned bert-base-uncased transformer trained to classify text inputs into seven mental health categories. It is designed to support emotional analysis in mental health-related applications by detecting signs of psychological distress in user-generated content.
## Try It Out
You can interact with the model in real time via this Streamlit-powered Hugging Face Space:
👉 Live Demo on Hugging Face Spaces
## Datasets Used
- `sai1908/Mental_Health_Condition_Classification`: Reddit posts from mental health forums; ~80,000 cleaned entries retained from the original 100,000
- `kamruzzaman-asif/reddit-mental-health-classification`: additional Reddit mental health posts to improve coverage and diversity
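For reference, a minimal sketch of loading these datasets with the `datasets` library; the split name is an assumption and the column schemas may differ from the actual repositories:

```python
from datasets import load_dataset

# Pull both source datasets from the Hugging Face Hub.
# The "train" split name is an assumption; inspect each repository for the exact schema.
primary = load_dataset("sai1908/Mental_Health_Condition_Classification", split="train")
extra = load_dataset("kamruzzaman-asif/reddit-mental-health-classification", split="train")

print(primary)
print(extra)
```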
## Model Overview
- Base Model: `bert-base-uncased`
- Type: Multi-class text classification (7 labels)
- Framework: Hugging Face Transformers
- Training Method: Trainer API (PyTorch backend)
## Target Labels
- Anxiety
- Depression
- Bipolar
- Normal
- Personality Disorder
- Stress
- Suicidal
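The mapping between class indices and these labels ships with the checkpoint; a minimal sketch of reading it follows (the index order shown in the comment is an assumption):

```python
from transformers import AutoConfig

# The id2label mapping is stored in the checkpoint's config.json.
config = AutoConfig.from_pretrained("Elite13/bert-finetuned-mental-health")
print(config.id2label)
# Expected to cover the seven classes listed above, e.g.
# {0: "Anxiety", 1: "Bipolar", 2: "Depression", 3: "Normal",
#  4: "Personality Disorder", 5: "Stress", 6: "Suicidal"}  # order is an assumption
```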
## Training Configuration
| Parameter | Value |
|---|---|
| Epochs | 3 |
| Learning Rate | 2e-5 |
| Batch Size | 16 |
| Max Token Length | 256 |
| Optimizer | AdamW |
| Hardware | 2x NVIDIA Tesla T4 GPUs |
| Total FLOPs | 25,605,736,040,851,200 |
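A hedged sketch of how a run with these hyperparameters could be wired up via the Trainer API. The placeholder dataset, column names, and index choices are assumptions; this is not the original training script:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=7)

def tokenize(batch):
    # The "text" column name is an assumption; the 256-token cap matches the table above.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

# Tiny placeholder dataset; the actual run used the Reddit datasets listed earlier.
raw = Dataset.from_dict({
    "text": ["I can't sleep and I'm constantly worried.", "Had a great day at work today."],
    "label": [0, 3],  # indices into the 7-class label set (order is an assumption)
})
encoded = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-mental-health",
    num_train_epochs=3,               # Epochs
    learning_rate=2e-5,               # Learning Rate
    per_device_train_batch_size=16,   # Batch Size (whether per device or global is not stated)
)

# AdamW is the Trainer's default optimizer, matching the table above.
trainer = Trainer(model=model, args=args, train_dataset=encoded)
trainer.train()
```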
## Evaluation Metrics
| Metric | Value |
|---|---|
| Accuracy | 0.9656 |
| Validation Loss | 0.1513 |
| Training Loss | 0.0483 |
| Samples/sec | 65.354 |
| Training Time | ~1.65 hrs |
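The accuracy above is reported against the validation split. The card does not include the evaluation code; a minimal sketch of a `compute_metrics` callback that would produce such a figure when passed to the Trainer:

```python
import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels) for the evaluation set.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": accuracy_score(labels, preds)}

# Used as Trainer(..., compute_metrics=compute_metrics) so that
# trainer.evaluate() reports eval_accuracy alongside eval_loss.
```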
## Example Inference
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
classifier = pipeline("text-classification", model="Elite13/bert-finetuned-mental-health")

text = "I'm tired of everything. Nothing makes sense anymore."
result = classifier(text)
print(result)  # e.g. [{'label': ..., 'score': ...}]
```
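By default the pipeline returns only the highest-scoring label with its confidence. To inspect the scores for all seven classes, recent Transformers versions accept `top_k=None`, as in the sketch below:

```python
from transformers import pipeline

# top_k=None returns a score for every label rather than only the top prediction.
classifier = pipeline(
    "text-classification",
    model="Elite13/bert-finetuned-mental-health",
    top_k=None,
)
scores = classifier("I'm tired of everything. Nothing makes sense anymore.")
print(scores)
```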