# DistilBERT Fine-Tuned on IMDB Sentiment Dataset

This model is a fine-tuned version of distilbert-base-uncased on the IMDB movie reviews dataset for binary sentiment classification (positive/negative).

## Model Details

  • Base model: distilbert-base-uncased
  • Dataset: IMDB (25k train + 25k test reviews)
  • Parameters: ~67M (F32, stored as safetensors)
  • Labels:
    • 0 → negative
    • 1 → positive

## How to Use

```python
from transformers import pipeline

clf = pipeline("text-classification", model="BhuviMohan/distilbert-imdb-finetuned")
print(clf("The movie was absolutely fantastic!"))
```
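If you prefer to work with the model and tokenizer directly instead of the pipeline, a minimal sketch (assuming the checkpoint's config uses the 0 → negative, 1 → positive label mapping listed above):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned checkpoint from the Hub
model_id = "BhuviMohan/distilbert-imdb-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The movie was absolutely fantastic!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Label id 0 -> negative, 1 -> positive (see Model Details above)
probs = torch.softmax(logits, dim=-1)[0]
pred = int(logits.argmax(dim=-1).item())
print(pred, probs[pred].item())
```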

## Training Procedure

- Epochs: 1
- Batch size: 8
- Learning rate: 2e-5
- Optimizer: AdamW
- Hardware: CPU (Windows)

## Results (Evaluation)

- **Accuracy:** ~1.0 on a 2k-review evaluation subset



## Uploading the Model to the Hugging Face Hub

Make sure your Hugging Face token was created with **Write** access. Then log in (if you aren't already):

```bash
huggingface-cli login
```
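After logging in, the fine-tuned model and tokenizer can be pushed from Python; a minimal sketch (the local checkpoint directory name `distilbert-imdb-finetuned` is an assumption):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the locally saved fine-tuned checkpoint (directory name is an assumption)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-imdb-finetuned")
tokenizer = AutoTokenizer.from_pretrained("distilbert-imdb-finetuned")

# Push both to the Hub repo; requires a token with Write access
model.push_to_hub("BhuviMohan/distilbert-imdb-finetuned")
tokenizer.push_to_hub("BhuviMohan/distilbert-imdb-finetuned")
```

Pushing the tokenizer alongside the model ensures the `pipeline` usage shown above works out of the box for downstream users.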