---
library_name: transformers
tags: [text-generation, distilgpt2, fine-tuned, restaurant-reviews]
---

# Fine-Tuned DistilGPT2 on Restaurant Reviews

This is a fine-tuned version of [DistilGPT2](https://huggingface.co/distilgpt2), trained on a small dataset of restaurant reviews. Given a text prompt, the model generates human-like review completions.

---

## Model Details

### Model Description

This model is a lightweight causal language model based on DistilGPT2, a distilled version of GPT-2. It was fine-tuned on a small subset of restaurant reviews to demonstrate how to fine-tune and upload a model with Hugging Face and Google Colab on limited resources.

- **Developed by:** Sameer Jadaun (fine-tuning)
- **Shared by:** Sameer2407
- **Model type:** Causal language model (decoder-only transformer)
- **Language(s):** English
- **License:** Apache 2.0 (inherited from DistilGPT2)
- **Finetuned from model:** [distilgpt2](https://huggingface.co/distilgpt2)
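
The description above mentions fine-tuning and uploading from Google Colab. Below is a minimal sketch of what the upload step might look like, assuming the fine-tuned weights were first saved to a local directory; the local path and repo id are placeholders, not the actual values used:

```python
from huggingface_hub import notebook_login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Authenticate inside Colab (prompts for a Hugging Face access token).
notebook_login()

# Load the fine-tuned weights from a local directory (placeholder path).
model = AutoModelForCausalLM.from_pretrained("./fine-tuned-distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("./fine-tuned-distilgpt2")

# Push both the model and the tokenizer to a Hub repo (placeholder repo id).
model.push_to_hub("your-username/fine-tuned-distilgpt2")
tokenizer.push_to_hub("your-username/fine-tuned-distilgpt2")
```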

---

## Uses

### Direct Use

You can use this model to generate restaurant reviews or autocomplete a review sentence given a starting prompt like:

> "The food was"

### Downstream Use

This model can be further fine-tuned on a larger corpus of restaurant, product, or service-related reviews to make it more robust and production-ready.
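
Below is a rough sketch of what such further fine-tuning could look like with the `Trainer` API; the data file, column name, and hyperparameters are illustrative assumptions, not the setup used to train this model:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical corpus: a plain-text file with one review per line.
dataset = load_dataset("text", data_files={"train": "reviews.txt"})

tokenizer = AutoTokenizer.from_pretrained("your-username/fine-tuned-distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained("your-username/fine-tuned-distilgpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False selects the standard causal language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="distilgpt2-reviews-continued",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```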

### Out-of-Scope Use

- Not suitable for factual QA tasks.
- Should not be used to generate harmful, toxic, or biased content.

---

## Bias, Risks, and Limitations

This model was trained on a very small dataset of restaurant reviews and may reflect language biases or produce poor-quality generations due to undertraining. It is intended for educational and demo purposes only.

### Recommendations

- Avoid using this model in production.
- Fine-tune it further on a more diverse and balanced dataset.

---

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The repo id below is a placeholder; replace it with the actual Hub repo name.
tokenizer = AutoTokenizer.from_pretrained("your-username/fine-tuned-distilgpt2")
model = AutoModelForCausalLM.from_pretrained("your-username/fine-tuned-distilgpt2")

prompt = "The restaurant was"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 50 tokens; do_sample=True produces a different completion on each run.
outputs = model.generate(**inputs, max_length=50, do_sample=True)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```