---
library_name: transformers
tags:
  - chatbot
  - philosopher
  - fine-tuned
  - gpt2
license: mit
language: en
---

# Philosopher Bot 🧘‍♂️🗣️

A fine-tuned GPT-2-based model designed to speak like a thoughtful philosopher. This is my initial experimental version and is not yet optimized for production use.

## Model Description

- Base model: GPT-2 (small)
- Fine-tuned on: philosophical Q&A pairs (custom dataset; see the formatting sketch below)
- Goal: to simulate the reasoning and reflective tone of a philosopher.
- Limitations: the model may generate abstract, verbose, or occasionally irrelevant answers. It is not intended for factual information retrieval or logical precision.
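
The training data format is not documented in this card; the snippet below is only a minimal sketch of one common way Q&A pairs can be serialized into plain text for causal-LM fine-tuning. The `Question:`/`Answer:` template, the example pairs, and the file name are assumptions, not the actual dataset.

```python
# Hypothetical example: turning philosophical Q&A pairs into plain-text
# training samples for a causal language model. The template below is an
# assumption, not the format actually used to train this model.
qa_pairs = [
    {
        "question": "Can we ever truly know another mind?",
        "answer": "Only through inference; the inner life of others remains a horizon we approach but never reach.",
    },
    {
        "question": "Is freedom the absence of constraint?",
        "answer": "Perhaps freedom is less the absence of constraint than the capacity to choose which constraints to accept.",
    },
]

with open("philosophy_qa.txt", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        # One sample per pair: prompt and completion joined by a simple template.
        f.write(f"Question: {pair['question']}\nAnswer: {pair['answer']}\n\n")
```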

## Intended Use

- Direct Use: fun philosophical chatbot or role-playing bot for conversation (a quick `pipeline`-based example is sketched below).
- Out-of-scope: not suitable for therapy, factual Q&A, or decision-making.
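
For a quick interactive try-out, the Hugging Face `pipeline` API can wrap the model. The repository id below is the same placeholder used in the usage example, and the generation settings are only illustrative.

```python
from transformers import pipeline

# Quick conversational try-out via the text-generation pipeline.
# "your-username/philosopher-bot" is a placeholder repository id.
philosopher = pipeline("text-generation", model="your-username/philosopher-bot")

prompt = "Q: What is virtue?\nA:"
result = philosopher(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```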

## How to Use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Replace "your-username/philosopher-bot" with the actual repository id on the Hub.
tokenizer = AutoTokenizer.from_pretrained("your-username/philosopher-bot")
model = AutoModelForCausalLM.from_pretrained("your-username/philosopher-bot")

input_text = "What is the meaning of life?"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
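
Calling `generate` without sampling defaults to greedy decoding, which can be repetitive; sampling usually gives a more varied, reflective tone. The hyperparameters below are illustrative starting points, not values tuned for this model.

```python
# Sampling-based generation: temperature and nucleus (top-p) sampling tend to
# produce more varied, "philosophical" phrasings than greedy decoding.
# These hyperparameter values are illustrative, not tuned for this model.
outputs = model.generate(
    inputs,
    max_new_tokens=120,
    do_sample=True,
    temperature=0.9,
    top_p=0.95,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```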