# Qwen Poetry Fine-tuned Model

This model is a fine-tuned version of Qwen/Qwen3-0.6B on the merve/poetry dataset.

## Model Description

This model has been fine-tuned to generate poetry and other creative text, with an emphasis on poetic language and structure.

## Training Details

- **Base model:** Qwen/Qwen3-0.6B
- **Dataset:** merve/poetry
- **Training epochs:** 3
- **Batch size:** 4
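The settings above can be expressed as a `transformers` `TrainingArguments` configuration. This is a minimal sketch: only the epoch count and batch size come from this card; the output directory, learning rate, and logging interval are illustrative assumptions, not values from the actual training run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen-poetry-finetuned",  # assumed output path
    num_train_epochs=3,                  # from the card
    per_device_train_batch_size=4,       # from the card
    learning_rate=2e-5,                  # assumed; not stated in the card
    logging_steps=50,                    # assumed
)
```

This object would then be passed to a `Trainer` along with the model, tokenizer, and the merve/poetry dataset.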

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("your-username/qwen-poetry-finetuned")
model = AutoModelForCausalLM.from_pretrained("your-username/qwen-poetry-finetuned")

# Generate a poem; sampling tends to give more varied poetic output
prompt = "Write a poem about"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```