# Kant-Qwen-1.5B (LoRA)
Qwen2.5-1.5B fine-tuned with LoRA on the tarnava/kant_qa dataset to answer questions in the persona of Immanuel Kant.
## Training
- Dataset: tarnava/kant_qa (3873 examples)
- Base: Qwen/Qwen2.5-1.5B
- LoRA: r=64, 3 epochs (see the sketch below)
- Final loss: 0.21
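
For reference, here is a minimal sketch of how an equivalent LoRA setup could be defined with `peft`. Only the rank (r=64), the base checkpoint, the dataset, and the 3-epoch budget come from this card; `lora_alpha`, `lora_dropout`, and every other hyperparameter below are illustrative assumptions, and the training loop itself is not shown.

```python
# Sketch of a comparable LoRA configuration (not the exact training recipe).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

# Rank 64 as reported above; alpha and dropout are assumed values.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()

# Instruction-style QA dataset used for fine-tuning.
dataset = load_dataset("tarnava/kant_qa")

# The 3-epoch fine-tuning run would then be driven by a standard
# Hugging Face Trainer / SFTTrainer (omitted here).
```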
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B", device_map="auto")
model = PeftModel.from_pretrained(model, "modular-ai/qwen")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")

def ask_kant(q):
    # Same instruction format used during fine-tuning.
    prompt = f"### Instruction: You are Immanuel Kant.\n\n### Input: {q}\n\n### Response:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=300)
    # Keep only the generated answer after the response marker.
    return tokenizer.decode(output[0], skip_special_tokens=True).split("### Response:")[-1].strip()

print(ask_kant("What is freedom?"))
```
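
If you prefer a standalone checkpoint that does not require `peft` at inference time, the adapter weights can be folded into the base model. A minimal sketch, assuming a hypothetical output directory `kant-qwen-1.5b-merged`:

```python
# Optional: merge the LoRA adapter into the base weights for standalone use.
merged = model.merge_and_unload()
merged.save_pretrained("kant-qwen-1.5b-merged")      # illustrative path
tokenizer.save_pretrained("kant-qwen-1.5b-merged")
```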