---
library_name: transformers
tags:
  - medical
datasets:
  - stefan-m-lenz/ICDOPS-QA-2024
language:
  - de
base_model:
  - Qwen/Qwen2.5-7B-Instruct-1M
---

# Model Card for stefan-m-lenz/Qwen-2.5-7B-ICDOPS-QA-2024

This model is a PEFT adapter (e.g., LoRA) for Qwen/Qwen2.5-7B-Instruct-1M, fine-tuned on the ICDOPS-QA-2024 dataset. For more information about the training, see the dataset card.

## Usage

Package prerequisites:

```sh
pip install transformers accelerate peft
```

Load the base model, apply the adapter, and run a test query:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

repo_id = "stefan-m-lenz/Qwen-2.5-7B-ICDOPS-QA-2024"

# Read the adapter configuration to find the base model.
config = PeftConfig.from_pretrained(repo_id)

# Load the base model and apply the PEFT adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Test input
test_input = """Welche ICD-10-Kodierung wird für die Tumordiagnose "Bronchialkarzinom, Hauptbronchus" verwendet? Antworte nur mit dem ICD-10 Code."""

# Wrap the question in the chat template and append the generation prompt.
input_str = tokenizer.apply_chat_template(
    [{"role": "user", "content": test_input}],
    tokenize=False,
    add_generation_prompt=True,
)

# Generate the response with greedy decoding; the sampling parameters are
# set to None to suppress warnings when do_sample=False.
inputs = tokenizer(input_str, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=7,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
    temperature=None,
    top_p=None,
    top_k=None,
)

# Strip the prompt tokens and decode only the newly generated answer.
generated_tokens = outputs[0, inputs["input_ids"].shape[1]:]
response = tokenizer.decode(generated_tokens, skip_special_tokens=True).strip()

print("Test Input:", test_input)
print("Model Response:", response)
```