---
library_name: peft
base_model: LSX-UniWue/LLaMmlein_1B_prerelease
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: LLaMmlein_1b_chat_all
    results: []
datasets:
  - LSX-UniWue/Guanako
  - FreedomIntelligence/sharegpt-deutsch
  - FreedomIntelligence/alpaca-gpt4-deutsch
language:
  - de
license: other
---

# LLäMmlein 1B Chat


While the base versions of our LLäMmlein are quite good, our chat versions are research demonstrations and are not ready to be used in settings where close instruction following is necessary. Please check the paper for more details.

This is a chat adapter for the German TinyLlama 1B language model. Find more details on our page and in our preprint! We also merged the adapter and converted it to GGUF here.
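If you prefer the merged GGUF version, it should run on any llama.cpp-compatible runtime. A minimal sketch using llama-cpp-python, assuming you have downloaded a GGUF file from the repository linked above (the local filename below is hypothetical):

```python
from llama_cpp import Llama

# load the merged GGUF checkpoint (hypothetical filename; use the file
# you downloaded from the GGUF repository linked above)
llm = Llama(model_path="LLaMmlein_1B_chat.gguf", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Na wie geht's?"}],
    max_tokens=300,
)
print(response["choices"][0]["message"]["content"])
```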

## Run it

```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.manual_seed(42)

# script config
base_model_name = "LSX-UniWue/LLaMmlein_1B_prerelease"
chat_adapter_name = "LSX-UniWue/LLaMmlein_1B_chat_selected"
device = "cuda"  # or mps

# chat history
messages = [
    {
        "role": "user",
        "content": """Na wie geht's?""",
    },
]

# load model
config = PeftConfig.from_pretrained(chat_adapter_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.bfloat16,
    device_map=device,
)
# the chat tokenizer uses an extended vocabulary, so resize the
# embedding matrix before attaching the adapter
base_model.resize_token_embeddings(32064)
model = PeftModel.from_pretrained(base_model, chat_adapter_name)
tokenizer = AutoTokenizer.from_pretrained(chat_adapter_name)

# encode message in "ChatML" format
chat = tokenizer.apply_chat_template(
    messages,
    return_tensors="pt",
    add_generation_prompt=True,
).to(device)

# generate response
print(
    tokenizer.decode(
        model.generate(
            chat,
            max_new_tokens=300,
            pad_token_id=tokenizer.pad_token_id,
            eos_token_id=tokenizer.eos_token_id,
        )[0],
        skip_special_tokens=False,
    )
)
```
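For reference, the merged checkpoint mentioned above can be reproduced by folding the adapter weights into the base model, assuming a LoRA-style adapter that PEFT can merge; a minimal sketch, with a hypothetical output directory:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained(
    "LSX-UniWue/LLaMmlein_1B_prerelease",
    torch_dtype=torch.bfloat16,
)
# match the chat tokenizer's extended vocabulary before attaching the adapter
base_model.resize_token_embeddings(32064)
model = PeftModel.from_pretrained(base_model, "LSX-UniWue/LLaMmlein_1B_chat_selected")

# fold the adapter weights into the base model and drop the adapter wrappers
merged = model.merge_and_unload()
merged.save_pretrained("LLaMmlein_1B_chat_merged")  # hypothetical output path
tokenizer = AutoTokenizer.from_pretrained("LSX-UniWue/LLaMmlein_1B_chat_selected")
tokenizer.save_pretrained("LLaMmlein_1B_chat_merged")
```

The saved directory can then be converted to GGUF with llama.cpp's conversion tooling (e.g. its convert_hf_to_gguf.py script).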

## Data Take Down