# Qwen1.5-1.8B-Chat – Fitness & Personal Helper (QLoRA Adapter)
- **Owner:** @mouhamadNIbrahim
- **Base model:** Qwen/Qwen1.5-1.8B-Chat
- **Format:** PEFT LoRA adapter (trained via QLoRA / 4-bit NF4)
- **Use case:** concise personal helper & evidence-minded fitness coach
> ⚠️ **Note:** This repo contains only the adapter weights. Load it on top of the base model. For serving engines that don't support PEFT (e.g., vLLM), first merge the adapter (see below).

## TL;DR – Quickstart

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

BASE = "Qwen/Qwen1.5-1.8B-Chat"
ADAPTER = "mouhamadNIbrahim/qwen15-fitness-qlora-adapter"

tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)  # adapter weights are placed on the base model's devices

messages = [
    {"role": "system", "content": "You are a concise, friendly personal helper and fitness coach. Be evidence-based, practical, and safe."},
    {"role": "user", "content": "Give me a 4-day full-body plan and macros for a 69 kg male."},
]

text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)
print(tok.decode(outputs[0], skip_special_tokens=True))
```
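
If you also want to load the base model in 4-bit for inference (mirroring the QLoRA training setup), here is a minimal sketch, assuming `bitsandbytes` is installed and a bf16-capable GPU:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch

BASE = "Qwen/Qwen1.5-1.8B-Chat"
ADAPTER = "mouhamadNIbrahim/qwen15-fitness-qlora-adapter"

# NF4 + double quantization, matching the training recipe described below
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption: use torch.float16 on GPUs without bf16
)

tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto")
model = PeftModel.from_pretrained(model, ADAPTER)
# Generation then works exactly as in the quickstart above.
```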

## What's inside

- Fine-tuned on curated fitness Q&A plus helper behaviors (chat format) to give short, actionable responses.
- Keeps Qwen's ChatML style; the training loss is computed only on assistant tokens (completion-only loss).
- LoRA target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `up_proj`, `gate_proj`, `down_proj`.
- Quantization: bitsandbytes 4-bit NF4 with double quantization (QLoRA recipe).

## Files

- `adapter_model.safetensors` – LoRA weights (LFS)
- `adapter_config.json` – PEFT/LoRA config
- (optional) tokenizer files – not required for the adapter; using the base model tokenizer is recommended
- `chat_template.jinja` – ChatML template reference

## Merge to a single model (optional)

Some runtimes (e.g., vLLM, TGI) need merged weights.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

BASE = "Qwen/Qwen1.5-1.8B-Chat"
ADPT = "mouhamadNIbrahim/qwen15-fitness-qlora-adapter"

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16, device_map="auto")
merged = PeftModel.from_pretrained(base, ADPT).merge_and_unload()
merged.save_pretrained("./qwen15-fitness-merged", safe_serialization=True)
tok.save_pretrained("./qwen15-fitness-merged")
```
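
Once merged, the checkpoint can be served like any regular model. A minimal serving sketch, assuming vLLM is installed and the merged model was saved to `./qwen15-fitness-merged` as above:

```python
from vllm import LLM, SamplingParams

# Serve the merged checkpoint, so no PEFT support is needed at inference time
llm = LLM(model="./qwen15-fitness-merged", dtype="bfloat16")
params = SamplingParams(max_tokens=300, temperature=0.7)

# Qwen1.5 uses the ChatML format, so the prompt is built accordingly
prompt = (
    "<|im_start|>system\nYou are a concise, friendly personal helper and fitness coach.<|im_end|>\n"
    "<|im_start|>user\nGive me a 4-day full-body plan for a beginner.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```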

## Training details

- Base: `Qwen/Qwen1.5-1.8B-Chat`
- Method: QLoRA (PEFT/LoRA on a 4-bit base)
- Tokenizer: base model tokenizer + `apply_chat_template`
- Loss: completion-only (non-assistant tokens masked)
- LoRA: `r=16`, `alpha=32`, `dropout=0.05`, `bias="none"`
- Targets: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `up_proj`, `gate_proj`, `down_proj`
- Optimizer: `paged_adamw_8bit`
- Scheduler: cosine; 3% warmup
- Precision: `bf16` if available (fallback `fp16`)
- Sequence length: up to 2048 tokens
- Batching: `per_device_train_batch_size=2`, `gradient_accumulation_steps=8`
- Other: gradient checkpointing enabled
- Hardware: single GPU (tested on RTX 4090 / A4000 / 3090 class)

Reproducibility: see `training_args.bin` and `trainer_state.json` in your training runs. The dataset is not released and is marked as private.

## Evaluation (placeholder)

No formal benchmarks included. Manually validated on:
- Workout planning prompts (4-day/5-day splits, full-body, push/pull/legs)
- Macro guidance and calorie targets (Mifflin–St Jeor, TDEE assumptions); see the sketch after this list
- Lifestyle coaching quick answers
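
For reference, a minimal sketch of the Mifflin–St Jeor estimate those answers lean on; the 25-year age and the activity multiplier are illustrative assumptions, not values baked into the model:

```python
def mifflin_st_jeor_bmr(weight_kg: float, height_cm: float, age: int, sex: str = "male") -> float:
    """Resting energy expenditure (kcal/day) via the Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex == "male" else base - 161

def tdee(bmr: float, activity_factor: float = 1.5) -> float:
    """Total daily energy expenditure; ~1.4-1.6 for roughly 8k steps/day is an assumption."""
    return bmr * activity_factor

# Example from the prompts above: 69 kg, 173 cm male (age 25 assumed for illustration)
bmr = mifflin_st_jeor_bmr(69, 173, 25)
print(round(bmr), round(tdee(bmr)))  # ~1651 kcal BMR, ~2477 kcal maintenance
```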
If you run quantitative evals (e.g., custom fitness QA set), please share results via PR/issues.

## Intended use & limitations

- Intended use: consumer fitness guidance, scheduling/helpful assistant tasks, basic nutrition coaching.
- Not for: medical diagnosis, treatment, or individualized clinical nutrition. The model can be confident yet wrong.
- Use responsibly: always verify critical advice; consult a professional for injuries/health conditions.

## Safety

The fine-tuning data includes examples that request context (age, weight, goals), caution about injuries, and recommend professional help when necessary. Errors or bias can still occur.

## Example prompts

- "Design a 4-day full-body plan for a beginner with access to dumbbells only."
- "I'm 69 kg, 173 cm, ~8k steps/day. Estimate maintenance calories and macros for lean bulk."
- "Give me 3 high-protein Lebanese meal ideas under 700 kcal each."

## Training recipe (script sketch)

Uses `transformers`, `peft`, `trl`, and `bitsandbytes`.

```python
# Key ingredients used in training
from transformers import BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "gate_proj", "down_proj"],
)

# Data formatted with tokenizer.apply_chat_template([...])
# and collated with DataCollatorForCompletionOnlyLM(
#     response_template="<|im_start|>assistant\n", tokenizer=tokenizer
# )
```
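
A minimal sketch of how these pieces could be wired together, using `TrainingArguments` values taken from the Training details above; the output path, learning rate, dataset column name, and the older `SFTTrainer` keyword signature (trl versions that still ship `DataCollatorForCompletionOnlyLM`) are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer, DataCollatorForCompletionOnlyLM

BASE = "Qwen/Qwen1.5-1.8B-Chat"
tokenizer = AutoTokenizer.from_pretrained(BASE)

# 4-bit NF4 base model, with bnb_config and lora as defined in the block above
model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb_config, device_map="auto")

args = TrainingArguments(
    output_dir="./qwen15-fitness-qlora",      # assumed path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,                       # assumption; not stated in this card
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    optim="paged_adamw_8bit",
    bf16=torch.cuda.is_bf16_supported(),
    fp16=not torch.cuda.is_bf16_supported(),
    gradient_checkpointing=True,
    logging_steps=10,
)

collator = DataCollatorForCompletionOnlyLM(
    response_template="<|im_start|>assistant\n", tokenizer=tokenizer
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,              # chat data rendered with apply_chat_template; not released
    dataset_text_field="text",                # assumed column name
    max_seq_length=2048,
    peft_config=lora,
    tokenizer=tokenizer,
    data_collator=collator,
)
trainer.train()
```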

## How to cite

```bibtex
@software{qwen15_fitness_qlora_adapter,
  author = {Ibrahim, Mouhamad},
  title  = {Qwen1.5-1.8B-Chat – Fitness & Personal Helper (QLoRA Adapter)},
  year   = {2025},
  url    = {https://huggingface.co/mouhamadNIbrahim/qwen15-fitness-qlora-adapter}
}
```

## Acknowledgements

- QLoRA: Dettmers et al. (2023)
- PEFT: Mangrulkar et al. (2022)
- Qwen team for the base model and ChatML template

## License

- Adapter released under Apache-2.0. The base model's original license also applies when combining weights.