# Model Card for jwywoo/burnfit-lora-v2

## Model Details

### Model Description
Generates a monthly weight increase rate plan for a given strength training program.

- Developed by: [More Information Needed]
- Model type: LoRA adapter
- Language(s) (NLP): Korean
- Finetuned from model: EleutherAI/polyglot-ko-1.3b
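If you only want to confirm which base model the adapter targets before downloading any weights, you can read the adapter configuration on its own. Below is a minimal sketch using PEFT's `PeftConfig`; the values in the comments are expectations based on the model details above, not guaranteed output.

```python
from peft import PeftConfig

# Load only the adapter configuration (no base model weights are downloaded).
config = PeftConfig.from_pretrained("jwywoo/burnfit-lora-v2")

print(config.base_model_name_or_path)  # expected: EleutherAI/polyglot-ko-1.3b
print(config.peft_type)                # expected: PeftType.LORA
```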
## Usage Example

```python
import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# 🧠 Model + checkpoint config
base_model_name = "EleutherAI/polyglot-ko-1.3b"
lora_model_path = "jwywoo/burnfit-lora-v2"
# ✅ Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(lora_model_path)
tokenizer.add_special_tokens({"additional_special_tokens": ["<END>"]})
tokenizer.eos_token = "<END>"
# ✅ Load base model + LoRA adapter
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
base_model = AutoModelForCausalLM.from_pretrained(base_model_name).to(device)
base_model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base_model, lora_model_path).to(device)
model.eval()
# Tip: in Colab, put the model-loading code above and the generation code below in separate cells.
# Prompt example (Korean). Glosses: 질문 = question, 운동경험 = exercise experience, 성별 = gender,
# 운동목표 = exercise goal, 답변 = answer; 벤치프레스/스쿼트/데드리프트/오버헤드프레스 =
# bench press / squat / deadlift / overhead press.
prompt = """## 질문
- 5/3/1 프로그램
- **운동경험**: 2
- **성별**: 여성
- **운동목표**: 3
- **1RM**:
- 벤치프레스: 60
- 스쿼트: 70
- 데드리프트: 75
- 오버헤드프레스: 40
## 답변
"""
# ⏱️ Measure time
start = time.time()
inputs = tokenizer(prompt, return_tensors="pt").to(device)
inputs.pop("token_type_ids", None)  # GPT-NeoX-based models do not accept token_type_ids in generate()
eos_token_id = tokenizer.convert_tokens_to_ids("<END>")
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    eos_token_id=eos_token_id,
    do_sample=False,        # greedy decoding; temperature/top_p only take effect if do_sample=True
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1
)
# Decode and trim
generated = tokenizer.decode(outputs[0])
generated = generated.split("<END>")[0].strip() + "\n<END>"
elapsed = time.time() - start
print(f"\n⏱️ Inference Time: {elapsed:.2f} seconds")
print("\n🧠 Generated Output:\n")
print(generated)
```
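If you prefer to ship a single standalone checkpoint instead of loading the base model and the adapter separately, PEFT can fold the LoRA weights into the base model. This is a minimal sketch that reuses the `model` and `tokenizer` objects from the example above; the output directory name is only an illustration.

```python
# Merge the LoRA weights into the base model and drop the adapter wrappers.
merged_model = model.merge_and_unload()

# Save a standalone checkpoint (directory name is arbitrary).
merged_model.save_pretrained("burnfit-lora-v2-merged")
tokenizer.save_pretrained("burnfit-lora-v2-merged")
```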
### Example Input

```
## 질문
- 5/3/1프로그램
- **운동경험**: 3
- **성별**: 남성
- **운동목표**: 2
- **1RM**:
- 벤치프레스: 60kg
- 스쿼트: 70kg
- 데드리프트: 70kg
- 오버헤드프레스: 70kg
## 답변
```
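The prompt follows a fixed layout, so you can also build it from user input instead of editing the string by hand. Below is a minimal sketch with a hypothetical `build_prompt` helper; the field names mirror the examples above (질문 = question, 답변 = answer) and reflect an assumption about the training format, not a documented API.

```python
def build_prompt(program: str, experience: int, gender: str, goal: int, one_rm: dict) -> str:
    # Hypothetical helper: reproduces the question/answer layout shown in the examples.
    lines = [
        "## 질문",
        f"- {program}",
        f"- **운동경험**: {experience}",
        f"- **성별**: {gender}",
        f"- **운동목표**: {goal}",
        "- **1RM**:",
    ]
    lines += [f"- {exercise}: {weight}" for exercise, weight in one_rm.items()]
    lines += ["## 답변", ""]
    return "\n".join(lines)

prompt = build_prompt(
    program="5/3/1 프로그램",
    experience=2,
    gender="여성",
    goal=3,
    one_rm={"벤치프레스": 60, "스쿼트": 70, "데드리프트": 75, "오버헤드프레스": 40},
)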
### Example Output

```
## 질문
- 5/3/1프로그램
- **운동경험**: 3
- **성별**: 남성
- **운동목표**: 2
- **1RM**:
- 벤치프레스: 60kg
- 스쿼트: 70kg
- 데드리프트: 70kg
- 오버헤드프레스: 70kg
## 답변
{
  "program": "5/3/1프로그램",
  "init_weight_rate": "55",
  "increase_rate_week": "10",
  "increase_rate_set": "10",
  "deloading_rate": "30",
  "weekly_weight_increase_plan": "Week1\nSET1 55%x5\nSET2 65%x5\nSET3 75%x5+\n\nWeek2\nSET1 65%x3\nSET2 75%x3\nSET3 85%x3+\n\nWeek3\nSET1 75%x5\nSET2 80%x3\nSET3 85%x1+\n\nWeek4\nSET1 30%x5\nSET2 40%x5\nSET3 50%x5\n"
}
```
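The answer section is a JSON object, so it can be extracted from the generated text and parsed for downstream use. This is a minimal sketch that assumes the model reproduces the prompt followed by a single JSON object, as in the example output above, and that `generated` is the decoded string from the usage example; real outputs may need more defensive handling.

```python
import json

def parse_plan(generated: str) -> dict:
    # Take everything after the answer marker and isolate the JSON object.
    answer = generated.split("## 답변", 1)[-1]
    start, end = answer.find("{"), answer.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in the generated text")
    return json.loads(answer[start:end + 1])

plan = parse_plan(generated)
print(plan["weekly_weight_increase_plan"])
```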
### Framework versions

- PEFT 0.15.2