---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: 8b9a995c-4231-4a1e-8b63-7b8f7ceede42
  results: []
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 73459b7bf5308c89_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/73459b7bf5308c89_train_data.json
  type:
    field_instruction: title
    field_output: rating
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/8b9a995c-4231-4a1e-8b63-7b8f7ceede42
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 2.0
max_memory:
  0: 70GB
max_steps: 100
micro_batch_size: 4
mlflow_experiment_name: /tmp/73459b7bf5308c89_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
  adam_beta1: 0.9
  adam_beta2: 0.95
  adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8b9a995c-4231-4a1e-8b63-7b8f7ceede42
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8b9a995c-4231-4a1e-8b63-7b8f7ceede42
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null

```
</details><br>
# 8b9a995c-4231-4a1e-8b63-7b8f7ceede42

This model is a LoRA adapter fine-tuned from [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the `73459b7bf5308c89_train_data.json` dataset (see the Axolotl config above).
It achieves the following results on the evaluation set:
- Loss: 3.7474

## Model description

This is a rank-16 LoRA adapter (alpha 32, dropout 0.1, applied to all linear layers via `lora_target_linear: true`) trained on top of Qwen2.5-1.5B with Axolotl. Beyond that, more information is needed; a rough PEFT equivalent of the LoRA settings is sketched below.
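
The sketch below is not emitted by Axolotl; it is an approximation for readers who want the adapter shape in plain PEFT. The `target_modules` list is an assumption: `lora_target_linear: true` targets all linear layers, which for Qwen2.5 are typically the projections listed.

```python
from peft import LoraConfig

# Approximate PEFT equivalent of the Axolotl LoRA settings above (a sketch).
# target_modules is an assumption; `lora_target_linear: true` resolves to
# all linear layers, typically these projections for Qwen2.5.
lora_config = LoraConfig(
    r=16,              # lora_r
    lora_alpha=32,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```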

## Intended uses & limitations

More information needed
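
Pending a fuller description, the snippet below is a minimal inference sketch, assuming the adapter weights were pushed to the Hub under the `hub_model_id` from the config ("Example title" is a placeholder prompt):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen2.5-1.5B"
adapter_id = "nttx/8b9a995c-4231-4a1e-8b63-7b8f7ceede42"  # hub_model_id (assumed live)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Training used format '{instruction}' with `title` as the instruction field,
# so a bare title is the natural prompt shape.
inputs = tokenizer("Example title", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```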

## Training and evaluation data

Training used a custom JSON dataset (`73459b7bf5308c89_train_data.json`), with the `title` field as the instruction and the `rating` field as the target output; 5% of the data (`val_set_size: 0.05`) was held out as the evaluation set. The provenance and contents of the dataset are otherwise undocumented.
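
As a rough illustration of the prompt/completion pairing implied by the `type` block in the config (the record shown is hypothetical; the actual file contents and any chat-template wrapping are not documented here):

```python
# Hypothetical record; the real contents of 73459b7bf5308c89_train_data.json
# are not documented in this card.
record = {"title": "Example product title", "rating": "4"}

# Per the config: format '{instruction}', field_instruction=title,
# field_output=rating, empty system prompt.
prompt = "{instruction}".format(instruction=record["title"])
completion = record["rating"]
print(repr(prompt), "->", repr(completion))
```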

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.95) and epsilon=1e-05, set via `optim_args` (overriding the AdamW defaults of betas=(0.9, 0.999) and epsilon=1e-08)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 100
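
The total train batch size is the micro-batch size times the gradient-accumulation steps: 4 × 8 = 32 (one device, per the single `max_memory` entry in the config).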

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2985        | 0.0029 | 1    | 5.3030          |
| 4.8329        | 0.1471 | 50   | 4.2333          |
| 4.0707        | 0.2942 | 100  | 3.7474          |
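
Training stopped at the configured `max_steps: 100`, about 0.29 of an epoch, so the nominal `num_epochs: 3` was never reached.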

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1