gemma2b_peft_safe

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method, which produces each output parameter as a weighted sum of the corresponding parameters of the input models (a short sketch of this computation follows the configuration below).

Models Merged

The following models were included in the merge:

- google/gemma-2-2b-it (weight 1.0)
- /kaggle/working/peft-harmful-finetune-model (weight -1.0)

Configuration

The following YAML configuration was used to produce this model:


models:
  - model: google/gemma-2-2b-it
    parameters:
      weight: 1.0
  - model: /kaggle/working/peft-harmful-finetune-model
    parameters:
      weight: -1.0
merge_method: linear
dtype: bfloat16
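
As an illustration of the linear method, the sketch below shows the core computation: every parameter tensor of the merged model is a weighted sum of the corresponding tensors from the input models. This is a simplified stand-in and not mergekit's implementation (it ignores options such as weight normalization and tokenizer handling); in practice a configuration like the one above is typically run with mergekit's mergekit-yaml command-line tool.

# Minimal sketch of a linear merge: merged[k] = sum_i w_i * state_dict_i[k].
# Illustration only, not mergekit's actual code.
import torch

def linear_merge(state_dicts, weights, out_dtype=torch.bfloat16):
    merged = {}
    for key in state_dicts[0]:
        # Accumulate the weighted sum in float32 for numerical stability,
        # then cast to the output dtype (the config above uses bfloat16).
        acc = sum(w * sd[key].to(torch.float32) for sd, w in zip(state_dicts, weights))
        merged[key] = acc.to(out_dtype)
    return merged

# With the configuration above, this would be called with weights [1.0, -1.0]
# on the state dicts of google/gemma-2-2b-it and the local PEFT finetune.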
Model size: ~3B parameters · Tensor type: BF16 · Safetensors
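
A minimal, hypothetical usage example with the transformers library, assuming the merged repository ships the Gemma-2 chat template; the prompt and generation settings are illustrative, not recommendations from the model author:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarthak-nik/gemma2b_peft_safe"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format the prompt with the tokenizer's chat template (Gemma-2 style).
messages = [{"role": "user", "content": "Summarize what a linear model merge does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))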

Model tree for sarthak-nik/gemma2b_peft_safe

Base model: google/gemma-2-2b (this merge derives from its instruction-tuned variant, google/gemma-2-2b-it)
