bert-base-uncased-finetuned-lora-mrpc

This model is a LoRA fine-tuned version of bert-base-uncased on the MRPC task of the GLUE dataset. It achieves the following results on the evaluation set:

  • Accuracy: 0.8603
  • F1: 0.8998
  • Trainable parameters: 1,181,186
  • Total parameters: 110,664,964
  • Percentage of trainable parameters: 1.07%
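Below is a minimal sketch (not the card author's script) of how the adapter can be loaded on top of the base model with peft and transformers, and how the parameter counts above can be reproduced. The repository id matches this model card; num_labels=2 is assumed from the MRPC paraphrase/non-paraphrase labels.

```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

# Load the base model, then attach the LoRA adapter from this repository.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model = PeftModel.from_pretrained(
    base_model, "rambodazimi/bert-base-uncased-finetuned-LoRA-MRPC"
)

# Reproduce the trainable / total parameter counts reported above.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} | total: {total} | {100 * trainable / total:.2f}%")
```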

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-04
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • weight_decay: 0.01
  • rank: 32
  • lora_alpha: 32
  • lora_dropout: 0.05
  • num_epochs: 5
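As a rough guide, the following sketch shows how these hyperparameters could map onto a peft LoraConfig plus a transformers Trainer run. It is an assumption-laden reconstruction, not the original training script: the target modules (peft's defaults for BERT), the tokenization step, and the output directory name are all illustrative choices.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, TaskType, get_peft_model
from datasets import load_dataset

# Wrap the base model with a LoRA adapter using the hyperparameters listed above.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
lora_config = LoraConfig(
    r=32,                  # rank
    lora_alpha=32,
    lora_dropout=0.05,
    task_type=TaskType.SEQ_CLS,
)
model = get_peft_model(model, lora_config)

# Tokenize the GLUE MRPC sentence pairs (preprocessing details are assumed).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("glue", "mrpc")
encoded = dataset.map(
    lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True),
    batched=True,
)

# Training arguments mirroring the reported hyperparameters.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned-lora-mrpc",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    weight_decay=0.01,
    num_train_epochs=5,
    seed=42,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```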