VideoMAE_Kinetics_wlasl_100_longtail_200_signer

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.4501
  • Accuracy: 0.6982

Model description

More information needed

Intended uses & limitations

More information needed
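While the card does not yet document usage, a minimal, untested inference sketch is possible, assuming the standard `transformers` VideoMAE classes and that the model expects 16 RGB frames per clip, as the base VideoMAE-Kinetics checkpoint does (the checkpoint id is taken from this repository; the `sample_frame_indices` helper is illustrative, not part of the model's API):

```python
def sample_frame_indices(clip_len, total_frames):
    """Uniformly sample `clip_len` frame indices from a video of `total_frames` frames."""
    step = total_frames / clip_len
    return [min(int(i * step), total_frames - 1) for i in range(clip_len)]

def classify(video_frames):
    """Classify a list of HxWx3 uint8 RGB frames.

    Requires `transformers` and `torch`; downloads the checkpoint on first call.
    """
    import torch
    from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

    ckpt = "Shawon16/VideoMAE_Kinetics_wlasl_100_longtail_200_signer"
    processor = VideoMAEImageProcessor.from_pretrained(ckpt)
    model = VideoMAEForVideoClassification.from_pretrained(ckpt)

    # VideoMAE models are trained on 16-frame clips.
    idx = sample_frame_indices(16, len(video_frames))
    inputs = processor([video_frames[i] for i in idx], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[int(logits.argmax(-1))]
```

To use it, decode a video into a list of frames (e.g. with `decord` or `opencv`) and call `classify(frames)`; the returned string is the predicted gloss label.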

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 36000
  • mixed_precision_training: Native AMP
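For reference, the effective batch size and the shape of the linear-warmup schedule implied by these settings can be reproduced with plain arithmetic (a sketch of the scheduler's shape under these hyperparameters, not the exact Trainer internals):

```python
def linear_warmup_lr(step, base_lr=5e-05, total_steps=36000, warmup_ratio=0.1):
    """LR at a given step: linear warmup to base_lr, then linear decay to 0,
    matching the `linear` lr_scheduler_type with warmup_ratio=0.1."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3600 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective batch size: per-device train batch * gradient accumulation steps.
effective_batch = 2 * 4  # = the total_train_batch_size of 8 reported above
```

The peak learning rate of 5e-05 is thus reached at step 3600 and decays linearly to 0 at step 36000.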

Training results

Training Loss Epoch Step Validation Loss Accuracy
18.5248 0.005 180 4.6134 0.0148
18.4259 1.0050 360 4.5845 0.0355
18.0926 2.0050 540 4.4842 0.0444
16.9306 3.0050 721 4.2167 0.1302
15.4312 4.005 901 3.8332 0.2426
13.4099 5.0050 1081 3.4658 0.3195
11.4151 6.0050 1261 3.0617 0.4260
9.318 7.0050 1442 2.7168 0.5059
7.5007 8.005 1622 2.3707 0.5858
5.7294 9.0050 1802 2.0365 0.6450
4.2445 10.0050 1982 1.8353 0.6479
3.0008 11.0050 2163 1.5879 0.6686
2.1126 12.005 2343 1.4705 0.6805
1.4048 13.0050 2523 1.3585 0.6657
0.934 14.0050 2703 1.2650 0.7041
0.6172 15.0050 2884 1.1449 0.7367
0.4755 16.005 3064 1.1490 0.6805
0.3145 17.0050 3244 1.1220 0.7041
0.243 18.0050 3424 1.1176 0.7101
0.2305 19.0050 3605 1.1077 0.7071
0.1504 20.005 3785 1.1920 0.6982
0.1342 21.0050 3965 1.2274 0.7012
0.0744 22.0050 4145 1.2504 0.7101
0.1404 23.0050 4326 1.2245 0.7130
0.1019 24.005 4506 1.3000 0.7101
0.0746 25.0050 4686 1.2738 0.7041
0.1144 26.0050 4866 1.3395 0.6953
0.1238 27.0050 5047 1.1683 0.7071
0.0989 28.005 5227 1.3287 0.7071
0.097 29.0050 5407 1.5545 0.6775
0.1004 30.0050 5587 1.3614 0.7041
0.1062 31.0050 5768 1.5166 0.6923
0.1617 32.005 5948 1.3035 0.6923
0.1235 33.0050 6128 1.5919 0.6568
0.1395 34.0050 6308 1.4211 0.6746
0.109 35.0050 6489 1.4501 0.6982

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.1
Model size: 86.3M parameters (F32 tensors, Safetensors format)