ViViT_wlasl_2000_20ep_coR

This model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2256
  • Accuracy: 0.3437
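For quick sanity-checking, here is a minimal inference sketch. It assumes this checkpoint (Shawon16/ViViT_wlasl_2000_20ep_coR) ships the standard Transformers ViViT processor and classification head; the random clip below is a stand-in for 32 frames sampled from a real video, and frame sampling must match whatever was used during fine-tuning:

```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

ckpt = "Shawon16/ViViT_wlasl_2000_20ep_coR"
processor = VivitImageProcessor.from_pretrained(ckpt)
model = VivitForVideoClassification.from_pretrained(ckpt)
model.eval()

# ViViT-B/16x2 consumes 32 RGB frames; replace this random clip with
# 32 frames sampled from an actual sign-language video.
video = list(np.random.randint(0, 256, (32, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```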

Model description

ViViT (Video Vision Transformer) video classification model with 90.2M parameters (float32, Safetensors), fine-tuned from the Kinetics-400 pre-trained checkpoint google/vivit-b-16x2-kinetics400.

Intended uses & limitations

More information needed

Training and evaluation data

Not documented. The model name suggests WLASL-2000, the full 2,000-gloss split of the Word-Level American Sign Language (WLASL) video dataset, but this is not confirmed in the card.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 35720
  • mixed_precision_training: Native AMP
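As a reproduction aid, the hyperparameters above map onto transformers.TrainingArguments roughly as follows. This is a sketch under the assumption that the standard Trainer was used; output_dir is illustrative:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ViViT_wlasl_2000_20ep_coR",  # illustrative, not confirmed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # total train batch size: 2 x 4 = 8
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=35720,  # 20 epochs at 1,786 optimizer steps per epoch
    fp16=True,  # native AMP mixed precision
)
```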

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 30.7198       | 0.05  | 1786  | 7.5765          | 0.0023   |
| 29.0292       | 1.05  | 3572  | 6.7339          | 0.0255   |
| 24.0941       | 2.05  | 5358  | 5.7016          | 0.0889   |
| 18.9730       | 3.05  | 7145  | 4.8953          | 0.1604   |
| 14.3207       | 4.05  | 8931  | 4.2779          | 0.2188   |
| 10.2288       | 5.05  | 10717 | 3.8242          | 0.2561   |
| 6.8987        | 6.05  | 12503 | 3.4816          | 0.2990   |
| 4.4195        | 7.05  | 14290 | 3.3354          | 0.3105   |
| 2.8005        | 8.05  | 16076 | 3.2289          | 0.3212   |
| 1.8191        | 9.05  | 17862 | 3.1795          | 0.3200   |
| 1.2778        | 10.05 | 19648 | 3.1637          | 0.3292   |
| 1.0009        | 11.05 | 21435 | 3.1523          | 0.3299   |
| 0.8082        | 12.05 | 23221 | 3.1508          | 0.3292   |
| 0.7047        | 13.05 | 25007 | 3.1626          | 0.3276   |
| 0.6152        | 14.05 | 26793 | 3.1711          | 0.3327   |
| 0.5450        | 15.05 | 28580 | 3.2040          | 0.3394   |
| 0.4952        | 16.05 | 30366 | 3.1936          | 0.3424   |
| 0.4463        | 17.05 | 32152 | 3.2133          | 0.3435   |
| 0.4030        | 18.05 | 33938 | 3.2240          | 0.3432   |
| 0.3506        | 19.05 | 35720 | 3.2256          | 0.3437   |

Framework versions

  • Transformers 4.46.1
  • PyTorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.1