woodwardmw committed
Commit c025d7b · verified · 1 Parent(s): 47f6786

Model save

Files changed (1)
  1. README.md +9 -44
README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5873
+ - Loss: 0.5270
  
  ## Model description
  
@@ -44,53 +44,18 @@ The following hyperparameters were used during training:
  - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 200
- - training_steps: 40000
+ - num_epochs: 300.0
  - mixed_precision_training: Native AMP
  
  ### Training results
  
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:---------:|:-----:|:---------------:|
- | 0.5226 | 55.5797 | 1000 | 0.5344 |
- | 0.4834 | 111.1159 | 2000 | 0.5321 |
- | 0.473 | 166.6957 | 3000 | 0.5291 |
- | 0.4451 | 222.2319 | 4000 | 0.5277 |
- | 0.4319 | 277.8116 | 5000 | 0.5404 |
- | 0.4307 | 333.3478 | 6000 | 0.5415 |
- | 0.4142 | 388.9275 | 7000 | 0.5451 |
- | 0.4137 | 444.4638 | 8000 | 0.5432 |
- | 0.3918 | 500.0 | 9000 | 0.5481 |
- | 0.3879 | 555.5797 | 10000 | 0.5462 |
- | 0.3847 | 611.1159 | 11000 | 0.5511 |
- | 0.3781 | 666.6957 | 12000 | 0.5572 |
- | 0.3771 | 722.2319 | 13000 | 0.5556 |
- | 0.3723 | 777.8116 | 14000 | 0.5640 |
- | 0.3646 | 833.3478 | 15000 | 0.5602 |
- | 0.3583 | 888.9275 | 16000 | 0.5703 |
- | 0.3506 | 944.4638 | 17000 | 0.5703 |
- | 0.3562 | 1000.0 | 18000 | 0.5678 |
- | 0.349 | 1055.5797 | 19000 | 0.5721 |
- | 0.3547 | 1111.1159 | 20000 | 0.5700 |
- | 0.3482 | 1166.6957 | 21000 | 0.5751 |
- | 0.35 | 1222.2319 | 22000 | 0.5767 |
- | 0.3375 | 1277.8116 | 23000 | 0.5755 |
- | 0.342 | 1333.3478 | 24000 | 0.5778 |
- | 0.3539 | 1388.9275 | 25000 | 0.5785 |
- | 0.3435 | 1444.4638 | 26000 | 0.5814 |
- | 0.3396 | 1500.0 | 27000 | 0.5808 |
- | 0.3421 | 1555.5797 | 28000 | 0.5809 |
- | 0.332 | 1611.1159 | 29000 | 0.5827 |
- | 0.3252 | 1666.6957 | 30000 | 0.5823 |
- | 0.3308 | 1722.2319 | 31000 | 0.5854 |
- | 0.3281 | 1777.8116 | 32000 | 0.5840 |
- | 0.3292 | 1833.3478 | 33000 | 0.5871 |
- | 0.3292 | 1888.9275 | 34000 | 0.5886 |
- | 0.3214 | 1944.4638 | 35000 | 0.5881 |
- | 0.3261 | 2000.0 | 36000 | 0.5864 |
- | 0.3275 | 2055.5797 | 37000 | 0.5886 |
- | 0.3192 | 2111.1159 | 38000 | 0.5883 |
- | 0.3258 | 2166.6957 | 39000 | 0.5878 |
- | 0.3227 | 2222.2319 | 40000 | 0.5873 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:--------:|:----:|:---------------:|
+ | 0.5256 | 55.5797 | 1000 | 0.5491 |
+ | 0.4805 | 111.1159 | 2000 | 0.5235 |
+ | 0.4692 | 166.6957 | 3000 | 0.5226 |
+ | 0.445 | 222.2319 | 4000 | 0.5257 |
+ | 0.4348 | 277.8116 | 5000 | 0.5270 |
  
  
  ### Framework versions
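
The changed hunk swaps the fixed `training_steps: 40000` budget for `num_epochs: 300.0` and leaves the rest of the schedule untouched. As a rough illustration only (not the author's actual training script), the hyperparameters listed in the card could be expressed with `transformers`' `Seq2SeqTrainingArguments` roughly as below; learning rate, batch size, and `output_dir` are not visible in this diff and are shown as labeled placeholders.

```python
# Hedged sketch: maps the hyperparameters listed in the model card onto
# Seq2SeqTrainingArguments. Placeholder/assumed values are marked; this is
# not the configuration actually used for this commit.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts-finetuned",  # placeholder path
    learning_rate=1e-5,                   # assumption: not shown in this hunk
    per_device_train_batch_size=4,        # assumption: not shown in this hunk
    optim="adamw_torch_fused",            # OptimizerNames.ADAMW_TORCH_FUSED
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=200,
    num_train_epochs=300.0,               # new setting; replaces training_steps=40000
    fp16=True,                            # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=1000,                      # matches the 1000-step eval cadence in the results table
)
```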