End of training
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 license: apache-2.0
-base_model:
+base_model: binh230/mambaformer
 tags:
 - generated_from_trainer
 metrics:
@@ -16,18 +16,19 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/truonggiabjnh2003-fpt-university/Detect%20AI%20Generated%20Text/runs/
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/truonggiabjnh2003-fpt-university/Detect%20AI%20Generated%20Text/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/truonggiabjnh2003-fpt-university/Detect%20AI%20Generated%20Text/runs/z6i92ua2)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/truonggiabjnh2003-fpt-university/Detect%20AI%20Generated%20Text/runs/g6m1faks)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/truonggiabjnh2003-fpt-university/Detect%20AI%20Generated%20Text/runs/6afk83hw)
 # mambaformer
 
-This model is a fine-tuned version of [
+This model is a fine-tuned version of [binh230/mambaformer](https://huggingface.co/binh230/mambaformer) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- Accuracy: 0.
-- Precision: 0.
-- Recall: 0.
-- F1: 0.
-- Auroc: 0.
+- Loss: 0.4950
+- Accuracy: 0.7747
+- Precision: 0.8314
+- Recall: 0.7747
+- F1: 0.7647
+- Auroc: 0.9429
 
 ## Model description
 
@@ -62,11 +63,27 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Auroc |
 |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| 0.6026 | 0.0471 | 256 | 0.5927 | 0.6776 | 0.7982 | 0.6776 | 0.6414 | 0.9217 |
+| 0.3481 | 0.0941 | 512 | 0.4428 | 0.8167 | 0.8239 | 0.8167 | 0.8157 | 0.9015 |
+| 0.2434 | 0.1412 | 768 | 0.4749 | 0.7833 | 0.8043 | 0.7833 | 0.7795 | 0.8934 |
+| 0.1975 | 0.1882 | 1024 | 0.5786 | 0.7304 | 0.7949 | 0.7304 | 0.7149 | 0.8979 |
+| 0.1749 | 0.2353 | 1280 | 0.6214 | 0.7157 | 0.7952 | 0.7157 | 0.6952 | 0.9004 |
+| 0.1644 | 0.2824 | 1536 | 0.6323 | 0.7107 | 0.7984 | 0.7107 | 0.6877 | 0.9123 |
+| 0.1556 | 0.3294 | 1792 | 0.6491 | 0.7046 | 0.7990 | 0.7046 | 0.6793 | 0.9161 |
+| 0.1476 | 0.3765 | 2048 | 0.6989 | 0.6884 | 0.7955 | 0.6884 | 0.6573 | 0.9203 |
+| 0.1466 | 0.4235 | 2304 | 0.6633 | 0.7014 | 0.8016 | 0.7014 | 0.6744 | 0.9241 |
+| 0.1405 | 0.4706 | 2560 | 0.6076 | 0.7229 | 0.8071 | 0.7229 | 0.7026 | 0.9250 |
+| 0.1399 | 0.5177 | 2816 | 0.6221 | 0.7164 | 0.8042 | 0.7164 | 0.6943 | 0.9248 |
+| 0.135 | 0.5647 | 3072 | 0.6249 | 0.7150 | 0.8064 | 0.7150 | 0.6920 | 0.9290 |
+| 0.1339 | 0.6118 | 3328 | 0.6109 | 0.7233 | 0.8108 | 0.7233 | 0.7024 | 0.9340 |
+| 0.1297 | 0.6589 | 3584 | 0.5931 | 0.7306 | 0.8127 | 0.7306 | 0.7117 | 0.9360 |
+| 0.1298 | 0.7059 | 3840 | 0.5644 | 0.7439 | 0.8170 | 0.7439 | 0.7282 | 0.9357 |
+| 0.1275 | 0.7530 | 4096 | 0.5526 | 0.7475 | 0.8209 | 0.7475 | 0.7322 | 0.9416 |
+| 0.1268 | 0.8000 | 4352 | 0.5564 | 0.7470 | 0.8203 | 0.7470 | 0.7317 | 0.9412 |
+| 0.1235 | 0.8471 | 4608 | 0.5439 | 0.7537 | 0.8238 | 0.7537 | 0.7396 | 0.9436 |
+| 0.1231 | 0.8942 | 4864 | 0.5051 | 0.7693 | 0.8292 | 0.7693 | 0.7583 | 0.9446 |
+| 0.1222 | 0.9412 | 5120 | 0.5254 | 0.7611 | 0.8241 | 0.7611 | 0.7488 | 0.9405 |
+| 0.1198 | 0.9883 | 5376 | 0.5439 | 0.7534 | 0.8230 | 0.7534 | 0.7394 | 0.9408 |
 
 
 ### Framework versions
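The updated card reports Accuracy, Precision, Recall, F1, and AUROC on the evaluation set. A minimal sketch of how such a metric set could be reproduced from predictions, assuming scikit-learn and weighted averaging (the card does not state the averaging scheme; the identical Accuracy and Recall values suggest a weighted recall), with placeholder labels and scores:

```python
# Hedged sketch: reproducing the card's metric set from predictions.
# Assumptions (not stated in the diff): weighted averaging for
# precision/recall/F1 and a binary human (0) / AI-generated (1) label space.
from sklearn.metrics import (
    accuracy_score,
    precision_recall_fscore_support,
    roc_auc_score,
)

y_true = [0, 1, 1, 0, 1]             # placeholder ground-truth labels
y_prob = [0.2, 0.9, 0.6, 0.4, 0.8]   # placeholder P(class = 1) scores
y_pred = [int(p >= 0.5) for p in y_prob]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
auroc = roc_auc_score(y_true, y_prob)
print(f"acc={accuracy:.4f} prec={precision:.4f} rec={recall:.4f} "
      f"f1={f1:.4f} auroc={auroc:.4f}")
```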
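The card itself carries no usage example. A hedged sketch of loading the checkpoint for inference, assuming the binh230/mambaformer repository exposes its custom architecture through the transformers Auto classes with trust_remote_code=True and that label index 1 corresponds to AI-generated text (neither assumption is confirmed by the diff):

```python
# Hedged sketch: scoring a passage with the fine-tuned checkpoint.
# Assumptions: Auto* loading with trust_remote_code=True works for this
# custom architecture, and class order is [human, AI-generated].
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "binh230/mambaformer"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, trust_remote_code=True
)
model.eval()

text = "Sample passage to score for AI-generated content."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # per-class probabilities; class order is an assumption
```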