Adding Evaluation Results
#43
by leaderboard-pr-bot - opened
README.md
CHANGED
@@ -113,3 +113,17 @@ The following hyperparameters were used during training:
 - Pytorch 1.12.1+cu116
 - Datasets 2.4.0
 - Tokenizers 0.12.1
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Tincando__fiction_story_generator)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 25.36 |
+| ARC (25-shot)       | 23.29 |
+| HellaSwag (10-shot) | 28.68 |
+| MMLU (5-shot)       | 26.72 |
+| TruthfulQA (0-shot) | 43.79 |
+| Winogrande (5-shot) | 50.12 |
+| GSM8K (5-shot)      | 0.0   |
+| DROP (3-shot)       | 4.9   |
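For reference, the "Avg." row is the arithmetic mean of the seven benchmark scores: (23.29 + 28.68 + 26.72 + 43.79 + 50.12 + 0.0 + 4.9) / 7 ≈ 25.36.

The detailed-results link above points to an ordinary Hub dataset repo, so its contents can be inspected programmatically. Below is a minimal sketch using `huggingface_hub`; only the repo id comes from this PR, and the file layout it prints is whatever the leaderboard exported, which this PR does not specify:

```python
from huggingface_hub import list_repo_files

# Dataset repo holding the per-task evaluation details (id taken from the link above).
REPO_ID = "open-llm-leaderboard/details_Tincando__fiction_story_generator"

# Print every file the leaderboard exported for this model. The exact layout
# (e.g. per-task JSON/Parquet files) is an assumption, not stated in this PR.
for path in list_repo_files(REPO_ID, repo_type="dataset"):
    print(path)
```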