Update README.md
README.md

Read our [blog post]() or our paper (preprint coming soon) for more details!

`LeoLM/leo-hessianai-13b-chat` is a German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German instruction datasets.
The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench-DE scores:
```
{
    "first_turn": 6.525,
    "second_turn": 5.15,
    "categories": {
        "writing": 6.925,
        "roleplay": 6.7,
        "reasoning": 4.55,
        "math": 3.25,
        "coding": 3.45,
        "extraction": 5.4,
        "stem": 7.55,
        "humanities": 8.875
    },
    "average": 5.8375
}
```
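
As a usage note (not part of the card's original text): the snippet below is a minimal sketch of how one might load the chat model with Hugging Face `transformers` and generate a German reply. The model id comes from this card; everything else (chat-template availability, dtype, device placement, sampling settings, the example prompt) is an assumption to adapt to your setup.

```
# Hypothetical usage sketch: load LeoLM/leo-hessianai-13b-chat and run one chat turn.
# Assumes a CUDA-capable GPU with enough memory and that the tokenizer ships a chat
# template; prompt content and generation settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeoLM/leo-hessianai-13b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Single-turn conversation; apply_chat_template builds the model's expected prompt format.
messages = [{"role": "user", "content": "Erkläre kurz, was ein Sprachmodell ist."}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```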
## Model Details