Update README.md

README.md CHANGED

@@ -3,12 +3,10 @@ license: apache-2.0
 inference: true
 library_name: transformers
 pipeline_tag: text-generation
-
 metrics:
 - rouge
 - bleu
 - bleurt
-
 model-index:
 - name: ibleducation/ibl-tutoring-7B-32k
   results:
@@ -16,33 +14,35 @@ model-index:
       name: truthfulqa_gen
       type: text-generation
     dataset:
-      type: truthful_qa
-      name: Truthful QA
+      type: truthful_qa
+      name: Truthful QA
     metrics:
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+    - type: bleurt
+      name: bleurt_max
+      value: -0.4572
+    - type: bleurt
+      name: bleurt_acc
+      value: 0.4231
+    - type: bleurt
+      name: bleurt_diff
+      value: -0.0825
+    - type: bleu
+      name: bleu_max
+      value: 18.652
+    - type: bleu
+      name: bleu_acc
+      value: 0.4018
+    - type: bleu
+      name: bleu_diff
+      value: -2.2541
+    - type: rouge
+      name: rouge1_max
+      value: 40.0851
+    - type: rouge
+      name: rougeL_diff
+      value: -4.0046
+datasets:
+- ibleducation/ibl-best-practices-instructor-dataset
 ---
 
 # ibleducation/ibl-tutoring-7B-32k
@@ -84,7 +84,7 @@ Mistrallite was chosen because of its performance when compared to Mistral-7B-
 - **Language:** English
 - **Finetuned from weights:** [Mistrallite](https://huggingface.co/amazon/MistralLite)
 - **Finetuned on data:**
-  - ibl-best-practices-instructor-dataset
+  - [ibleducation/ibl-best-practices-instructor-dataset](https://huggingface.co/datasets/ibleducation/ibl-best-practices-instructor-dataset)
 - **Model License:** Apache 2.0
 
 ## How to Use ibl-tutoring-7B-32k from Python Code (HuggingFace transformers) ##
@@ -134,4 +134,4 @@ for seq in sequences:
 **Important** - Use the prompt template below for ibl-tutoring-7B-32k:
 ```
 <|prompter|>{prompt}</s><|assistant|>
-```
+```
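The prompt template in the final hunk can be sketched as a small helper; the function name and the example message below are illustrative, not part of the model card:

```python
# Hypothetical helper: wraps a user message in the ibl-tutoring-7B-32k
# prompt template from the card (<|prompter|>{prompt}</s><|assistant|>).
def build_prompt(user_message: str) -> str:
    return f"<|prompter|>{user_message}</s><|assistant|>"

# The resulting string is what the card's text-generation pipeline expects.
print(build_prompt("What makes a good tutor?"))
# → <|prompter|>What makes a good tutor?</s><|assistant|>
```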