Update README.md #47
opened by ziqin

README.md CHANGED
```diff
@@ -8,14 +8,14 @@ inference:
   temperature: 0.7
 ---
 
-# Model Card for Mistral-7B-v0.1
-
+# Model Card for Mistral-7B-v0.1111
+zxc
 The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
 Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
-
+dfd
 For full details of this model please read our [Release blog post](https://mistral.ai/news/announcing-mistral-7b/)
-
-## Model Architecture
+dfv
+## Model Architecturefd
 
 Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
 - Grouped-Query Attention
```
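The model card in this diff lists Grouped-Query Attention among the architecture choices. As background, here is a minimal NumPy sketch of the idea, where several query heads share one key/value head; the function name, shapes, and parameters are illustrative assumptions, not Mistral's actual implementation:

```python
import numpy as np

def grouped_query_attention(q, k, v, n_heads, n_kv_heads):
    """Sketch of grouped-query attention (single sequence, no masking).

    q: (seq, n_heads * head_dim)
    k, v: (seq, n_kv_heads * head_dim)
    Each group of n_heads // n_kv_heads query heads shares one K/V head,
    shrinking the K/V cache relative to full multi-head attention.
    """
    seq, d = q.shape
    head_dim = d // n_heads
    group = n_heads // n_kv_heads

    q = q.reshape(seq, n_heads, head_dim)
    k = k.reshape(seq, n_kv_heads, head_dim)
    v = v.reshape(seq, n_kv_heads, head_dim)

    out = np.empty_like(q)
    for h in range(n_heads):
        g = h // group  # the K/V head shared by this query head's group
        scores = q[:, h] @ k[:, g].T / np.sqrt(head_dim)
        # numerically stable softmax over the key dimension
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[:, h] = weights @ v[:, g]
    return out.reshape(seq, d)
```

With `n_kv_heads == n_heads` this reduces to standard multi-head attention; with fewer K/V heads the attention weights are computed per query head but read from shared K/V projections.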