---
datasets:
- flozi00/conversations
language:
- de
---
## This project is sponsored by [primeLine Solutions](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
# Model Card
This model is a fine-tuned version for German instructions and conversations in the style of Alpaca, using the markers "### Assistant:" and "### User:".
The dataset used is deduplicated and cleaned, with no code included. The focus is on instruction following and conversational tasks.
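
A minimal sketch of how a conversation might be formatted with the `### User:` / `### Assistant:` markers described above. The `build_prompt` helper is hypothetical (not part of this repository); the exact whitespace conventions around the markers are an assumption.

```python
def build_prompt(turns):
    """Format a conversation in the Alpaca-style scheme this model expects.

    turns: list of (role, text) pairs, where role is "User" or "Assistant".
    Note: the exact spacing around the markers is an assumption.
    """
    parts = [f"### {role}: {text}" for role, text in turns]
    # End with a bare assistant marker so the model generates the next reply.
    parts.append("### Assistant:")
    return "\n".join(parts)


prompt = build_prompt([("User", "Wie ist das Wetter heute?")])
print(prompt)
```

The resulting string can then be passed to the tokenizer and model for generation.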
The model architecture is based on Llama 2 with 7B parameters, trained on hardware powered by 100% renewable energy.
This work is contributed by the private research of [flozi00](https://huggingface.co/flozi00).
Join discussions about German LLM research and plan larger training runs together: https://join.slack.com/t/slack-dtc7771/shared_invite/zt-219keplqu-hLwjm0xcFAOX7enERfBz0Q