Update README.md
README.md
CHANGED
@@ -1,3 +1,16 @@
 ---
 license: gpl-3.0
+datasets:
+- BelleGroup/generated_train_0.5M_CN
+- JosephusCheung/GuanacoDataset
+- Chinese-Vicuna/guanaco_belle_merge_v1.0
+language:
+- zh
+tags:
+- alpaca
+- Chinese-Vicuna
+- llama
 ---
+
+This is a Chinese instruction-tuning LoRA checkpoint based on LLaMA-7B (1 epoch) from [this repo's](https://github.com/Facico/Chinese-Vicuna) work.
+Specifically, this is the 4-bit version trained with QLoRA.
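
To illustrate how a 4-bit QLoRA adapter like this one is typically loaded, here is a minimal sketch using `transformers`, `peft`, and `bitsandbytes`. The base-model and adapter repository IDs are placeholders (the card does not name them), and the alpaca-style prompt template is assumed from the repo's tags rather than taken from this README.

```python
# Hypothetical loading sketch for a 4-bit QLoRA LoRA adapter on top of LLaMA-7B.
# The repository IDs below are placeholders; substitute the actual base model
# and this adapter's Hub ID before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

BASE_MODEL = "decapoda-research/llama-7b-hf"                # assumed base model, not stated on this card
LORA_ADAPTER = "your-username/chinese-vicuna-lora-7b-4bit"  # placeholder adapter repo ID

# Load the base model in 4-bit, matching how the adapter was trained with QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA weights on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, LORA_ADAPTER)
model.eval()

# Assumed alpaca-style prompt template; adjust to the template used in training.
prompt = (
    "Below is an instruction that describes a task. Write a response.\n\n"
    "### Instruction:\n你好，请介绍一下你自己。\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```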