Commit 1e1b99b
Parent(s): 1f7b856
Update README.md

README.md CHANGED
@@ -22,4 +22,20 @@ print(model.num_parameters()) # 770_940
 model.push_to_hub(repo_name, private=False)
 tokenizer.push_to_hub(repo_name, private=False)
 config.push_to_hub(repo_name, private=False)
-```
+```
+
+
+Use the following configuration in [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to run a complete experiment in **5 seconds** using the default dataset and default settings otherwise:
+
+```yaml
+Validation Size: 0.1
+Data Sample: 0.1
+Max Length Prompt: 32
+Max Length Answer: 32
+Max Length: 64
+Backbone Dtype: float16
+Gradient Checkpointing: False
+Batch Size: 8
+Max Length Inference: 16
+```
+
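For context on the `push_to_hub` calls shown in the hunk above, here is a minimal sketch of the reverse step: loading the pushed config, tokenizer, and model back from the Hugging Face Hub with `transformers`. The repository id and the `Auto*` classes below are placeholder assumptions, not taken from this README.

```python
# Minimal sketch (not part of this commit): load the artifacts pushed above
# back from the Hugging Face Hub. The repo id is a placeholder and the
# AutoModelForCausalLM class is an assumption; substitute the actual values.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

repo_name = "your-username/your-model"  # hypothetical repo id

config = AutoConfig.from_pretrained(repo_name)
tokenizer = AutoTokenizer.from_pretrained(repo_name)
model = AutoModelForCausalLM.from_pretrained(repo_name)

print(model.num_parameters())  # should match the count printed before pushing
```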