RichardErkhov committed on
Commit 7f3234e · verified · 1 Parent(s): e4d9ead

uploaded readme

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

baby-llama-58m - bnb 4bits
- Model creator: https://huggingface.co/timinar/
- Original model: https://huggingface.co/timinar/baby-llama-58m/
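A bitsandbytes 4-bit quantization like this one is normally loaded through `transformers` with a `BitsAndBytesConfig`. A minimal sketch — note the repo id below points at the original model as a placeholder, not necessarily this quantized upload, and `"nf4"` is an assumed quantization type:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder repo id -- substitute the id of the checkpoint you want to load.
model_id = "timinar/baby-llama-58m"

# 4-bit quantization via bitsandbytes; compute in fp16 ("nf4" is an assumption).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```

Loading this way requires a CUDA-capable GPU and the `bitsandbytes` package installed alongside `transformers`.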

Original model description:
---
license: unknown
language:
- en
---

# Baby Llama

Our submission to the `strict-small` track of the [BabyLM challenge](https://babylm.github.io/index.html).

Baby Llama is a 58M-parameter model, distilled from an ensemble consisting of LLaMA-360M and GPT2-705M, both trained on the `babylm_10M` dataset.

See the associated [paper](https://arxiv.org/abs/2308.02019) for a detailed discussion of the training procedure and of the model performance.
The training code is available at [https://github.com/timinar/BabyLlama](https://github.com/timinar/BabyLlama).

### Hyperparameters for the tasks that require fine-tuning

When evaluating the model on the [tasks that require fine-tuning](https://github.com/babylm/evaluation-pipeline/tree/main#fine-tuning),
we noticed that the [default hyperparameters](https://github.com/babylm/evaluation-pipeline/tree/main#hyperparameters)
suggested by the BabyLM organizers lead to severe overfitting on a number of tasks.
To avoid this issue, we re-tuned those hyperparameters.
The hyperparameters selected for each task are listed in the table below.

| Task | Maximum learning rate | Batch size | Maximum epochs | Patience | Evaluate every (steps) | Random seed |
| ---- | --------------------- | ---------- | -------------- | -------- | ---------------------- | ----------- |
| CoLA | 4e-5 | 64 | 3 | 10 | 20 | 12 |
| SST-2 | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MRPC | 3e-5 | 64 | 3 | 10 | 20 | 12 |
| QQP | 4e-5 | 64 | 10 | 10 | 1000 | 12 |
| MNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MNLI-mm | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| QNLI | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| RTE | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| BoolQ | 3e-4 | 16 | 10 | 10 | 10 | 12 |
| MultiRC | 1e-4 | 64 | 7 | 10 | 1000 | 42 |
| WSC | 5e-7 | 1 | 10 | 1000 | 2000 | 12 |
| CR (Control) | 5e-5 | 64 | 10 | 10 | 100 | 12 |
| LC (Control) | 1e-3 | 64 | 1 | 2 | 10 | 12 |
| MV (Control) | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| RP (Control) | 1e-3 | 64 | 1 | 10 | 10 | 12 |
| SC (Control) | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| CR\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| CR\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MV\_LC | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| MV\_RTP | 5e-5 | 64 | 6 | 10 | 200 | 12 |
| SC\_LC | 1e-3 | 64 | 2 | 10 | 10 | 12 |
| SC\_RP | 1e-3 | 64 | 2 | 10 | 10 | 12 |