sirine21 committed
Commit 9353036 · verified · 1 Parent(s): d5186f3

gptneo-finetuned-123

README.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ library_name: transformers
+ license: mit
+ base_model: EleutherAI/gpt-neo-1.3B
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: gptneo-finetuned-seo
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # gptneo-finetuned-seo
+
+ This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on an unspecified dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 8
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - Transformers 4.51.3
+ - Pytorch 2.6.0+cu124
+ - Datasets 2.14.4
+ - Tokenizers 0.21.1
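
For readers who want to reconstruct this setup, here is a minimal `TrainingArguments` sketch matching the hyperparameters listed in the README. The `output_dir` name is an assumption, and the actual training script is not part of this commit:

```python
from transformers import TrainingArguments

# A sketch only: output_dir is a placeholder, not taken from this repository.
args = TrainingArguments(
    output_dir="gptneo-finetuned-seo",
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # total train batch size: 1 * 8 = 8
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",            # AdamW with betas=(0.9, 0.999), eps=1e-08
    fp16=True,                      # "Native AMP" mixed-precision training
)
```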
model-00001-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:85963c0d12ebc5fc2a2b28bfd81b83df914d533f461a4b9d65689d67d6d106da
+ oid sha256:c60115ae1ec493b0896da3cc628fa0bf1b5cdfd1523abd73d2ef16982c8ef7ba
  size 4993794184
model-00002-of-00002.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bf950e9d43ed03d7f8e70754345c33f23127381d2b98eaa1b95ca6972ea0cc60
+ oid sha256:5db1fba3267d474683f972a05ada5fd35e1e44ab9ef43571e3c4d29a67133165
  size 268543784
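
The `oid sha256:` lines above are Git LFS pointer digests, so a downloaded shard can be checked against them. A minimal sketch, assuming the first shard has already been downloaded to the working directory (the verification step itself is not part of this commit):

```python
import hashlib

# Expected digest is the new oid for this shard from the diff above.
path = "model-00001-of-00002.safetensors"
expected = "c60115ae1ec493b0896da3cc628fa0bf1b5cdfd1523abd73d2ef16982c8ef7ba"

h = hashlib.sha256()
with open(path, "rb") as f:
    # Stream in 1 MiB chunks so the ~5 GB shard never sits in memory at once.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected, "shard does not match its LFS pointer"
```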
special_tokens_map.json CHANGED
@@ -13,13 +13,7 @@
    "rstrip": false,
    "single_word": false
  },
- "pad_token": {
-   "content": "<|endoftext|>",
-   "lstrip": false,
-   "normalized": true,
-   "rstrip": false,
-   "single_word": false
- },
+ "pad_token": "<|endoftext|>",
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
tokenizer_config.json CHANGED
@@ -16,15 +16,8 @@
    "eos_token": "<|endoftext|>",
    "errors": "replace",
    "extra_special_tokens": {},
-   "max_length": 128,
    "model_max_length": 2048,
-   "pad_to_multiple_of": null,
    "pad_token": "<|endoftext|>",
-   "pad_token_type_id": 0,
-   "padding_side": "right",
-   "stride": 0,
    "tokenizer_class": "GPT2Tokenizer",
-   "truncation_side": "right",
-   "truncation_strategy": "longest_first",
    "unk_token": "<|endoftext|>"
  }
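
Both tokenizer files now record `pad_token` as the plain string `"<|endoftext|>"` rather than a full token object. A minimal sketch of how such a configuration typically arises when preparing GPT-Neo for padded batches; the exact save step is an assumption, not something shown in this commit:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")

# GPT-Neo ships without a pad token; reusing the EOS token is the common
# workaround and yields the string-valued "pad_token" seen in the diff above.
tokenizer.pad_token = tokenizer.eos_token  # "<|endoftext|>"

tokenizer.save_pretrained("gptneo-finetuned-seo")  # hypothetical output dir
```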
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a196bd8bed1d5b6c503dd859176403b01f5611f98effe6223f22ca3f4b5470e1
+ size 5304
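
`training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside a model, so it can be inspected to recover the exact run configuration. A minimal sketch, assuming the file has been downloaded locally; since it is an arbitrary pickle, only load it from a source you trust:

```python
import torch

# training_args.bin holds a pickled TrainingArguments object, not weights,
# so weights_only=False is required to unpickle it (trusted files only).
args = torch.load("training_args.bin", weights_only=False)

print(args.learning_rate)                # should match the README: 2e-05
print(args.gradient_accumulation_steps)  # 8
```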
vocab.json CHANGED
The diff for this file is too large to render.