Update README.md
README.md
CHANGED
@@ -14,6 +14,9 @@ An 81-million parameter LLM using GPT-2 encodings.
Trained using 10 GB of USENET posts along with over 1 GB of miscellaneous BBS posts, digitized books, and text documents.
Supervised fine-tuning should be performed before use.

+## Purpose of GPT-Usenet
+LLMs are all currently focused on becoming larger and larger, able to do more and more, but this just makes them jacks of all trades, masters of none. GPT-Usenet takes a different approach: instead of trying to do everything perfectly, it offers a digital stem cell that can be fine-tuned into a single, specialized role and run in parallel with copies of itself.
+
## Technical Information
| | |
|---------------------------------|----:|
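The README's instruction that supervised fine-tuning should be performed before use, together with the GPT-2 encodings, points at a standard causal-LM fine-tuning loop. Below is a minimal sketch using the Hugging Face `transformers` Trainer; the checkpoint path `./gpt-usenet`, the data file `task_examples.txt`, and all hyperparameters are assumptions for illustration, not part of this repository.

```python
# Hypothetical SFT sketch for a GPT-2-architecture checkpoint.
# "./gpt-usenet" and "task_examples.txt" are placeholders, not real repo paths.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("./gpt-usenet")  # GPT-2 BPE vocabulary
tokenizer.pad_token = tokenizer.eos_token                  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained("./gpt-usenet")

# One plain-text file of task-specific training examples.
dataset = load_dataset("text", data_files={"train": "task_examples.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-usenet-specialized", num_train_epochs=3),
    train_dataset=tokenized,
    # mlm=False gives next-token (causal) labels rather than masked-LM labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```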
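The new Purpose section's "run in parallel with copies of itself" idea could look something like the following sketch: several specialists, each a separately fine-tuned copy of the same base model, queried concurrently. The role names and checkpoint paths are invented for illustration.

```python
# Hypothetical sketch of parallel specialists: copies of the same small base
# model, each fine-tuned for one role, answering the same input concurrently.
from concurrent.futures import ThreadPoolExecutor
from transformers import pipeline

# Invented role -> checkpoint mapping; each path is a separately fine-tuned copy.
SPECIALISTS = {
    "summarizer": "./gpt-usenet-summarizer",
    "classifier": "./gpt-usenet-classifier",
    "rewriter": "./gpt-usenet-rewriter",
}

generators = {
    role: pipeline("text-generation", model=path)
    for role, path in SPECIALISTS.items()
}

def ask(role: str, prompt: str) -> tuple[str, str]:
    out = generators[role](prompt, max_new_tokens=64)
    return role, out[0]["generated_text"]

prompt = "Subject: Re: best 14.4k modem?\n\n"
# At 81M parameters per copy, several specialists fit comfortably in memory.
with ThreadPoolExecutor(max_workers=len(generators)) as pool:
    for role, text in pool.map(lambda r: ask(r, prompt), generators):
        print(f"[{role}] {text[:120]}")
```

The design trade-off this illustrates: a few specialized 81M copies running side by side can stand in for one large generalist, at a fraction of the memory cost per role.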