language:
- en
---

(A)lgorithmic (P)attern (E)mulation - Fiction!

Same thing I did before in [version 1](https://huggingface.co/leftyfeep/ape-fiction-mistral-nemo), but with a much larger dataset. I started from the dataset of full-length stories and novels (up to 100k context) that I used for version 1, then merged in three other Gutenberg-based datasets that split the text into chapters (see the merge sketch after the list):

- https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
- https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo
- https://huggingface.co/datasets/leftyfeep/fiction-chapters-24kmax

The merged dataset has about 7200 entries.
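
For reference, here is a minimal sketch of one way such a merge could be assembled with the `datasets` library. This is not the exact recipe used for this model: the split names and text-column names are guesses, and the version-1 full-length dataset is omitted because its repo id is not listed above.

```python
from datasets import concatenate_datasets, load_dataset

SOURCES = [
    "jondurbin/gutenberg-dpo-v0.1",
    "nbeerbower/gutenberg2-dpo",
    "leftyfeep/fiction-chapters-24kmax",
]

def extract_text(example):
    # Guess at the column holding chapter text: DPO-style sets keep the good
    # completion in "chosen"; plain chapter sets often use "text" or "chapter".
    for key in ("chosen", "text", "chapter"):
        if key in example and isinstance(example[key], str):
            return {"text": example[key]}
    raise KeyError("no recognizable text column; inspect the dataset schema")

parts = []
for repo_id in SOURCES:
    ds = load_dataset(repo_id, split="train")  # split name is an assumption
    # Normalize every source to a single "text" column so they can be merged.
    parts.append(ds.map(extract_text, remove_columns=ds.column_names))

merged = concatenate_datasets(parts)
print(len(merged))  # the card reports roughly 7200 entries in the actual merge
```
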
# leftyfeep/ape-fiction-2-mistral-nemo-Q5_K_M-GGUF
This model was converted to GGUF format from [`leftyfeep/ape-fiction-2-mistral-nemo`](https://huggingface.co/leftyfeep/ape-fiction-2-mistral-nemo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/leftyfeep/ape-fiction-2-mistral-nemo) for more details on the model.
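
As a hedged sketch (not official usage instructions for this repo), the quantized file can be loaded with `llama-cpp-python`. The exact `.gguf` filename and the context size below are assumptions; check the repo's file list before running.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download the Q5_K_M quantization from the Hub; the filename is assumed here
# and should be verified against the repository's file listing.
model_path = hf_hub_download(
    repo_id="leftyfeep/ape-fiction-2-mistral-nemo-Q5_K_M-GGUF",
    filename="ape-fiction-2-mistral-nemo-q5_k_m.gguf",
)

# Load the model; n_ctx can be raised at the cost of memory.
llm = Llama(model_path=model_path, n_ctx=8192)

out = llm(
    "Write the opening paragraph of a slow-burn gothic short story.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```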