Update README.md
# NeuralStar_AlphaWriter_4x7b
I was blown away by the writing results I was getting from [mlabonne/Beyonder-4x7B-v3](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) while writing in NovelCrafter.

Inspired by mlabonne's LLM Course and fueled by his LazyMergekit, I couldn't help but wonder what a writing model would be like if all 4 “experts” excelled in creative writing.

I present NeuralStar_AlphaWriter_4x7b.

NeuralStar_AlphaWriter_4x7b is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B)
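
If you want to try the merge, here is a minimal inference sketch using the 🤗 Transformers text-generation pipeline. The repo id (`OmnicromsBrain/NeuralStar_AlphaWriter_4x7b`), the prompt, and the sampling settings are illustrative assumptions, not details from the original card:

```python
# Minimal inference sketch for the merged MoE model.
# Assumptions (not from the original card): the repo id below,
# the example prompt, and the sampling settings.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "OmnicromsBrain/NeuralStar_AlphaWriter_4x7b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.float16,  # halve memory vs fp32
    device_map="auto",          # spread layers across available devices
)

# Format the request with the model's chat template before generating.
messages = [
    {"role": "user", "content": "Write the opening paragraph of a mystery novel set in a lighthouse."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```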