- FreedomIntelligence/medical-o1-reasoning-SFT
---

# PURGED VERSION

This model was purged. The previous version contained errors caused by two models that introduced serious defects into the merge.

---

## Bahamuth 3.2 1B
<center>
<img src="https://i.ibb.co/HTpGdMQg/Behemoth.webp" alt="Behemoth" border="0">
</center>

Metaphorically, its name has come to connote something extremely large or powerful.

---

## FEATURES

- More than 50 merged models.
- 39 datasets.

**MULTIPURPOSE**

<center>

**ROLEPLAY**
<img src="https://i.ibb.co/dJ1gvJZh/IMG-20250315-045046.jpg" alt="IMG-20250315-045046" border="0">

**LITERATURE**

**MATHEMATICS**

**CHEMISTRY**

**GEOGRAPHY**

</center>

---

## Koboldcpp

---

### Merge method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora](https://huggingface.co/xdrshjr/llama3.2_1b_uncensored_5000_8epoch_lora) as the base model.
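
As a rough illustration of what Model Stock computes (this is a toy re-implementation of the paper's interpolation formula, not the actual mergekit code used to produce this model), the idea can be sketched on small weight vectors:

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Interpolate fine-tuned weights toward the base per Model Stock
    (Jang et al., 2024, arXiv:2403.19522), on toy 1-D weight tensors.

    base:      1-D array of base-model weights
    finetuned: list of same-shape 1-D arrays from fine-tuned models
    """
    k = len(finetuned)
    # Task vectors: displacement of each fine-tuned model from the base.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors
    # (the paper's angle between fine-tuned checkpoints).
    cosines = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i], deltas[j]
            cosines.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    # Pull the average of the fine-tuned weights back toward the base.
    return t * w_avg + (1 - t) * base
```

The more the fine-tuned checkpoints disagree (small cosine), the closer the merge stays to the base; identical checkpoints (cosine 1) yield their plain average.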

### Merged models

The following models were included in the merge:
* [jtatman/llama-3.2-1b-lewd-mental-occult](https://huggingface.co/jtatman/llama-3.2-1b-lewd-mental-occult)
* [orange67/merged-llama-3.2-1b](https://huggingface.co/orange67/merged-llama-3.2-1b)
* [nicoboss/Llama-3.2-1B-Instruct-Uncensored](https://huggingface.co/nicoboss/Llama-3.2-1B-Instruct-Uncensored)

### Configuration

The following YAML configuration was used to produce this model: