---
library_name: transformers
license: apache-2.0
base_model:
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
tags:
- generated_from_trainer
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
model-index:
- name: Epiculous/NovaSpark
  results: []
---
Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's [abliterated](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B) version of Arcee's [SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite).
The hope is that abliteration will remove some of the inherent refusals and censorship of the original model. However, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely need to be reapplied to the resulting model to reinforce it.

# Quants!
[full](https://huggingface.co/Epiculous/NovaSpark) / [exl2](https://huggingface.co/Epiculous/NovaSpark-exl2) / <strong>gguf</strong>
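If you want to run the full-precision weights locally, here's a minimal loading sketch with transformers. This isn't an official snippet; the dtype and device settings are assumptions, so tune them for your hardware:

```python
# Minimal loading sketch (assumption: bf16 fits on your GPU; otherwise
# try float16, or grab an exl2/gguf build from the links above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epiculous/NovaSpark"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```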
## Prompting

This model was trained on the Llama Instruct template; the prompting structure goes a little something like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
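Rather than hand-assembling that string, you can let the tokenizer build it. A minimal sketch, assuming the repo ships the standard Llama 3.1 chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Epiculous/NovaSpark")
messages = [
    {"role": "system", "content": "You are an uncensored roleplay partner."},
    {"role": "user", "content": "Hello! Who are you?"},
]
# add_generation_prompt=True appends the assistant header so generation
# continues in the assistant turn, matching the structure shown above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```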
### Context and Instruct

This model was trained on Llama Instruct; please use that Context and Instruct template.
### Current Top Sampler Settings

[Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
[Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
[Spicy_Temp](https://files.catbox.moe/9npj0z.json)<br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json)<br/>
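If you're running outside SillyTavern and can't import those presets, here's a rough sketch of wiring sampler settings into a transformers `generate` call. The values below are illustrative placeholders, not the contents of the preset files above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epiculous/NovaSpark"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short greeting in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # placeholder value, not taken from the presets
    min_p=0.05,       # placeholder; min_p requires a recent transformers release
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```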