⚠️ Warning: This model can produce narratives and RP that contain violent and graphic erotic content. Adjust your system prompt accordingly. Also, use ChatML format for best results.

DarkForest-20B-v2.0-fp32-upscaled-abliterated has been renamed to DarkForest 20B v2.0 Erebus Edition in the spirit of the original Shinen/Erebus models by mrseeker87, as it is fully uncensored, creative, and NSFW. The float32 upscaling enhances the quality even further.

🌲 DarkForest 20B v2.0 Erebus Edition

[Image: Erebus Hellscape]

The Abyss Gazes Back

This is a chaotic and destructive merge of pre-trained language models, forged in the flames of mergekit. Like the primordial darkness of Erebus, this model consumes complexity and radiates raw power. It is designed for tasks requiring intense creativity, dark fantasy roleplay, and unyielding instruction following.

This is a rebuild/resurrection of Dark Forest 20B v2.0, made using 32-bit source files in place of FP16 wherever possible for superior quality. DavidAU released something very similar, but only in GGUF format, meaning I couldn't ablate it directly. His method of float32 upscaling inspired me to do the same so that DarkForest could be ablated.

The result is that quantizing these float32 safetensors directly to GGUF produces better prose, while the ablation is gentle enough to only slightly increase KL divergence in return for 100% refusal elimination (compared to 79% without abliteration), no jailbreaks required. The weights can also be down-converted to float16 for easier merging with the included Python script.
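The script itself isn't reproduced here, but the core of a float32-to-float16 down-conversion is small. A minimal sketch with illustrative paths (the bundled script may differ; the shard index, config, and tokenizer files would also need copying):

```python
# Minimal fp32 -> fp16 down-conversion sketch (the bundled script may differ).
import glob, os
from safetensors.torch import load_file, save_file

src = "DarkForest-20B-v2.0-Erebus-Edition"        # illustrative paths
dst = "DarkForest-20B-v2.0-Erebus-Edition-fp16"
os.makedirs(dst, exist_ok=True)
for shard in glob.glob(os.path.join(src, "*.safetensors")):
    tensors = load_file(shard)
    # Halve only floating-point tensors; leave integer tensors untouched.
    tensors = {k: v.half() if v.is_floating_point() else v for k, v in tensors.items()}
    save_file(tensors, os.path.join(dst, os.path.basename(shard)))
```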

v2 was chosen over v3 because the latter implemented breadcrumbs, which seemed less cohesive than dare_ties. v2 is also reported to be better at RP/ERP.

I am releasing safetensors of pre- and post-ablation checkpoints, along with their Compliance scores and the YAML files used to make this.

Notes:

  • This upscale of DarkForest requires ChatML tokenizer settings; a minimal prompt template is shown below.
  • This is the ablated version. Go here for the unablated version.
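
For reference, a minimal ChatML prompt looks like this (the system message is just an example):

```
<|im_start|>system
You are a dark fantasy storyteller.<|im_end|>
<|im_start|>user
Describe the forest at nightfall.<|im_end|>
<|im_start|>assistant
```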
🌲πŸ”₯πŸ¦‡πŸ‘ΉπŸπŸ‰πŸ‘ΊπŸ•·οΈπŸŒ‹πŸŒ³

The Forging

Method of Fusion

This entity was frankenmerged using a combination of passthrough and dare_ties, per TeeZee's formula.

Sacrificed Models

The following entities were consumed in the process:

  • TeeZee/Orca-2-13b_flat
  • KoboldAI/LLaMA2-13B-Erebus-v3
  • backyardai/Psyonic-Cetacean-32bit-20B
  • TeeZee/BigMaid-20B-v2.0

Volcanic Parameters

  1. I remerged this (with my adjustments commented): https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step1.yml

```yaml
slices:
  - sources:
    - model: TeeZee/Orca-2-13b_flat # Already FP32
      layer_range: [0, 16]
  - sources:
    - model: KoboldAI/LLaMA2-13B-Erebus-v3 # FP16 only
      layer_range: [8, 24]
  - sources:
    - model: TeeZee/Orca-2-13b_flat # Already FP32
      layer_range: [17, 32]
  - sources:
    - model: KoboldAI/LLaMA2-13B-Erebus-v3 # FP16 only
      layer_range: [25, 40]
merge_method: passthrough
dtype: float32 # changed from float16
```
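
Assuming mergekit's half-open layer_range convention, these interleaved slices contribute 16 + 16 + 15 + 15 = 62 decoder layers, which is why the abliteration config in step 3 targets layers 0 through 61.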
  2. I then remerged this: https://huggingface.co/TeeZee/DarkForest-20B-v2.0/resolve/main/darkforest_v2_step2.yml

```yaml
models:
  - model: ../step1_20B
  - model: backyardai/Psyonic-Cetacean-32bit-20B # upscaled from jebcarter/psyonic-cetacean-20B
    parameters:
      weight: 0.5
      density: 1.0
  - model: TeeZee/BigMaid-20B-v2.0 # upscaled from TeeZee/BigMaid-20B-v1.0
    parameters:
      weight: 0.5
      density: 1.0
merge_method: dare_ties
base_model: ../step1_20B
parameters:
  int8_mask: true # no need to set to false
dtype: float32 # changed from bfloat16
name: darkforestv2_dire_ties
```
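
For intuition, here is a toy single-tensor version of what dare_ties does (not mergekit's actual implementation). With density: 1.0 as above, the DARE dropout step is a no-op and only the TIES sign-consensus step matters:

```python
import torch

def dare_ties_merge(base, tuned, weights, density=1.0):
    """Toy DARE-TIES merge of one weight tensor (not mergekit's code)."""
    deltas = [t - base for t in tuned]                      # task vectors
    if density < 1.0:
        # DARE: randomly drop (1 - density) of each delta, rescale the rest.
        deltas = [d * (torch.rand_like(d) < density) / density for d in deltas]
    deltas = [w * d for w, d in zip(weights, deltas)]
    elected = torch.sign(sum(deltas))                       # TIES: majority sign
    # Keep only the components that agree with the elected sign.
    kept = [torch.where(torch.sign(d) == elected, d, torch.zeros_like(d))
            for d in deltas]
    return base + sum(kept)
```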
  3. I then ran the merge through abliteration using darkforest-20b.yml:

```yaml
# python measure.py -m A:\LLM\DarkForest-20B-v2.0-fp32-upscaled -o A:\LLM\DarkForest-20B-v2.0-fp32-upscaled\ablit_df --batch-size 8
# python analyze.py A:\LLM\DarkForest-20B-v2.0-fp32-upscaled\ablit_df -c
# sharded_ablate.py darkforest-20b.yml

# The model to be ablated.
model: A:\LLM\DarkForest-20B-v2.0-fp32-upscaled

# The measurement file generated by measure.py for this model.
measurements: A:\LLM\DarkForest-20B-v2.0-fp32-upscaled\ablit_df

# The directory where the new, ablated model will be saved.
output: A:\LLM\DarkForest-20B-v2.0-fp32-upscaled-abliterated

# The list of ablation operations to perform.
# Strategy: Use the single best refusal direction from the peak signal layer (46)
# and apply it across all layers.
ablate:
  - layer: 0
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 1
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 2
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 3
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 4
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 5
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 6
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 7
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 8
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 9
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 10
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 11
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 12
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 13
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 14
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 15
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 16
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 17
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 18
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 19
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 20
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 21
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 22
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 23
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 24
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 25
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 26
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 27
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 28
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 29
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 30
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 31
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 32
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 33
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 34
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 35
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 36
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 37
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 38
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 39
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 40
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 41
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 42
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 43
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 44
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 45
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 46
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 47
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 48
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 49
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 50
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 51
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 52
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 53
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 54
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 55
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 56
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 57
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 58
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 59
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 60
    measurement: 46
    scale: 1.5
    sparsity: 0.00
  - layer: 61
    measurement: 46
    scale: 1.5
    sparsity: 0.00
```

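Conceptually, abliteration removes a measured "refusal direction" from the weights that write to the residual stream. A toy sketch of that projection, assuming a d_model x d_in output matrix (the actual measure/ablate scripts do considerably more bookkeeping):

```python
import torch

def ablate_matrix(W: torch.Tensor, refusal_dir: torch.Tensor, scale: float = 1.5) -> torch.Tensor:
    """Project the refusal direction out of an output weight matrix (toy sketch)."""
    r = refusal_dir / refusal_dir.norm()   # unit refusal direction, shape (d_model,)
    # Subtract the component of W's output that lies along r; a scale > 1.0
    # over-corrects slightly past full removal, matching the 1.5 in the config.
    return W - scale * torch.outer(r, r @ W)
```
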
Unleash the Fire

Quantizations are available for different GPU configurations.
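
If you want to roll your own GGUFs, the workflow is roughly the following, sketched with llama.cpp's converter (paths and the quant type are illustrative):

```
# Convert the float32 safetensors directly to GGUF, then quantize as desired.
python convert_hf_to_gguf.py DarkForest-20B-v2.0-Erebus-Edition --outtype f32 --outfile darkforest-20b-f32.gguf
./llama-quantize darkforest-20b-f32.gguf darkforest-20b-Q4_K_M.gguf Q4_K_M
```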