Tags: GGUF, Not-For-All-Audiences, nsfw

A merge of OpenHermes and Dolphin with two copies of Noromaid DPO, aiming to give the model a bit more reasoning ability while staying smaller than an 8x7b.

It seems to work well.

Description

This repo contains GGUF files of OpenDolphinMaid-4x7b.
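
GGUF files are meant for llama.cpp-compatible runtimes. As a minimal sketch (assuming llama-cpp-python is installed and one of the quants has been downloaded), loading a file looks roughly like this; the filename is a hypothetical placeholder, not necessarily one shipped in this repo.

from llama_cpp import Llama

# Hypothetical filename; substitute the quant you actually downloaded.
llm = Llama(
    model_path="opendolphinmaid-4x7b.Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

Prompts should then be formatted with the ChatML template shown further below.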

Models and LoRA used

  • NeverSleep/Noromaid-7B-0.4-DPO x 2
  • teknium/OpenHermes-2.5-Mistral-7B
  • cognitivecomputations/dolphin-2.6-mistral-7b-dpo

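The card does not state how the 4x7b mixture was assembled. A common way to build this kind of Mixtral-style merge from 7B experts is mergekit's MoE mode; the sketch below is only an illustration under that assumption, with made-up routing prompts and an assumed base model, not the author's actual recipe.

# Hypothetical mergekit-moe style recipe; an assumption about the tooling,
# not the recipe actually used to build OpenDolphinMaid-4x7b.
import yaml

config = {
    "base_model": "teknium/OpenHermes-2.5-Mistral-7B",  # assumed, not confirmed
    "gate_mode": "hidden",  # route tokens by hidden-state similarity to the prompts
    "dtype": "bfloat16",
    "experts": [
        {"source_model": "NeverSleep/Noromaid-7B-0.4-DPO",
         "positive_prompts": ["roleplay", "character dialogue"]},
        {"source_model": "NeverSleep/Noromaid-7B-0.4-DPO",
         "positive_prompts": ["creative writing", "storytelling"]},
        {"source_model": "teknium/OpenHermes-2.5-Mistral-7B",
         "positive_prompts": ["general questions", "instructions"]},
        {"source_model": "cognitivecomputations/dolphin-2.6-mistral-7b-dpo",
         "positive_prompts": ["reasoning", "coding"]},
    ],
}

with open("opendolphinmaid-4x7b.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
# The config file would then be passed to the mergekit-moe command line tool.

In a Mixtral-style mixture only the MLP blocks are duplicated per expert while the attention weights are shared, which lines up with the roughly 24B total parameters listed for this 4x7b.
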
Prompt template: ChatML

<|im_start|>system
{sysprompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
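
As a small illustration, the helper below fills that template in with plain Python string formatting; the function name and example strings are placeholders, not part of the card.

# Minimal sketch of building a ChatML prompt for this model.
def build_chatml_prompt(sysprompt: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{sysprompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = build_chatml_prompt(
    "You are a helpful roleplay assistant.",
    "Introduce yourself in one sentence.",
)

When generating with llama-cpp-python (as in the loading sketch above), passing stop=["<|im_end|>"] keeps the model from continuing past the assistant turn.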

If you want to support me, you can do so here.

Format: GGUF
Model size: 24B params
Architecture: llama
Available quantizations: 4-bit, 5-bit, 8-bit
