QLoRA Finetune Llama 3 Instruct 8B + OpenHermes 2.5

This model is based on Llama-3-8B and is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

This is the 4-bit Llama 3 Instruct 8B from unsloth, fine-tuned on the OpenHermes 2.5 dataset on my home PC with a single 24 GB RTX 4090.
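
The card doesn't include the training script, so here is a minimal sketch of what a single-GPU Unsloth QLoRA run over OpenHermes 2.5 could look like. The unsloth base repo name, the dataset schema handling, and all hyperparameters below are assumptions, not the values used for this model.

```python
# A minimal sketch, assuming an Unsloth + TRL QLoRA setup on a single 24 GB GPU.
# Every value below (rank, sequence length, batch size, epochs) is a placeholder,
# and the role mapping is an assumption about the OpenHermes 2.5 schema.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 4096  # assumed; long enough for most OpenHermes conversations

# Load a 4-bit instruct base published by unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# QLoRA = LoRA adapters trained on top of the frozen 4-bit base.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

def to_text(example):
    # Render ShareGPT-style "conversations" into the Llama 3 chat format so the
    # <|eot_id|> structure is present in every training example.
    role_map = {"system": "system", "human": "user", "gpt": "assistant"}
    msgs = [{"role": role_map.get(turn["from"], "user"), "content": turn["value"]}
            for turn in example["conversations"]]
    return {"text": tokenizer.apply_chat_template(msgs, tokenize=False)}

dataset = load_dataset("teknium/OpenHermes-2.5", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```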

Special care was taken to preserve and reinforce the proper EOS token structure.
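
For reference, "proper EOS token structure" here means the Llama 3 Instruct turn layout, where every turn is closed by <|eot_id|>. The snippet below only renders the stock chat template (it assumes access to the gated meta-llama tokenizer) to show where that token sits:

```python
# Render the Llama 3 Instruct chat template to show the <|eot_id|> placement
# the finetune is meant to preserve. Any Llama 3 Instruct tokenizer with the
# same template would work in place of the gated meta-llama repo used here.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

messages = [{"role": "user", "content": "Write a haiku about GGUF files."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# Write a haiku about GGUF files.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
#
# A well-behaved finetune ends its reply with <|eot_id|>, which is why the
# llama.cpp command below uses -r '<|eot_id|>' as the reverse prompt.
```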

Source Model

Chat with llama.cpp

```
llama.cpp/main -ngl 33 -c 0 --interactive-first --color -e \
  --in-prefix '<|start_header_id|>user<|end_header_id|>\n\n' \
  --in-suffix '<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n' \
  -r '<|eot_id|>' \
  -m ./llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf
```
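
If you would rather drive the model from Python, a rough equivalent using llama-cpp-python (not part of the original card; the path and generation settings are placeholders) looks like this:

```python
# Minimal llama-cpp-python sketch. chat_format="llama-3" applies the same
# <|start_header_id|>/<|eot_id|> structure as the prefixes/suffixes passed to
# llama.cpp/main above.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-3-8b-Instruct-OpenHermes-2.5-QLoRA.Q4_K_M.gguf",
    n_gpu_layers=33,   # mirrors -ngl 33
    n_ctx=8192,        # assumed context size; -c 0 above uses the model default
    chat_format="llama-3",
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```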
