Uploaded model

This model was fine-tuned for 3 epochs on the RAG-12.000 dataset to perform better in RAG pipelines. Evaluated with RAGAs metrics on our own test set, mean scores improved: Faithfulness 0.783 → 0.822, Answer Correctness 0.591 → 0.613, and Answer Relevance 0.765 → 0.874, while mean inference time dropped from 9.95 s to 6.12 s.
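An evaluation like the one above can be sketched with the RAGAs library. The sample row and column names below are illustrative assumptions, not taken from the RAG-12.000 test set; the `evaluate` call is guarded so the record layout can be inspected even without `ragas` installed.

```python
# Sketch of scoring a RAG pipeline with RAGAs.
# Sample data is hypothetical, not from the RAG-12.000 test set.

def build_eval_rows(questions, answers, contexts, ground_truths):
    """Assemble records in the column layout RAGAs expects."""
    return [
        {
            "question": q,
            "answer": a,
            "contexts": c,       # list of retrieved passages
            "ground_truth": g,
        }
        for q, a, c, g in zip(questions, answers, contexts, ground_truths)
    ]

rows = build_eval_rows(
    ["What license does the model use?"],
    ["It is released under Apache 2.0."],
    [["License: apache-2.0"]],
    ["Apache 2.0"],
)

if __name__ == "__main__":
    try:
        from datasets import Dataset
        from ragas import evaluate
        from ragas.metrics import (
            faithfulness,
            answer_correctness,
            answer_relevancy,
        )

        result = evaluate(
            Dataset.from_list(rows),
            metrics=[faithfulness, answer_correctness, answer_relevancy],
        )
        print(result)
    except ImportError:
        # ragas / datasets not installed; rows above still show the layout
        print(rows[0]["question"])
```

Note that RAGAs scores answers with an LLM judge, so running the `evaluate` call requires a configured model backend in addition to the libraries.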

  • Developed by: R0bfried
  • License: apache-2.0
  • Finetuned from model: unsloth/Meta-Llama-3.1-8B-Instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
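To use the model in a RAG pipeline, retrieved passages have to be folded into a Llama 3.1 instruct prompt before generation. A minimal sketch (the passage, question, and system message below are placeholders):

```python
# Minimal sketch of assembling a RAG prompt in the Llama 3.1 chat
# template; passage and question are placeholders.

def build_rag_prompt(context_chunks, question,
                     system="Answer using only the provided context."):
    """Concatenate retrieved chunks into a Llama 3.1 instruct prompt."""
    context = "\n\n".join(context_chunks)
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"Context:\n{context}\n\nQuestion: {question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_rag_prompt(
    ["The model is fine-tuned from Llama 3.1 8B Instruct."],
    "What base model was used?",
)
```

The resulting string can be passed to any runtime that loads the GGUF file, e.g. llama.cpp, which applies this template's special tokens during tokenization.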

  • Format: GGUF
  • Model size: 8B params
  • Architecture: llama
  • Quantization: 4-bit

