---
license: llama3
library_name: peft
tags:
- alignment-handbook
- trl
- orpo
- generated_from_trainer
- llama-cpp
- gguf-my-lora
base_model: statking/Meta-Llama-3-8B-Instruct-ORPO-QLoRA
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: Meta-Llama-3-8B-Instruct-ORPO-QLoRA
  results: []
---

# wuqiqi/Meta-Llama-3-8B-Instruct-ORPO-QLoRA-F16-GGUF

This LoRA adapter was converted to GGUF format from [`statking/Meta-Llama-3-8B-Instruct-ORPO-QLoRA`](https://huggingface.co/statking/Meta-Llama-3-8B-Instruct-ORPO-QLoRA) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/statking/Meta-Llama-3-8B-Instruct-ORPO-QLoRA) for more details.

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Meta-Llama-3-8B-Instruct-ORPO-QLoRA-f16.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Meta-Llama-3-8B-Instruct-ORPO-QLoRA-f16.gguf (...other args)
```

To learn more about using LoRA adapters with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
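For a concrete end-to-end run, the sketch below downloads this adapter, starts `llama-server` with it, and sends a request to the server's OpenAI-compatible chat endpoint. Note the assumptions: `Meta-Llama-3-8B-Instruct.gguf` is a placeholder for a base-model GGUF you must supply yourself (this repo only contains the adapter), and the server is assumed to listen on its default port 8080.

```bash
# Download the adapter GGUF from this repo (filename as produced by GGUF-my-lora)
huggingface-cli download wuqiqi/Meta-Llama-3-8B-Instruct-ORPO-QLoRA-F16-GGUF \
  Meta-Llama-3-8B-Instruct-ORPO-QLoRA-f16.gguf --local-dir .

# Start the server with the adapter applied to a base-model GGUF;
# Meta-Llama-3-8B-Instruct.gguf is a placeholder path, not a file in this repo.
llama-server -m Meta-Llama-3-8B-Instruct.gguf \
  --lora Meta-Llama-3-8B-Instruct-ORPO-QLoRA-f16.gguf

# In another shell: query the OpenAI-compatible chat endpoint
# (llama-server listens on http://localhost:8080 by default)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```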