mkenfenheuer/Mistral-7B-Instruct-v0.3-ha-function-calling-lora-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from mkenfenheuer/Mistral-7B-Instruct-v0.3-ha-function-calling-lora via ggml.ai's GGUF-my-lora space.
Refer to the original adapter repository for more details.
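If you prefer to convert the adapter yourself instead of using the space, llama.cpp ships a convert_lora_to_gguf.py script. The command below is an illustrative sketch: the paths are placeholders and the exact flags may differ between llama.cpp versions, so check the script's --help.
# hypothetical local conversion with llama.cpp's convert_lora_to_gguf.py
# (paths are placeholders; verify flags against your llama.cpp version)
python convert_lora_to_gguf.py path/to/Mistral-7B-Instruct-v0.3-ha-function-calling-lora \
  --base path/to/Mistral-7B-Instruct-v0.3 \
  --outtype q8_0 \
  --outfile Mistral-7B-Instruct-v0.3-ha-function-calling-lora-q8_0.gguf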
Use with llama.cpp
# with cli
llama-cli -m base_model.gguf --lora Mistral-7B-Instruct-v0.3-ha-function-calling-lora-q8_0.gguf (...other args)
# with server
llama-server -m base_model.gguf --lora Mistral-7B-Instruct-v0.3-ha-function-calling-lora-q8_0.gguf (...other args)
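The adapter strength can also be set at load time: recent llama.cpp builds accept --lora-scaled, which takes the adapter path followed by a scale factor (1.0 matches the default behavior). The scale value below is only an example.
# with an explicit adapter scale (example value; requires a llama.cpp build that supports --lora-scaled)
llama-cli -m base_model.gguf --lora-scaled Mistral-7B-Instruct-v0.3-ha-function-calling-lora-q8_0.gguf 1.0 (...other args)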
To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
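As a quick sanity check once llama-server is running, you can send a request to its /completion endpoint. The host, port, prompt, and n_predict value below are placeholders and not part of this repository.
# example request against a running llama-server instance (default host/port assumed)
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Turn off the kitchen lights.", "n_predict": 128}'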
Model tree for mkenfenheuer/Mistral-7B-Instruct-v0.3-ha-function-calling-lora-Q8_0-GGUF
- Base model: mistralai/Mistral-7B-v0.3
- Finetuned: mistralai/Mistral-7B-Instruct-v0.3