---
license: apache-2.0
base_model:
- liuhaotian/llava-v1.6-mistral-7b
---

This is a Q8_0 quantization of LLaVA 1.6 (Mistral 7B) in GGUF format.

Load it with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="Steven0090/llava1.6-Mistral-7B-Instruct-v0.2-gguf",
    filename="Mistral-7B-Instruct-v0.2-Q8_0.gguf",
)
```
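Once loaded, the model can be queried through llama-cpp-python's chat API. Below is a minimal text-only sketch; the prompt is an illustrative placeholder, and image input with LLaVA additionally requires the model's CLIP/mmproj weights and a LLaVA chat handler, which are not shown here:

```python
from llama_cpp import Llama

# Download the GGUF file from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="Steven0090/llava1.6-Mistral-7B-Instruct-v0.2-gguf",
    filename="Mistral-7B-Instruct-v0.2-Q8_0.gguf",
)

# Text-only chat completion; the prompt is a placeholder example.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Q8_0 quantization in one sentence."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```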