L3.3-70B-Animus-V1-GGUF
GGUF model files for Darkhn/L3.3-70B-Animus-V1 (base model: meta-llama/Llama-3.1-70B).
This repository contains the following quantization: Q5_K_M.
Files
L3.3-70B-Animus-V1-Q5_K_M.gguf
Converted and quantized using llama.cpp.
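As a rough usage sketch (assuming the huggingface_hub and llama-cpp-python packages are installed; parameters such as n_ctx, n_gpu_layers, and max_tokens are illustrative, not recommendations from this card), the Q5_K_M file can be downloaded and loaded like this:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized GGUF file from this repository.
model_path = hf_hub_download(
    repo_id="Darkhn-Quants/L3.3-70B-Animus-V1-GGUF",
    filename="L3.3-70B-Animus-V1-Q5_K_M.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Run a simple chat completion.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same GGUF file can also be run directly with the llama.cpp command-line tools or any other GGUF-compatible runtime.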
Model tree for Darkhn-Quants/L3.3-70B-Animus-V1-GGUF
- Base model: meta-llama/Llama-3.1-70B
- Finetuned: meta-llama/Llama-3.3-70B-Instruct
- Finetuned: SicariusSicariiStuff/Negative_LLAMA_70B
- Finetuned: Darkhn/L3.3-70B-Animus-V1 (quantized in this repository)