Nexesenex / Codellama-2-7b-Miniguanaco-Mistral-GGUF
Tags: GGUF · License: llama2
Files and versions (branch: main) · 31.2 GB · 1 contributor · History: 10 commits
Latest commit: "Upload Varunk29_codellama-2-7b-miniguanaco-Mistral-f16.gguf" by Nexesenex (0f94500, verified) · almost 2 years ago
| File | Size | Last commit | When |
| --- | --- | --- | --- |
| .gitattributes | 2.34 kB | Upload Varunk29_codellama-2-7b-miniguanaco-Mistral-f16.gguf | almost 2 years ago |
| Codellama-2-7b-Miniguanaco-Mistral-AR-Q3_K_M.gguf | 3.3 GB | Rename Codellama-2-7b-Miniguanaco-Mistral-f16-AR-Q3_K_M.gguf to Codellama-2-7b-Miniguanaco-Mistral-AR-Q3_K_M.gguf | about 2 years ago |
| Codellama-2-7b-Miniguanaco-Mistral-AR-Q4_K_M.gguf | 4.08 GB | Rename Varunk29_codellama-2-7b-miniguanaco-Mistral-f16-AR-Q4_K_M.gguf to Codellama-2-7b-Miniguanaco-Mistral-AR-Q4_K_M.gguf | about 2 years ago |
| Codellama-2-7b-Miniguanaco-Mistral-AR-Q5_K_M.gguf | 4.78 GB | Rename Varunk29_codellama-2-7b-miniguanaco-Mistral-f16-AR-Q5_K_M.gguf to Codellama-2-7b-Miniguanaco-Mistral-AR-Q5_K_M.gguf | about 2 years ago |
| Codellama-2-7b-Miniguanaco-Mistral-AR-Q6_K.gguf | 5.53 GB | Upload Codellama-2-7b-Miniguanaco-Mistral-AR-Q6_K.gguf | about 2 years ago |
| README.md | 461 Bytes | Update README.md | about 2 years ago |
| Varunk29_codellama-2-7b-miniguanaco-Mistral-f16.gguf | 13.5 GB | Upload Varunk29_codellama-2-7b-miniguanaco-Mistral-f16.gguf | almost 2 years ago |
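
The repository itself only lists the quantized GGUF files; it does not document a usage recipe. As a minimal sketch of how one of these files could be fetched and loaded locally, the example below assumes the `huggingface_hub` and `llama-cpp-python` packages are installed. The repo id and filename come straight from the listing above; the context size and prompt are arbitrary illustrative choices, not values specified by this repository.

```python
# Download one of the quantized GGUF files listed above, then load it with
# llama-cpp-python. Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quantization (4.08 GB) from this repository.
model_path = hf_hub_download(
    repo_id="Nexesenex/Codellama-2-7b-Miniguanaco-Mistral-GGUF",
    filename="Codellama-2-7b-Miniguanaco-Mistral-AR-Q4_K_M.gguf",
)

# n_ctx and the prompt below are illustrative assumptions, not repo-specified.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```

Any of the other filenames in the table (Q3_K_M, Q5_K_M, Q6_K, or the f16 file) can be substituted for the `filename` argument, trading file size for quantization quality.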