Downtown-Case/GLM-4.5-Base-128GB-RAM-IQ2_KL-GGUF

Text Generation · GGUF · English · Chinese · imatrix · conversational · ik_llama.cpp
License: mit

Files and versions
128 GB in repository · 1 contributor · History: 8 commits
Last commit: Update README.md by Downtown-Case (fce0aa6, verified), 2 months ago

File                                  Size      Last commit message                    Updated
.gitattributes                        1.79 kB   Upload folder using huggingface_hub    3 months ago
GLM-4.5-Base.imatrix.gguf             688 MB    Upload folder using huggingface_hub    3 months ago
GLM-4.5-IQ2_KL-00001-of-00003.gguf    47.9 GB   Upload folder using huggingface_hub    3 months ago
GLM-4.5-IQ2_KL-00002-of-00003.gguf    47.8 GB   Upload folder using huggingface_hub    3 months ago
GLM-4.5-IQ2_KL-00003-of-00003.gguf    31.1 GB   Upload folder using huggingface_hub    3 months ago
README.md                             2.95 kB   Update README.md                       2 months ago
customquant.sh                        1.09 kB   Upload folder using huggingface_hub    3 months ago
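
The three IQ2_KL shards, the imatrix file, and the quantization script can be fetched together with huggingface_hub, the same tool the upload commits above reference. A minimal sketch, assuming `pip install huggingface_hub`; the `local_dir` path is only an example, not part of this repo.

```python
# Minimal sketch: download every file in this repo with huggingface_hub.
# The local_dir name is an arbitrary example; expect roughly 128 GB on disk.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Downtown-Case/GLM-4.5-Base-128GB-RAM-IQ2_KL-GGUF",
    local_dir="GLM-4.5-Base-IQ2_KL",
)
```

Split GGUF files are typically loaded by pointing the runtime (here, ik_llama.cpp) at the first shard, GLM-4.5-IQ2_KL-00001-of-00003.gguf; the remaining parts are usually picked up automatically.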