
legraphista/xLAM-8x7b-r-IMat-GGUF

Tags: Text Generation · GGUF · PyTorch · English · function-calling · LLM Agent · tool-use · mistral · quantized · quantization · imat · imatrix · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · conversational
Files and versions
275 GB · 1 contributor · History: 21 commits

Latest commit by legraphista: Upload xLAM-8x7b-r.FP16/xLAM-8x7b-r.FP16-00001-of-00004.gguf with huggingface_hub (744016e, verified, about 1 year ago)
  • xLAM-8x7b-r.BF16
    Upload xLAM-8x7b-r.BF16/xLAM-8x7b-r.BF16-00001-of-00004.gguf with huggingface_hub · about 1 year ago
  • xLAM-8x7b-r.FP16
    Upload xLAM-8x7b-r.FP16/xLAM-8x7b-r.FP16-00001-of-00004.gguf with huggingface_hub · about 1 year ago
  • xLAM-8x7b-r.Q8_0
    Upload xLAM-8x7b-r.Q8_0/xLAM-8x7b-r.Q8_0-00002-of-00003.gguf with huggingface_hub · about 1 year ago
  • .gitattributes (2.62 kB)
    Upload xLAM-8x7b-r.FP16/xLAM-8x7b-r.FP16-00001-of-00004.gguf with huggingface_hub · about 1 year ago
  • README.md (6.51 kB)
    Upload README.md with huggingface_hub · about 1 year ago
  • imatrix.dat (25.7 MB)
    Upload imatrix.dat with huggingface_hub · about 1 year ago
  • imatrix.dataset (280 kB)
    Upload imatrix.dataset with huggingface_hub · about 1 year ago
  • imatrix.log (11.9 kB)
    Upload imatrix.log with huggingface_hub · about 1 year ago
  • xLAM-8x7b-r.Q6_K.gguf (38.4 GB)
    Upload xLAM-8x7b-r.Q6_K.gguf with huggingface_hub · about 1 year ago
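
The commit messages show every file was pushed with huggingface_hub, and the same library can pull an individual quant back down. A minimal sketch in Python, assuming only the repo id and the Q6_K filename listed above (the cache location and the print call are illustrative, not part of the repository):

    # Download one GGUF quant from this repository with huggingface_hub.
    # repo_id and filename are taken from the file list above; everything
    # else (where the file is cached, how it is loaded) is up to the caller.
    from huggingface_hub import hf_hub_download

    gguf_path = hf_hub_download(
        repo_id="legraphista/xLAM-8x7b-r-IMat-GGUF",
        filename="xLAM-8x7b-r.Q6_K.gguf",  # 38.4 GB single-file quant
    )
    print(gguf_path)  # local path inside the Hugging Face cache

The split BF16/FP16/Q8_0 shards (e.g. xLAM-8x7b-r.FP16-00001-of-00004.gguf) live in subfolders, so a whole folder can be fetched the same way by passing, for example, allow_patterns=["xLAM-8x7b-r.Q8_0/*"] to huggingface_hub's snapshot_download.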