legraphista/xLAM-7b-r-IMat-GGUF

Tags: Text Generation · GGUF · PyTorch · English · function-calling · LLM Agent · tool-use · mistral · quantized · quantization · imat · imatrix · static · 16bit · 8bit · 6bit · 5bit · 4bit · 3bit · 2bit · 1bit · conversational
Files and versions
xLAM-7b-r-IMat-GGUF · 13.6 GB · 1 contributor · History: 10 commits
Latest commit: legraphista · Upload README.md with huggingface_hub · 6620705 · verified · over 1 year ago
  • .gitattributes · 1.68 kB · Upload xLAM-7b-r.Q6_K.gguf with huggingface_hub · over 1 year ago
  • README.md · 6.08 kB · Upload README.md with huggingface_hub · over 1 year ago
  • imatrix.dat · 4.99 MB · Upload imatrix.dat with huggingface_hub · over 1 year ago
  • imatrix.dataset · 280 kB · Upload imatrix.dataset with huggingface_hub · over 1 year ago
  • imatrix.log · 11.7 kB · Upload imatrix.log with huggingface_hub · over 1 year ago
  • xLAM-7b-r.Q6_K.gguf · 5.94 GB · Upload xLAM-7b-r.Q6_K.gguf with huggingface_hub · over 1 year ago
  • xLAM-7b-r.Q8_0.gguf · 7.7 GB · Upload xLAM-7b-r.Q8_0.gguf with huggingface_hub · over 1 year ago
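
The commit messages show the files were pushed with huggingface_hub, so the same library can pull a quant back down. The sketch below is a minimal, assumed usage example (not taken from this repo's README): it downloads the listed xLAM-7b-r.Q6_K.gguf file from legraphista/xLAM-7b-r-IMat-GGUF into the local Hugging Face cache.

```python
# Minimal sketch: download one of the listed GGUF quants with huggingface_hub.
# Assumes `pip install huggingface_hub`; repo id and filename come from the
# file listing above.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="legraphista/xLAM-7b-r-IMat-GGUF",
    filename="xLAM-7b-r.Q6_K.gguf",
)
print(gguf_path)  # local cache path of the downloaded GGUF file
```

The resulting path can then be passed to any GGUF-compatible runtime (for example a llama.cpp build) as its model file.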