
Quantized: https://huggingface.co/Sao10K/Euryale-L2-70B

With: https://github.com/turboderp/exllamav2

Fits into 24 GB of VRAM with 4096 context. Unfortunately, the model seems to be dumbed down a bit too much by the compression. The files also include a measurement.json that can be used to speed up the quantization process for other BPW sizes.
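Back-of-the-envelope arithmetic shows why a 70B model only fits into 24 GB at a very low BPW, and why quality suffers. This is a hypothetical helper for illustration, not part of exllamav2, and it counts weights only (the KV cache for the 4096 context and runtime overhead come on top):

```python
# Rough size of a quantized model's weights alone, in gigabytes.
# Hypothetical helper for illustration; not part of exllamav2.

def weight_size_gb(n_params: float, bpw: float) -> float:
    """Weight storage in GB at a given bits-per-weight (BPW)."""
    return n_params * bpw / 8 / 1e9

# A 70B model at full 16-bit precision needs ~140 GB for weights,
# so squeezing it under 24 GB forces an aggressive BPW (~2.5),
# which is where the noticeable quality loss comes from.
print(weight_size_gb(70e9, 16.0))  # ~140 GB
print(weight_size_gb(70e9, 2.5))   # ~21.9 GB, little headroom left for cache
```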

Measurement was done with the default parameters and https://huggingface.co/datasets/wikitext/tree/refs%2Fconvert%2Fparquet/wikitext-103-raw-v1/test
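Reusing the bundled measurement.json with exllamav2's convert.py should look roughly like this; the paths and the target BPW below are placeholders, so check the exllamav2 README for the current flags before running:

```shell
# Sketch: quantize to a different BPW while skipping the (slow)
# measurement pass by pointing -m at the measurement.json from this repo.
# All paths and the -b value are placeholders.
python convert.py \
    -i /path/to/Euryale-L2-70B \
    -o /path/to/work_dir \
    -m measurement.json \
    -b 4.65
```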


license: other