Update README.md
README.md CHANGED
@@ -17,8 +17,18 @@ This repo is the result of converting to GGML and quantising.
 
 ## Repositories available
 
-* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B).
-* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct).
+* [MPT-7B: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-GGML).
+* [MPT-7B-Instruct: 4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/MPT-7B-Instruct-GGML).
+
+## Provided files
+| Name | Quant method | Bits | Size | RAM required | Use case |
+| ---- | ---- | ---- | ---- | ---- | ----- |
+| `mpt7b-instructggmlv2.ggmlv2.q4_0.bin` | q4_0 | 4-bit | 4.21GB | 7.0GB | 4-bit. |
+| `mpt7b-instructggmlv2.ggmlv2.q4_1.bin` | q4_1 | 4-bit | 4.63GB | 7.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
+| `mpt7b-instructggmlv2.ggmlv2.q5_0.bin` | q5_0 | 5-bit | 4.63GB | 7.5GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+| `mpt7b-instructggmlv2.ggmlv2.q5_1.bin` | q5_1 | 5-bit | 5.06GB | 7.5GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference. |
+| `mpt7b-instructggmlv2.ggmlv2.q8_0.bin` | q8_0 | 8-bit | 7.58GB | 9.0GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
+| `mpt7b-instructggmlv2.ggmlv2.fp16.bin` | fp16 | 16-bit | GB | GB | Full 16-bit. |
 
 ## Compatibility
 
@@ -44,16 +54,6 @@ bin/mpt -m /path/to/mpt7b-instructggmlv2.ggmlv2.q4_0.bin -t 8 -n 512 -p "Write a
 
 Please see the ggml repo for other build options.
 
-## Provided files
-| Name | Quant method | Bits | Size | RAM required | Use case |
-| ---- | ---- | ---- | ---- | ---- | ----- |
-| `mpt7b-instructggmlv2.ggmlv2.q4_0.bin` | q4_0 | 4-bit | 4.21GB | 7.0GB | 4-bit. |
-| `mpt7b-instructggmlv2.ggmlv2.q4_1.bin` | q4_1 | 4-bit | 4.63GB | 7.5GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
-| `mpt7b-instructggmlv2.ggmlv2.q5_0.bin` | q5_0 | 5-bit | 4.63GB | 7.5GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
-| `mpt7b-instructggmlv2.ggmlv2.q5_1.bin` | q5_1 | 5-bit | 5.06GB | 7.5GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference. |
-| `mpt7b-instructggmlv2.ggmlv2.q8_0.bin` | q8_0 | 8-bit | 7.58GB | 9.0GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
-| `mpt7b-instructggmlv2.ggmlv2.fp16.bin` | fp16 | 16-bit | GB | GB | Full 16-bit. |
-
 
 # Original model card: MPT-7B-Instruct
 
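As a usage note on the files table moved in this commit: below is a minimal sketch of fetching one of the listed GGML files with the `huggingface_hub` Python library. The repo id comes from the renamed link in this diff; the q4_0 file is the smallest option in the table (4.21GB on disk, ~7.0GB RAM).

```python
# Minimal sketch: download one of the GGML files listed under "Provided files".
# Assumes `pip install huggingface_hub`; repo id and filename are taken from this diff.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/MPT-7B-Instruct-GGML",
    filename="mpt7b-instructggmlv2.ggmlv2.q4_0.bin",  # 4.21GB, ~7.0GB RAM per the table
)
print(model_path)  # local cache path, ready to pass to the ggml `mpt` binary
```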
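And a sketch of invoking the ggml `mpt` example binary with the flags visible in the second hunk header (`-t` threads, `-n` tokens to generate, `-p` prompt). The prompt below is a placeholder, since the original command is truncated in the diff, and `bin/mpt` is assumed to have been built from the ggml repo as the README describes.

```python
# Sketch: run the ggml `mpt` example binary against a downloaded GGML file.
# Flag values mirror the command shown in the hunk header; the prompt is a placeholder.
import subprocess

model_path = "/path/to/mpt7b-instructggmlv2.ggmlv2.q4_0.bin"  # e.g. the path from hf_hub_download
subprocess.run(
    ["bin/mpt",
     "-m", model_path,   # GGML model file
     "-t", "8",          # CPU threads
     "-n", "512",        # tokens to generate
     "-p", "Write a short story about llamas."],  # placeholder prompt
    check=True,
)
```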