Update README.md

README.md (CHANGED)
@@ -18,41 +18,27 @@ This repo contains GGML files for CPU inference using [llama.cpp](https://gi
* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
* [Unquantised model in HF format](https://huggingface.co/TheBloke/wizardLM-7B-HF)

- ## Provided files
- | Name | Quant method | Bits | Size | RAM required | Use case |
- | ---- | ---- | ---- | ---- | ---- | ----- |
- | `WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | Maximum compatibility |
- | `WizardLM-7B.GGML.q4_2.bin` | q4_2 | 4bit | 4.2GB | 6GB | Best compromise between resources, speed and quality |
- | `WizardLM-7B.GGML.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | Brand-new 5bit method. Potentially higher quality than 4bit, at the cost of slightly higher resource usage. |
- | `WizardLM-7B.GGML.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | Brand-new 5bit method. Slightly higher resource usage than q5_0. |
-
- * The q4_0 file provides lower quality, but maximal compatibility. It will work with past and future versions of llama.cpp.
- * The q4_2 file offers the best combination of performance and quality. This format is still subject to change and there may be compatibility issues; see below.
- * The q5_0 file uses the brand-new 5bit method released 26th April. It is the 5bit equivalent of q4_0.
- * The q5_1 file uses the brand-new 5bit method released 26th April. It is the 5bit equivalent of q4_1.
-
- ## q4_2 compatibility
-
- q4_2 is a relatively new 4bit quantisation method offering improved quality. However, it is still under development and its format is subject to change.

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m WizardLM-7B.GGML.
### Instruction:
Write a story about llamas
### Response:"
```
@@ -65,9 +51,9 @@ If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

- Note: at this time text-generation-webui
- **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5)

# Original model info

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
* [Unquantised model in HF format](https://huggingface.co/TheBloke/wizardLM-7B-HF)

+ ## REQUIRES LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
+ llama.cpp recently made a breaking change to its quantisation methods.

+ I have re-quantised the GGML files in this repo. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
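A minimal sketch of meeting that requirement (clone and build steps as in the llama.cpp README; the ancestor check is just one way to confirm the commit is included in your checkout):

```
# Build llama.cpp and confirm the tree includes commit b9fd7ee
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# exits 0 only if b9fd7ee is an ancestor of the current HEAD
git merge-base --is-ancestor b9fd7ee HEAD && echo "quantisation update present"
make
```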
+ The previous files, which will still work in older versions of llama.cpp, can be found in branch `previous_llama`.
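One way to grab those older files, sketched as a plain Git LFS clone of this repo's `previous_llama` branch (any download method that targets that branch works):

```
# Clone only the previous_llama branch of this model repo (needs git-lfs)
git lfs install
git clone --single-branch --branch previous_llama https://huggingface.co/TheBloke/wizardLM-7B-GGML
```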
+ ## Provided files
+ | Name | Quant method | Bits | Size | RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | `WizardLM-7B.GGML.q4_0.bin` | q4_0 | 4bit | 4.2GB | 6GB | 4bit. |
+ | `WizardLM-7B.GGML.q5_0.bin` | q5_0 | 5bit | 4.63GB | 7GB | Higher quality inference than 4bit, at the cost of slightly higher resource usage. |
+ | `WizardLM-7B.GGML.q5_1.bin` | q5_1 | 5bit | 5.0GB | 7GB | Higher quality and resource usage again. |
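To fetch just one of the files above, a sketch using the standard Hugging Face `resolve` download URL (substitute whichever filename from the table you want):

```
# Direct-download a single quantised file from the main branch
wget https://huggingface.co/TheBloke/wizardLM-7B-GGML/resolve/main/WizardLM-7B.GGML.q5_0.bin
```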

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 18 -m WizardLM-7B.GGML.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
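For a chat-style conversation, the hunk header above says to replace the `-p <PROMPT>` argument; the replacement flags are elided in this excerpt, so the following sketch assumes llama.cpp's interactive instruct flags `-i -ins`:

```
# Same settings, but an interactive instruct session instead of a one-shot prompt
# (-i -ins assumed; the original text is cut off before naming the flags)
./main -t 18 -m WizardLM-7B.GGML.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```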

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

+ Note: at this time text-generation-webui may not support the new llama.cpp quantisation methods (May 12th).

+ **Thireus** has written a [great guide on how to update it to the latest llama.cpp code](https://huggingface.co/TheBloke/wizardLM-7B-GGML/discussions/5) to get support for the newer files more quickly.

# Original model info