Version 1 (v.1) is the first release of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.

This repo includes the quantized models in the GGUF format. A separate repo, [valeriojob/MedGPT-Gemma2-9B-BA-v.1](https://huggingface.co/valeriojob/MedGPT-Gemma2-9B-BA-v.1), includes the default 16-bit format of the model as well as its LoRA adapters.

This model was quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp).

This model is available in the following quantization formats:

- BF16
- Q4_K_M
- Q5_K_M
- Q8_0
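The GGUF files can be run directly with llama.cpp. A minimal sketch: the repo id (`valeriojob/MedGPT-Gemma2-9B-BA-v.1-GGUF`) and the file name below are assumptions for illustration, so check this repo's file listing for the exact names before running.

```shell
# Download one quantized file from this repo
# (repo id and filename are assumptions; verify them in the file listing).
huggingface-cli download valeriojob/MedGPT-Gemma2-9B-BA-v.1-GGUF \
  MedGPT-Gemma2-9B-BA-v.1-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (build llama.cpp first, see its README).
./llama-cli -m MedGPT-Gemma2-9B-BA-v.1-Q4_K_M.gguf \
  -p "What are common symptoms of iron deficiency?" -n 256
```

Q4_K_M is usually a good starting point for quality vs. size; pick Q8_0 or BF16 if you have the memory and want output closer to the unquantized model.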

## Model description