Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

nvlm_adapt_basic_model_16bit - bnb 8bits

- Model creator: https://huggingface.co/Danielrahmai1991/
- Original model: https://huggingface.co/Danielrahmai1991/nvlm_adapt_basic_model_16bit/

Original model description:

---
base_model: nvidia/Llama3-ChatQA-2-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

# Uploaded model

- **Developed by:** Danielrahmai1991
- **License:** apache-2.0
- **Finetuned from model:** nvidia/Llama3-ChatQA-2-8B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
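
Below is a minimal sketch of loading this bnb 8-bit quantization with Transformers and bitsandbytes. The repository id is a placeholder assumption (substitute the actual quantized repo name from this model page), and `device_map="auto"` assumes the `accelerate` package is installed.

```python
# Minimal sketch: load the bnb 8-bit quantized checkpoint and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: replace with the actual quantized repository id from this page.
model_id = "<this-quantized-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bitsandbytes 8-bit loading
    device_map="auto",  # requires accelerate; spreads layers across available devices
)

prompt = "Explain what 8-bit quantization does to a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```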