---
base_model:
- vinhnx90/vt-qwen-3b-GRPO-merged-16bit
tags:
- bnb-my-repo
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- grpo
license: apache-2.0
language:
- en
---

# vinhnx90/vt-qwen-3b-GRPO-merged-16bit (Quantized)

## Description

This model is a quantized version of the original model [`vinhnx90/vt-qwen-3b-GRPO-merged-16bit`](https://huggingface.co/vinhnx90/vt-qwen-3b-GRPO-merged-16bit). It was quantized to 4-bit with the BitsAndBytes library using the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.

## Quantization Details

- **Quantization Type**: int4
- **bnb_4bit_quant_type**: fp4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: bfloat16

See the loading example at the end of this card for how to apply these settings with `transformers`.

# 📄 Original Model Information

# Uploaded model

- **Developed by:** vinhnx90
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
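
## How to load (example)

Below is a minimal sketch of applying the 4-bit settings listed under "Quantization Details" when loading with `transformers` and `bitsandbytes`. The repo id points at the original 16-bit merge as a stand-in for this quantized card's Hub path, and the prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Stand-in repo id: swap in this quantized card's Hub path if you want to load
# the pre-quantized weights directly.
model_id = "vinhnx90/vt-qwen-3b-GRPO-merged-16bit"

# 4-bit configuration mirroring the "Quantization Details" section above:
# fp4 quant type, double quantization, bfloat16 compute and storage dtypes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Quick generation check.
prompt = "Explain GRPO in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```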