---
language: en
license: apache-2.0
tags:
- fine-tuned
- gemma
- lora
- gemma-garage
base_model: google/gemma-3-1b-pt
pipeline_tag: text-generation
---

# test-bench-1

Fine-tuned google/gemma-3-1b-pt model from Gemma Garage.

This model was fine-tuned using [Gemma Garage](https://github.com/your-repo/gemma-garage), a platform for fine-tuning Gemma models with LoRA.

## Model Details

- **Base Model**: google/gemma-3-1b-pt
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Platform**: Gemma Garage
- **Fine-tuned on**: 2025-08-15

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/test-bench-1")
model = AutoModelForCausalLM.from_pretrained("LucasFMartins/test-bench-1")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Training Details

This model was fine-tuned using the Gemma Garage platform with the following configuration:

- Request ID: 5387d555-f470-4731-bbfc-657c3c719f23
- Training completed on: 2025-08-15 17:30:36 UTC

For more information about Gemma Garage, visit [our GitHub repository](https://github.com/your-repo/gemma-garage).