TiTan Collection: smaller models, fine-tuned for generating titles and tags (9 items).
A fine-tuned Gemma 3 4B model, specialized in generating short conversation titles and relevant tags.
This model is a fine-tuned version of google/gemma-3-4b-it using the Unsloth framework with LoRA (Low-Rank Adaptation) for efficient training.
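The card does not specify the LoRA configuration used for training. The sketch below shows a typical Unsloth LoRA setup for this base model; the rank, alpha, and target modules are illustrative assumptions, not the actual training recipe.
from unsloth import FastLanguageModel
# Load the instruction-tuned base model in 4-bit to keep memory low
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-3-4b-it",
    max_seq_length=4096,
    load_in_4bit=True,
)
# Attach LoRA adapters (all hyperparameters here are illustrative assumptions)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)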
Intended use: title and tag generation.
The titles-n-tags dataset was created specifically for fine-tuning models on titling and tagging. Two candidate titles and two sets of tags were generated for each entry, and a combination of a judge LLM and a scoring algorithm determined the winning combination (sketched below).
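The selection logic itself is not published; the snippet below only illustrates the idea of combining a heuristic score with a judge LLM's preference to pick between two candidate (title, tags) pairs. Both helper functions are hypothetical stand-ins, not the actual pipeline.
# Hypothetical sketch of the candidate-selection step described above
def heuristic_score(title, tags):
    # Toy heuristic: favor short titles and a small, non-empty tag set
    return -abs(len(title.split()) - 5) - abs(len(tags) - 4)

def judge_prefers_a(entry, candidate_a, candidate_b):
    # Placeholder for a judge-LLM call that compares the two candidates
    return True

def pick_winner(entry, candidate_a, candidate_b):
    # candidate_a and candidate_b are (title, tags) tuples
    vote_a = judge_prefers_a(entry, candidate_a, candidate_b)
    score_a = heuristic_score(*candidate_a) + (1.0 if vote_a else 0.0)
    score_b = heuristic_score(*candidate_b) + (0.0 if vote_a else 1.0)
    return candidate_a if score_a >= score_b else candidate_b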
from unsloth import FastLanguageModel
import torch
# Load model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="theprint/TiTan-Gemma3-4B",
max_seq_length=4096,
dtype=None,
load_in_4bit=True,
)
# Enable inference mode
FastLanguageModel.for_inference(model)
# Example usage
inputs = tokenizer(["Your prompt here"], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
"theprint/TiTan-Gemma3-4B",
torch_dtype=torch.float16,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("theprint/TiTan-Gemma3-4B")
# Example usage
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Your question here"}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
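The exact prompt format used during fine-tuning is not documented here. Assuming the model accepts a plain instruction in the user turn, a title-and-tags request could look like this (reusing the model and tokenizer loaded above):
# Ask for a title and tags for a short conversation (prompt wording is an assumption)
conversation = "User: How do I center a div in CSS?\nAssistant: You can use flexbox ..."
messages = [
    {"role": "user", "content": "Generate a short title and a few tags for this conversation:\n\n" + conversation}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))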
Quantized GGUF versions are available in the gguf/ directory for use with llama.cpp:
TiTan-Gemma3-4B-f16.gguf (8688.3 MB) - 16-bit float (original precision, largest file)
TiTan-Gemma3-4B-q3_k_m.gguf (2276.3 MB) - 3-bit quantization (medium quality)
TiTan-Gemma3-4B-q4_k_m.gguf (2734.6 MB) - 4-bit quantization (medium, recommended for most use cases)
TiTan-Gemma3-4B-q5_k_m.gguf (3138.7 MB) - 5-bit quantization (medium, good quality)
TiTan-Gemma3-4B-q6_k.gguf (3568.1 MB) - 6-bit quantization (high quality)
TiTan-Gemma3-4B-q8_0.gguf (4619.2 MB) - 8-bit quantization (very high quality)
# Download a quantized version (q4_k_m recommended for most use cases)
wget https://huggingface.co/theprint/TiTan-Gemma3-4B/resolve/main/gguf/TiTan-Gemma3-4B-q4_k_m.gguf
# Run with llama.cpp
./llama.cpp/main -m TiTan-Gemma3-4B-q4_k_m.gguf -p "Your prompt here" -n 256
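As an alternative to the llama.cpp CLI, the same GGUF file can be loaded from Python with llama-cpp-python. A minimal sketch, assuming the q4_k_m file downloaded above sits in the working directory:
from llama_cpp import Llama
# Load the quantized GGUF downloaded above
llm = Llama(model_path="TiTan-Gemma3-4B-q4_k_m.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Generate a short title and tags for: Your conversation here"}],
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])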
Limitations: the model may provide incorrect information.
If you use this model, please cite:
@misc{titan_gemma3_4b,
title={TiTan-Gemma3-4B: Fine-tuned google/gemma-3-4b-it},
author={theprint},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/theprint/TiTan-Gemma3-4B}
}
Base model: google/gemma-3-4b-pt (google/gemma-3-4b-it, which this model was fine-tuned from, is itself derived from this pretrained checkpoint).