---
license: gemma
base_model: google/gemma-2b-it
tags:
- quantized
- int4
- mobile
- tflite
- gemma
library_name: mediapipe
---
# Gemma AI Models
This directory contains the downloaded Gemma 1.1 (2B parameters, instruction-tuned) model files required for the M.AI application's on-device AI features.
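For reference, the sketch below shows one way the model file described under "Included Files" might be consumed on Android via MediaPipe's LLM Inference API (matching the `library_name: mediapipe` tag above). This is a minimal sketch, not the M.AI implementation: the on-device path, token budget, and function names are assumptions, and the `com.google.mediapipe:tasks-genai` dependency is assumed to be available.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Hypothetical on-device location where the app stores the downloaded model.
private const val MODEL_PATH = "/data/local/tmp/llm/gemma-1.1-2b-it-gpu-int4.bin"

/** Builds an LLM inference engine backed by the quantized Gemma bundle. */
fun createGemmaEngine(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(MODEL_PATH) // points at gemma-1.1-2b-it-gpu-int4.bin
        .setMaxTokens(1024)       // assumed context budget for the app
        .build()
    return LlmInference.createFromOptions(context, options)
}

/** Runs a single blocking generation; streaming variants also exist in the API. */
fun askGemma(engine: LlmInference, prompt: String): String =
    engine.generateResponse(prompt)
```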
## Included Files

1. **`gemma-1.1-2b-it-gpu-int4.bin`** (~1.35 GB)
   * **Description:** The main Large Language Model, instruction-tuned and quantized (INT4) for efficient GPU inference on mobile devices.
   * **Source:** Google's Gemma 1.1 2B IT TFLite repository (Hugging Face).
   * **SHA-256:** `53f7defdb5554dd517a6863d2d605a144d14361d6c486bbeaac2870497b75747`
2. **`tokenizer.spm`** (~4.2 MB)
   * **Description:** The SentencePiece tokenizer model used to process text input before sending it to the LLM.
   * **Source:** Google's Gemma 2B repository (Hugging Face).
   * **SHA-256:** `61a7b147390c64585d6c3543dd6fc636906c9af3865a5548f27f31aee1d4c8e2`
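To guard against truncated or corrupted downloads, the checksums above can be re-verified on-device before the model is loaded. The following is a minimal Kotlin sketch using the JDK's `MessageDigest`; the directory layout and function names are assumptions, not part of the published model card.

```kotlin
import java.io.File
import java.security.MessageDigest

// Expected digests copied from the file list above.
private val EXPECTED_SHA256 = mapOf(
    "gemma-1.1-2b-it-gpu-int4.bin" to
        "53f7defdb5554dd517a6863d2d605a144d14361d6c486bbeaac2870497b75747",
    "tokenizer.spm" to
        "61a7b147390c64585d6c3543dd6fc636906c9af3865a5548f27f31aee1d4c8e2",
)

/** Computes the SHA-256 digest of a file as a lowercase hex string, streaming in 8 KiB chunks. */
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8 * 1024)
        while (true) {
            val read = input.read(buffer)
            if (read < 0) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

/** Returns true only if every expected file is present and matches its published checksum. */
fun verifyModelDirectory(modelDir: File): Boolean =
    EXPECTED_SHA256.all { (name, expected) ->
        val file = File(modelDir, name)
        file.isFile && sha256Of(file).equals(expected, ignoreCase = true)
    }
```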