---
license: apache-2.0
datasets:
- allenai/MADLAD-400
language:
- ig
base_model:
- allenai/OLMo-2-1124-7B-Instruct
---
# OLMo 2 1124 7B Instruct for Igbo: GMT (12.5% gradient dropping)
This model is built on top of OLMo 2 1124 7B Instruct and adapted for Igbo using 200M target-language tokens sampled from MADLAD-400. The model is adapted using the GMT approach with 12.5% gradient dropping (see the sketch below).
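For intuition, here is a minimal sketch of what per-step gradient dropping could look like. The `drop_gradients` helper and its magnitude-based selection criterion are assumptions for illustration only, not the exact GMT procedure; see the paper and repository for details.
```python
import torch

def drop_gradients(model: torch.nn.Module, drop_ratio: float = 0.125) -> None:
    """Zero a fraction of gradient entries before the optimizer step.

    ASSUMPTION: this sketch drops the largest-magnitude 12.5% of entries
    in each parameter's gradient; the actual GMT selection criterion may
    differ (see https://github.com/gucci-j/ssu).
    """
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        k = max(1, int(drop_ratio * grad.numel()))
        # Magnitude of the k-th largest entry serves as the cutoff.
        cutoff = grad.abs().flatten().kthvalue(grad.numel() - k + 1).values
        grad[grad.abs() >= cutoff] = 0.0

# Typical use inside a training loop:
#   loss.backward()
#   drop_gradients(model, drop_ratio=0.125)
#   optimizer.step()
```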
## Model Description
- **Language:** Igbo
- **License:** Apache 2.0
- **Fine-tuned from model:** [allenai/OLMo-2-1124-7B-Instruct](https://huggingface.co/allenai/OLMo-2-1124-7B-Instruct)
## Model Sources
- **Repository:** https://github.com/gucci-j/ssu
- **Paper:** https://arxiv.org/abs/2512.04844
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ssu-project/OLMo-2-1124-7B-Instruct-ig-gmt_0.125"
)
tokenizer = AutoTokenizer.from_pretrained(
    "ssu-project/OLMo-2-1124-7B-Instruct-ig-gmt_0.125"
)
```
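Once loaded, the model can be used like any other causal LM in `transformers`. The Igbo prompt and generation settings below are illustrative, not prescribed by this model card:
```python
prompt = "Kedu ka ị mere?"  # Igbo: "How are you?" (illustrative prompt)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```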
## Citation
```
@misc{yamaguchi2025mitigatingcatastrophicforgettingtarget,
      title={Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates},
      author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
      year={2025},
      eprint={2512.04844},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2512.04844},
}
```