---
license: apache-2.0
datasets:
  - allenai/MADLAD-400
language:
  - ig
base_model:
  - allenai/OLMo-2-1124-13B-Instruct
---

# OLMo 2 1124 13B Instruct for Igbo: SSU-Rand

This model is built on top of OLMo 2 1124 13B Instruct and adapted for Igbo using 200M target-language tokens sampled from MADLAD-400. Adaptation uses the SSU-Rand approach, which randomly selects the parameters to update on a per-column basis (see the sketch below).
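For intuition, here is a minimal, hypothetical PyTorch sketch of column-wise random parameter selection. It is not the authors' implementation: the `update_ratio` value, the restriction to 2-D weight matrices, and the gradient-hook masking mechanism are all assumptions made for illustration.

```python
import torch

def column_update_mask(weight: torch.Tensor, update_ratio: float = 0.1) -> torch.Tensor:
    """Build a 0/1 mask that keeps a random subset of columns trainable.

    `update_ratio` is an assumed hyperparameter, not a value from the paper.
    """
    num_cols = weight.shape[1]
    num_update = max(1, int(update_ratio * num_cols))
    chosen = torch.randperm(num_cols)[:num_update]
    mask = torch.zeros_like(weight)
    mask[:, chosen] = 1.0
    return mask

def shield_parameters(model: torch.nn.Module, update_ratio: float = 0.1) -> None:
    """Zero the gradients of unselected columns so only the chosen ones update."""
    for param in model.parameters():
        if param.dim() == 2:  # linear / embedding weight matrices
            mask = column_update_mask(param, update_ratio)
            # The hook runs on every backward pass; shielded columns get zero gradient.
            param.register_hook(lambda grad, m=mask: grad * m)
```

After calling `shield_parameters(model)`, any standard optimizer step leaves the shielded columns untouched, since their gradients are masked to zero.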

## Model Description

## Model Sources

- **Paper:** https://arxiv.org/abs/2512.04844

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the Igbo-adapted model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    "ssu-project/OLMo-2-1124-13B-Instruct-ig-random"
)
tokenizer = AutoTokenizer.from_pretrained(
    "ssu-project/OLMo-2-1124-13B-Instruct-ig-random"
)
```
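
The snippet below extends this with a short, hypothetical generation example. It assumes the tokenizer ships OLMo 2 Instruct's chat template; the Igbo prompt and decoding settings are illustrative only, not from the model card.

```python
# Illustrative only: prompt text and generation settings are assumptions
messages = [{"role": "user", "content": "Kedu ka ị mere?"}]  # Igbo: "How are you?"
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```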

## Citation

```bibtex
@misc{yamaguchi2025mitigatingcatastrophicforgettingtarget,
    title={Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates},
    author={Atsuki Yamaguchi and Terufumi Morishita and Aline Villavicencio and Nikolaos Aletras},
    year={2025},
    eprint={2512.04844},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2512.04844},
}
```