---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- llava-hf/llava-1.5-7b-hf
- OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
base_model_relation: adapter
tags:
- router
- MLLM-CL
- llava
- internvl
- MR-LoRA
pipeline_tag: visual-question-answering
library_name: transformers
datasets:
- MLLM-CL/MLLM-CL-ReplayData
---
## MLLM-CL Benchmark Description
MLLM-CL is a novel benchmark covering both domain continual learning and ability continual learning: the former evaluates independently and identically distributed (IID) performance across evolving mainstream domains, while the latter evaluates non-IID scenarios that require emerging model abilities. For more details, please refer to:

**MLLM-CL: Continual Learning for Multimodal Large Language Models** [[paper]](https://arxiv.org/abs/2506.05453), [code].

Hongbo Zhao, Fei Zhu, Haiyang Guo, Meng Wang, Rundong Wang, Gaofeng Meng, Zhaoxiang Zhang
## Usage
This repository open-sources the router LoRA weights of MR-LoRA. The weights are organized into 4 branches; a loading sketch is shown below.
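A minimal sketch of attaching the router LoRA to one of the base models listed above, using `transformers` and `peft`. The repository id `MLLM-CL/MR-LoRA-Router` and the branch name `main` are placeholders (assumptions), not confirmed names; substitute this repository's actual id and one of its 4 branches.

```python
# Sketch: load the LLaVA-1.5 base model and attach the router LoRA adapter.
# Assumes `transformers`, `peft`, and `accelerate` are installed.
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
from peft import PeftModel

ROUTER_REPO = "MLLM-CL/MR-LoRA-Router"  # hypothetical id; replace with this repo
BRANCH = "main"                          # replace with one of the 4 branches

base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Attach the router LoRA weights from the chosen branch.
model = PeftModel.from_pretrained(base, ROUTER_REPO, revision=BRANCH)

# Run a visual-question-answering style query through the routed model.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(
    model.device, torch.float16
)
out = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(out[0], skip_special_tokens=True))
```

In MR-LoRA, the router's prediction is presumably used to select the appropriate expert LoRA rather than to produce the final answer itself; see the paper for the full selection pipeline.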
## Citation
```bibtex
@article{zhao2025mllm,
  title={MLLM-CL: Continual Learning for Multimodal Large Language Models},
  author={Zhao, Hongbo and Zhu, Fei and Guo, Haiyang and Wang, Meng and Wang, Rundong and Meng, Gaofeng and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2506.05453},
  year={2025}
}
```
## Contact
Please open an issue on our GitHub repository.
## About us: MLLM-CL Community
We are members of MLLM-CL, an open-source community focused on continual learning for multimodal large language models. If you are interested in our community, feel free to contact us on GitHub or by email.