Update README.md
README.md CHANGED

@@ -33,3 +33,26 @@ configs:
  - split: train
    path: data/train-*
---

# mmJEE-Eval: A Bilingual Multimodal Benchmark for Exam-Style Evaluation of Vision-Language Models

<div align="center">
  <!-- Badges -->
  <a href="https://arxiv.org/abs/COMING_SOON">
    <img src="https://img.shields.io/badge/arXiv-Coming%20Soon-B31B1B?style=for-the-badge&logo=arxiv&logoColor=white" alt="arXiv">
  </a>
  <a href="https://huggingface.co/datasets/ArkaMukherjee/mmJEE-Eval">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Dataset-yellow?style=for-the-badge" alt="Hugging Face Dataset">
  </a>
  <a href="https://mmjee-eval.github.io">
    <img src="https://img.shields.io/badge/🌐%20Website-mmjee--eval-blue?style=for-the-badge" alt="Website">
  </a>
  <a href="#license">
    <img src="https://img.shields.io/badge/License-MIT-green?style=for-the-badge" alt="License">
  </a>
</div>

## Introduction

mmJEE-Eval is a bilingual, multimodal dataset for evaluating vision-language models, comprising 1,460 challenging questions drawn from seven years (2019-2025) of India's JEE Advanced competitive examination. We evaluate 17 state-of-the-art VLMs and find that open models (7B-400B) struggle, peaking at 40-50% accuracy, while frontier models from Google and OpenAI reach 77-84%. mmJEE-Eval is also markedly harder than the text-only JEEBench, the only other well-established benchmark built on JEE Advanced problems, with accuracy dropping by 18-56% across all models. Our analyses of metacognitive self-correction, cross-lingual consistency, and human-rated reasoning quality show that contemporary VLMs still exhibit genuine scientific-reasoning deficits despite strong question-solving capabilities (evidenced by high Pass@K accuracies), establishing mmJEE-Eval as a challenging, complementary benchmark that effectively discriminates between model capabilities.
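
A minimal loading sketch, assuming the 🤗 `datasets` library is installed (`pip install datasets`): the repository ID comes from the badge above and the `train` split from the YAML config at the top of this card, while the exact column schema is whatever the dataset viewer reports.

```python
# Minimal loading sketch (assumption: `datasets` is installed).
# Repo ID and split name are taken from this card; no column names are assumed.
from datasets import load_dataset

# Load the train split declared in the card's `configs` section.
ds = load_dataset("ArkaMukherjee/mmJEE-Eval", split="train")

print(ds)           # summary: number of rows and column names
print(ds.features)  # per-column types (e.g., image and text fields)
print(ds[0])        # inspect a single example
```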
|