winvswon78 committed
Commit 5b80cbc · verified · 1 Parent(s): 6b41b30

Upload folder using huggingface_hub
All/emma_all_subjects.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95f98e975894cba2b77a322e19f10db3cf77def6ebaf3434b010280a858521e2
size 3534939
All/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95f98e975894cba2b77a322e19f10db3cf77def6ebaf3434b010280a858521e2
size 3534939
Chemistry/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7fbe04b7164e909986fe1a9bffe928cfcdc3cb92a5d9c9994ea2c19e0ba3b4a
size 415466
Coding/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:871ed7941b23b4287a1e3d05f8efe980b45a0f434e384b66d70f716511890856
size 1693180
Math/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0ea6b6f276b1e1cabf6ecc5bb5ba478d53b4ca5bf079f843975cc1122708a251
size 857062
Physics/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5454f6d4d36cf5072e8ac7dfa9fd9be151664152372744a3ad8533d70061080d
size 566203
README.md ADDED
@@ -0,0 +1,294 @@
---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
dataset_info:
- config_name: Chemistry
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 49337131.36
    num_examples: 1176
  download_size: 38090732
  dataset_size: 49337131.36
- config_name: Coding
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 201047028.0
    num_examples: 564
  download_size: 156921633
  dataset_size: 201047028.0
- config_name: Math
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 55727097.0
    num_examples: 892
  download_size: 49594723
  dataset_size: 55727097.0
- config_name: Physics
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 20512520.0
    num_examples: 156
  download_size: 13597019
  dataset_size: 20512520.0
- config_name: All
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 326623776.36
    num_examples: 2788
  download_size: 258203107
  dataset_size: 326623776.36
configs:
- config_name: Chemistry
  data_files:
  - split: test
    path: Chemistry/test-*
- config_name: Coding
  data_files:
  - split: test
    path: Coding/test-*
- config_name: Math
  data_files:
  - split: test
    path: Math/test-*
- config_name: Physics
  data_files:
  - split: test
    path: Physics/test-*
- config_name: All
  data_files:
  - split: test
    path: All/test-*
tags:
- chemistry
- physics
- math
- coding
---

## Dataset Description

**EMMA (Enhanced MultiModal reAsoning)** is a benchmark targeting organic multimodal reasoning across mathematics, physics, chemistry, and coding.
EMMA tasks demand advanced cross-modal reasoning that cannot be solved by reasoning independently within each modality, offering a more rigorous test suite for the reasoning capabilities of MLLMs.

EMMA comprises 2,788 problems across the four domains, 1,796 of which are newly constructed. Within each subject, we further provide fine-grained labels classifying each question by the specific skills it measures.
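As a quick sanity check, the per-subject `num_examples` values in the YAML metadata of this card should sum to the size of the `All` config; a minimal sketch in Python (counts copied from the metadata above):

```python
# Per-subject test-split example counts, taken from this card's YAML metadata.
subject_counts = {
    "Chemistry": 1176,
    "Coding": 564,
    "Math": 892,
    "Physics": 156,
}

# The "All" config aggregates every subject, so the counts should sum to it.
total = sum(subject_counts.values())
assert total == 2788, total  # num_examples of the All config
```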
<p align="center">
  <img src="https://huggingface.co/datasets/luckychao/EMMA/resolve/main/emma_composition.jpg" width="30%"> <br>
</p>

## Paper Information

- Paper: https://www.arxiv.org/abs/2501.05444
- Code: https://github.com/hychaochao/EMMA
- Project: https://emma-benchmark.github.io/

### Data Format

The dataset is provided in Parquet format; each record contains the following attributes:

```
{
    "pid": [string] Problem ID, e.g., "math_1",
    "question": [string] The question text,
    "options": [list] Choice options for multiple-choice problems; 'none' for free-form problems,
    "answer": [string] The correct answer to the problem,
    "image_1": [image],
    "image_2": [image],
    "image_3": [image],
    "image_4": [image],
    "image_5": [image],
    "solution": [string] The detailed reasoning steps required to solve the problem,
    "subject": [string] The subject of the problem, e.g., "Math", "Physics",
    "task": [string] The task of the problem, e.g., "Code Choose Vis",
    "category": [string] The category of the problem, e.g., "2D Transformation",
    "source": [string] The source dataset of the problem, e.g., "math-vista"; "Newly annotated" for handmade data,
    "type": [string] The question type, e.g., "Multiple Choice", "Open-ended",
    "context": [string] Background knowledge required for the question; 'none' for problems without context,
}
```
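Records following this schema need a small amount of glue code before evaluation, e.g. mapping `'none'` placeholders to empty values and gathering the populated image slots. The snippet below is an illustrative sketch, not part of the official toolkit: the `normalize_record` helper and the sample record are hypothetical, with field names taken from the schema above.

```python
IMAGE_KEYS = ("image_1", "image_2", "image_3", "image_4", "image_5")

def normalize_record(rec: dict) -> dict:
    """Normalize a raw record: map 'none'/None placeholders to empty
    values and collect the non-null image slots into a single list."""
    options = rec.get("options")
    if options in (None, "none"):
        options = []
    images = [rec[k] for k in IMAGE_KEYS if rec.get(k) is not None]
    return {
        "pid": rec["pid"],
        "question": rec["question"],
        "options": options,
        "answer": rec["answer"],
        "images": images,
        "type": rec.get("type", ""),
    }

# Hypothetical example record shaped like the schema above; the string
# "<PIL.Image>" stands in for a decoded image object.
sample = {
    "pid": "math_1",
    "question": "Which image shows the rotated figure?",
    "options": ["A", "B", "C", "D"],
    "answer": "A",
    "image_1": "<PIL.Image>",
    "image_2": None, "image_3": None, "image_4": None, "image_5": None,
    "type": "Multiple Choice",
}
clean = normalize_record(sample)
```

In practice a record like `sample` would come from loading one of the configs above with the `datasets` library and iterating over the `test` split.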

## Citation

```
@misc{hao2025mllmsreasonmultimodalityemma,
      title={Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark},
      author={Yunzhuo Hao and Jiawei Gu and Huichen Will Wang and Linjie Li and Zhengyuan Yang and Lijuan Wang and Yu Cheng},
      year={2025},
      eprint={2501.05444},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.05444},
}
```
emma_composition.jpg ADDED

Git LFS Details

  • SHA256: 85b5f2833a59897494016a82e5201cca292441af0f87c035ca3cf872c900c857
  • Pointer size: 132 Bytes
  • Size of remote file: 3.45 MB