---
task_categories:
- image-text-to-text
language:
- en
tags:
- chart-understanding
- vlm
- multimodal
- clip
---

# Data for CLIP Training on Chart Task

This repository contains the CLIP training data from our paper "[On the Perception Bottleneck of VLMs for Chart Understanding](https://arxiv.org/abs/2503.18435)".

Code: https://github.com/hkust-nlp/Vision4Chart

## Data Details

- Data source: primarily chart-task datasets such as ChartQA, FigureQA, and DVQA.
- Data overview: each example consists of a chart image, a correct caption, and a wrong (negative) caption, as in the loading sketch below.
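
Below is a minimal sketch of how such (image, correct caption, wrong caption) triplets could be loaded and inspected with the 🤗 `datasets` library. The dataset id and field names used here are assumptions for illustration; check the files in this repository for the exact schema.

```python
# Minimal loading sketch -- the dataset id and field names below are
# assumptions for illustration, not the confirmed schema of this repo.
from datasets import load_dataset

# Hypothetical dataset id; replace with this repository's actual id.
ds = load_dataset("hkust-nlp/chart-clip-data", split="train")

example = ds[0]
image = example["image"]               # chart image, assumed field name
positive = example["correct_caption"]  # caption matching the chart, assumed field name
negative = example["wrong_caption"]    # contrastive (incorrect) caption, assumed field name

print(positive)
print(negative)
```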

## Citation

If you find this data useful in your research, please consider citing our paper:

```bibtex
@misc{liu2025perceptionbottleneckvlmschart,
      title={On the Perception Bottleneck of VLMs for Chart Understanding}, 
      author={Junteng Liu and Weihao Zeng and Xiwen Zhang and Yijun Wang and Zifei Shan and Junxian He},
      year={2025},
      eprint={2503.18435},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.18435}, 
}
```