---
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: query_image_0
      dtype: image
    - name: query_image_1
      dtype: image
    - name: query_image_2
      dtype: image
    - name: query_image_3
      dtype: image
    - name: query_image_4
      dtype: image
    - name: query_image_5
      dtype: image
    - name: query_image_6
      dtype: image
    - name: query_image_7
      dtype: image
    - name: answer_image_a
      dtype: image
    - name: answer_image_b
      dtype: image
    - name: answer_image_c
      dtype: image
    - name: answer_image_d
      dtype: image
    - name: answer_image_e
      dtype: image
    - name: answer_image_f
      dtype: image
  splits:
    - name: test
      num_bytes: 12321743
      num_examples: 50
  download_size: 10640175
  dataset_size: 12321743
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# Dataset Card for "IQ50"

## Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 Homepage | 📚 Documentation | 🤗 Huggingface Datasets

## This Dataset

This is a formatted version of IQ50. It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models; a loading sketch is shown below.
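
For reference, here is a minimal sketch of loading the `test` split described in the metadata above with the Hugging Face `datasets` library. The repository id `lmms-lab/IQ50` is an assumption based on the lmms-eval organization name; adjust it to the actual path of this dataset if it differs.

```python
# Minimal sketch: load and inspect the IQ50 test split with the `datasets`
# library. The repo id "lmms-lab/IQ50" is an assumption, not confirmed by
# this card; replace it with the actual repository path if needed.
from datasets import load_dataset

dataset = load_dataset("lmms-lab/IQ50", split="test")  # single "test" split, 50 examples

example = dataset[0]
print(example["question_id"])
print(example["question"])
print(example["answer"])

# query_image_0 .. query_image_7 and answer_image_a .. answer_image_f are
# decoded to PIL images by the `image` feature type; individual puzzles may
# leave some of these fields empty (None).
img = example["query_image_0"]
if img is not None:
    print(img.size)
```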

```bibtex
@article{huang2023language,
  title={Language is not all you need: Aligning perception with language models},
  author={Huang, Shaohan and Dong, Li and Wang, Wenhui and Hao, Yaru and Singhal, Saksham and Ma, Shuming and Lv, Tengchao and Cui, Lei and Mohammed, Owais Khan and Liu, Qiang and others},
  journal={arXiv preprint arXiv:2302.14045},
  volume={1},
  number={2},
  pages={3},
  year={2023}
}
```

More Information needed