---
task_categories:
  - question-answering
language:
  - en
tags:
  - cognition
  - emotional_valence
  - funniness
  - memorability
  - aesthetics
size_categories:
  - 1K<n<10K
license: cc-by-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: image
      dtype:
        image:
          decode: false
    - name: dimension
      dtype: string
    - name: prompt
      dtype: string
    - name: score
      dtype: float32
  splits:
    - name: train
      num_bytes: 145969790
      num_examples: 3200
    - name: test
      num_bytes: 22545428
      num_examples: 480
  download_size: 165420288
  dataset_size: 168515218
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# CogIP-Bench: Cognition Image Property Benchmark


CogIP-Bench is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"); CogIP-Bench is designed to measure that gap.

This dataset evaluates models across four key cognitive dimensions: Aesthetics, Funniness, Emotional Valence, and Memorability.

## 📂 Dataset Structure & Files

The dataset is organized into two main formats: standard benchmark files (`.jsonl`) used for evaluation, and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).

### 1. Benchmark Data (`metadata_train.jsonl`, `metadata_test.jsonl`)

These files contain the image-prompt pairs and ground truth scores used to benchmark model performance against human judgments.

- `metadata_train.jsonl`: The training split, containing 3,200 examples across the four dimensions.
- `metadata_test.jsonl`: The testing split, containing 480 examples, used for final evaluation.
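
A minimal loading sketch with 🤗 `datasets` is shown below. Note that the repo id `foolen/CogIP-Bench` is an assumption inferred from the uploader name, not something this card confirms.

```python
# Minimal loading sketch. NOTE: the repo id "foolen/CogIP-Bench" is an
# assumption, not stated in this card.
from datasets import load_dataset

ds = load_dataset("foolen/CogIP-Bench")  # default config: train + test splits
example = ds["test"][0]
print(example["dimension"], example["score"])
```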

#### Data Fields

Each line in the `.jsonl` files represents a single datapoint with the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | A unique identifier for the image (e.g., `"000800"`). |
| `image` | Image | The image file, stored by relative path in the dataset structure (e.g., `"images/Aesthetics/000800.jpg"`). |
| `dimension` | string | The cognitive dimension the example belongs to (e.g., `"Aesthetics"`); listed as a feature in the dataset metadata above. |
| `prompt` | string | The specific instruction given to the MLLM, employing the "Describe-then-Predict" strategy. |
| `score` | float32 | The ground-truth human-preference score, which is the target value for the model to predict (e.g., `4.105`). |
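
If you prefer to work with the raw files rather than the Hub loader, each `.jsonl` line parses to a plain dict with the fields above. A short sketch (the file path is assumed to be relative to the dataset root):

```python
import json

# Iterate over the raw benchmark file; each line is one JSON record
# with the fields described in the table above.
with open("metadata_test.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["id"], record["score"])
```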

#### Example Entry (from `metadata_test.jsonl`)

This example shows a prompt for the Aesthetics sub-task, which includes detailed instructions and the scoring scale.

```json
{
  "id": "000800",
  "image": "images/Aesthetics/000800.jpg",
  "prompt": "<image>  \n**Visual Aesthetics Analysis Sub-Task (Aesthetics):** \nIn this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.\n\nPlease first give a description label for the corresponding image, then predict the scores based on the following rules:  \n- (0.0, 3.5, 'very low')  \n- (3.5, 5.0, 'low')  \n- (5.0, 6.5, 'medium')  \n- (6.5, 8.0, 'high')  \n- (8.0, 10.1, 'very high')  \n\nThe score should be a number with exactly three decimal places (e.g., 7.234).  \n\nPlease return only the label and the scores number, nothing else.",
  "score": 4.105
}
```
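
The prompt asks the model to return only a label and a three-decimal score. A hypothetical parsing helper for that reply format (the function name and the exact "label score" layout are assumptions):

```python
import re

# Hypothetical helper: split a reply like "low 4.105" into its label
# and score, per the reply format requested by the prompt above.
def parse_response(reply: str) -> tuple[str, float]:
    match = re.search(r"(-?\d+\.\d{3})", reply)
    if match is None:
        raise ValueError(f"No score found in reply: {reply!r}")
    score = float(match.group(1))
    label = reply[: match.start()].strip()
    return label, score

label, score = parse_response("low 4.105")
assert label == "low" and score == 4.105
```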

### 2. SFT Data (`original_cognition.json`)

- **Filename:** `original_cognition.json`
- **Purpose:** The source file used for Supervised Fine-Tuning (SFT) of the MLLM. Each entry is formatted so the model learns to produce the structured response (a descriptive label plus a numerical score), aligning its output with human cognitive judgments. The structured `.jsonl` benchmark files above are generated from this file; a formatting sketch follows this list.
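
Since the prompts require a reply consisting of exactly a label and a score with three decimal places, the supervised target can be rendered as a single string. A minimal sketch, assuming the target is simply "label score" (the actual record schema of `original_cognition.json` is not documented in this card):

```python
# Hypothetical target formatter; the actual SFT record schema in
# original_cognition.json is an assumption, not documented here.
def format_target(label: str, score: float) -> str:
    return f"{label} {score:.3f}"

assert format_target("low", 4.105) == "low 4.105"
```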

## 🧠 Cognitive Dimensions

The benchmark evaluates four distinct subjective properties, each with a specific scale and corresponding labels used in the prompt.

| Dimension | Description | Typical Scale | Scoring Buckets |
| --- | --- | --- | --- |
| Aesthetics | Assesses visual appeal, harmony, and composition. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| Funniness | Measures the humorous or amusing quality of an image. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| Emotional Valence | Captures the emotional tone (positive to negative). | -3.0 to 3.0 (mapped to 1-10) | Negative, Neutral, Positive |
| Memorability | Reflects the likelihood of an image being remembered. | 0.0 to 1.0 (mapped to 1-10) | Very Low, Low, Medium, High, Very High |
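
The bucket thresholds for Aesthetics are quoted verbatim in the example prompt above. The sketch below reuses them and adds a linear rescale of the kind implied by the "mapped to 1-10" entries in the table; the exact mapping used by the dataset authors is an assumption.

```python
# Aesthetics thresholds as quoted in the example prompt; each bucket is
# (low_inclusive, high_exclusive, label).
AESTHETICS_BUCKETS = [
    (0.0, 3.5, "very low"),
    (3.5, 5.0, "low"),
    (5.0, 6.5, "medium"),
    (6.5, 8.0, "high"),
    (8.0, 10.1, "very high"),
]

def bucket_label(score: float, buckets=AESTHETICS_BUCKETS) -> str:
    for lo, hi, label in buckets:
        if lo <= score < hi:
            return label
    raise ValueError(f"score {score} outside expected range")

def rescale(x: float, src_lo: float, src_hi: float,
            dst_lo: float = 1.0, dst_hi: float = 10.0) -> float:
    # Linear map from a source scale (e.g., valence -3..3) onto 1..10.
    # NOTE: the exact mapping used by the authors is an assumption.
    return dst_lo + (x - src_lo) * (dst_hi - dst_lo) / (src_hi - src_lo)

assert bucket_label(4.105) == "low"
assert rescale(0.0, -3.0, 3.0) == 5.5
```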
