---
task_categories:
- question-answering
language:
- en
tags:
- cognition
- emotional_valence
- funniness
- memorability
- aesthetics
size_categories:
- 1K<n<10K
license: cc-by-4.0
dataset_info:
features:
- name: id
dtype: string
- name: image
dtype:
image:
decode: false
- name: dimension
dtype: string
- name: prompt
dtype: string
- name: score
dtype: float32
splits:
- name: train
num_bytes: 145969790.0
num_examples: 3200
- name: test
num_bytes: 22545428.0
num_examples: 480
download_size: 165420288
dataset_size: 168515218.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# CogIP-Bench: Cognition Image Property Benchmark

**CogIP-Bench** is a comprehensive benchmark for evaluating and aligning Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"); **CogIP-Bench** measures this gap.
This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.
## 📂 Dataset Structure & Files
The dataset is organized into two main formats: standard benchmark files (`.jsonl`) for evaluation and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).
### 1. Benchmark Data (`metadata_train.jsonl`, `metadata_test.jsonl`)
These files contain the image-prompt pairs and ground truth scores used to benchmark model performance against human judgments.
* **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
* **`metadata_test.jsonl`**: The testing split, containing **480 examples**, used for final evaluation.
#### **Data Fields**
Each line in the `.jsonl` files represents a single datapoint:
| Field | Type | Description |
| :--- | :--- | :--- |
| `id` | `string` | A unique identifier for the image (e.g., `"000800"`). |
| `image` | `Image` | The image file, stored locally in the dataset structure (e.g., `"images/Aesthetics/000800.jpg"`). |
| `dimension` | `string` | The cognitive dimension the example belongs to (e.g., `"Aesthetics"`). |
| `prompt` | `string` | The instruction given to the MLLM, following the **"Describe-then-Predict"** strategy. |
| `score` | `float32` | The ground-truth human-preference score that the model must predict (e.g., `4.105`). |
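Both splits can be loaded with the 🤗 `datasets` library. A minimal sketch follows; the repository id is a placeholder for this dataset's actual path:
```python
from datasets import load_dataset

# Replace the placeholder repo id with this dataset's actual path.
ds = load_dataset("<org>/CogIP-Bench")

print(ds["train"].num_rows, ds["test"].num_rows)  # 3200 480

# Note: images are stored with decode=False, so each `image` entry is a
# dict of raw bytes and file path rather than a decoded PIL image.
example = ds["test"][0]
print(example["id"], example["dimension"], example["score"])
```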
#### **Example Entry (from `metadata_test.jsonl`)**
This example shows a prompt for the Aesthetics sub-task, which includes detailed instructions and the scoring scale.
```json
{
  "id": "000800",
  "image": "images/Aesthetics/000800.jpg",
  "prompt": "<image> \n**Visual Aesthetics Analysis Sub-Task (Aesthetics):** \nIn this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.\n\nPlease first give a description label for the corresponding image, then predict the scores based on the following rules: \n- (0.0, 3.5, 'very low') \n- (3.5, 5.0, 'low') \n- (5.0, 6.5, 'medium') \n- (6.5, 8.0, 'high') \n- (8.0, 10.1, 'very high') \n\nThe score should be a number with exactly three decimal places (e.g., 7.234). \n\nPlease return only the label and the scores number, nothing else.",
  "score": 4.105
}
```
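Because the prompt asks the model to return only a label and a score with exactly three decimal places, evaluation involves parsing that free-form reply. A minimal, illustrative parser (not part of the benchmark's own tooling; the regex and function name are assumptions) could look like:
```python
import re

def parse_response(reply: str) -> tuple[str, float] | None:
    """Extract the descriptive label and the three-decimal score
    from a model reply such as 'low 4.105'."""
    match = re.search(r"(.*?)\s*(\d+\.\d{3})\s*$", reply.strip(), re.DOTALL)
    if match is None:
        return None  # malformed reply: no three-decimal score found
    return match.group(1).strip(), float(match.group(2))

print(parse_response("low 4.105"))  # ('low', 4.105)
```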
### 2. SFT Data (`original_cognition.json`)
* **Filename:** `original_cognition.json`
* **Purpose:** The original JSON file used for **Supervised Fine-Tuning (SFT)** of the MLLM. It formats each example so the model learns to output the structured response (a descriptive label plus the numerical score), aligning its predictions with human cognitive judgments. It is also the source file from which the structured `.jsonl` data was generated.
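The exact schema of `original_cognition.json` is not documented here, but SFT-style records can be approximated from the `.jsonl` fields. The sketch below assumes a common conversation-style SFT layout and derives the label from the bucket boundaries given in the Aesthetics prompt; both are assumptions, not the benchmark's actual format:
```python
import json

# Bucket boundaries taken from the Aesthetics prompt above; assumed
# here to be shared by the other 0-10 dimensions.
BUCKETS = [(0.0, 3.5, "very low"), (3.5, 5.0, "low"), (5.0, 6.5, "medium"),
           (6.5, 8.0, "high"), (8.0, 10.1, "very high")]

def label_for(score: float) -> str:
    for low, high, label in BUCKETS:
        if low <= score < high:
            return label
    return BUCKETS[-1][2]

records = []
with open("metadata_train.jsonl") as f:
    for line in f:
        ex = json.loads(line)
        records.append({
            "id": ex["id"],
            "image": ex["image"],
            "conversations": [
                {"from": "human", "value": ex["prompt"]},
                # Target: descriptive label plus a three-decimal score.
                {"from": "gpt",
                 "value": f"{label_for(ex['score'])} {ex['score']:.3f}"},
            ],
        })

with open("reconstructed_sft.json", "w") as f:
    json.dump(records, f, indent=2)
```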
---
## 🧠 Cognitive Dimensions
The benchmark evaluates four distinct subjective properties, each with a specific scale and corresponding labels used in the `prompt`.
| Dimension | Description | Typical Scale | Scoring Buckets |
| :--- | :--- | :--- | :--- |
| **Aesthetics** | Assesses visual appeal, harmony, and composition. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| **Funniness** | Measures the humorous or amusing quality of an image. | 0.0 to 10.0 | Very Low, Low, Medium, High, Very High |
| **Emotional Valence** | Captures the emotional tone (positive to negative). | -3.0 to 3.0 (Mapped to 1-10) | Negative, Neutral, Positive |
| **Memorability** | Reflects the likelihood of an image being remembered. | 0.0 to 1.0 (Mapped to 1-10) | Very Low, Low, Medium, High, Very High |
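Emotional Valence and Memorability are collected on their native scales and mapped to 1-10. Assuming a simple linear rescaling (the card does not spell out the exact formula), the mapping could be implemented as:
```python
def rescale(x: float, src_min: float, src_max: float,
            dst_min: float = 1.0, dst_max: float = 10.0) -> float:
    """Linearly map x from [src_min, src_max] to [dst_min, dst_max]."""
    return dst_min + (x - src_min) * (dst_max - dst_min) / (src_max - src_min)

print(rescale(0.0, -3.0, 3.0))   # valence 0 -> 5.5 (neutral midpoint)
print(rescale(0.72, 0.0, 1.0))   # memorability 0.72 -> 7.48
```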