foolen committed (verified)
Commit f93e1e0 · 1 Parent(s): c8289a5

add more explanation of the dataset


![bench2](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/cE8Xkuj4INvP9T842vw6O.png)
![bench1](https://cdn-uploads.huggingface.co/production/uploads/67daba9b9c49701f60496af3/8YM0wMEhrsiTAzWOW2hxb.png)

Files changed (1)
  1. README.md +39 -30
README.md CHANGED
@@ -1,30 +1,39 @@
- ---
- dataset_info:
-   features:
-   - name: id
-     dtype: string
-   - name: image
-     dtype:
-       image:
-         decode: false
-   - name: prompt
-     dtype: string
-   - name: score
-     dtype: float32
-   splits:
-   - name: train
-     num_bytes: 145918590.0
-     num_examples: 3200
-   - name: test
-     num_bytes: 53594464.919
-     num_examples: 1719
-   download_size: 207408977
-   dataset_size: 199513054.919
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: test
-     path: data/test-*
- ---
+ # CogIP-Bench: Cognition Image Property Benchmark
+
+ **CogIP-Bench** is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). This gap is what **CogIP-Bench** aims to measure.
+
+ This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.
+
+ ## 📂 Dataset Structure & Files
+
+ The dataset is organized into two main formats: standard benchmark files (`.jsonl`) for evaluation and a source JSON file (`.json`) used for Supervised Fine-Tuning (SFT).
+
+ ### 1. Benchmark Data (`metadata_train.jsonl`, `metadata_test.jsonl`)
+
+ These files contain the image-prompt pairs and ground-truth scores used to benchmark model performance against human judgments; a minimal loading sketch follows the list below.
+
+ * **`metadata_train.jsonl`**: The training split, containing **3,200 examples** across the four dimensions.
+ * **`metadata_test.jsonl`**: The test split of **1,719 examples**, used for final evaluation.
+
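+ The splits can be read with standard JSONL tooling. The snippet below is a minimal sketch using only the Python standard library; it assumes the `metadata_*.jsonl` files sit at the repository root, next to the `images/` folder.
+
+ ```python
+ import json
+
+ # Read a benchmark split line by line; each line is one datapoint with the
+ # fields `id`, `image`, `prompt`, and `score` described in the table below.
+ def load_split(path):
+     with open(path, "r", encoding="utf-8") as f:
+         return [json.loads(line) for line in f if line.strip()]
+
+ train = load_split("metadata_train.jsonl")  # 3,200 examples
+ test = load_split("metadata_test.jsonl")    # 1,719 examples
+
+ example = test[0]
+ print(example["id"], example["image"], example["score"])
+ ```
+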
+ #### **Data Fields**
+
+ Each line in the `.jsonl` files represents a single datapoint:
+
+ | Field | Type | Description |
+ | :--- | :--- | :--- |
+ | `id` | `string` | A unique identifier for the image (e.g., `"000800"`). |
+ | `image` | `Image` | The image file (stored locally in the dataset structure, e.g., `"images/Aesthetics/000800.jpg"`). |
+ | `prompt` | `string` | The specific instruction given to the MLLM, employing the **"Describe-then-Predict"** strategy. |
+ | `score` | `float32` | The ground-truth human-preference score, which is the target value for the model to predict (e.g., `4.105`). |
+
+ #### **Example Entry (from `metadata_test.jsonl`)**
+
+ This example shows a prompt for the Aesthetics sub-task, which includes detailed instructions and the scoring scale.
+
+ ```json
+ {
+   "id": "000800",
+   "image": "images/Aesthetics/000800.jpg",
+   "prompt": "<image> \n**Visual Aesthetics Analysis Sub-Task (Aesthetics):** \nIn this sub-task, you are asked to assess the aesthetic appeal of the image based on elements such as visual harmony, composition, color, lighting, and emotional impact. Your goal is to provide a descriptive label that captures the overall aesthetic quality of the image, followed by a numerical score that reflects its aesthetic value.\n\nPlease first give a description label for the corresponding image, then predict the scores based on the following rules: \n- (0.0, 3.5, 'very low') \n- (3.5, 5.0, 'low') \n- (5.0, 6.5, 'medium') \n- (6.5, 8.0, 'high') \n- (8.0, 10.1, 'very high') \n\nThe score should be a number with exactly three decimal places (e.g., 7.234). \n\nPlease return only the label and the scores number, nothing else.",
+   "score": 4.105
+ }
+ ```
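+
+ Because the prompt instructs the model to return only a descriptive label and a score with exactly three decimal places, a reply can be checked by extracting that number and comparing it with the ground-truth `score` field. The snippet below is an illustrative sketch only; the regex and the absolute-error comparison are assumptions of this example, not the benchmark's official evaluation protocol.
+
+ ```python
+ import re
+
+ # Extract the last decimal number from a model reply such as "low 4.105".
+ def extract_score(reply: str):
+     matches = re.findall(r"\d+\.\d+", reply)
+     return float(matches[-1]) if matches else None
+
+ # Compare a reply against the ground-truth score of the entry above.
+ prediction = extract_score("low 4.105")
+ ground_truth = 4.105
+ print(abs(prediction - ground_truth))  # absolute error, 0.0 here
+ ```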