foolen committed on
Commit ea2a915 · verified · 1 Parent(s): f93e1e0

Update README.md

Files changed (1)
  1. README.md +50 -0
README.md CHANGED
@@ -1,3 +1,53 @@
+ ---
+ task_categories:
+ - question-answering
+ language:
+ - en
+ tags:
+ - cognition
+ - emotional_valence
+ - funniness
+ - memorability
+ - aesthetics
+ size_categories:
+ - 1K<n<10K
+
+ license: "cc-by-4.0"
+
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: image
+     dtype:
+       image:
+         decode: false
+   - name: prompt
+     dtype: string
+   - name: score
+     dtype: float32
+   splits:
+   - name: train
+     num_bytes: 145918590.0
+     num_examples: 3200
+   - name: test
+     num_bytes: 53594464.919
+     num_examples: 1719
+   download_size: 207408977
+   dataset_size: 199513054.919
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: test
+     path: data/test-*
+ ---
+
+
+
+
+
  # CogIP-Bench: Cognition Image Property Benchmark

  **CogIP-Bench** is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). This gap is what **CogIP-Bench** aims to measure.
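Per the card metadata added in this commit, each example exposes an `id`, an `image` (stored undecoded), a `prompt`, and a float `score`, split into train (3,200 examples) and test (1,719 examples). Below is a minimal loading sketch with the `datasets` library; the repo id `foolen/CogIP-Bench` is an assumption for illustration, since the exact Hub path is not stated in this commit.

```python
from datasets import Image, load_dataset

# Hypothetical Hub repo id -- the actual path is not given in this commit.
ds = load_dataset("foolen/CogIP-Bench")

# Splits declared in the card: train (3200 examples) and test (1719 examples).
print(ds)

# The image feature is declared with decode: false, so examples carry raw
# bytes/path metadata; cast the column to get decoded PIL images.
ds = ds.cast_column("image", Image(decode=True))

example = ds["test"][0]
print(example["id"], example["score"])
print(example["prompt"])
```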