path: data/test-*
---

# CogIP-Bench: Cognition Image Property Benchmark

**CogIP-Bench** is a comprehensive benchmark designed to evaluate and align Multimodal Large Language Models (MLLMs) with human subjective cognitive perception. While current MLLMs excel at objective recognition ("what is in the image"), they often struggle with subjective properties ("how the image feels"). This gap is precisely what **CogIP-Bench** aims to measure.

This dataset evaluates models across four key cognitive dimensions: **Aesthetics**, **Funniness**, **Emotional Valence**, and **Memorability**.
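
Since the configuration above maps the test split to `data/test-*`, the benchmark should be loadable with the Hugging Face `datasets` library. A minimal sketch: the repository ID `your-org/CogIP-Bench` is a placeholder, and the exact column names depend on this dataset's actual schema.

```python
from datasets import load_dataset

# "your-org/CogIP-Bench" is a placeholder repo ID -- substitute the
# actual Hugging Face path this dataset card is hosted under.
bench = load_dataset("your-org/CogIP-Bench", split="test")

# Inspect the schema and a single example; the exact fields
# (image, per-dimension scores, etc.) depend on the dataset's features.
print(bench)
print(bench[0])
```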