The Tenth NTIRE 2025 Efficient Super-Resolution Challenge Report • 2504.10686 • Published Apr 14, 2025
MMIG-Bench: Towards Comprehensive and Explainable Evaluation of Multi-Modal Image Generation Models • 2505.19415 • Published May 26, 2025
MMPerspective: Do MLLMs Understand Perspective? A Comprehensive Benchmark for Perspective Perception, Reasoning, and Robustness • 2505.20426 • Published May 26, 2025
Video-LMM Post-Training: A Deep Dive into Video Reasoning with Large Multimodal Models • 2510.05034 • Published Oct 6, 2025
Caption Anything in Video: Fine-grained Object-centric Captioning via Spatiotemporal Multimodal Prompting • 2504.05541 • Published Apr 7, 2025
Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1) • 2504.03151 • Published Apr 4, 2025
VERIFY: A Benchmark of Visual Explanation and Reasoning for Investigating Multimodal Reasoning Fidelity • 2503.11557 • Published Mar 14, 2025
Unveiling Visual Perception in Language Models: An Attention Head Analysis Approach • 2412.18108 • Published Dec 24, 2024
VidComposition: Can MLLMs Analyze Compositions in Compiled Videos? • 2411.10979 • Published Nov 17, 2024
Caption Anything: Interactive Image Description with Diverse Multimodal Controls • 2305.02677 • Published May 4, 2023
Video Understanding with Large Language Models: A Survey • 2312.17432 • Published Dec 29, 2023
Emo-Avatar: Efficient Monocular Video Style Avatar through Texture Rendering • 2402.00827 • Published Feb 1, 2024
AVicuna: Audio-Visual LLM with Interleaver and Context-Boundary Alignment for Temporal Referential Dialogue • 2403.16276 • Published Mar 24, 2024
V2Xum-LLM: Cross-Modal Video Summarization with Temporal Prompt Instruction Tuning • 2404.12353 • Published Apr 18, 2024
AIM 2024 Challenge on Video Saliency Prediction: Methods and Results • 2409.14827 • Published Sep 23, 2024