An example of an image caption (left) and a video caption (right). By analyzing the components of captions, we identify 12 dimensions (9 static and 4 dynamic, with Object Number shared between the two), all of which contribute to a detailed and comprehensive caption. The static dimensions apply to both images and videos. Videos additionally have dynamic dimensions, which must be judged with temporal relations.
The data source count and distribution of each dimension. We collect nearly 1,000 images/videos per dimension, crawling part of the data ourselves and sampling the rest from existing datasets to ensure diversity.
The annotation distribution of each dimension. We compute statistics differently for each dimension type. For object category, character identification, and action, we count frequencies, as most descriptions appear only once. For spatial relation, we summarize four categories and count their occurrences. For style, camera angle, and camera movement, we count the samples in each category. For the remaining dimensions, we plot bar charts showing the most frequent samples.
Radar chart visualization of F1-scores for five representative MLLMs.
The precision, recall, and F1-score of closed-source models and 72B open-source models on all dimensions. Precision measures the accuracy of what the models describe; recall measures how many visual elements in the image are described correctly. F1-score is the harmonic mean of precision and recall. For video inputs, we send the whole video to Gemini and uniformly sample 50 frames for GPT due to the API's limit on the maximum number of images.
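For reference, the F1-score combines the two metrics as their harmonic mean:

$$\mathrm{F1} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$

The uniform frame sampling used for GPT can be illustrated with a minimal sketch. The rounded-`linspace` index selection below is an assumption for illustration; the caption only states that 50 frames are drawn uniformly:

```python
import numpy as np

def uniform_frame_indices(total_frames: int, num_samples: int = 50) -> list[int]:
    """Pick `num_samples` frame indices spread evenly across a video.

    Note: the rounded-linspace strategy is an illustrative assumption;
    the benchmark only specifies that 50 frames are sampled uniformly.
    """
    if total_frames <= num_samples:
        # Shorter clips: use every frame once
        return list(range(total_frames))
    # Evenly spaced positions over [0, total_frames - 1], rounded to valid indices
    return np.linspace(0, total_frames - 1, num_samples).round().astype(int).tolist()

# Example: a 1,200-frame clip yields indices 0, 24, 49, ..., 1199
print(uniform_frame_indices(1200))
```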
Evaluation example of the Object Number dimension on three SOTA models.
Evaluation example of the Camera Angle dimension on three SOTA models.
@article{liu2025good,
title={What Is a Good Caption? A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness},
author={Liu, Zhihang and Xie, Chen-Wei and Wen, Bin and Yu, Feiwu and Chen, Jixuan and Zhang, Boqiang and Yang, Nianzu and Li, Pandeng and Li, Yinglu and Gao, Zuan and Zheng, Yun and Xie, Hongtao},
journal={arXiv preprint arXiv:2502.14914},
year={2025}
}