Learning the What and How of Annotation in Video Object Segmentation
Thanos Delatolas
Vicky Kalogeiton
Dim P. Papadopoulos
[Paper (WACV 2024)]
[Code]
[Extended Abstract (ICCV-W 2023)]

Abstract

Video Object Segmentation (VOS) is crucial for several applications, from video editing to video data generation. Training a VOS model requires an abundance of manually labeled training videos. The de facto traditional way of annotating objects requires humans to draw detailed segmentation masks on the target objects at each video frame. This annotation process, however, is tedious and time-consuming. To reduce this annotation cost, we propose EVA-VOS, a human-in-the-loop annotation framework for video object segmentation. Unlike the traditional approach, we introduce an agent that iteratively predicts both which frame ("What") to annotate and which annotation type ("How") to use. The annotator then annotates only the selected frame, which is used to update a VOS module, leading to significant savings in annotation time. We conduct experiments on the MOSE and DAVIS datasets and show that: (a) EVA-VOS produces masks with accuracy close to human agreement 3.5x faster than the standard way of annotating videos; (b) our frame selection achieves state-of-the-art performance; (c) EVA-VOS yields significant gains in annotation time compared to all other methods and baselines.

Video


Quantitative results

We report J&F accuracy as a function of annotation time in hours. (a) The effect of the frame selection stage (for a fair comparison, we use the same annotation type for all approaches). (b) The effect of the annotation selection stage, using the same (oracle) frame selection for all approaches. (c) The results of our full pipeline. In our experiments, we consider two annotation types: mask drawing and corrective clicks, denoted Mask and Clicks for simplicity.

Bibtex

@inproceedings{delatolas2024learning,
  title     = {Learning the What and How of Annotation in Video Object Segmentation},
  author    = {Thanos Delatolas and Vicky Kalogeiton and Dim P. Papadopoulos},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year      = {2024}
}

Qualitative results

For each of five example videos, we show the segmentation masks obtained after 5 minutes and after 15 minutes of annotation.

Acknowledgements

Dim P. Papadopoulos was supported by the DFF Sapere Aude Starting Grant "ACHILLES". Vicky Kalogeiton was supported by a Hi! PARIS grant and the ANR-22-CE23-0007. We would like to thank Paraskevas Pegios, Jens Parslov, Erik Riise, and Yasser Benigmim for proofreading. This template was originally made by Phillip Isola and Richard Zhang; the code can be found here.