Learning to Track Instances without Video Annotations

CVPR 2021 (Oral)


Yang Fu, Sifei Liu, Umar Iqbal, Shalini De Mello, Humphrey Shi, Jan Kautz

Paper · Code (coming soon)

Tracking segmentation masks of multiple instances has been intensively studied, but still faces two fundamental challenges: 1) the requirement of large-scale, frame-wise annotation, and 2) the complexity of two-stage approaches. To resolve these challenges, we introduce a novel semi-supervised framework that learns instance tracking networks with only a labeled image dataset and unlabeled video sequences. With an instance contrastive objective, we learn an embedding that discriminates each instance from the others. We show that even when trained only on images, the learned feature representation is robust to instance appearance variations and can thus track objects steadily across frames. We further enhance the tracking capability of the embedding by learning correspondence from unlabeled videos in a self-supervised manner. In addition, we integrate this module into single-stage instance segmentation and pose estimation frameworks, which significantly reduces the computational complexity of tracking compared to two-stage networks. We conduct experiments on the YouTube-VIS and PoseTrack datasets. Without any video annotation effort, our proposed method achieves comparable or even better performance than most fully supervised methods.

Bibtex


@inproceedings{fu2021track,
  author    = {Fu, Yang and Liu, Sifei and Iqbal, Umar and De Mello, Shalini and Shi, Humphrey and Kautz, Jan},
  title     = {Learning to Track Instances without Video Annotations},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2021}
}