Multi-Object Tracking and Segmentation (MOTS) Evaluation


KITTI MOTS will be part of the RobMOTS Challenge at CVPR 2021. Deadline: June 11.

The Multi-Object Tracking and Segmentation (MOTS) benchmark [2] consists of 21 training sequences and 29 test sequences. It is based on the KITTI Tracking Evaluation 2012 and extends the annotations to the Multi-Object Tracking and Segmentation (MOTS) task by adding dense pixelwise segmentation labels for every object. We evaluate submitted results using the HOTA, CLEAR MOT, and MT/PT/ML metrics, and rank methods by HOTA [1]. Our development kit and GitHub evaluation code provide details about the data format as well as utility functions for reading and writing the label files (adapted for the segmentation case). Evaluation is performed using the code from the TrackEval repository.
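For reference, each line of a label file in the development kit's text format encodes one mask: frame number, object id (the class id times 1000 plus the instance id), class id (1 = car, 2 = pedestrian, 10 = ignore region), image height, image width, and a COCO-compressed RLE string. Below is a minimal parsing sketch in Python, assuming this mots-tools format and pycocotools for RLE decoding; load_mots_txt is an illustrative helper, not part of the official kit.

import pycocotools.mask as rletools

def load_mots_txt(path):
    # Each line: "frame obj_id class_id img_height img_width rle"
    with open(path) as f:
        for line in f:
            fields = line.strip().split(" ")
            frame, obj_id, class_id = int(fields[0]), int(fields[1]), int(fields[2])
            height, width = int(fields[3]), int(fields[4])
            rle = {"size": [height, width], "counts": fields[5].encode("utf-8")}
            yield {
                "frame": frame,
                "obj_id": obj_id,             # track id; class encoded as obj_id // 1000
                "class_id": class_id,         # 1 = car, 2 = pedestrian, 10 = ignore region
                "mask": rletools.decode(rle), # (height, width) binary numpy array
            }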

We recommend that everyone using this benchmark read this blog post for an overview of the HOTA metrics.
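In brief, HOTA at a localization threshold alpha is the geometric mean of a detection accuracy and an association accuracy, and the final score averages over thresholds. In the notation of [1]:

\mathrm{HOTA}_\alpha = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha}, \qquad
\mathrm{DetA}_\alpha = \frac{|TP|}{|TP| + |FN| + |FP|}, \qquad
\mathrm{AssA}_\alpha = \frac{1}{|TP|} \sum_{c \in TP} \frac{|TPA(c)|}{|TPA(c)| + |FNA(c)| + |FPA(c)|}, \qquad
\mathrm{HOTA} = \frac{1}{19} \sum_{\alpha \in \{0.05,\, 0.10,\, \ldots,\, 0.95\}} \mathrm{HOTA}_\alpha

Here TP, FN, and FP are the detection matches at overlap threshold alpha, and TPA(c), FNA(c), and FPA(c) count, for a given true-positive detection c, the predictions and ground-truth detections that share its identity across the whole sequence.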


[1] J. Luiten, A. Ošep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking. IJCV 2020.
[2] P. Voigtlaender, M. Krause, A. Ošep, J. Luiten, B. Sekar, A. Geiger, B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.

Note: On 25.02.2020 we updated the evaluation to use the HOTA metrics as the main evaluation metrics, and to show results as plots to enable better comparison across the various aspects of tracking. We have re-calculated the results for all methods. Please download the new evaluation code and report these new numbers for all future submissions. The previous leaderboards will remain live for now, but will stop being updated after some time; they can be found here.
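For local evaluation (e.g., on a split of the training set), the TrackEval code can be driven from Python. The following is a minimal sketch; the class and config names reflect our reading of the TrackEval repository at the time of writing, and "my_tracker" is a hypothetical folder name, so please check the repository's KITTI MOTS example script for the authoritative usage.

import trackeval

# Default configurations; see TrackEval's KITTI MOTS script for all options.
eval_config = trackeval.Evaluator.get_default_eval_config()
dataset_config = trackeval.datasets.KittiMOTS.get_default_dataset_config()
dataset_config["TRACKERS_TO_EVAL"] = ["my_tracker"]  # hypothetical tracker folder

# Evaluate with the HOTA and CLEAR MOT metric families used on this leaderboard.
evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.KittiMOTS(dataset_config)]
metrics_list = [trackeval.metrics.HOTA(), trackeval.metrics.CLEAR()]
evaluator.evaluate(dataset_list, metrics_list)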

Please address any questions or feedback about the KITTI tracking or KITTI MOTS evaluation to Jonathon Luiten at luiten@vision.rwth-aachen.de.

Important Policy Update: As more and more unpublished work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms and student research projects are not allowed; such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work, and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are six months old but are still anonymous or do not have a paper associated with them. For conferences, six months are enough to determine whether a paper has been accepted and to add the bibliographic information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • GPS: Method uses GPS information
  • Online: Online method (frame-by-frame processing, no latency)
  • Additional training data: Use of additional data sources for training (see details)

CAR



[Four evaluation plots for the Car class, each available as png and pdf.]


Rank Method Setting Code HOTA DetA AssA DetRe DetPr AssRe AssPr LocA sMOTSA
1 ViP-DeepLab 76.38 % 82.70 % 70.93 % 88.70 % 88.77 % 75.86 % 86.00 % 90.75 % 81.03 %
S. Qiao, Y. Zhu, H. Adam, A. Yuille and L. Chen: ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2021.
2 EagerMOT code 74.66 % 76.11 % 73.75 % 79.59 % 90.24 % 76.27 % 92.70 % 90.46 % 74.53 %
A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
3 MOTSFusion code 73.63 % 75.44 % 72.39 % 78.32 % 90.78 % 75.53 % 89.97 % 90.29 % 74.98 %
J. Luiten, T. Fischer and B. Leibe: Track to Reconstruct and Reconstruct to Track. IEEE Robotics and Automation Letters 2020.
4 OPITrack code 73.04 % 79.44 % 67.97 % 85.90 % 85.64 % 75.66 % 80.02 % 88.57 % 78.02 %
Y. Gao, H. Xu, Y. Zheng, J. Li and X. Gao: An Object Point Set Inductive Tracker for Multi-Object Tracking and Segmentation. IEEE Transactions on Image Processing 2022.
5 ReMOTS 71.61 % 78.32 % 65.98 % 83.51 % 87.42 % 68.03 % 92.61 % 89.33 % 75.92 %
F. Yang, X. Chang, C. Dang, Z. Zheng, S. Sakti, S. Nakamura and Y. Wu: ReMOTS: Self-Supervised Refining Multi-Object Tracking and Segmentation. 2020.
6 SearchTrack code 71.46 % 76.76 % 67.12 % 81.16 % 87.00 % 71.44 % 85.84 % 88.08 % 74.85 %
Z. Tsai, Y. Tsai, C. Wang, H. Liao, Y. Lin and Y. Chuang: SearchTrack: Multiple Object Tracking with Object-Customized Search and Motion-Aware Features. BMVC 2022.
7 MAF_HDA (online method, no batch processing) code 70.00 % 78.39 % 62.96 % 82.43 % 88.81 % 67.41 % 84.26 % 89.44 % 77.19 %
Y. Song, Y. Yoon, K. Yoon and M. Jeon: Multi-Object Tracking and Segmentation with Embedding Mask-based Affinity Fusion in Hierarchical Data Association. IEEE Access 2022.
8 STC-Seg 62.81 % 68.67 % 58.32 % 73.67 % 81.81 % 62.22 % 83.53 % 84.93 % 66.22 %
L. Yan, Q. Wang, S. Ma, J. Wang and C. Yu: Solve the Puzzle of Instance Segmentation in Videos: A Weakly Supervised Framework with Spatio-Temporal Collaboration. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) 2022.
9 PointTrack 61.95 % 79.38 % 48.83 % 85.77 % 85.66 % 79.07 % 56.35 % 88.52 % 78.50 %
Z. Xu, W. Zhang, X. Tan, W. Yang, H. Huang, S. Wen, E. Ding and L. Huang: Segment as Points for Efficient Online Multi-Object Tracking and Segmentation. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
10 TrackR-CNN code 56.63 % 69.90 % 46.53 % 74.63 % 84.18 % 63.13 % 62.33 % 86.60 % 66.97 %
P. Voigtlaender, M. Krause, A. Ošep, J. Luiten, B. Sekar, A. Geiger and B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.
11 GMPHD_SAF (online method, no batch processing) 55.14 % 77.01 % 39.76 % 81.57 % 87.29 % 69.22 % 49.42 % 88.72 % 75.39 %
Y. Song and M. Jeon: Online Multi-Object Tracking and Segmentation with GMPHD Filter and Simple Affinity Fusion. arXiv preprint arXiv:2009.00100 2020.


PEDESTRIAN



[Four evaluation plots for the Pedestrian class, each available as png and pdf.]


Rank Method Setting Code HOTA DetA AssA DetRe DetPr AssRe AssPr LocA sMOTSA
1 ViP-DeepLab 64.31 % 70.69 % 59.48 % 75.71 % 81.77 % 67.52 % 74.92 % 84.40 % 68.76 %
S. Qiao, Y. Zhu, H. Adam, A. Yuille and L. Chen: ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2021.
2 OPITrack code 60.38 % 62.45 % 60.05 % 65.65 % 81.61 % 64.33 % 82.23 % 83.55 % 61.05 %
Y. Gao, H. Xu, Y. Zheng, J. Li and X. Gao: An Object Point Set Inductive Tracker for Multi-Object Tracking and Segmentation. IEEE Transactions on Image Processing 2022.
3 ReMOTS 58.81 % 67.96 % 52.38 % 71.86 % 82.22 % 54.40 % 88.23 % 84.18 % 65.97 %
F. Yang, X. Chang, C. Dang, Z. Zheng, S. Sakti, S. Nakamura and Y. Wu: ReMOTS: Self-Supervised Refining Multi-Object Tracking and Segmentation. 2020.
4 MAF_HDA (online method, no batch processing) code 57.99 % 66.34 % 51.69 % 69.87 % 82.73 % 57.89 % 75.25 % 84.43 % 65.00 %
Y. Song, Y. Yoon, K. Yoon and M. Jeon: Multi-Object Tracking and Segmentation with Embedding Mask-based Affinity Fusion in Hierarchical Data Association. IEEE Access 2022.
5 EagerMOT code 57.65 % 60.30 % 56.19 % 63.45 % 81.58 % 60.19 % 83.35 % 83.65 % 58.08 %
A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
6 SearchTrack code 57.63 % 63.66 % 53.12 % 67.59 % 77.78 % 58.96 % 73.36 % 80.89 % 60.61 %
Z. Tsai, Y. Tsai, C. Wang, H. Liao, Y. Lin and Y. Chuang: SearchTrack: Multiple Object Tracking with Object-Customized Search and Motion-Aware Features. BMVC 2022.
7 MG-MOTS (online method, no batch processing) 56.85 % 57.81 % 56.61 % 61.63 % 77.28 % 60.98 % 78.93 % 81.07 % 54.39 %
J. Seong: Online and real-time mask-guided multi-person tracking and segmentation. Pattern Recognition Letters 2023.
8 MPNTrackSeg code 55.50 % 60.45 % 52.04 % 64.67 % 75.15 % 59.76 % 70.45 % 79.29 % 57.29 %
G. Brasó, O. Cetintas and L. Leal-Taixé: Multi-Object Tracking and Segmentation Via Neural Message Passing. International Journal of Computer Vision 2022.
9 PointTrack 54.44 % 62.29 % 48.08 % 65.49 % 81.17 % 64.97 % 58.66 % 83.28 % 61.47 %
Z. Xu, W. Zhang, X. Tan, W. Yang, H. Huang, S. Wen, E. Ding and L. Huang: Segment as Points for Efficient Online Multi-Object Tracking and Segmentation. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
10 MOTSFusion code 54.04 % 60.83 % 49.45 % 64.13 % 81.47 % 56.68 % 70.44 % 83.71 % 58.75 %
J. Luiten, T. Fischer and B. Leibe: Track to Reconstruct and Reconstruct to Track. IEEE Robotics and Automation Letters 2020.
11 GMPHD_SAF (online method, no batch processing) 49.33 % 65.45 % 38.32 % 69.62 % 80.98 % 57.88 % 49.77 % 83.82 % 62.87 %
Y. Song and M. Jeon: Online Multi-Object Tracking and Segmentation with GMPHD Filter and Simple Affinity Fusion. arXiv preprint arXiv:2009.00100 2020.
12 STC-Seg 43.89 % 45.93 % 43.65 % 48.14 % 75.16 % 50.50 % 67.24 % 79.06 % 42.57 %
L. Yan, Q. Wang, S. Ma, J. Wang and C. Yu: Solve the Puzzle of Instance Segmentation in Videos: A Weakly Supervised Framework with Spatio-Temporal Collaboration. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) 2022.
13 TrackR-CNN code 41.93 % 53.75 % 33.84 % 57.85 % 72.51 % 45.30 % 50.74 % 78.03 % 47.31 %
P. Voigtlaender, M. Krause, A. Ošep, J. Luiten, B. Sekar, A. Geiger and B. Leibe: MOTS: Multi-Object Tracking and Segmentation. CVPR 2019.


Citation

When using this dataset in your research, please cite:
@inproceedings{Voigtlaender2019CVPR,
  author = {Paul Voigtlaender and Michael Krause and Aljosa Osep and Jonathon Luiten and Berin Balachandar Gnana Sekar and Andreas Geiger and Bastian Leibe},
  title = {MOTS: Multi-Object Tracking and Segmentation},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2019}
}

@article{Luiten2020IJCV,
  author = {Jonathon Luiten and Aljosa Osep and Patrick Dendorfer and Philip Torr and Andreas Geiger and Laura Leal-Taixe and Bastian Leibe},
  title = {HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  journal = {International Journal of Computer Vision (IJCV)},
  year = {2020}
}


