Object Tracking Evaluation (2D bounding-boxes)


The object tracking benchmark consists of 21 training sequences and 29 test sequences. Although we have labeled 8 different classes, only the classes 'Car' and 'Pedestrian' are evaluated in our benchmark, as only these classes have been labeled with enough instances for a comprehensive evaluation. The labeling process was performed in two steps: first, we hired a set of annotators to label 3D bounding boxes as tracklets in the point clouds. Since a single 3D bounding box tracklet with fixed dimensions often fits a pedestrian poorly, we additionally labeled the left/right boundaries of each object using Mechanical Turk. We also collected labels of each object's occlusion state and computed its truncation by backprojecting a car/pedestrian model into the image plane. We evaluate submitted results using the HOTA, CLEAR MOT and MT/PT/ML metrics and rank methods by HOTA. Our development kit and GitHub evaluation code provide details about the data format as well as utility functions for reading and writing the label files.
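As a quick orientation to the data format documented in the development kit, the sketch below parses one whitespace-separated line of a tracking label/result file. The field layout follows the devkit's readme; the function and dictionary keys are our own illustration, not part of the devkit API.

```python
# Minimal sketch of parsing one KITTI tracking label/result line.
# Fields per the devkit readme: frame, track id, object type, truncation,
# occlusion, observation angle alpha, 2D bbox, 3D dimensions, 3D location,
# rotation_y, and (for submitted results only) a confidence score.

def parse_label_line(line):
    """Parse a single whitespace-separated KITTI tracking label line."""
    f = line.split()
    obj = {
        "frame": int(f[0]),
        "track_id": int(f[1]),
        "type": f[2],  # e.g. 'Car', 'Pedestrian', 'Van', 'DontCare'
        "truncated": float(f[3]),
        "occluded": int(f[4]),
        "alpha": float(f[5]),
        "bbox": [float(v) for v in f[6:10]],         # left, top, right, bottom (0-based pixels)
        "dimensions": [float(v) for v in f[10:13]],  # height, width, length (m)
        "location": [float(v) for v in f[13:16]],    # x, y, z in camera coordinates (m)
        "rotation_y": float(f[16]),
    }
    if len(f) > 17:  # results files append a per-box confidence score
        obj["score"] = float(f[17])
    return obj
```

See the devkit itself for the authoritative reader/writer utilities; this is only a shorthand for the column order.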

Evaluation is performed using the code from the TrackEval repository.

We recommend that everyone who uses this benchmark reads this blog post for an overview of the HOTA metrics.


The goal of the object tracking task is to estimate object tracklets for the classes 'Car' and 'Pedestrian'. We evaluate 2D 0-based bounding boxes in each image, and we encourage submitters to provide a confidence measure for every frame of each track. For evaluation we only consider detections/objects taller than 25 pixels in the image, and we do not count Vans as false positives for Cars or Sitting Persons as false positives for Pedestrians, due to their similarity in appearance. As evaluation criterion we follow the HOTA metrics [1], while also reporting the CLEAR MOT [2] and Mostly-Tracked/Partly-Tracked/Mostly-Lost [3] metrics. Methods are ranked overall by HOTA, and bold numbers indicate the best method for each particular metric. To make the methods comparable, the time for object detection is not included in the specified runtime.
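The height cutoff and neighbouring-class rules above can be sketched as follows. This is a simplified illustration of the rules, not the actual evaluation code; the constant and helper names are our own.

```python
# Sketch of the evaluation filtering described above: objects shorter than
# 25 px are ignored, and visually similar neighbouring classes (Van for Car,
# Person_sitting for Pedestrian) are not punished as false positives.

MIN_HEIGHT = 25  # minimum 2D bounding-box height in pixels

NEIGHBOR_CLASSES = {
    "Car": {"Van"},
    "Pedestrian": {"Person_sitting"},
}

def box_height(bbox):
    left, top, right, bottom = bbox
    return bottom - top

def is_evaluated(obj, eval_class):
    """True if the object counts towards the evaluation of eval_class."""
    return obj["type"] == eval_class and box_height(obj["bbox"]) >= MIN_HEIGHT

def is_ignored(obj, eval_class):
    """True if the object is neither matched nor counted as an error."""
    return (obj["type"] in NEIGHBOR_CLASSES.get(eval_class, set())
            or (obj["type"] == eval_class
                and box_height(obj["bbox"]) < MIN_HEIGHT))
```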

[1] J. Luiten, A. Ošep, P. Dendorfer, P. Torr, A. Geiger, L. Leal-Taixé, B. Leibe: HOTA: A Higher Order Metric for Evaluating Multi-object Tracking. IJCV 2020.
[2] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[3] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
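For reference, the ranking metric from [1] combines detection and association accuracy at a localization threshold α and averages over thresholds (here written in the notation of the HOTA paper):

```latex
\mathrm{HOTA}_\alpha = \sqrt{\mathrm{DetA}_\alpha \cdot \mathrm{AssA}_\alpha},
\qquad
\mathrm{HOTA} = \frac{1}{19} \sum_{\alpha \in \{0.05,\, 0.10,\, \dots,\, 0.95\}} \mathrm{HOTA}_\alpha
```

The DetA, AssA, DetRe/DetPr, AssRe/AssPr and LocA columns in the tables below are the corresponding sub-metrics.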

Note: On 25.02.2021 we updated the evaluation to use the HOTA metrics as the main evaluation metrics, and to show results as plots, enabling better comparison across various aspects of tracking. Furthermore, the definitions of previously used evaluation metrics such as MOTA have been updated to match modern definitions (as used in MOTChallenge) in order to unify metrics across benchmarks: ID-switches are now counted whenever the ID changes after a gap in either the ground-truth or the predicted track, and when assigning IDs the algorithm prefers extending current tracks (minimizing the number of ID-switches) where possible. We have re-calculated the results for all methods. Please download the new evaluation code and report these new numbers for all future submissions. The previous leaderboards from before these changes remain live for now and can be found here, but after some time they will stop being updated.
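The updated ID-switch rule can be illustrated with a small sketch. This is our own simplification, assuming a per-frame list of prediction IDs matched to one ground-truth track, with None marking a gap; it is not the TrackEval implementation.

```python
# Sketch of the updated ID-switch counting: for one ground-truth track, an
# IDSW is counted whenever the matched prediction ID differs from the last
# non-None matched ID, including when the change happens across a gap.

def count_id_switches(assigned_ids):
    """assigned_ids: per-frame prediction ID matched to a gt track, or None."""
    switches = 0
    last_id = None
    for pid in assigned_ids:
        if pid is None:      # gap in the gt or predicted track: remember last_id
            continue
        if last_id is not None and pid != last_id:
            switches += 1    # ID changed, possibly after a gap
        last_id = pid
    return switches
```

Under the old definition a change across a gap would not have been counted; under the updated one it is, which is why all results were re-calculated.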

Please address any questions or feedback about the KITTI tracking or KITTI MOTS evaluation to Jonathon Luiten at luiten@vision.rwth-aachen.de.

Important Policy Update: As more and more unpublished work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms and student research projects are not allowed; such work must be evaluated on a split of the training set. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have an associated paper. For conferences, 6 months are enough to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • GPS: Method uses GPS information
  • Online: Online method (frame-by-frame processing, no latency)
  • Additional training data: Use of additional data sources for training (see details)

CAR



Result plots for this class (four figures) are available as: png pdf

Method Setting Code HOTA DetA AssA DetRe DetPr AssRe AssPr LocA MOTA
1 Anonymous 81.08 % 78.36 % 84.59 % 83.59 % 85.14 % 87.90 % 90.58 % 87.52 % 91.61 %
2 PC-TCNN
This method makes use of Velodyne laser scans.
80.90 % 78.46 % 84.13 % 84.22 % 84.58 % 87.46 % 90.47 % 87.48 % 91.70 %
H. Wu, Q. Li, C. Wen, X. Li, X. Fan and C. Wang: Tracklet Proposal Network for Multi-Object Tracking on Point Clouds. IJCAI 2021.
3 Rethink MOT 80.39 % 77.88 % 83.64 % 84.23 % 83.57 % 87.63 % 88.90 % 87.07 % 91.53 %
4 CAMO-MOT
This method makes use of Velodyne laser scans.
79.99 % 76.34 % 84.45 % 81.16 % 84.59 % 87.27 % 90.30 % 86.66 % 90.38 %
5 RAM
This is an online method (no batch processing).
79.53 % 78.79 % 80.94 % 82.54 % 86.33 % 84.21 % 88.77 % 87.15 % 91.61 %
P. Tokmakov, A. Jabri, J. Li and A. Gaidon: Object Permanence Emerges in a Random Walk along Memory. ICML 2022.
6 Anonymous 79.13 % 78.81 % 80.13 % 82.41 % 86.43 % 83.40 % 88.81 % 87.11 % 91.72 %
7 FastTrack 78.78 % 77.67 % 80.66 % 81.76 % 84.57 % 84.02 % 87.58 % 86.01 % 92.06 %
8 CyberTrack 78.25 % 77.51 % 79.88 % 82.95 % 84.99 % 82.45 % 91.69 % 87.62 % 90.14 %
9 PermaTrack
This is an online method (no batch processing).
78.03 % 78.29 % 78.41 % 81.71 % 86.54 % 81.14 % 89.49 % 87.10 % 91.33 %
P. Tokmakov, J. Li, W. Burgard and A. Gaidon: Learning to Track with Object Permanence. ICCV 2021.
10 PC3T
This method makes use of Velodyne laser scans.
code 77.80 % 74.57 % 81.59 % 79.19 % 84.07 % 84.77 % 88.75 % 86.07 % 88.81 %
H. Wu, W. Han, C. Wen, X. Li and C. Wang: 3D Multi-Object Tracking in Point Clouds Based on Prediction Confidence-Guided Data Association. IEEE TITS 2021.
11 jerrymot 77.12 % 73.43 % 81.66 % 80.60 % 81.69 % 84.23 % 90.45 % 86.79 % 85.82 %
12 OC-SORT
This is an online method (no batch processing).
code 76.54 % 77.25 % 76.39 % 80.64 % 86.36 % 80.33 % 87.17 % 87.01 % 90.28 %
J. Cao, X. Weng, R. Khirodkar, J. Pang and K. Kitani: Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. 2022.
13 MSF-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
76.15 % 75.87 % 77.05 % 82.00 % 83.46 % 79.34 % 90.68 % 87.03 % 88.81 %
14 StrongFusion-MOT 75.65 % 72.08 % 79.84 % 75.20 % 86.23 % 82.42 % 89.81 % 86.74 % 85.53 %
15 Mono_3D_KF
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
75.47 % 74.10 % 77.63 % 78.86 % 82.98 % 80.23 % 88.88 % 85.48 % 88.48 %
A. Reich and H. Wuensche: Monocular 3D Multi-Object Tracking with an EKF Approach for Long-Term Stable Tracks. 2021 IEEE 24th International Conference on Information Fusion (FUSION) 2021.
16 DeepFusion-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 75.46 % 71.54 % 80.05 % 75.34 % 85.25 % 82.63 % 89.77 % 86.70 % 84.63 %
X. Wang, C. Fu, Z. Li, Y. Lai and J. He: DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association. IEEE Robotics and Automation Letters 2022.
17 InvariantGraphMOT 75.16 % 73.94 % 76.95 % 80.81 % 82.40 % 80.00 % 89.27 % 87.12 % 85.08 %
18 OSN 74.83 % 77.32 % 72.99 % 83.61 % 82.92 % 78.86 % 82.75 % 86.60 % 90.31 %
19 SPPH-IR
This method uses stereo information.
74.69 % 70.74 % 80.38 % 75.14 % 81.96 % 84.17 % 87.72 % 84.52 % 85.72 %
20 EagerMOT code 74.39 % 75.27 % 74.16 % 78.77 % 86.42 % 76.24 % 91.05 % 87.17 % 87.82 %
A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
21 MMOT3D
This is an online method (no batch processing).
74.30 % 73.24 % 75.89 % 80.44 % 81.08 % 82.77 % 84.49 % 86.21 % 86.47 %
22 DEFT
This is an online method (no batch processing).
code 74.23 % 75.33 % 73.79 % 79.96 % 83.97 % 78.30 % 85.19 % 86.14 % 88.38 %
M. Chaabane, P. Zhang, R. Beveridge and S. O'Hara: DEFT: Detection Embeddings for Tracking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2021.
23 OC-SORT
This is an online method (no batch processing).
74.20 % 76.75 % 72.36 % 80.15 % 86.30 % 76.30 % 87.10 % 87.00 % 89.31 %
24 TripletTrack 73.58 % 73.18 % 74.66 % 76.18 % 86.81 % 77.31 % 89.55 % 87.37 % 84.32 %
25 Opm-NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
73.19 % 73.27 % 73.77 % 80.98 % 81.67 % 77.05 % 89.84 % 87.31 % 84.21 %
H. Chao Jiang: A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE sensors journal 2022.
26 mono3DT
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
code 73.16 % 72.73 % 74.18 % 76.51 % 85.28 % 77.18 % 87.77 % 86.88 % 84.28 %
H. Hu, Q. Cai, D. Wang, J. Lin, M. Sun, P. Krähenbühl, T. Darrell and F. Yu: Joint Monocular 3D Vehicle Detection and Tracking. ICCV 2019.
27 PointSiamMOT
This method makes use of Velodyne laser scans.
73.15 % 74.06 % 72.89 % 81.03 % 82.29 % 75.02 % 90.59 % 87.06 % 85.94 %
28 LGM 73.14 % 74.61 % 72.31 % 80.53 % 82.16 % 76.38 % 84.74 % 85.85 % 87.60 %
G. Wang, R. Gu, Z. Liu, W. Hu, M. Song and J. Hwang: Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) 2021.
29 Anonymous 73.13 % 74.07 % 72.83 % 81.04 % 82.30 % 74.94 % 90.71 % 87.06 % 85.78 %
30 DFR-MOT code 73.06 % 74.34 % 72.34 % 77.95 % 85.80 % 75.10 % 88.26 % 87.07 % 86.26 %
31 CenterTrack
This is an online method (no batch processing).
code 73.02 % 75.62 % 71.20 % 80.10 % 84.56 % 73.84 % 89.00 % 86.52 % 88.83 %
X. Zhou, V. Koltun and P. Krähenbühl: Tracking Objects as Points. ECCV 2020.
32 QD-3DT
This is an online method (no batch processing).
code 72.77 % 74.09 % 72.19 % 78.13 % 85.48 % 74.87 % 89.21 % 87.16 % 85.94 %
H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. ArXiv:2103.07351 2021.
33 TrackMPNN
This is an online method (no batch processing).
code 72.30 % 74.69 % 70.63 % 80.02 % 83.11 % 73.58 % 87.14 % 86.14 % 87.33 %
A. Rangesh, P. Maheshwari, M. Gebre, S. Mhatre, V. Ramezani and M. Trivedi: TrackMPNN: A Message Passing Graph Neural Architecture for Multi-Object Tracking. arXiv preprint arXiv:2101.04206.
34 DTFI 72.22 % 65.67 % 79.98 % 81.99 % 70.95 % 84.14 % 87.65 % 86.35 % 72.91 %
35 DiTMOT code 72.21 % 71.09 % 74.04 % 75.98 % 83.28 % 76.57 % 89.97 % 86.15 % 84.53 %
S. Wang, P. Cai, L. Wang and M. Liu: DiTNet: End-to-End 3D Object Detection and Track ID Assignment in Spatio-Temporal World. IEEE Robotics and Automation Letters 2021.
36 GQY_tracking 72.05 % 69.46 % 75.15 % 78.20 % 79.09 % 81.42 % 85.07 % 86.54 % 80.98 %
37 SMAT
This is an online method (no batch processing).
71.88 % 72.13 % 72.13 % 74.43 % 87.33 % 74.77 % 88.30 % 87.19 % 83.64 %
N. Gonzalez, A. Ospina and P. Calvez: SMAT: Smart Multiple Affinity Metrics for Multiple Object Tracking. Image Analysis and Recognition 2020.
38 NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
71.85 % 69.61 % 74.81 % 81.19 % 76.99 % 78.57 % 89.33 % 87.30 % 78.52 %
Chao Jiang and W. Zhiling: A New Adaptive Noise Covariance Matrices Estimation and Filtering Method: Application to Multi-Object Tracking. arXiv 2021.
39 TuSimple
This is an online method (no batch processing).
71.55 % 72.62 % 71.11 % 76.78 % 83.84 % 74.51 % 86.26 % 85.72 % 86.31 %
W. Choi: Near-online multi-target tracking with aggregated local flow descriptor. Proceedings of the IEEE International Conference on Computer Vision 2015.
K. He, X. Zhang, S. Ren and J. Sun: Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition 2016.
40 DetFlowTrack
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
71.52 % 72.87 % 70.89 % 79.56 % 82.98 % 73.47 % 90.64 % 87.79 % 83.34 %
41 CenterTube_RCNN 71.25 % 74.27 % 69.24 % 79.94 % 83.53 % 72.01 % 90.28 % 86.85 % 86.97 %
42 JMODT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 70.73 % 73.45 % 68.76 % 78.67 % 84.02 % 72.46 % 88.02 % 86.95 % 85.35 %
K. Huang and Q. Hao: Joint Multi-Object Detection and Tracking with Camera-LiDAR Fusion for Autonomous Driving. 2021.
43 FFtracker 70.63 % 68.71 % 73.08 % 76.93 % 79.47 % 75.45 % 89.81 % 86.61 % 79.82 %
44 OC-SORT 70.22 % 71.97 % 69.44 % 77.20 % 81.00 % 74.31 % 84.17 % 84.40 % 87.01 %
45 AB3DMOT+PointRCNN code 69.99 % 71.13 % 69.33 % 75.66 % 84.40 % 72.31 % 89.02 % 86.85 % 83.61 %
X. Weng, J. Wang, D. Held and K. Kitani: 3D Multi-Object Tracking: A Baseline and New Evaluation Metrics. IROS 2020.
46 JRMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 69.61 % 73.05 % 66.89 % 76.95 % 85.07 % 69.18 % 88.95 % 86.72 % 85.10 %
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese: JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.
47 MOTSFusion
This method uses stereo information.
code 68.74 % 72.19 % 66.16 % 76.05 % 84.88 % 69.57 % 85.49 % 86.56 % 84.24 %
J. Luiten, T. Fischer and B. Leibe: Track to Reconstruct and Reconstruct to Track. IEEE Robotics and Automation Letters 2020.
48 IMMDP
This is an online method (no batch processing).
68.66 % 68.02 % 69.76 % 71.47 % 83.28 % 74.50 % 82.02 % 84.80 % 82.75 %
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi- Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
S. Ren, K. He, R. Girshick and J. Sun: Faster R-CNN: Towards Real- Time Object Detection with Region Proposal Networks. NIPS 2015.
49 SRK_ODESA(hc)
This is an online method (no batch processing).
68.51 % 75.40 % 63.08 % 78.89 % 86.00 % 65.89 % 87.47 % 86.88 % 87.79 %
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
50 BAT
This is an online method (no batch processing).
68.49 % 71.53 % 66.14 % 75.24 % 83.89 % 70.51 % 83.69 % 85.26 % 86.20 %
51 Quasi-Dense
This is an online method (no batch processing).
code 68.45 % 72.44 % 65.49 % 76.01 % 85.37 % 68.28 % 88.53 % 86.50 % 84.93 %
J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell and F. Yu: Quasi-Dense Similarity Learning for Multiple Object Tracking. CVPR 2021.
52 MASS
This is an online method (no batch processing).
68.25 % 72.92 % 64.46 % 76.83 % 85.14 % 72.12 % 81.46 % 86.80 % 84.64 %
H. Karunasekera, H. Wang and H. Zhang: Multiple Object Tracking with attention to Appearance, Structure, Motion and Size. IEEE Access 2019.
53 CenterTube-P 67.92 % 68.73 % 68.32 % 75.30 % 80.42 % 73.80 % 82.32 % 85.61 % 80.30 %
54 CenterTube-V 67.76 % 69.95 % 66.96 % 76.09 % 81.17 % 72.46 % 80.97 % 85.67 % 81.56 %
55 AT 66.62 % 66.77 % 67.24 % 72.19 % 79.91 % 71.66 % 83.00 % 84.29 % 79.77 %
56 JCSTD
This is an online method (no batch processing).
65.94 % 65.37 % 67.03 % 68.49 % 82.42 % 71.02 % 82.25 % 84.03 % 80.24 %
W. Tian, M. Lauer and L. Chen: Online Multi-Object Tracking Using Joint Domain Information in Traffic Scenarios. IEEE Transactions on Intelligent Transportation Systems 2019.
57 MDP
This is an online method (no batch processing).
code 64.79 % 63.04 % 67.05 % 66.18 % 82.22 % 69.61 % 85.61 % 84.24 % 76.08 %
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi- Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
Y. Xiang, W. Choi, Y. Lin and S. Savarese: Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection. IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
58 NOMT* 64.77 % 63.08 % 67.04 % 66.92 % 79.28 % 70.38 % 83.14 % 82.22 % 77.91 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
59 SRK_ODESA(mc)
This is an online method (no batch processing).
64.25 % 74.87 % 55.70 % 78.62 % 84.68 % 62.10 % 81.78 % 85.85 % 88.50 %
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
60 MOTBeyondPixels
This is an online method (no batch processing).
code 63.75 % 72.87 % 56.40 % 76.58 % 85.38 % 59.05 % 86.70 % 86.90 % 82.68 %
S. Sharma, J. Ansari, J. Krishna Murthy and K. Madhava Krishna: Beyond Pixels: Leveraging Geometry and Shape Cues for Online Multi-Object Tracking. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018.
61 mmMOT code 62.05 % 72.29 % 54.02 % 76.17 % 84.89 % 58.98 % 82.40 % 86.58 % 83.23 %
W. Zhang, H. Zhou, S. Sun, Z. Wang, J. Shi and C. Loy: Robust Multi-Modality Multi-Object Tracking. International Conference on Computer Vision (ICCV) 2019.
62 FANTrack
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 60.85 % 64.36 % 58.69 % 69.17 % 80.82 % 60.78 % 88.94 % 84.72 % 75.84 %
E. Baser, V. Balasubramanian, P. Bhattacharyya and K. Czarnecki: FANTrack: 3D Multi-Object Tracking with Feature Association Network. ArXiv 2019.
63 DSM 60.05 % 64.09 % 57.18 % 67.22 % 83.64 % 59.91 % 86.32 % 85.39 % 73.94 %
D. Frossard and R. Urtasun: End-To-End Learning of Multi-Sensor 3D Tracking by Detection. ICRA 2018.
64 aUToTrack
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
59.83 % 67.82 % 53.68 % 72.66 % 79.60 % 55.94 % 86.52 % 83.10 % 80.97 %
K. Burnett, S. Samavi, S. Waslander, T. Barfoot and A. Schoellig: aUToTrack: A Lightweight Object Detection and Tracking System for the SAE AutoDrive Challenge. arXiv:1905.08758 2019.
65 extraCK
This is an online method (no batch processing).
59.76 % 65.18 % 55.47 % 69.21 % 81.69 % 61.82 % 75.70 % 84.30 % 79.29 %
G. Gunduz and T. Acarman: A lightweight online multiple object vehicle tracking method. Intelligent Vehicles Symposium (IV), 2018 IEEE 2018.
66 3D-CNN/PMBM
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
59.12 % 65.43 % 54.28 % 69.87 % 80.68 % 57.28 % 83.89 % 83.94 % 79.23 %
S. Scheidegger, J. Benjaminsson, E. Rosenberg, A. Krishnan and K. Granström: Mono-Camera 3D Multi-Object Tracking Using Deep Learning Detections and PMBM Filtering. 2018 IEEE Intelligent Vehicles Symposium, IV 2018, Changshu, Suzhou, China, June 26-30, 2018 2018.
67 NOMT-HM*
This is an online method (no batch processing).
59.08 % 61.27 % 57.45 % 65.14 % 79.29 % 60.25 % 83.46 % 82.63 % 74.66 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
68 Point3DT
This method makes use of Velodyne laser scans.
57.20 % 55.71 % 59.15 % 64.66 % 68.67 % 63.20 % 78.30 % 80.07 % 67.56 %
S. Wang and M. Liu: PointTrackNet: An End-to-End Network for 3-D Object Detection and Tracking from Point Clouds. Submitted to ICRA 2020.
69 LP-SSVM* 56.62 % 61.02 % 52.80 % 65.32 % 76.83 % 55.61 % 80.07 % 80.92 % 76.82 %
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
70 MCMOT-CPD 56.61 % 64.28 % 50.55 % 67.37 % 82.77 % 53.96 % 81.97 % 84.26 % 77.98 %
B. Lee, E. Erdenee, S. Jin, M. Nam, Y. Jung and P. Rhee: Multi-class Multi-object Tracking Using Changing Point Detection. ECCVWORK 2016.
71 NOMT 56.49 % 52.29 % 61.59 % 54.73 % 78.75 % 64.63 % 83.40 % 81.41 % 66.36 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
72 SCEA*
This is an online method (no batch processing).
56.09 % 60.70 % 52.15 % 64.97 % 77.83 % 54.87 % 81.17 % 81.94 % 74.92 %
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
73 RMOT*
This is an online method (no batch processing).
55.82 % 54.95 % 57.34 % 62.56 % 69.08 % 62.58 % 74.77 % 78.82 % 65.07 %
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
74 CIWT*
This method uses stereo information.
This is an online method (no batch processing).
code 54.90 % 60.57 % 49.99 % 64.13 % 78.77 % 51.98 % 82.33 % 81.87 % 74.44 %
A. Osep, W. Mehner, M. Mathias and B. Leibe: Combined Image- and World-Space Tracking in Traffic Scenes. ICRA 2017.
75 FAMNet 52.56 % 61.00 % 45.51 % 64.40 % 78.67 % 48.66 % 77.41 % 81.47 % 75.92 %
P. Chu and H. Ling: FAMNet: Joint Learning of Feature, Affinity and Multi-dimensional Assignment for Online Multiple Object Tracking. ICCV 2019.
76 SASN-MCF_nano 52.24 % 59.65 % 46.22 % 66.28 % 77.27 % 56.20 % 68.77 % 84.56 % 69.82 %
G. Gunduz and T. Acarman: Efficient Multi-Object Tracking by Strong Associations on Temporal Window. IEEE Transactions on Intelligent Vehicles 2019.
77 NOMT-HM
This is an online method (no batch processing).
52.17 % 48.58 % 56.45 % 50.76 % 79.02 % 58.78 % 84.62 % 81.82 % 60.68 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
78 SSP* code 51.16 % 58.96 % 44.64 % 65.26 % 74.09 % 46.75 % 80.78 % 81.32 % 70.94 %
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
79 mbodSSP*
This is an online method (no batch processing).
code 50.92 % 58.57 % 44.51 % 63.69 % 75.67 % 46.47 % 81.23 % 81.44 % 70.78 %
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
80 Complexer-YOLO
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
49.12 % 62.44 % 39.34 % 67.58 % 76.86 % 40.72 % 85.23 % 81.47 % 72.61 %
M. Simon, K. Amende, A. Kraus, J. Honer, T. Samann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.
81 LP-SSVM 47.21 % 47.93 % 46.77 % 50.19 % 77.19 % 48.78 % 81.46 % 80.40 % 61.08 %
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
82 DCO-X* code 46.53 % 56.69 % 38.71 % 62.56 % 74.41 % 41.26 % 79.21 % 81.50 % 66.22 %
A. Milan, K. Schindler and S. Roth: Detection- and Trajectory-Level Exclusion in Multiple Object Tracking. CVPR 2013.
83 Decoupled DeepSORT 45.75 % 48.46 % 43.77 % 51.97 % 73.58 % 50.52 % 65.79 % 79.06 % 59.09 %
84 RMOT
This is an online method (no batch processing).
44.80 % 42.02 % 48.32 % 44.53 % 73.59 % 51.68 % 77.62 % 78.92 % 51.92 %
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
85 CEM code 43.41 % 41.72 % 45.77 % 43.72 % 76.72 % 47.45 % 83.68 % 80.44 % 51.34 %
A. Milan, S. Roth and K. Schindler: Continuous Energy Minimization for Multitarget Tracking. IEEE TPAMI 2014.
86 SCEA
This is an online method (no batch processing).
43.06 % 44.75 % 41.70 % 46.22 % 80.08 % 43.11 % 84.22 % 81.84 % 56.00 %
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
87 TBD code 43.01 % 43.06 % 43.30 % 44.50 % 79.49 % 44.94 % 84.22 % 81.47 % 53.94 %
A. Geiger, M. Lauer, C. Wojek, C. Stiller and R. Urtasun: 3D Traffic Scene Understanding from Movable Platforms. Pattern Analysis and Machine Intelligence (PAMI) 2014.
H. Zhang, A. Geiger and R. Urtasun: Understanding High-Level Semantics by Modeling Traffic Patterns. International Conference on Computer Vision (ICCV) 2013.
88 SORT 42.52 % 44.01 % 41.31 % 47.30 % 73.93 % 42.83 % 83.04 % 80.75 % 53.15 %
A. Bewley, Z. Ge, L. Ott, F. Ramos and B. Upcroft: Simple online and realtime tracking. 2016 IEEE International Conference on Image Processing (ICIP) 2016.
89 SSP code 40.07 % 44.83 % 36.13 % 46.55 % 78.34 % 39.99 % 75.30 % 80.91 % 56.33 %
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
90 mbodSSP
This is an online method (no batch processing).
code 39.49 % 43.94 % 35.82 % 45.72 % 77.85 % 36.95 % 84.35 % 80.76 % 54.10 %
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
91 ODAMOT
This is an online method (no batch processing).
37.05 % 46.53 % 30.07 % 49.91 % 73.20 % 32.46 % 78.19 % 79.26 % 57.03 %
A. Gaidon and E. Vig: Online Domain Adaptation for Multi-Object Tracking. British Machine Vision Conference (BMVC) 2015.
92 FMMOVT 34.35 % 33.80 % 35.39 % 39.20 % 62.79 % 39.66 % 75.42 % 80.40 % 31.23 %
F. Alencar, C. Massera, D. Ridel and D. Wolf: Fast Metric Multi-Object Vehicle Tracking for Dynamical Environment Comprehension. Latin American Robotics Symposium (LARS), 2015 2015.
93 MCF 33.98 % 35.97 % 32.32 % 36.87 % 79.67 % 33.65 % 82.48 % 81.31 % 44.40 %
L. Zhang, Y. Li and R. Nevatia: Global Data Association for Multi-Object Tracking Using Network Flows. CVPR 2008.
94 HM
This is an online method (no batch processing).
33.79 % 34.30 % 33.45 % 35.16 % 79.56 % 34.55 % 83.08 % 81.33 % 42.36 %
A. Geiger: Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms. 2013.
95 DCO code 33.45 % 36.33 % 31.30 % 40.93 % 64.11 % 34.23 % 73.46 % 77.25 % 36.72 %
A. Andriyenko, K. Schindler and S. Roth: Discrete-Continuous Optimization for Multi-Target Tracking. CVPR 2012.
96 DP-MCF code 25.97 % 35.69 % 19.12 % 36.76 % 78.84 % 28.98 % 39.84 % 81.19 % 36.89 %
H. Pirsiavash, D. Ramanan and C. Fowlkes: Globally-Optimal Greedy Algorithms for Tracking a Variable Number of Objects. IEEE conference on Computer Vision and Pattern Recognition (CVPR) 2011.
Table as LaTeX | Only published Methods


PEDESTRIAN



Result plots for this class (four figures) are available as: png pdf

Method Setting Code HOTA DetA AssA DetRe DetPr AssRe AssPr LocA MOTA
1 FastTrack 55.10 % 52.72 % 57.88 % 58.39 % 69.99 % 63.01 % 71.77 % 78.22 % 67.92 %
2 OC-SORT
This is an online method (no batch processing).
code 54.69 % 50.82 % 59.08 % 55.68 % 70.94 % 64.09 % 73.36 % 78.52 % 65.14 %
J. Cao, X. Weng, R. Khirodkar, J. Pang and K. Kitani: Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. 2022.
3 OC-SORT 53.22 % 48.72 % 58.39 % 53.56 % 70.83 % 63.67 % 72.82 % 78.91 % 61.12 %
4 Anonymous 52.72 % 53.55 % 52.21 % 58.87 % 70.15 % 59.50 % 65.00 % 77.69 % 68.37 %
5 RAM
This is an online method (no batch processing).
52.71 % 53.55 % 52.19 % 58.86 % 70.17 % 59.49 % 64.99 % 77.70 % 68.40 %
P. Tokmakov, A. Jabri, J. Li and A. Gaidon: Object Permanence Emerges in a Random Walk along Memory. ICML 2022.
6 OC-SORT
This is an online method (no batch processing).
52.12 % 47.93 % 56.87 % 52.18 % 72.12 % 61.72 % 73.14 % 79.34 % 60.16 %
7 SRK_ODESA(hp)
This is an online method (no batch processing).
50.87 % 53.43 % 48.78 % 57.79 % 72.90 % 53.45 % 71.33 % 78.81 % 68.04 %
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
8 CgOSNet 50.08 % 46.26 % 54.81 % 49.45 % 68.60 % 60.34 % 70.54 % 75.36 % 60.78 %
9 PermaTrack
This is an online method (no batch processing).
48.63 % 52.28 % 45.61 % 57.40 % 71.03 % 49.63 % 73.28 % 78.57 % 65.98 %
P. Tokmakov, J. Li, W. Burgard and A. Gaidon: Learning to Track with Object Permanence. ICCV 2021.
10 Opm-NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
46.55 % 46.82 % 46.68 % 53.01 % 59.38 % 50.84 % 65.82 % 72.07 % 56.05 %
H. Chao Jiang: A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE sensors journal 2022.
11 3D-TLSR
This method uses stereo information.
This is an online method (no batch processing).
46.34 % 42.03 % 51.32 % 44.51 % 71.14 % 54.45 % 73.11 % 76.87 % 53.58 %
U. Nguyen and C. Heipke: 3D Pedestrian tracking using local structure constraints. ISPRS Journal of Photogrammetry and Remote Sensing 2020.
12 TuSimple
This is an online method (no batch processing).
45.88 % 44.66 % 47.62 % 47.92 % 69.51 % 52.04 % 69.88 % 76.43 % 57.61 %
W. Choi: Near-online multi-target tracking with aggregated local flow descriptor. Proceedings of the IEEE International Conference on Computer Vision 2015.
K. He, X. Zhang, S. Ren and J. Sun: Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition 2016.
13 CAT
This method uses stereo information.
This is an online method (no batch processing).
45.65 % 42.43 % 49.55 % 45.89 % 67.79 % 53.20 % 71.97 % 75.90 % 51.96 %
U. Nguyen, F. Rottensteiner and C. Heipke: CONFIDENCE-AWARE PEDESTRIAN TRACKING USING A STEREO CAMERA. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 2019.
14 MPNTrack code 45.26 % 43.74 % 47.28 % 53.62 % 58.30 % 52.18 % 68.47 % 75.93 % 46.23 %
G. Brasó and L. Leal-Taixé: Learning a Neural Solver for Multiple Object Tracking. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
15 CAMO-MOT
This method makes use of Velodyne laser scans.
44.77 % 41.53 % 48.70 % 45.16 % 60.25 % 55.35 % 59.63 % 71.22 % 52.48 %
16 NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
44.30 % 42.31 % 46.75 % 52.97 % 52.43 % 50.91 % 65.83 % 72.08 % 44.18 %
Chao Jiang and W. Zhiling: A New Adaptive Noise Covariance Matrices Estimation and Filtering Method: Application to Multi-Object Tracking. arXiv 2021.
17 jerrymot 44.21 % 39.39 % 50.12 % 44.81 % 56.35 % 54.63 % 64.47 % 71.15 % 46.34 %
18 MSF-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
43.97 % 41.34 % 47.26 % 46.41 % 57.75 % 50.63 % 66.21 % 71.23 % 49.65 %
19 SRK_ODESA(mp)
This is an online method (no batch processing).
43.73 % 53.73 % 36.05 % 58.01 % 73.19 % 40.05 % 69.44 % 78.91 % 67.31 %
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
20 InvariantGraphMOT 43.59 % 39.88 % 48.12 % 44.90 % 57.40 % 51.95 % 65.22 % 71.34 % 46.98 %
21 Be-Track
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
43.36 % 39.99 % 47.23 % 43.00 % 69.03 % 51.28 % 69.60 % 76.78 % 50.85 %
M. Dimitrievski, P. Veelaert and W. Philips: Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle. Sensors 2019.
22 BAT
This is an online method (no batch processing).
42.91 % 43.91 % 42.22 % 47.19 % 71.11 % 45.25 % 73.40 % 77.92 % 55.71 %
23 Mono_3D_KF
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
42.87 % 40.13 % 46.31 % 46.02 % 59.91 % 52.86 % 63.50 % 74.03 % 45.44 %
A. Reich and H. Wuensche: Monocular 3D Multi-Object Tracking with an EKF Approach for Long-Term Stable Tracks. 2021 IEEE 24th International Conference on Information Fusion (FUSION) 2021.
24 TripletTrack 42.77 % 39.54 % 46.54 % 41.97 % 71.91 % 50.86 % 71.26 % 77.93 % 50.08 %
25 MDP
This is an online method (no batch processing).
code 42.76 % 39.23 % 47.13 % 43.83 % 63.02 % 50.91 % 71.04 % 75.15 % 47.02 %
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi-Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
Y. Xiang, W. Choi, Y. Lin and S. Savarese: Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection. IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
26 Quasi-Dense
This is an online method (no batch processing).
code 41.12 % 44.81 % 38.10 % 48.55 % 70.39 % 41.02 % 72.47 % 77.87 % 55.55 %
J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell and F. Yu: Quasi-Dense Similarity Learning for Multiple Object Tracking. CVPR 2021.
27 QD-3DT
This is an online method (no batch processing).
code 41.08 % 44.01 % 38.82 % 48.96 % 67.19 % 42.09 % 72.44 % 77.38 % 51.77 %
H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. arXiv:2103.07351 2021.
28 NOMT* 40.91 % 37.52 % 44.79 % 40.94 % 65.86 % 50.53 % 65.94 % 75.94 % 47.08 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
29 AT 40.85 % 39.86 % 42.19 % 44.44 % 66.96 % 47.91 % 69.25 % 78.37 % 47.18 %
30 CenterTrack
This is an online method (no batch processing).
code 40.35 % 44.48 % 36.93 % 49.91 % 66.83 % 41.05 % 70.19 % 77.81 % 53.84 %
X. Zhou, V. Koltun and P. Krähenbühl: Tracking Objects as Points. ECCV 2020.
31 HMM 39.97 % 44.34 % 36.41 % 51.33 % 62.62 % 48.06 % 49.13 % 76.11 % 52.61 %
32 DetFlowTrack
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
39.64 % 40.90 % 38.72 % 47.54 % 56.35 % 41.48 % 66.03 % 72.04 % 46.75 %
33 RMOT*
This is an online method (no batch processing).
39.56 % 36.07 % 43.63 % 39.74 % 63.97 % 49.54 % 62.82 % 75.35 % 43.32 %
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
34 JCSTD
This is an online method (no batch processing).
39.44 % 34.20 % 45.79 % 36.15 % 69.39 % 49.38 % 69.00 % 76.23 % 43.42 %
W. Tian, M. Lauer and L. Chen: Online Multi-Object Tracking Using Joint Domain Information in Traffic Scenarios. IEEE Transactions on Intelligent Transportation Systems 2019.
35 TrackMPNN
This is an online method (no batch processing).
code 39.40 % 44.24 % 35.45 % 50.78 % 64.58 % 38.98 % 69.80 % 77.56 % 52.10 %
A. Rangesh, P. Maheshwari, M. Gebre, S. Mhatre, V. Ramezani and M. Trivedi: TrackMPNN: A Message Passing Graph Neural Architecture for Multi-Object Tracking. arXiv preprint arXiv:2101.04206.
36 EagerMOT code 39.38 % 40.60 % 38.72 % 43.43 % 61.49 % 40.98 % 68.33 % 71.25 % 49.82 %
A. Kim, A. Osep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
37 AB3DMOT+PointRCNN code 37.81 % 32.37 % 44.33 % 34.91 % 59.35 % 48.44 % 62.83 % 71.31 % 38.13 %
X. Weng, J. Wang, D. Held and K. Kitani: 3D Multi-Object Tracking: A Baseline and New Evaluation Metrics. IROS 2020.
38 NOMT 36.26 % 31.87 % 41.63 % 34.61 % 61.26 % 46.88 % 62.25 % 72.83 % 36.52 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
39 NOMT-HM*
This is an online method (no batch processing).
34.76 % 33.96 % 35.81 % 37.64 % 62.76 % 39.23 % 66.66 % 75.36 % 38.60 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
40 SCEA*
This is an online method (no batch processing).
34.66 % 34.31 % 35.21 % 36.70 % 67.74 % 38.33 % 69.46 % 76.23 % 43.26 %
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
41 JRMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 34.24 % 38.79 % 30.55 % 42.51 % 66.64 % 32.69 % 70.12 % 76.64 % 45.31 %
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese: JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.
42 RMOT
This is an online method (no batch processing).
34.09 % 29.61 % 39.45 % 32.12 % 60.92 % 43.14 % 63.59 % 72.99 % 34.05 %
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
43 CIWT*
This method uses stereo information.
This is an online method (no batch processing).
code 33.93 % 34.00 % 34.07 % 36.35 % 67.44 % 36.34 % 70.76 % 75.96 % 42.10 %
A. Osep, W. Mehner, M. Mathias and B. Leibe: Combined Image- and World-Space Tracking in Traffic Scenes. ICRA 2017.
44 LP-SSVM* 33.74 % 35.74 % 32.03 % 39.54 % 63.11 % 36.36 % 63.24 % 75.18 % 43.42 %
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
45 MCMOT-CPD 32.06 % 36.30 % 28.83 % 39.06 % 68.00 % 32.14 % 69.60 % 76.67 % 44.19 %
B. Lee, E. Erdenee, S. Jin, M. Nam, Y. Jung and P. Rhee: Multi-class Multi-object Tracking Using Changing Point Detection. ECCVWORK 2016.
46 Decoupled DeepSORT 32.02 % 34.10 % 30.43 % 39.37 % 56.88 % 38.34 % 46.56 % 73.36 % 34.12 %
47 NOMT-HM
This is an online method (no batch processing).
31.13 % 25.64 % 38.23 % 27.75 % 59.78 % 42.38 % 65.06 % 72.90 % 26.86 %
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
48 LP-SSVM 28.19 % 29.29 % 27.57 % 31.61 % 60.77 % 31.12 % 61.78 % 72.49 % 32.42 %
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
49 SCEA
This is an online method (no batch processing).
27.80 % 27.41 % 28.61 % 29.38 % 62.30 % 30.44 % 68.62 % 73.55 % 31.75 %
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
50 CEM code 25.83 % 25.54 % 26.41 % 27.54 % 60.66 % 27.91 % 68.34 % 73.43 % 26.59 %
A. Milan, S. Roth and K. Schindler: Continuous Energy Minimization for Multitarget Tracking. IEEE TPAMI 2014.
51 Complexer-YOLO
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
14.08 % 24.91 % 8.15 % 27.21 % 52.62 % 8.63 % 59.39 % 68.64 % 11.99 %
M. Simon, K. Amende, A. Kraus, J. Honer, T. Sämann, H. Kaulbersch, S. Milz and H.-M. Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.
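Methods in the table above are ranked by HOTA. As described in the HOTA paper cited below, at a single localization threshold the score is the geometric mean of detection accuracy (DetA, a Jaccard index over matched detections) and association accuracy (AssA), with the final HOTA additionally averaged over localization thresholds. The sketch below illustrates only that single-threshold relationship; the counts are illustrative and not taken from the table.

```python
import math

def det_a(tp: int, fn: int, fp: int) -> float:
    """Detection accuracy at one threshold: TP / (TP + FN + FP)."""
    return tp / (tp + fn + fp)

def hota_alpha(det_acc: float, ass_acc: float) -> float:
    """Single-threshold HOTA: geometric mean of DetA and AssA."""
    return math.sqrt(det_acc * ass_acc)

# Illustrative counts (not from the leaderboard):
d = det_a(tp=800, fn=150, fp=50)   # 800 / 1000 = 0.8
h = hota_alpha(d, 0.5)             # sqrt(0.8 * 0.5) ≈ 0.632
print(f"DetA={d:.3f}  HOTA_alpha={h:.3f}")
```

Note that the reported HOTA values are not exactly the geometric mean of the reported DetA and AssA columns, because the published numbers are averaged over localization thresholds before being combined.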


Related Datasets

  • TUD Datasets: "TUD Multiview Pedestrians" and "TUD Stadtmitte" Datasets.
  • PETS 2009: The Datasets for the "Performance Evaluation of Tracking and Surveillance" Workshop.
  • EPFL Terrace: Multi-camera pedestrian videos.
  • ETHZ Sequences: Inner City Sequences from Mobile Platforms.

Citation

When using this dataset in your research, please cite:
@INPROCEEDINGS{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}

@ARTICLE{Luiten2020IJCV,
  author = {Jonathon Luiten and Aljosa Osep and Patrick Dendorfer and Philip Torr and Andreas Geiger and Laura Leal-Taixe and Bastian Leibe},
  title = {HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking},
  journal = {International Journal of Computer Vision (IJCV)},
  year = {2020}
}


