Object Tracking Evaluation 2012


The object tracking benchmark consists of 21 training sequences and 29 test sequences. Although we have labeled 8 different classes, only the classes 'Car' and 'Pedestrian' are evaluated in our benchmark, as only these classes have enough labeled instances for a comprehensive evaluation. The labeling process was performed in two steps: first, we hired a set of annotators to label 3D bounding boxes as tracklets in point clouds. Since a single 3D bounding box tracklet with fixed dimensions often fits a pedestrian badly, we additionally labeled the left/right boundaries of each object using Mechanical Turk. We also collected labels of each object's occlusion state and computed its truncation by backprojecting a car/pedestrian model into the image plane. We evaluate submitted results using the common CLEAR MOT and MT/PT/ML metrics. Since there is no single ranking criterion, we do not rank methods. Our development kit provides details about the data format as well as utility functions for reading and writing the label files.
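The devkit documents the exact field layout of the label files. As an illustration only (the devkit readme is authoritative), a minimal sketch of parsing one line of a tracklet label file, with the field order assumed to be frame, track id, type, truncation, occlusion, alpha, 2D bounding box, 3D dimensions, 3D location and rotation, might look like:

```python
# Hedged sketch of parsing one KITTI tracking label line; field order is
# assumed from the devkit readme, which remains the authoritative reference.
from dataclasses import dataclass

@dataclass
class TrackletLabel:
    frame: int
    track_id: int
    obj_type: str       # e.g. 'Car', 'Pedestrian', 'Van', 'DontCare'
    truncated: float
    occluded: int       # 0 = visible, 1 = partly occluded, 2 = largely occluded
    alpha: float        # observation angle of the object
    bbox: tuple         # (left, top, right, bottom) in pixels, 0-based
    dimensions: tuple   # (height, width, length) in meters
    location: tuple     # (x, y, z) in camera coordinates
    rotation_y: float   # rotation around the camera Y-axis

def parse_label_line(line: str) -> TrackletLabel:
    """Split one whitespace-separated label line into a typed record."""
    f = line.split()
    return TrackletLabel(
        frame=int(f[0]), track_id=int(f[1]), obj_type=f[2],
        truncated=float(f[3]), occluded=int(f[4]), alpha=float(f[5]),
        bbox=tuple(map(float, f[6:10])),
        dimensions=tuple(map(float, f[10:13])),
        location=tuple(map(float, f[13:16])),
        rotation_y=float(f[16]),
    )
```

The devkit's own reader additionally handles the optional trailing score column of result files; this sketch covers ground-truth lines only.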

The goal in the object tracking task is to estimate object tracklets for the classes 'Car' and 'Pedestrian'. We evaluate 2D 0-based bounding boxes in each image, and we encourage submissions to include a confidence measure for each frame of a track. For evaluation we only consider detections/objects taller than 25 pixels in the image, and we do not count Vans as false positives for Cars or Sitting Persons as false positives for Pedestrians, due to their similarity in appearance. As evaluation criteria we follow the CLEAR MOT [1] and Mostly-Tracked/Partly-Tracked/Mostly-Lost [2] metrics. We do not rank methods by a single criterion; instead, bold numbers indicate the best method for a particular metric. To make the methods comparable, the time for object detection is not included in the specified runtime.

[1] K. Bernardin, R. Stiefelhagen: Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics. JIVP 2008.
[2] Y. Li, C. Huang, R. Nevatia: Learning to associate: HybridBoosted multi-target tracker for crowded scene. CVPR 2009.
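For intuition, the aggregate formulas behind these metrics can be sketched as follows. This is a simplified illustration of the definitions in [1] and [2], not the official evaluation script: the per-frame matching of detections to ground truth (the hard part) is assumed to have been done already, and the devkit remains authoritative.

```python
# Hedged sketch of the CLEAR MOT aggregates [1] and the MT/PT/ML
# trajectory classification [2]; per-frame matching is assumed given.

def clear_mot(num_gt, fn, fp, ids, matched_overlaps):
    """MOTA penalizes misses (FN), false positives (FP) and identity
    switches (IDS) relative to the number of ground-truth objects;
    MOTP averages the bounding-box overlap of the matched pairs."""
    mota = 1.0 - (fn + fp + ids) / num_gt
    motp = sum(matched_overlaps) / len(matched_overlaps)
    return mota, motp

def mt_pt_ml(coverage_ratios):
    """Classify each ground-truth trajectory by the fraction of its life
    span in which it is tracked: MT >= 80%, ML < 20%, PT otherwise."""
    n = len(coverage_ratios)
    mt = sum(r >= 0.8 for r in coverage_ratios)
    ml = sum(r < 0.2 for r in coverage_ratios)
    pt = n - mt - ml
    return mt / n, pt / n, ml / n
```

Note that MOTA can become negative when errors outnumber ground-truth objects, which is why no single number fully characterizes a tracker.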

Note 1: On 01.06.2015 we fixed several bugs in the evaluation script and in the calculation of the CLEAR MOT metrics. We also fixed some problems in the annotations of the training and test set (almost completely occluded objects are no longer counted as false negatives). Furthermore, from now on Vans are not counted as false positives for Cars, nor Sitting Persons as false positives for Pedestrians. We have also improved the devkit with new illustrations and re-calculated the results for all methods. Please download the devkit and the annotations/labels with the improved ground truth for training again if you downloaded the files prior to 20.05.2015, and please report these new numbers for all future submissions. The last leaderboards right before the changes can be found here!

Note 2: On 27.11.2015 we have fixed a bug in the evaluation script which prevented van labels from being loaded and led to don't care areas being evaluated. Please download the devkit with the corrected evaluation script (if you want to evaluate on the training set) and consider reporting the new numbers for all future submissions. The leaderboard has been updated. The last leaderboards right before the changes can be found here!

Note 3: On 25.05.2016 we have fixed a bug in the evaluation script wrt. overcounting of ignored detections. Thanks to Adrien Gaidon for reporting this bug. Please download the devkit with the corrected evaluation script (if you want to evaluate on the training set) and consider reporting the new numbers for all future submissions. The leaderboard has been updated. The last leaderboards right before the changes can be found here!

Note 4: On 25.04.2017 a major update of the evaluation script introduced the following changes: the counting of ignored detections was corrected; occlusion, truncation and minimum-height handling was corrected; and the evaluation summary now includes additional statistics. In detail, submitted detections are ignored (i.e. not considered) if they are classified as a "neighboring class" (i.e. 'Van' for 'Car' or 'Cyclist' for 'Pedestrian'), if they do not exceed the minimum height of 25 px, or if they overlap a 'Don't Care' area by 0.5 or more. Conversely, ground truth objects are ignored if their occlusion exceeds occlusion level 2, if their truncation exceeds the maximum truncation of 0, or if they belong to a neighboring class. We made sure that true positives, false positives, true negatives and false negatives are counted correctly. Finally, the evaluation summary now reports the number of ignored detections. We thank the following researchers for detailed feedback: Adrien Gaidon, Jonathan D. Kuck and Jose M. Buenaposada. The last leaderboards right before the changes can be found here!
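The detection-ignore rules above can be sketched in a few lines. This is an illustrative simplification, not the devkit's code: the thresholds and the neighboring-class mapping are taken from the note, while the overlap measure is assumed here to be plain IoU (the official script defines the exact criterion).

```python
# Hedged sketch of the detection-ignore rules from Note 4; thresholds and
# class mapping follow the text, the overlap measure (IoU) is an assumption.
MIN_HEIGHT = 25.0
NEIGHBOR = {'Car': 'Van', 'Pedestrian': 'Cyclist'}

def iou(a, b):
    """Intersection over union of two (left, top, right, bottom) boxes."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ignore_detection(det_class, det_bbox, eval_class, dontcare_boxes):
    """True if a submitted detection is ignored, i.e. counted neither
    as a true positive nor as a false positive."""
    if det_class == NEIGHBOR.get(eval_class):   # neighboring class
        return True
    if det_bbox[3] - det_bbox[1] < MIN_HEIGHT:  # below minimum image height
        return True
    if any(iou(det_bbox, dc) >= 0.5 for dc in dontcare_boxes):
        return True                             # inside a Don't Care area
    return False
```

A symmetric filter on the ground-truth side (occlusion level, truncation, neighboring class) determines which ground-truth objects are excluded from the false-negative count.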

Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms and student research projects are not allowed; such work must be evaluated on a split of the training set instead. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have a paper associated with them. For conferences, 6 months are enough to determine whether a paper has been accepted and to add the bibliography information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Laser Points: Method uses point clouds from Velodyne laser scanner
  • GPS: Method uses GPS information
  • Online: Online method (frame-by-frame processing, no latency)
  • Additional training data: Use of additional data sources for training (see details)

CAR


Method Setting Code MOTA MOTP MT ML IDS FRAG Runtime Environment
1 FusionTrack 92.62 % 86.68 % 91.69 % 1.85 % 26 87 0.1 s 1 core @ 2.5 Ghz (python)
2 CrossTracker 92.05 % 87.26 % 85.08 % 2.62 % 56 195 0.05 s 1 core @ 2.5 Ghz (Python + C/C++)
3 CollabMOT
This method uses stereo information.
92.02 % 85.78 % 86.77 % 2.31 % 134 330 0.05 s 1 core @ 2.5 Ghz (Python)
4 CasTrack
This method makes use of Velodyne laser scans.
code 91.93 % 86.19 % 86.77 % 4.00 % 21 107 0.1 s 1 core @ 2.5 Ghz (C/C++)
H. Wu, J. Deng, C. Wen, X. Li and C. Wang: CasA: A Cascade Attention Network for 3D Object Detection from LiDAR point clouds. IEEE TGRS 2022.
H. Wu, W. Han, C. Wen, X. Li and C. Wang: 3D Multi-Object Tracking in Point Clouds Based on Prediction Confidence-Guided Data Association. IEEE TITS 2021.
5 PermaTrack
This is an online method (no batch processing).
91.92 % 85.83 % 86.77 % 2.31 % 138 345 0.1 s GPU @ 2.5 Ghz (Python)
P. Tokmakov, J. Li, W. Burgard and A. Gaidon: Learning to Track with Object Permanence. ICCV 2021.
6 CollabMOT
This method uses stereo information.
91.88 % 85.86 % 86.92 % 2.46 % 248 372 0.02 s 4 cores @ 2.5 Ghz (Python)
P. Ninh and H. Kim: CollabMOT Stereo Camera Collaborative Multi Object Tracking. IEEE Access 2024.
7 CollabMOT
This method uses stereo information.
91.79 % 85.87 % 86.92 % 2.46 % 248 375 0.01 s 1 core @ 2.5 Ghz (Python)
8 PC-TCNN
This method makes use of Velodyne laser scans.
91.75 % 86.17 % 87.54 % 2.92 % 26 118 0.3 s GPU (python/c++)
H. Wu, Q. Li, C. Wen, X. Li, X. Fan and C. Wang: Tracklet Proposal Network for Multi-Object Tracking on Point Clouds. IJCAI 2021.
9 RAM
This is an online method (no batch processing).
91.73 % 85.90 % 87.08 % 2.31 % 255 380 0.09 s GPU @ 2.5 Ghz (Python)
P. Tokmakov, A. Jabri, J. Li and A. Gaidon: Object Permanence Emerges in a Random Walk along Memory. ICML 2022.
10 BiTrack
This method makes use of Velodyne laser scans.
91.72 % 87.49 % 86.31 % 5.23 % 23 243 0.1 s 1 core @ 2.5 Ghz (Python)
11 Rethink MOT 91.47 % 85.63 % 89.38 % 4.31 % 72 180 0.3 s 4 cores @ 2.5 Ghz (Python)
L. Wang, J. Zhang, P. Cai and X. Li: Towards Robust Reference System for Autonomous Driving: Rethinking 3D MOT. Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA) 2023.
12 PMTrack 91.16 % 86.87 % 87.38 % 6.62 % 35 89 0.02 s 1 core @ 2.5 Ghz (Python)
13 Anonymous 91.04 % 86.56 % 83.54 % 10.15 % 25 71 1 s 1 core @ 2.5 Ghz (C/C++)
14 DFR 90.98 % 86.50 % 87.23 % 5.69 % 26 76 0.01 s 1 core @ 2.5 Ghz (C/C++)
15 SCMOT 90.90 % 86.31 % 84.46 % 5.85 % 130 197 0.01 s 2 cores @ 2.5 Ghz (Python)
16 LEGO
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
90.80 % 86.75 % 87.69 % 1.54 % 173 246 0.01 s 1 core @ 2.5 Ghz (Python)
Z. Zhang, J. Liu, Y. Xia, T. Huang, Q. Han and H. Liu: LEGO: Learning and Graph-Optimized Modular Tracker for Online Multi-Object Tracking with Point Clouds. arXiv preprint arXiv:2308.09908 2023.
17 OC-SORT
This is an online method (no batch processing).
code 90.64 % 85.71 % 81.23 % 2.92 % 225 471 0.03 s 1 core @ 3.0 Ghz (Python)
J. Cao, X. Weng, R. Khirodkar, J. Pang and K. Kitani: Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. 2022.
18 STMOT_PointRCNN
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
90.44 % 86.31 % 84.15 % 5.85 % 265 322 0.01 s 1 core @ 2.5 Ghz (Python)
19 PNAS-MOT code 90.42 % 85.62 % 86.77 % 2.31 % 552 762 0.01 s GPU @ 2.5 Ghz (Python)
C. Peng, Z. Zeng, J. Gao, J. Zhou, M. Tomizuka, X. Wang, C. Zhou and N. Ye: PNAS-MOT: Multi-Modal Object Tracking With Pareto Neural Architecture Search. IEEE Robotics and Automation Letters 2024.
20 Anonymous
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
90.37 % 87.01 % 81.69 % 8.31 % 24 372 0.01 s 1 core @ 2.5 Ghz (C/C++)
21 VirConvTrack code 90.28 % 86.93 % 83.23 % 11.69 % 12 66 0.1 s 1 core @ 2.5 Ghz (C/C++)
H. Wu, C. Wen, S. Shi and C. Wang: Virtual Sparse Convolution for Multimodal 3D Object Detection. CVPR 2023.
22 SRK_ODESA(mc)
This is an online method (no batch processing).
90.03 % 84.32 % 82.62 % 2.31 % 90 501 0.4 s GPU (Python)
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
23 STMOT_v1
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
89.90 % 87.02 % 81.08 % 9.23 % 244 271 0.01 s 1 core @ 2.5 Ghz (Python)
24 FusionTrack+pointgnn 89.67 % 85.57 % 76.77 % 3.85 % 26 316 0.1 s 1 core @ 2.5 Ghz (C/C++)
25 CollabMOT
This method uses stereo information.
89.60 % 85.04 % 82.31 % 2.31 % 123 331 0.05 s 1 core @ 2.5 Ghz (C/C++)
P. Ninh and H. Kim: CollabMOT Stereo Camera Collaborative Multi Object Tracking. IEEE Access 2024.
26 CollabMOT
This method uses stereo information.
89.46 % 85.05 % 82.31 % 2.31 % 118 330 0.05 s 1 core @ 2.5 Ghz (Python)
27 CenterTrack
This is an online method (no batch processing).
code 89.44 % 85.05 % 82.31 % 2.31 % 116 334 0.045s GPU
X. Zhou, V. Koltun and P. Krähenbühl: Tracking Objects as Points. ECCV 2020.
28 APPTracker
This is an online method (no batch processing).
89.44 % 85.15 % 78.62 % 3.85 % 125 415 0.04 s GPU @ 1.5 Ghz (Python)
29 S3Track 88.97 % 87.25 % 86.92 % 1.69 % 154 369 0.03 s 1 core @ 2.5 Ghz (Python)
Anonymous: S³Track: Self-supervised Tracking with Soft Assignment Flow.
30 DEFT
This is an online method (no batch processing).
code 88.95 % 84.55 % 84.77 % 1.85 % 343 553 0.04 s GPU @ 2.5 Ghz (Python)
M. Chaabane, P. Zhang, R. Beveridge and S. O'Hara: DEFT: Detection Embeddings for Tracking. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2021.
31 PC3T
This method makes use of Velodyne laser scans.
code 88.88 % 84.37 % 80.00 % 8.31 % 208 369 0.0045 s 1 core @ >3.5 Ghz (Python + C/C++)
H. Wu, W. Han, C. Wen, X. Li and C. Wang: 3D Multi-Object Tracking in Point Clouds Based on Prediction Confidence-Guided Data Association. IEEE TITS 2021.
32 Mono_3D_KF
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
88.77 % 83.95 % 80.46 % 3.69 % 96 218 0.3 s 1 core @ 2.5 Ghz (Python)
A. Reich and H. Wuensche: Monocular 3D Multi-Object Tracking with an EKF Approach for Long-Term Stable Tracks. 2021 IEEE 24th International Conference on Information Fusion (FUSION) 2021.
33 SRK_ODESA(hc)
This is an online method (no batch processing).
88.65 % 85.70 % 78.92 % 2.15 % 133 582 0.4 s GPU @ 2.5 Ghz (Python)
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
34 EagerMOT code 88.21 % 85.73 % 76.62 % 2.46 % 121 474 0.011 s 4 cores @ 3.0 Ghz (Python)
A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
35 MSA-MOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
88.19 % 85.47 % 87.23 % 1.23 % 56 405 0.01 s 1 core @ 2.5 Ghz (Python)
Z. Zhu, J. Nie, H. Wu, Z. He and M. Gao: MSA-MOT: Multi-Stage Association for 3D Multimodality Multi-Object Tracking. Sensors 2022.
36 UG3DMOT code 88.10 % 86.58 % 79.23 % 5.38 % 5 330 0.1 s 1 core @ 2.5 Ghz (C/C++)
J. He, C. Fu and X. Wang: 3D Multi-Object Tracking Based on Uncertainty-Guided Data Association. arXiv preprint arXiv:2303.01786 2023.
37 LGM 88.06 % 84.16 % 85.54 % 2.15 % 469 590 0.08 s GPU @ 2.5 Ghz (Python)
G. Wang, R. Gu, Z. Liu, W. Hu, M. Song and J. Hwang: Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) 2021.
38 TrackMPNN
This is an online method (no batch processing).
code 87.74 % 84.55 % 84.77 % 1.85 % 404 607 0.05 s 4 cores @ 3.0 Ghz (Python)
A. Rangesh, P. Maheshwari, M. Gebre, S. Mhatre, V. Ramezani and M. Trivedi: TrackMPNN: A Message Passing Graph Neural Architecture for Multi-Object Tracking. arXiv preprint arXiv:2101.04206.
39 SSL3DMOT 87.36 % 87.64 % 74.31 % 6.31 % 33 343 3 s GPU @ 2.5 Ghz (Python)
40 CMSSL3DMOT 87.31 % 87.68 % 73.23 % 6.62 % 21 331 268 s 1 core @ 2.5 Ghz (C/C++)
41 Rt_Track 87.14 % 82.72 % 69.08 % 5.23 % 183 486 0.1 s 1 core @ 2.5 Ghz (Python)
42 Stereo3DMOT 87.13 % 85.17 % 75.85 % 9.38 % 19 533 0.06 s 1 core @ 2.5 Ghz (C/C++)
C. Mao, C. Tan, H. Liu, J. Hu and M. Zheng: Stereo3DMOT: Stereo Vision Based 3D Multi-object Tracking with Multimodal ReID. Chinese Conference on Pattern Recognition and Computer Vision (PRCV) 2023.
43 Stereo3DMOT
This method uses stereo information.
This is an online method (no batch processing).
87.13 % 85.17 % 75.85 % 9.38 % 19 533 0.06 s 1 core @ 2.5 Ghz (C/C++)
44 TuSimple
This is an online method (no batch processing).
86.62 % 83.97 % 72.46 % 6.77 % 293 501 0.6 s 1 core @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-online multi-target tracking with aggregated local flow descriptor. Proceedings of the IEEE International Conference on Computer Vision 2015.
K. He, X. Zhang, S. Ren and J. Sun: Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition 2016.
45 MHF-SOE
This is an online method (no batch processing).
code 86.61 % 81.65 % 73.38 % 4.00 % 142 444 1247 s 1 core @ 2.5 Ghz (Python)
46 YONTD-MOTv2
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 86.57 % 86.11 % 84.92 % 2.00 % 54 334 0.1 s GPU @ >3.5 Ghz (Python)
X. Wang, J. He, C. Fu, T. Meng and M. Huang: You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking. arXiv preprint arXiv:2304.08709 2023.
47 BcMODT 86.53 % 85.37 % 78.31 % 2.62 % 45 626 0.01 s GPU @ 2.5 Ghz (Python)
K. Zhang, Y. Liu, F. Mei, J. Jin and Y. Wang: Boost Correlation Features with 3D-MiIoU- Based Camera-LiDAR Fusion for MODT in Autonomous Driving. Remote Sensing 2023.
48 QD-3DT
This is an online method (no batch processing).
code 86.41 % 85.82 % 75.38 % 2.46 % 108 553 0.03 s GPU @ 2.5 Ghz (Python)
H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. ArXiv:2103.07351 2021.
49 JMODT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 86.27 % 85.41 % 77.38 % 2.92 % 45 585 0.01 s GPU @ 2.5 Ghz (Python)
K. Huang and Q. Hao: Joint multi-object detection and tracking with camera-LiDAR fusion for autonomous driving. 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2021.
50 P3DTrack 86.06 % 84.71 % 75.38 % 4.31 % 230 384 0.1 s GPU @ 2.5 Ghz (Python)
51 Smart3DMOT 85.99 % 85.09 % 69.38 % 6.15 % 87 489 2 min 1 core @ 2.5 Ghz (C/C++), GPU @ Nvidia 3090
52 AIPT 85.91 % 85.42 % 66.77 % 6.62 % 42 460 0.5 s 1 core @ 2.5 Ghz (Python)
53 Quasi-Dense
This is an online method (no batch processing).
code 85.76 % 85.01 % 69.08 % 3.08 % 93 617 0.07s GPU (Python)
J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell and F. Yu: Quasi-Dense Similarity Learning for Multiple Object Tracking. CVPR 2021.
54 JRMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 85.70 % 85.48 % 71.85 % 4.00 % 98 372 0.07 s 4 cores @ 2.5 Ghz (Python)
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese: JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.
55 StrongFusion-MOT 85.63 % 85.17 % 66.15 % 6.00 % 34 399 0.01 s 8 cores @ 2.5 Ghz (Python)
X. Wang, C. Fu, J. He, S. Wang and J. Wang: StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion. IEEE Sensors Journal 2022.
56 RA3DMOT 85.56 % 87.19 % 83.38 % 1.85 % 57 622 0.01 s GPU @ 2.5 Ghz (Python)
57 SG-AM 85.50 % 85.11 % 68.46 % 6.15 % 101 523 248 s 1 core @ 2.5 Ghz (C/C++)
58 PolarMOT code 85.31 % 85.52 % 81.38 % 2.31 % 408 900 0.02 s 1 core @ 2.5 Ghz (C/C++)
A. Kim, G. Brasó, A. Ošep and L. Leal-Taixé: PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking? European Conference on Computer Vision (ECCV) 2022.
59 YONTD-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 85.19 % 87.10 % 67.54 % 7.08 % 21 342 0.1 s GPU @ >3.5 Ghz (Python)
X. Wang, J. He, C. Fu, T. Meng and M. Huang: You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking. arXiv preprint arXiv:2304.08709 2023.
60 3DMLA 85.12 % 84.91 % 70.62 % 5.85 % 15 318 0.02 s 1 core @ 2.5 Ghz (C/C++)
M. Cho and E. Kim: 3D LiDAR Multi-Object Tracking with Short-Term and Long-Term Multi-Level Associations. Remote Sensing 2023.
61 EAFFMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
85.04 % 85.13 % 70.92 % 8.31 % 15 256 0.01 s 1 core @ 2.5 Ghz (C/C++)
J. Jin, J. Zhang, K. Zhang, Y. Wang, Y. Ma and D. Pan: 3D multi-object tracking with boosting data association and improved trajectory management mechanism. Signal Processing 2024.
62 MASS
This is an online method (no batch processing).
85.04 % 85.53 % 74.31 % 2.77 % 301 744 0.01s C++
H. Karunasekera, H. Wang and H. Zhang: Multiple Object Tracking with attention to Appearance, Structure, Motion and Size. IEEE Access 2019.
63 MOTSFusion
This method uses stereo information.
code 84.83 % 85.21 % 73.08 % 2.77 % 275 759 0.44s GPU (Python)
J. Luiten, T. Fischer and B. Leibe: Track to Reconstruct and Reconstruct to Track. IEEE Robotics and Automation Letters 2020.
64 DeepFusion-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 84.80 % 85.10 % 68.46 % 9.08 % 35 444 0.01 s >8 cores @ 2.5 Ghz (Python)
X. Wang, C. Fu, Z. Li, Y. Lai and J. He: DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association. IEEE Robotics and Automation Letters 2022.
65 mmMOT code 84.77 % 85.21 % 73.23 % 2.77 % 284 753 0.02s GPU @ 2.5 Ghz (Python)
W. Zhang, H. Zhou, S. Sun, Z. Wang, J. Shi and C. Loy: Robust Multi-Modality Multi-Object Tracking. International Conference on Computer Vision (ICCV) 2019.
66 TripletTrack 84.77 % 86.16 % 69.54 % 3.38 % 222 646 0.1 s 1 core @ 2.5 Ghz (C/C++)
N. Marinello, M. Proesmans and L. Van Gool: TripletTrack: 3D Object Tracking Using Triplet Embeddings and LSTM. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022.
67 FNC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
84.75 % 85.80 % 76.00 % 5.85 % 33 311 0.01 s 1 core @ 3.0 Ghz (C/C++)
C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024.
C. Jiang, Z. Wang and H. Liang: A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE Sensors Journal 2022.
68 DiTMOT code 84.73 % 84.40 % 74.92 % 12.92 % 31 188 0.08 s 1 core @ >3.5 Ghz (Python)
S. Wang, P. Cai, L. Wang and M. Liu: DiTNet: End-to-End 3D Object Detection and Track ID Assignment in Spatio-Temporal World. IEEE Robotics and Automation Letters 2021.
69 mono3DT
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
code 84.52 % 85.64 % 73.38 % 2.77 % 377 847 0.03 s GPU @ 2.5 Ghz (Python)
H. Hu, Q. Cai, D. Wang, J. Lin, M. Sun, P. Krähenbühl, T. Darrell and F. Yu: Joint Monocular 3D Vehicle Detection and Tracking. ICCV 2019.
70 SMAT
This is an online method (no batch processing).
84.27 % 86.09 % 63.08 % 5.38 % 28 341 0.1 s 1 core @ 2.5 Ghz (C/C++)
N. Gonzalez, A. Ospina and P. Calvez: SMAT: Smart Multiple Affinity Metrics for Multiple Object Tracking. Image Analysis and Recognition 2020.
71 MOTBeyondPixels
This is an online method (no batch processing).
code 84.24 % 85.73 % 73.23 % 2.77 % 468 944 0.3 s 1 core @ 2.5 Ghz (C/C++)
S. Sharma, J. Ansari, J. Krishna Murthy and K. Madhava Krishna: Beyond Pixels: Leveraging Geometry and Shape Cues for Online Multi-Object Tracking. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018.
72 AB3DMOT+PointRCNN code 83.92 % 85.30 % 66.77 % 9.08 % 10 199 0.0047s 1 core @ 2.5 Ghz (python)
X. Weng, J. Wang, D. Held and K. Kitani: 3D Multi-Object Tracking: A Baseline and New Evaluation Metrics. IROS 2020.
73 MO-YOLO code 83.55 % 84.61 % 72.00 % 5.23 % 252 569 0.024 s 2080ti (Python)
L. Pan, Y. Feng, W. Di, L. Bo and Z. Xingle: MO-YOLO: End-to-End Multiple-Object Tracking Method with YOLO and MOTR. arXiv preprint arXiv:2310.17170 2023.
74 3DMAETracking
This method makes use of Velodyne laser scans.
83.20 % 85.31 % 62.77 % 7.38 % 86 277 34 s >8 cores @ 2.5 Ghz (Python)
75 IMMDP
This is an online method (no batch processing).
83.04 % 82.74 % 60.62 % 11.38 % 172 365 0.19 s 4 cores @ >3.5 Ghz (Matlab + C/C++)
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi- Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
S. Ren, K. He, R. Girshick and J. Sun: Faster R-CNN: Towards Real- Time Object Detection with Region Proposal Networks. NIPS 2015.
76 SUAMOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
82.48 % 85.51 % 56.92 % 9.54 % 24 627 0.01 s 8 cores @ >3.5 Ghz (Python + C/C++)
77 aUToTrack
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
82.25 % 80.52 % 72.62 % 3.54 % 1025 1402 0.01 s 1 core @ >3.5 Ghz (C/C++)
K. Burnett, S. Samavi, S. Waslander, T. Barfoot and A. Schoellig: aUToTrack: A Lightweight Object Detection and Tracking System for the SAE AutoDrive Challenge. arXiv:1905.08758 2019.
78 JCSTD
This is an online method (no batch processing).
80.57 % 81.81 % 56.77 % 7.38 % 61 643 0.07 s 1 core @ 2.7 Ghz (C++)
W. Tian, M. Lauer and L. Chen: Online Multi-Object Tracking Using Joint Domain Information in Traffic Scenarios. IEEE Transactions on Intelligent Transportation Systems 2019.
79 3D-CNN/PMBM
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
80.39 % 81.26 % 62.77 % 6.15 % 121 613 0.01 s 1 core @ 3.0 Ghz (Matlab)
S. Scheidegger, J. Benjaminsson, E. Rosenberg, A. Krishnan and K. Granström: Mono-Camera 3D Multi-Object Tracking Using Deep Learning Detections and PMBM Filtering. 2018 IEEE Intelligent Vehicles Symposium, IV 2018, Changshu, Suzhou, China, June 26-30, 2018 2018.
80 extraCK
This is an online method (no batch processing).
79.99 % 82.46 % 62.15 % 5.54 % 343 938 0.03 s 1 core @ 2.5 Ghz (Python)
G. Gunduz and T. Acarman: A lightweight online multiple object vehicle tracking method. Intelligent Vehicles Symposium (IV), 2018 IEEE 2018.
81 NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
78.95 % 85.82 % 76.00 % 5.69 % 31 275 0.01 s 1 core @ 3.0 Ghz (C/C++)
C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024.
82 MCMOT-CPD 78.90 % 82.13 % 52.31 % 11.69 % 228 536 0.01 s 1 core @ 3.5 Ghz (Python)
B. Lee, E. Erdenee, S. Jin, M. Nam, Y. Jung and P. Rhee: Multi-class Multi-object Tracking Using Changing Point Detection. ECCVWORK 2016.
83 MC_CATrack
This is an online method (no batch processing).
78.78 % 79.86 % 52.15 % 11.38 % 49 324 0.05 s GPU @ 2.5 Ghz (Python)
84 NOMT* 78.15 % 79.46 % 57.23 % 13.23 % 31 207 0.09 s 16 cores @ 2.5 Ghz (C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
85 FANTrack
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 77.72 % 82.33 % 62.62 % 8.77 % 150 812 0.04 s 8 cores @ >3.5 Ghz (Python)
E. Baser, V. Balasubramanian, P. Bhattacharyya and K. Czarnecki: FANTrack: 3D Multi-Object Tracking with Feature Association Network. ArXiv 2019.
86 LP-SSVM* 77.63 % 77.80 % 56.31 % 8.46 % 62 539 0.02 s 1 core @ 2.5 Ghz (Matlab + C/C++)
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
87 FAMNet 77.08 % 78.79 % 51.38 % 8.92 % 123 713 1.5 s GPU @ 1.0 Ghz (Python)
P. Chu and H. Ling: FAMNet: Joint Learning of Feature, Affinity and Multi-dimensional Assignment for Online Multiple Object Tracking. ICCV 2019.
88 MDP
This is an online method (no batch processing).
code 76.59 % 82.10 % 52.15 % 13.38 % 130 387 0.9 s 8 cores @ 3.5 Ghz (Matlab + C/C++)
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi- Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
Y. Xiang, W. Choi, Y. Lin and S. Savarese: Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection. IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
89 DSM 76.15 % 83.42 % 60.00 % 8.31 % 296 868 0.1 s GPU @ 1.0 Ghz (Python)
D. Frossard and R. Urtasun: End-To-End Learning of Multi-Sensor 3D Tracking by Detection. ICRA 2018.
90 Complexer-YOLO
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
75.70 % 78.46 % 58.00 % 5.08 % 1186 2092 0.01 s GPU @ 3.5 Ghz (C/C++)
M. Simon, K. Amende, A. Kraus, J. Honer, T. Samann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.
91 SCEA*
This is an online method (no batch processing).
75.58 % 79.39 % 53.08 % 11.54 % 104 448 0.06 s 1 core @ 4.0 Ghz (Matlab + C/C++)
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
92 CIWT*
This method uses stereo information.
This is an online method (no batch processing).
code 75.39 % 79.25 % 49.85 % 10.31 % 165 660 0.28 s 1 core @ 2.5 Ghz (C/C++)
A. Osep, W. Mehner, M. Mathias and B. Leibe: Combined Image- and World-Space Tracking in Traffic Scenes. ICRA 2017.
93 NOMT-HM*
This is an online method (no batch processing).
75.20 % 80.02 % 50.00 % 13.54 % 105 351 0.09 s 8 cores @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
94 SSP* code 72.72 % 78.55 % 53.85 % 8.00 % 185 932 0.6 s 1 core @ 2.7 Ghz (Python)
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
95 mbodSSP*
This is an online method (no batch processing).
code 72.69 % 78.75 % 48.77 % 8.77 % 114 858 0.01 s 1 core @ 2.7 Ghz (Python)
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
96 SASN-MCF_nano 70.86 % 82.65 % 58.00 % 7.85 % 443 975 0.02 s 1 core @ 3.0 Ghz (Python)
G. Gunduz and T. Acarman: Efficient Multi-Object Tracking by Strong Associations on Temporal Window. IEEE Transactions on Intelligent Vehicles 2019.
97 Point3DT
This method makes use of Velodyne laser scans.
68.24 % 76.57 % 60.62 % 12.31 % 111 725 0.05 s 1 core @ >3.5 Ghz (Python)
S. Wang and M. Liu: PointTrackNet: An End-to-End Network for 3-D Object Detection and Tracking from Point Clouds. To be submitted to ICRA'20.
98 DCO-X* code 68.11 % 78.85 % 37.54 % 14.15 % 318 959 0.9 s 1 core @ >3.5 Ghz (Matlab + C/C++)
A. Milan, K. Schindler and S. Roth: Detection- and Trajectory-Level Exclusion in Multiple Object Tracking. CVPR 2013.
99 SST [st]
This method uses stereo information.
67.38 % 83.98 % 43.08 % 20.15 % 13 212 1 s 1 core @ 2.5 Ghz (C/C++)
100 NOMT 66.60 % 78.17 % 41.08 % 25.23 % 13 150 0.09 s 16 cores @ 2.5 Ghz (C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
101 RMOT*
This is an online method (no batch processing).
65.83 % 75.42 % 40.15 % 9.69 % 209 727 0.02 s 1 core @ 3.5 Ghz (Matlab)
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
102 LP-SSVM 61.77 % 76.93 % 35.54 % 21.69 % 16 422 0.05 s 1 core @ 2.5 Ghz (Matlab + C/C++)
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
103 NOMT-HM
This is an online method (no batch processing).
61.17 % 78.65 % 33.85 % 28.00 % 28 241 0.09 s 8 cores @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
104 ODAMOT
This is an online method (no batch processing).
59.23 % 75.45 % 27.08 % 15.54 % 389 1274 1 s 1 core @ 2.5 Ghz (Python)
A. Gaidon and E. Vig: Online Domain Adaptation for Multi-Object Tracking. British Machine Vision Conference (BMVC) 2015.
105 SSP code 57.85 % 77.64 % 29.38 % 24.31 % 7 704 0.6s 1 core @ 2.7 Ghz (Python)
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
106 SCEA
This is an online method (no batch processing).
57.03 % 78.84 % 26.92 % 26.62 % 17 461 0.05 s 1 core @ 4.0 Ghz (Matlab + C/C++)
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
107 mbodSSP
This is an online method (no batch processing).
code 56.03 % 77.52 % 23.23 % 27.23 % 0 699 0.01 s 1 core @ 2.7 Ghz (Python)
P. Lenz, A. Geiger and R. Urtasun: FollowMe: Efficient Online Min-Cost Flow Tracking with Bounded Memory and Computation. International Conference on Computer Vision (ICCV) 2015.
108 TBD code 55.07 % 78.35 % 20.46 % 32.62 % 31 529 10 s 1 core @ 2.5 Ghz (Matlab + C/C++)
A. Geiger, M. Lauer, C. Wojek, C. Stiller and R. Urtasun: 3D Traffic Scene Understanding from Movable Platforms. Pattern Analysis and Machine Intelligence (PAMI) 2014.
H. Zhang, A. Geiger and R. Urtasun: Understanding High-Level Semantics by Modeling Traffic Patterns. International Conference on Computer Vision (ICCV) 2013.
109 SORT 54.22 % 77.57 % 25.69 % 29.08 % 1 557 0.002 s 1 core @ 2.5 Ghz (Python)
A. Bewley, Z. Ge, L. Ott, F. Ramos and B. Upcroft: Simple online and realtime tracking. 2016 IEEE International Conference on Image Processing (ICIP) 2016.
110 RMOT
This is an online method (no batch processing).
52.42 % 75.18 % 21.69 % 31.85 % 50 376 0.01 s 1 core @ 3.5 Ghz (Matlab)
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
111 CEM code 51.94 % 77.11 % 20.00 % 31.54 % 125 396 0.09 s 1 core @ >3.5 Ghz (Matlab + C/C++)
A. Milan, S. Roth and K. Schindler: Continuous Energy Minimization for Multitarget Tracking. IEEE TPAMI 2014.
112 MCF 45.92 % 78.25 % 14.92 % 37.23 % 21 581 0.01 s 1 core @ 2.5 Ghz (Python + C/C++)
L. Zhang, Y. Li and R. Nevatia: Global Data Association for Multi-Object Tracking Using Network Flows. CVPR 2008.
113 HM
This is an online method (no batch processing).
43.85 % 78.34 % 12.46 % 39.54 % 12 571 0.01 s 1 core @ 2.5 Ghz (Python)
A. Geiger: Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms. 2013.
114 DP-MCF code 38.33 % 78.41 % 18.00 % 36.15 % 2716 3225 0.01 s 1 core @ 2.5 Ghz (Matlab)
H. Pirsiavash, D. Ramanan and C. Fowlkes: Globally-Optimal Greedy Algorithms for Tracking a Variable Number of Objects. IEEE conference on Computer Vision and Pattern Recognition (CVPR) 2011.
115 DCO code 37.28 % 74.36 % 15.54 % 30.92 % 220 612 0.03 s 1 core @ >3.5 Ghz (Matlab + C/C++)
A. Andriyenko, K. Schindler and S. Roth: Discrete-Continuous Optimization for Multi-Target Tracking. CVPR 2012.
116 FMMOVT 31.88 % 77.68 % 21.38 % 34.92 % 511 930 0.05 s 1 core @ 2.5 Ghz (C/C++)
F. Alencar, C. Massera, D. Ridel and D. Wolf: Fast Metric Multi-Object Vehicle Tracking for Dynamical Environment Comprehension. Latin American Robotics Symposium (LARS), 2015 2015.
117 PESORT 28.86 % 83.96 % 22.62 % 60.00 % 16 58 0.04 s GPU @ 2.0 Ghz (Python)
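For readers interpreting the MOTA and MOTP columns in the tables, the CLEAR MOT definitions can be sketched as follows. This is an illustrative sketch only, not the official KITTI evaluation code, and the function names are hypothetical:

```python
def mota(misses, false_positives, id_switches, num_gt):
    # CLEAR MOT accuracy: 1 - (FN + FP + IDSW) / total ground-truth objects.
    # Note it can become negative when errors outnumber ground-truth objects.
    return 1.0 - (misses + false_positives + id_switches) / num_gt

def motp(total_overlap, num_matches):
    # CLEAR MOT precision: average bounding-box overlap (IoU) over all
    # matched detection/ground-truth pairs.
    return total_overlap / num_matches

# Example: 1000 ground-truth boxes, 150 misses, 80 false positives, 5 ID switches
print(round(mota(150, 80, 5, 1000), 4))  # 0.765
print(round(motp(712.5, 950), 4))        # 0.75
```

The IDS and FRAG columns count identity switches and track fragmentations separately; only IDS enters the MOTA sum above.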


PEDESTRIAN


Method Setting Code MOTA MOTP MT ML IDS FRAG Runtime Environment
1 SRK_ODESA(mp)
This is an online method (no batch processing).
69.88 % 75.07 % 45.02 % 8.25 % 191 1070 0.5 s GPU (Python)
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
2 SRK_ODESA(hp)
This is an online method (no batch processing).
69.24 % 75.07 % 45.02 % 8.25 % 340 1181 0.5 s GPU @ 2.0 Ghz (Python)
D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.
3 RAM
This is an online method (no batch processing).
67.33 % 73.83 % 52.23 % 13.40 % 403 1077 0.09 s GPU @ 2.5 Ghz (Python)
P. Tokmakov, A. Jabri, J. Li and A. Gaidon: Object Permanence Emerges in a Random Walk along Memory. ICML 2022.
4 PermaTrack
This is an online method (no batch processing).
65.76 % 74.67 % 49.14 % 15.12 % 124 792 0.1 s GPU @ 2.5 Ghz (Python)
P. Tokmakov, J. Li, W. Burgard and A. Gaidon: Learning to Track with Object Permanence. ICCV 2021.
5 OC-SORT
This is an online method (no batch processing).
code 64.01 % 74.73 % 44.67 % 19.59 % 161 813 0.03 s 1 core @ 3.0 Ghz (Python)
J. Cao, X. Weng, R. Khirodkar, J. Pang and K. Kitani: Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. 2022.
6 Rt_Track 60.63 % 74.70 % 31.27 % 27.15 % 115 764 0.1 s 1 core @ 2.5 Ghz (Python)
7 TuSimple
This is an online method (no batch processing).
58.15 % 71.93 % 30.58 % 24.05 % 138 818 0.6 s 1 core @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-online multi-target tracking with aggregated local flow descriptor. Proceedings of the IEEE International Conference on Computer Vision 2015.
K. He, X. Zhang, S. Ren and J. Sun: Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition 2016.
8 Quasi-Dense
This is an online method (no batch processing).
code 56.81 % 73.99 % 31.27 % 18.90 % 254 1121 0.07 s GPU (Python)
J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell and F. Yu: Quasi-Dense Similarity Learning for Multiple Object Tracking. CVPR 2021.
9 MMTrack
This is an online method (no batch processing).
56.69 % 75.51 % 31.62 % 32.65 % 76 522 0.0135 s GPU
L. Xu and Y. Huang: Rethinking Joint Detection and Embedding for Multiobject Tracking in Multiscenario. IEEE Transactions on Industrial Informatics 2024.
10 FNC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
56.52 % 66.07 % 43.99 % 12.37 % 349 1492 0.01 s 1 core @ 3.0 Ghz (C/C++)
C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024.
C. Jiang, Z. Wang and H. Liang: A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE Sensors Journal 2022.
11 APPTracker
This is an online method (no batch processing).
56.20 % 74.54 % 32.30 % 25.43 % 90 854 0.04 s GPU @ 1.5 Ghz (Python)
12 MO-YOLO code 55.71 % 73.93 % 34.02 % 35.40 % 121 797 0.024 s GPU (RTX 2080 Ti, Python)
L. Pan, Y. Feng, W. Di, L. Bo and Z. Xingle: MO-YOLO: End-to-End Multiple-Object Tracking Method with YOLO and MOTR. arXiv preprint arXiv:2310.17170 2023.
13 CenterTrack
This is an online method (no batch processing).
code 55.34 % 74.02 % 34.71 % 19.93 % 95 751 0.045 s GPU
X. Zhou, V. Koltun and P. Krähenbühl: Tracking Objects as Points. ECCV 2020.
14 AIPT 54.91 % 75.91 % 23.02 % 31.62 % 48 743 0.5 s 1 core @ 2.5 Ghz (Python)
15 3D-TLSR
This method uses stereo information.
This is an online method (no batch processing).
54.00 % 73.03 % 29.55 % 23.71 % 100 835 1 core @ 2.5 Ghz (C/C++)
U. Nguyen and C. Heipke: 3D Pedestrian tracking using local structure constraints. ISPRS Journal of Photogrammetry and Remote Sensing 2020.
16 TrackMPNN
This is an online method (no batch processing).
code 53.22 % 73.69 % 33.68 % 18.56 % 395 1035 0.05 s 4 cores @ 3.0 Ghz (Python)
A. Rangesh, P. Maheshwari, M. Gebre, S. Mhatre, V. Ramezani and M. Trivedi: TrackMPNN: A Message Passing Graph Neural Architecture for Multi-Object Tracking. arXiv preprint arXiv:2101.04206.
17 QD-3DT
This is an online method (no batch processing).
code 52.98 % 73.41 % 32.30 % 18.56 % 488 1393 0.03 s GPU @ 2.5 Ghz (Python)
H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. arXiv:2103.07351 2021.
18 CAT
This method uses stereo information.
This is an online method (no batch processing).
52.35 % 71.57 % 34.36 % 23.71 % 206 804
U. Nguyen, F. Rottensteiner and C. Heipke: Confidence-Aware Pedestrian Tracking Using a Stereo Camera. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 2019.
19 Be-Track
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
51.29 % 72.71 % 20.96 % 31.27 % 118 848 0.02 s GPU @ 1.5 Ghz (C/C++)
M. Dimitrievski, P. Veelaert and W. Philips: Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle. Sensors 2019.
20 EagerMOT code 51.11 % 64.75 % 27.84 % 24.05 % 234 1378 0.011 s 4 cores @ 3.0 Ghz (Python)
A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.
21 TripletTrack 50.85 % 74.17 % 22.68 % 28.87 % 139 986 0.1 s 1 core @ 2.5 Ghz (C/C++)
N. Marinello, M. Proesmans and L. Van Gool: TripletTrack: 3D Object Tracking Using Triplet Embeddings and LSTM. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022.
22 MC_CATrack
This is an online method (no batch processing).
50.84 % 71.87 % 26.12 % 34.02 % 54 589 0.05 s GPU @ 2.5 Ghz (Python)
23 MSA-MOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
47.84 % 64.64 % 33.33 % 16.15 % 244 1393 0.01 s 1 core @ 2.5 Ghz (Python)
Z. Zhu, J. Nie, H. Wu, Z. He and M. Gao: MSA-MOT: Multi-Stage Association for 3D Multimodality Multi-Object Tracking. Sensors 2022.
24 PolarMOT code 47.25 % 64.87 % 30.24 % 18.56 % 241 1375 0.02 s 1 core @ 2.5 Ghz (C/C++)
A. Kim, G. Brasó, A. Ošep and L. Leal-Taixé: PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking? European Conference on Computer Vision (ECCV) 2022.
25 MDP
This is an online method (no batch processing).
code 47.22 % 70.36 % 24.05 % 27.84 % 87 825 0.9 s 8 cores @ 3.5 Ghz (Matlab + C/C++)
Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi- Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015.
Y. Xiang, W. Choi, Y. Lin and S. Savarese: Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection. IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.
26 MPNTrack code 46.92 % 71.84 % 42.96 % 10.65 % 196 1151 0.02 s 8 cores @ 2.5 Ghz (Python)
G. Brasó and L. Leal-Taixé: Learning a Neural Solver for Multiple Object Tracking. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
G. Brasó, O. Cetintas and L. Leal-Taixé: Multi-Object Tracking and Segmentation Via Neural Message Passing. International Journal of Computer Vision 2022.
27 NOMT* 46.62 % 71.45 % 26.12 % 34.02 % 63 666 0.09 s 16 cores @ 2.5 Ghz (C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
28 JRMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 46.33 % 72.54 % 23.37 % 28.87 % 345 1111 0.07 s 4 cores @ 2.5 Ghz (Python)
A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese: JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.
29 MCMOT-CPD 45.94 % 72.44 % 20.62 % 34.36 % 143 764 0.01 s 1 core @ 3.5 Ghz (Python)
B. Lee, E. Erdenee, S. Jin, M. Nam, Y. Jung and P. Rhee: Multi-class Multi-object Tracking Using Changing Point Detection. ECCV Workshops 2016.
30 Mono_3D_KF
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
45.02 % 69.45 % 32.99 % 25.43 % 203 850 0.3 s 1 core @ 2.5 Ghz (Python)
A. Reich and H. Wuensche: Monocular 3D Multi-Object Tracking with an EKF Approach for Long-Term Stable Tracks. 2021 IEEE 24th International Conference on Information Fusion (FUSION) 2021.
31 NC2
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
44.64 % 66.08 % 43.99 % 13.06 % 348 1488 0.01 s 1 core @ 3.0 Ghz (C/C++)
C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024.
32 JCSTD
This is an online method (no batch processing).
44.20 % 72.09 % 16.49 % 33.68 % 53 917 0.07 s 1 core @ 2.7 Ghz (C++)
W. Tian, M. Lauer and L. Chen: Online Multi-Object Tracking Using Joint Domain Information in Traffic Scenarios. IEEE Transactions on Intelligent Transportation Systems 2019.
33 PESORT 44.19 % 76.23 % 24.40 % 38.49 % 121 535 0.04 s GPU @ 2.0 Ghz (Python)
34 SCEA*
This is an online method (no batch processing).
43.91 % 71.86 % 16.15 % 43.30 % 56 641 0.06 s 1 core @ 4.0 Ghz (Matlab + C/C++)
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
35 RMOT*
This is an online method (no batch processing).
43.77 % 71.02 % 19.59 % 41.24 % 153 748 0.02 s 1 core @ 3.5 Ghz (Matlab)
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
36 LP-SSVM* 43.76 % 70.48 % 20.62 % 34.36 % 73 809 0.02 s 1 core @ 2.5 Ghz (Matlab + C/C++)
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
37 CIWT*
This method uses stereo information.
This is an online method (no batch processing).
code 43.37 % 71.44 % 13.75 % 34.71 % 112 901 0.28 s 1 core @ 2.5 Ghz (C/C++)
A. Osep, W. Mehner, M. Mathias and B. Leibe: Combined Image- and World-Space Tracking in Traffic Scenes. ICRA 2017.
38 EAFFMOT
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
42.32 % 64.89 % 21.99 % 35.40 % 233 1141 0.01 s 1 core @ 2.5 Ghz (C/C++)
J. Jin, J. Zhang, K. Zhang, Y. Wang, Y. Ma and D. Pan: 3D multi-object tracking with boosting data association and improved trajectory management mechanism. Signal Processing 2024.
39 NOMT-HM*
This is an online method (no batch processing).
39.26 % 71.14 % 21.31 % 41.92 % 184 863 0.09 s 8 cores @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
40 StrongFusion-MOT 39.14 % 64.22 % 26.12 % 21.99 % 241 1467 0.01 s >8 cores @ 2.5 Ghz (Python + C/C++)
X. Wang, C. Fu, J. He, S. Wang and J. Wang: StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion. IEEE Sensors Journal 2022.
41 AB3DMOT+PointRCNN code 38.39 % 64.88 % 23.02 % 43.99 % 218 940 0.0047 s 1 core @ 2.5 Ghz (Python)
X. Weng, J. Wang, D. Held and K. Kitani: 3D Multi-Object Tracking: A Baseline and New Evaluation Metrics. IROS 2020.
42 NOMT 36.93 % 67.75 % 17.87 % 42.61 % 34 789 0.09 s 16 cores @ 2.5 Ghz (C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
43 RMOT
This is an online method (no batch processing).
34.54 % 68.06 % 14.43 % 47.42 % 81 685 0.01 s 1 core @ 3.5 Ghz (Matlab)
J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.
44 LP-SSVM 33.33 % 67.38 % 12.37 % 45.02 % 72 818 0.05 s 1 core @ 2.5 Ghz (Matlab + C/C++)
S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.
45 SCEA
This is an online method (no batch processing).
33.13 % 68.45 % 9.62 % 46.74 % 16 717 0.05 s 1 core @ 4.0 Ghz (Matlab + C/C++)
J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
46 YONTD-MOT
This method uses stereo information.
This method makes use of Velodyne laser scans.
This is an online method (no batch processing).
code 28.93 % 65.99 % 11.00 % 31.96 % 404 1697 0.1 s GPU @ >3.5 Ghz (Python)
X. Wang, J. He, C. Fu, T. Meng and M. Huang: You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking. arXiv preprint arXiv:2304.08709 2023.
47 CEM code 27.54 % 68.48 % 8.93 % 51.89 % 96 608 0.09 s 1 core @ >3.5 Ghz (Matlab + C/C++)
A. Milan, S. Roth and K. Schindler: Continuous Energy Minimization for Multitarget Tracking. IEEE TPAMI 2014.
48 NOMT-HM
This is an online method (no batch processing).
27.49 % 67.99 % 15.12 % 50.52 % 73 732 0.09 s 8 cores @ 2.5 Ghz (Matlab + C/C++)
W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.
49 SST [st]
This method uses stereo information.
17.71 % 65.22 % 9.97 % 67.01 % 110 674 1 s 1 core @ 2.5 Ghz (C/C++)
50 Complexer-YOLO
This method makes use of Velodyne laser scans.
This method makes use of GPS/IMU information.
This is an online method (no batch processing).
16.46 % 62.69 % 2.41 % 38.14 % 527 1636 0.01 s GPU @ 3.5 Ghz (C/C++)
M. Simon, K. Amende, A. Kraus, J. Honer, T. Sämann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.
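The MT and ML columns follow the Mostly-Tracked/Partly-Tracked/Mostly-Lost convention: each ground-truth trajectory is classified by the fraction of its frames in which it is successfully tracked. A minimal sketch of the standard thresholds, assuming per-trajectory frame counts are available (illustrative only, not the official evaluation code):

```python
def track_category(frames_tracked, frames_total):
    # Mostly-Tracked (MT): trajectory covered for at least 80% of its frames.
    # Mostly-Lost (ML): covered for at most 20%.
    # Partly-Tracked (PT): everything in between.
    ratio = frames_tracked / frames_total
    if ratio >= 0.8:
        return "MT"
    if ratio <= 0.2:
        return "ML"
    return "PT"

print(track_category(45, 50))  # MT
print(track_category(8, 50))   # ML
print(track_category(20, 50))  # PT
```

The leaderboard reports MT and ML as percentages of all ground-truth trajectories; PT is implied as the remainder.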


Related Datasets

  • TUD Datasets: "TUD Multiview Pedestrians" and "TUD Stadtmitte" Datasets.
  • PETS 2009: The Datasets for the "Performance Evaluation of Tracking and Surveillance" Workshop.
  • EPFL Terrace: Multi-camera pedestrian videos.
  • ETHZ Sequences: Inner City Sequences from Mobile Platforms.

Citation

When using this dataset in your research, please cite:
@inproceedings{Geiger2012CVPR,
  author = {Andreas Geiger and Philip Lenz and Raquel Urtasun},
  title = {Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite},
  booktitle = {Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2012}
}


