\begin{tabular}{c | c | c | c | c | c | c | c | c | c}
{\bf Method} & {\bf Setting} & {\bf MOTA} & {\bf MOTP} & {\bf MT} & {\bf ML} & {\bf IDS} & {\bf FRAG} & {\bf Runtime / Hardware} & {\bf Reference}\\ \hline
SRK\_ODESA(mp) & on & 69.88 \% & 75.07 \% & 45.02 \% & 8.25 \% & 191 & 1070 & 0.5 s / & D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.\\
SRK\_ODESA(hp) & on & 69.24 \% & 75.07 \% & 45.02 \% & 8.25 \% & 340 & 1181 & 0.5 s / GPU & D. Mykheievskyi, D. Borysenko and V. Porokhonskyy: Learning Local Feature Descriptors for Multiple Object Tracking. ACCV 2020.\\
RAM & on & 67.33 \% & 73.83 \% & 52.23 \% & 13.40 \% & 403 & 1077 & 0.09 s / GPU & P. Tokmakov, A. Jabri, J. Li and A. Gaidon: Object Permanence Emerges in a Random Walk along Memory. ICML 2022.\\
PermaTrack & on & 65.76 \% & 74.67 \% & 49.14 \% & 15.12 \% & 124 & 792 & 0.1 s / GPU & P. Tokmakov, J. Li, W. Burgard and A. Gaidon: Learning to Track with Object Permanence. ICCV 2021.\\
McByte & & 65.52 \% & 74.69 \% & 40.55 \% & 21.99 \% & 170 & 674 & 99 min / GPU & \\
OC-SORT & on & 64.01 \% & 74.73 \% & 44.67 \% & 19.59 \% & 161 & 813 & 0.03 s / 1 core & J. Cao, X. Weng, R. Khirodkar, J. Pang and K. Kitani: Observation-Centric SORT: Rethinking SORT for Robust Multi-Object Tracking. 2022.\\
Rt\_Track & & 60.63 \% & 74.70 \% & 31.27 \% & 27.15 \% & 115 & 764 & 0.1 s / 1 core & \\
TuSimple & on & 58.15 \% & 71.93 \% & 30.58 \% & 24.05 \% & 138 & 818 & 0.6 s / 1 core & W. Choi: Near-online multi-target tracking with aggregated local flow descriptor. Proceedings of the IEEE International Conference on Computer Vision 2015. K. He, X. Zhang, S. Ren and J. Sun: Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition 2016.\\
Quasi-Dense & on & 56.81 \% & 73.99 \% & 31.27 \% & 18.90 \% & 254 & 1121 & 0.07 s / & J. Pang, L. Qiu, X. Li, H. Chen, Q. Li, T. Darrell and F. Yu: Quasi-Dense Similarity Learning for Multiple Object Tracking. CVPR 2021.\\
MMTrack & on & 56.69 \% & 75.51 \% & 31.62 \% & 32.65 \% & 76 & 522 & 0.0135 s / & L. Xu and Y. Huang: Rethinking Joint Detection and Embedding for Multiobject Tracking in Multiscenario. IEEE Transactions on Industrial Informatics 2024.\\
FNC2 & la on & 56.52 \% & 66.07 \% & 43.99 \% & 12.37 \% & 349 & 1492 & 0.01 s / 1 core & C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024. C. Jiang, Z. Wang and H. Liang: A Fast and High-Performance Object Proposal Method for Vision Sensors: Application to Object Detection. IEEE Sensors Journal 2022.\\
APPTracker & on & 56.20 \% & 74.54 \% & 32.30 \% & 25.43 \% & 90 & 854 & 0.04 s / GPU & \\
MO-YOLO & & 55.71 \% & 73.93 \% & 34.02 \% & 35.40 \% & 121 & 797 & 0.024 s / & L. Pan, Y. Feng, W. Di, L. Bo and Z. Xingle: MO-YOLO: End-to-End Multiple-Object Tracking Method with YOLO and MOTR. arXiv preprint arXiv:2310.17170 2023.\\
CenterTrack & on & 55.34 \% & 74.02 \% & 34.71 \% & 19.93 \% & 95 & 751 & 0.045 s / & X. Zhou, V. Koltun and P. Krähenbühl: Tracking Objects as Points. ECCV 2020.\\
AIPT & & 54.91 \% & 75.91 \% & 23.02 \% & 31.62 \% & 48 & 743 & 0.5 s / 1 core & \\
3D-TLSR & st on & 54.00 \% & 73.03 \% & 29.55 \% & 23.71 \% & 100 & 835 & / 1 core & U. Nguyen and C. Heipke: 3D Pedestrian tracking using local structure constraints. ISPRS Journal of Photogrammetry and Remote Sensing 2020.\\
TrackMPNN & on & 53.22 \% & 73.69 \% & 33.68 \% & 18.56 \% & 395 & 1035 & 0.05 s / 4 cores & A. Rangesh, P. Maheshwari, M. Gebre, S. Mhatre, V. Ramezani and M. Trivedi: TrackMPNN: A Message Passing Graph Neural Architecture for Multi-Object Tracking. arXiv preprint arXiv:2101.04206.\\
QD-3DT & on & 52.98 \% & 73.41 \% & 32.30 \% & 18.56 \% & 488 & 1393 & 0.03 s / GPU & H. Hu, Y. Yang, T. Fischer, F. Yu, T. Darrell and M. Sun: Monocular Quasi-Dense 3D Object Tracking. arXiv preprint arXiv:2103.07351 2021.\\
CAT & st on & 52.35 \% & 71.57 \% & 34.36 \% & 23.71 \% & 206 & 804 & / & U. Nguyen, F. Rottensteiner and C. Heipke: Confidence-Aware Pedestrian Tracking Using a Stereo Camera. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 2019.\\
Be-Track & la on & 51.29 \% & 72.71 \% & 20.96 \% & 31.27 \% & 118 & 848 & 0.02 s / GPU & M. Dimitrievski, P. Veelaert and W. Philips: Behavioral Pedestrian Tracking Using a Camera and LiDAR Sensors on a Moving Vehicle. Sensors 2019.\\
EagerMOT & & 51.11 \% & 64.75 \% & 27.84 \% & 24.05 \% & 234 & 1378 & 0.011 s / 4 cores & A. Kim, A. Ošep and L. Leal-Taixé: EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. IEEE International Conference on Robotics and Automation (ICRA) 2021.\\
TripletTrack & & 50.85 \% & 74.17 \% & 22.68 \% & 28.87 \% & 139 & 986 & 0.1 s / 1 core & N. Marinello, M. Proesmans and L. Van Gool: TripletTrack: 3D Object Tracking Using Triplet Embeddings and LSTM. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2022.\\
MC\_CATrack & on & 50.84 \% & 71.87 \% & 26.12 \% & 34.02 \% & 54 & 589 & 0.05 s / GPU & \\
MSA-MOT & la on & 47.84 \% & 64.64 \% & 33.33 \% & 16.15 \% & 244 & 1393 & 0.01 s / 1 core & Z. Zhu, J. Nie, H. Wu, Z. He and M. Gao: MSA-MOT: Multi-Stage Association for 3D Multimodality Multi-Object Tracking. Sensors 2022.\\
PolarMOT & & 47.25 \% & 64.87 \% & 30.24 \% & 18.56 \% & 241 & 1375 & 0.02 s / 1 core & A. Kim, G. Brasó, A. Ošep and L. Leal-Taixé: PolarMOT: How Far Can Geometric Relations Take Us in 3D Multi-Object Tracking? European Conference on Computer Vision (ECCV) 2022.\\
MDP & on & 47.22 \% & 70.36 \% & 24.05 \% & 27.84 \% & 87 & 825 & 0.9 s / 8 cores & Y. Xiang, A. Alahi and S. Savarese: Learning to Track: Online Multi-Object Tracking by Decision Making. International Conference on Computer Vision (ICCV) 2015. Y. Xiang, W. Choi, Y. Lin and S. Savarese: Subcategory-aware Convolutional Neural Networks for Object Proposals and Detection. IEEE Winter Conference on Applications of Computer Vision (WACV) 2017.\\
MPNTrack & & 46.92 \% & 71.84 \% & 42.96 \% & 10.65 \% & 196 & 1151 & 0.02 s / 8 cores & G. Brasó and L. Leal-Taixé: Learning a Neural Solver for Multiple Object Tracking. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020. G. Brasó, O. Cetintas and L. Leal-Taixé: Multi-Object Tracking and Segmentation Via Neural Message Passing. International Journal of Computer Vision 2022.\\
NOMT* & & 46.62 \% & 71.45 \% & 26.12 \% & 34.02 \% & 63 & 666 & 0.09 s / 16 cores & W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.\\
JRMOT & la on & 46.33 \% & 72.54 \% & 23.37 \% & 28.87 \% & 345 & 1111 & 0.07 s / 4 cores & A. Shenoi, M. Patel, J. Gwak, P. Goebel, A. Sadeghian, H. Rezatofighi, R. Martín-Martín and S. Savarese: JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset. The IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.\\
MCMOT-CPD & & 45.94 \% & 72.44 \% & 20.62 \% & 34.36 \% & 143 & 764 & 0.01 s / 1 core & B. Lee, E. Erdenee, S. Jin, M. Nam, Y. Jung and P. Rhee: Multi-class Multi-object Tracking Using Changing Point Detection. ECCVWORK 2016.\\
Mono\_3D\_KF & gp on & 45.02 \% & 69.45 \% & 32.99 \% & 25.43 \% & 203 & 850 & 0.3 s / 1 core & A. Reich and H. Wuensche: Monocular 3D Multi-Object Tracking with an EKF Approach for Long-Term Stable Tracks. 2021 IEEE 24th International Conference on Information Fusion (FUSION) 2021.\\
NC2 & la on & 44.64 \% & 66.08 \% & 43.99 \% & 13.06 \% & 348 & 1488 & 0.01 s / 1 core & C. Jiang, Z. Wang, H. Liang and Y. Wang: A Novel Adaptive Noise Covariance Matrix Estimation and Filtering Method: Application to Multiobject Tracking. IEEE Transactions on Intelligent Vehicles 2024.\\
JCSTD & on & 44.20 \% & 72.09 \% & 16.49 \% & 33.68 \% & 53 & 917 & 0.07 s / 1 core & W. Tian, M. Lauer and L. Chen: Online Multi-Object Tracking Using Joint Domain Information in Traffic Scenarios. IEEE Transactions on Intelligent Transportation Systems 2019.\\
PESORT & & 44.19 \% & 76.23 \% & 24.40 \% & 38.49 \% & 121 & 535 & 0.04 s / GPU & \\
SCEA* & on & 43.91 \% & 71.86 \% & 16.15 \% & 43.30 \% & 56 & 641 & 0.06 s / 1 core & J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.\\
RMOT* & on & 43.77 \% & 71.02 \% & 19.59 \% & 41.24 \% & 153 & 748 & 0.02 s / 1 core & J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.\\
LP-SSVM* & & 43.76 \% & 70.48 \% & 20.62 \% & 34.36 \% & 73 & 809 & 0.02 s / 1 core & S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.\\
CIWT* & st on & 43.37 \% & 71.44 \% & 13.75 \% & 34.71 \% & 112 & 901 & 0.28 s / 1 core & A. Osep, W. Mehner, M. Mathias and B. Leibe: Combined Image- and World-Space Tracking in Traffic Scenes. ICRA 2017.\\
EAFFMOT & la on & 42.32 \% & 64.89 \% & 21.99 \% & 35.40 \% & 233 & 1141 & 0.01 s / 1 core & J. Jin, J. Zhang, K. Zhang, Y. Wang, Y. Ma and D. Pan: 3D multi-object tracking with boosting data association and improved trajectory management mechanism. Signal Processing 2024.\\
NOMT-HM* & on & 39.26 \% & 71.14 \% & 21.31 \% & 41.92 \% & 184 & 863 & 0.09 s / 8 cores & W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.\\
StrongFusion-MOT & & 39.14 \% & 64.22 \% & 26.12 \% & 21.99 \% & 241 & 1467 & 0.01 s / >8 cores & X. Wang, C. Fu, J. He, S. Wang and J. Wang: StrongFusionMOT: A Multi-Object Tracking Method Based on LiDAR-Camera Fusion. IEEE Sensors Journal 2022.\\
AB3DMOT+PointRCNN & & 38.39 \% & 64.88 \% & 23.02 \% & 43.99 \% & 218 & 940 & 0.0047 s / 1 core & X. Weng, J. Wang, D. Held and K. Kitani: 3D Multi-Object Tracking: A Baseline and New Evaluation Metrics. IROS 2020.\\
NOMT & & 36.93 \% & 67.75 \% & 17.87 \% & 42.61 \% & 34 & 789 & 0.09 s / 16 cores & W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.\\
RMOT & on & 34.54 \% & 68.06 \% & 14.43 \% & 47.42 \% & 81 & 685 & 0.01 s / 1 core & J. Yoon, M. Yang, J. Lim and K. Yoon: Bayesian Multi-Object Tracking Using Motion Context from Multiple Objects. IEEE Winter Conference on Applications of Computer Vision (WACV) 2015.\\
LP-SSVM & & 33.33 \% & 67.38 \% & 12.37 \% & 45.02 \% & 72 & 818 & 0.05 s / 1 core & S. Wang and C. Fowlkes: Learning Optimal Parameters for Multi-target Tracking with Contextual Interactions. International Journal of Computer Vision 2016.\\
SCEA & on & 33.13 \% & 68.45 \% & 9.62 \% & 46.74 \% & 16 & 717 & 0.05 s / 1 core & J. Yoon, C. Lee, M. Yang and K. Yoon: Online Multi-object Tracking via Structural Constraint Event Aggregation. IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2016.\\
YONTD-MOT & st la on & 28.93 \% & 65.99 \% & 11.00 \% & 31.96 \% & 404 & 1697 & 0.1 s / GPU & X. Wang, J. He, C. Fu, T. Meng and M. Huang: You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking. arXiv preprint arXiv:2304.08709 2023.\\
CEM & & 27.54 \% & 68.48 \% & 8.93 \% & 51.89 \% & 96 & 608 & 0.09 s / 1 core & A. Milan, S. Roth and K. Schindler: Continuous Energy Minimization for Multitarget Tracking. IEEE TPAMI 2014.\\
NOMT-HM & on & 27.49 \% & 67.99 \% & 15.12 \% & 50.52 \% & 73 & 732 & 0.09 s / 8 cores & W. Choi: Near-Online Multi-target Tracking with Aggregated Local Flow Descriptor. ICCV 2015.\\
SST [st] & st & 17.71 \% & 65.22 \% & 9.97 \% & 67.01 \% & 110 & 674 & 1 s / 1 core & \\
Complexer-YOLO & la gp on & 16.46 \% & 62.69 \% & 2.41 \% & 38.14 \% & 527 & 1636 & 0.01 s / GPU & M. Simon, K. Amende, A. Kraus, J. Honer, T. Sämann, H. Kaulbersch, S. Milz and H. Michael Gross: Complexer-YOLO: Real-Time 3D Object Detection and Tracking on Semantic Point Clouds. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops 2019.
\end{tabular}
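For reference, the MOTA and MOTP columns follow the standard CLEAR MOT definitions (Bernardin and Stiefelhagen, 2008), sketched here under the usual notation: $\mathrm{FN}_t$, $\mathrm{FP}_t$, $\mathrm{IDSW}_t$, and $\mathrm{GT}_t$ denote the false negatives, false positives, identity switches, and ground-truth objects in frame $t$, while $d_{t,i}$ is the alignment error of match $i$ in frame $t$ and $c_t$ the number of matches in that frame:
\begin{align}
\mathrm{MOTA} &= 1 - \frac{\sum_t \left(\mathrm{FN}_t + \mathrm{FP}_t + \mathrm{IDSW}_t\right)}{\sum_t \mathrm{GT}_t},\\
\mathrm{MOTP} &= \frac{\sum_{t,i} d_{t,i}}{\sum_t c_t}.
\end{align}
Note that MOTA can be negative when the combined error count exceeds the number of ground-truth objects, which is why very low entries in the table are still valid scores.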