Optical Flow Evaluation 2015


The stereo 2015 / flow 2015 / scene flow 2015 benchmark consists of 200 training scenes and 200 test scenes (4 color images per scene, saved in lossless PNG format). Compared to the stereo 2012 and flow 2012 benchmarks, it comprises dynamic scenes for which the ground truth has been established in a semi-automatic process. Our evaluation server computes the percentage of bad pixels averaged over all ground truth pixels of all 200 test images. For this benchmark, we consider a pixel to be correctly estimated if the disparity or flow end-point error is <3px or <5% (for scene flow, this criterion needs to be fulfilled for both disparity maps and the flow map). We require that all methods use the same parameter set for all test pairs. Our development kit provides details about the data format as well as MATLAB / C++ utility functions for reading and writing disparity maps and flow fields. More details can be found in Object Scene Flow for Autonomous Vehicles (CVPR 2015).
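The 3px / 5% criterion above can be sketched in a few lines: a pixel counts as an outlier only if its end-point error exceeds both thresholds. A minimal NumPy illustration (not the official devkit code; the function and array names are ours):

```python
import numpy as np

def flow_outlier_rate(flow_est, flow_gt, valid):
    """Fraction of valid ground-truth pixels whose end-point error
    exceeds BOTH 3 px and 5% of the ground-truth flow magnitude
    (the KITTI 2015 outlier criterion). Arrays are (H, W, 2) flow
    fields and an (H, W) boolean ground-truth validity mask."""
    epe = np.linalg.norm(flow_est - flow_gt, axis=-1)
    mag = np.linalg.norm(flow_gt, axis=-1)
    # Correct if epe < 3 px OR epe < 5% of magnitude, so an
    # outlier must fail both thresholds.
    outlier = (epe > 3.0) & (epe > 0.05 * mag)
    return float(outlier[valid].mean())
```

The same test applied to disparity maps yields D1/D2; applying it jointly to both disparities and the flow yields the SF measure.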

Our evaluation table ranks all methods according to the number of erroneous pixels. For all methods providing less than 100% density, the missing values have been interpolated using simple background interpolation, as explained in the corresponding header file in the development kit. Legend:

  • D1: Percentage of stereo disparity outliers in first frame
  • D2: Percentage of stereo disparity outliers in second frame
  • Fl: Percentage of optical flow outliers
  • SF: Percentage of scene flow outliers (=outliers in either D1, D2 or Fl)
  • bg: Percentage of outliers averaged only over background regions
  • fg: Percentage of outliers averaged only over foreground regions
  • all: Percentage of outliers averaged over all ground truth pixels
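As a rough illustration of how sub-100%-density results are densified before evaluation, the sketch below fills each invalid pixel from the nearest valid pixel in its row. This is a simplification made for illustration (the devkit's background interpolation in the header file is the authoritative scheme), and `fill_invalid_rowwise` is our own name:

```python
import numpy as np

def fill_invalid_rowwise(values, valid):
    """Fill invalid entries of a per-pixel map from the nearest
    valid entry in the same row (a simplified stand-in for the
    devkit's background interpolation). `values` is (H, W),
    `valid` an (H, W) boolean mask; rows with no valid pixel
    are left untouched."""
    filled = values.copy()
    cols = np.arange(values.shape[1])
    for r in range(values.shape[0]):
        idx = np.flatnonzero(valid[r])
        if idx.size == 0:
            continue
        # For every column, index of the nearest valid column.
        nearest = idx[np.argmin(np.abs(cols[:, None] - idx[None, :]), axis=1)]
        filled[r] = values[r, nearest]
    return filled
```

After such filling, every submission can be evaluated on all ground-truth pixels, which is why the Density column below reads 100.00% for interpolated methods as well.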


Note: On 13.03.2017 we fixed several small errors in the flow (noc+occ) ground truth of the dynamic foreground objects and manually verified all images for correctness by warping them according to the ground truth. As a consequence, all error numbers have decreased slightly. If you downloaded the files prior to 13.03.2017, please download the devkit and the annotations with the improved ground truth for the training set again, and consider reporting these new numbers in all future publications. The last leaderboards before these corrections can be found here (optical flow 2015) and here (scene flow 2015). The leaderboards for the KITTI 2015 stereo benchmarks did not change.
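The warping check mentioned above — sampling the second frame at positions displaced by the ground-truth flow and comparing the result against the first frame — can be sketched as follows. Nearest-neighbor sampling is used for brevity (bilinear sampling would be the usual choice), and the function name is ours:

```python
import numpy as np

def warp_backward(image2, flow):
    """Warp the second frame toward the first using a flow field:
    each first-frame pixel (x, y) samples image2 at (x + u, y + v)
    with nearest-neighbor rounding. Out-of-bounds targets are set
    to zero. If the flow is correct, the result should resemble
    the first frame wherever the scene is unoccluded."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.rint(xs + flow[..., 0]).astype(int)
    ty = np.rint(ys + flow[..., 1]).astype(int)
    inside = (tx >= 0) & (tx < w) & (ty >= 0) & (ty < h)
    warped = np.zeros_like(image2)
    warped[inside] = image2[ty[inside], tx[inside]]
    return warped
```

Large photometric residuals between the warped second frame and the first frame (outside occlusions) flag exactly the kind of ground-truth errors that were corrected in the 13.03.2017 update.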

Important Policy Update: As more and more non-published work and re-implementations of existing work are submitted to KITTI, we have established a new policy: from now on, only submissions with significant novelty that lead to a peer-reviewed paper in a conference or journal are allowed. Minor modifications of existing algorithms and student research projects are not allowed; such work must be evaluated on a split of the training set instead. To ensure that our policy is adopted, new users must detail their status, describe their work and specify the targeted venue during registration. Furthermore, we will regularly delete all entries that are 6 months old but are still anonymous or do not have a paper associated with them. For conferences, 6 months are usually sufficient to determine whether a paper has been accepted and to add the bibliographic information. For longer review cycles, you need to resubmit your results.
Additional information used by the methods
  • Stereo: Method uses left and right (stereo) images
  • Multiview: Method uses more than 2 temporally adjacent images
  • Motion stereo: Method uses epipolar geometry for computing optical flow
  • Additional training data: Use of additional data sources for training (see details)

Evaluation ground truth        Evaluation area

Method Setting Code Fl-bg Fl-fg Fl-all Density Runtime Environment
1 CamLiFlow++
This method uses stereo information.
2.07 % 6.77 % 2.85 % 100.00 % 1 s GPU @ 2.5 Ghz (Python + C/C++)
2 CamLiFlow
This method uses stereo information.
code 2.31 % 7.04 % 3.10 % 100.00 % 1.2 s GPU @ 2.5 Ghz (Python + C/C++)
H. Liu, T. Lu, Y. Xu, J. Liu, W. Li and L. Chen: CamLiFlow: Bidirectional Camera-LiDAR Fusion for Joint Optical Flow and Scene Flow Estimation. CVPR 2022.
3 M-FUSE
This method uses stereo information.
This method makes use of multiple (>2) views.
code 2.66 % 7.47 % 3.46 % 100.00 % 1.3 s GPU
L. Mehl, A. Jahedi, J. Schmalfuss and A. Bruhn: M-FUSE: Multi-frame Fusion for Scene Flow Estimation. Proc. Winter Conference on Applications of Computer Vision (WACV) 2023.
4 RigidMask+ISF
This method uses stereo information.
code 2.63 % 7.85 % 3.50 % 100.00 % 3.3 s GPU @ 2.5 Ghz (Python)
G. Yang and D. Ramanan: Learning to Segment Rigid Motions from Two Frames. CVPR 2021.
5 TPCV+RAFT3D
This method uses stereo information.
2.48 % 10.19 % 3.76 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
6 RAFT-it+_RVC 3.62 % 5.33 % 3.90 % 100.00 % 0.14 s 1 core @ 2.5 Ghz (Python)
7 RAFT-OCTC 3.72 % 5.39 % 4.00 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
J. Jeong, J. Lin, F. Porikli and N. Kwak: Imposing Consistency for Optical Flow Estimation (Qualcomm AI Research). CVPR 2022.
8 SF2SE3
This method uses stereo information.
code 3.17 % 8.79 % 4.11 % 100.00 % 2.7 s GPU @ >3.5 Ghz (Python)
L. Sommer, P. Schröppel and T. Brox: SF2SE3: Clustering Scene Flow into SE(3)-Motions via Proposal and Selection. DAGM German Conference on Pattern Recognition 2022.
9 RAFT-CF-PL3 3.80 % 5.65 % 4.11 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
Z. Zhang, P. Ji, N. Bansal, C. Cai, Q. Yan, X. Xu and Y. Xu: CLIP-FLow: Contrastive Learning by semi-supervised Iterative Pseudo labeling for Optical Flow Estimation. 2022.
10 RAFT-S-AF code 3.86 % 5.38 % 4.12 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
11 MS_RAFT+_corr_RVC code 3.83 % 5.71 % 4.15 % 100.00 % 0.65 s GPU @ 2.5 Ghz (Python + C/C++)
A. Jahedi, M. Luz, L. Mehl, M. Rivinius and A. Bruhn: High Resolution Multi-Scale RAFT. Robust Vision Challenge 2022, arXiv preprint arXiv:2210.16900 2022.
A. Jahedi, L. Mehl, M. Rivinius and A. Bruhn: Multi-Scale Raft: Combining Hierarchical Concepts for Learning-Based Optical Flow Estimation. IEEE International Conference on Image Processing (ICIP) 2022.
12 MS_RAFT+_RVC 3.89 % 5.67 % 4.19 % 100.00 % 0.65 s GPU @ 2.5 Ghz (Python + C/C++)
13 DIP code 3.86 % 5.96 % 4.21 % 100.00 % 0.15 s 1 core @ 2.5 Ghz (Python)
Z. Zheng, N. Nie, Z. Ling, P. Xiong, J. Liu, H. Wang and J. Li: DIP: Deep Inverse Patchmatch for High-Resolution Optical Flow. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022.
14 RAFT-3D
This method uses stereo information.
3.39 % 8.79 % 4.29 % 100.00 % 2 s GPU @ 2.5 Ghz (Python + C/C++)
Z. Teed and J. Deng: RAFT-3D: Scene Flow using Rigid-Motion Embeddings. arXiv preprint arXiv:2012.00726 2020.
15 RAFT-it 4.11 % 5.34 % 4.31 % 100.00 % 0.1 s GPU @ 2.5 Ghz (Python)
16 RCA-Flow 3.96 % 6.21 % 4.33 % 100.00 % 0.16 s 1 core @ 2.5 Ghz (Python)
17 GMFlow_RVC code 4.16 % 5.67 % 4.41 % 100.00 % 0.2 s GPU (Python)
H. Xu, J. Zhang, J. Cai, H. Rezatofighi, F. Yu, D. Tao and A. Geiger: Unifying Flow, Stereo and Depth Estimation. arXiv preprint arXiv:2211.05783 2022.
18 AnyFlow 4.15 % 5.76 % 4.41 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (Python)
19 CCH-Flow 4.20 % 5.50 % 4.42 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
20 GMFlow+ code 4.27 % 5.60 % 4.49 % 100.00 % 0.2 s GPU (Python)
H. Xu, J. Zhang, J. Cai, H. Rezatofighi, F. Yu, D. Tao and A. Geiger: Unifying Flow, Stereo and Depth Estimation. arXiv preprint arXiv:2211.05783 2022.
21 GAFlow 4.15 % 6.36 % 4.52 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
22 SeparableFlow code 4.25 % 5.92 % 4.53 % 100.00 % 0.5 s GPU
F. Zhang, O. Woodford, V. Prisacariu and P. Torr: Separable Flow: Learning Motion Cost Volumes for Optical Flow Estimation. Proceedings of the IEEE/CVF International Conference on Computer Vision 2021.
23 KPA-Flow 4.17 % 6.77 % 4.60 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
A. Luo, F. Yang, X. Li and S. Liu: Learning Optical Flow With Kernel Patch Attention. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022.
24 SwinTR-RAFT code 4.32 % 6.05 % 4.61 % 100.00 % 0.6 s 8 cores @ 2.5 Ghz (Python)
25 MatchFlow(G) 4.33 % 6.11 % 4.63 % 100.00 % 0.3 s GPU (Python)
26 DGA-Flow 4.34 % 6.11 % 4.64 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
27 FCTR-m 4.45 % 5.63 % 4.65 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
28 FlowNAS-RAFT-K 4.36 % 6.25 % 4.67 % 100.00 % 0.19 s GPU @ 2.5 Ghz (Python)
29 FlowFormer code 4.37 % 6.18 % 4.68 % 100.00 % 0.3 s GPU (Python)
Z. Huang, X. Shi, C. Zhang, Q. Wang, K. Cheung, H. Qin, J. Dai and H. Li: FlowFormer: A Transformer Architecture for Optical Flow. European conference on computer vision 2022.
30 CRAFT-intramodes2 code 4.35 % 6.35 % 4.68 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
31 TPCV+RAFT
This method uses stereo information.
4.53 % 5.52 % 4.69 % 100.00 % 0.2 s GPU @ 2.5 Ghz
32 AGM-FlowNet 4.32 % 6.57 % 4.69 % 100.00 % 0.38 s GPU @ 2.5 Ghz (Python)
33 MatchFlow(R) 4.51 % 5.78 % 4.72 % 100.00 % 0.26 s GPU (Python)
34 UberATG-DRISF
This method uses stereo information.
3.59 % 10.40 % 4.73 % 100.00 % 0.75 s CPU+GPU @ 2.5 Ghz (Python)
W. Ma, S. Wang, R. Hu, Y. Xiong and R. Urtasun: Deep Rigid Instance Scene Flow. CVPR 2019.
35 SKII 4.57 % 5.66 % 4.75 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (Python)
36 ErrorMatch-RAFT code 4.46 % 6.23 % 4.75 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
37 ErrorMatch-GMA code 4.53 % 5.87 % 4.75 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (C/C++)
38 Super 4.43 % 6.43 % 4.76 % 100.00 % 0.07 s GPU @ 2.5 Ghz (Python)
39 RAFT-A code 4.54 % 5.99 % 4.78 % 100.00 % 0.7 s GPU @ 2.5 Ghz (Python + C/C++)
D. Sun, D. Vlasic, C. Herrmann, V. Jampani, M. Krainin, H. Chang, R. Zabih, W. Freeman and C. Liu: AutoFlow: Learning a Better Training Set for Optical Flow. CVPR 2021.
40 EMD-Flow 4.41 % 6.65 % 4.78 % 100.00 % 0.11 s GPU @ 2.5 Ghz (Python)
41 CRAFT code 4.58 % 5.85 % 4.79 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
X. Sui, S. Li, X. Geng, Y. Wu, X. Xu, Y. Liu, R. Goh and H. Zhu: CRAFT: Cross-Attentional Flow Transformers for Robust Optical Flow. CVPR 2022.
42 GMFlowNet code 4.39 % 6.84 % 4.79 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
S. Zhao, L. Zhao, Z. Zhang, E. Zhou and D. Metaxas: Global Matching with Overlapping Attention for Optical Flow Estimation. CVPR 2022.
43 SKFlow code 4.64 % 5.83 % 4.84 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
44 RAFT-DFlow 4.52 % 6.48 % 4.84 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
45 MSAF 4.67 % 5.79 % 4.86 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
46 MS_RAFT 4.58 % 6.38 % 4.88 % 100.00 % 0.3 s GPU: Nvidia A100 (Python)
A. Jahedi, L. Mehl, M. Rivinius and A. Bruhn: Multi-Scale Raft: Combining Hierarchical Concepts for Learning-Based Optical Flow Estimation. IEEE International Conference on Image Processing (ICIP) 2022.
47 AGFlow code 4.52 % 6.75 % 4.89 % 100.00 % 0.2 s 8 cores @ 2.5 Ghz (Python)
A. Luo, F. Yang, K. Luo, X. Li, H. Fan and S. Liu: Learning Optical Flow with Adaptive Graph Reasoning. AAAI 2022.
48 DEQ-Flow-H code 4.68 % 6.06 % 4.91 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
S. Bai, Z. Geng, Y. Savani and Z. Kolter: Deep Equilibrium Optical Flow Estimation. CVPR 2022.
49 RR-RAFT code 4.63 % 6.29 % 4.91 % 100.00 % 1 s 1 core @ 2.5 Ghz (Python)
50 optical_flow3D
This method uses stereo information.
4.78 % 5.77 % 4.94 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
51 GLFlow 4.55 % 6.89 % 4.94 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
52 cgcv code 4.61 % 6.68 % 4.96 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
53 MCPFlow_RVC 4.71 % 6.27 % 4.97 % 100.00 % 0.38 s GPU @ 2.5 Ghz (Python)
54 CVM2 code 4.72 % 6.24 % 4.97 % 100.00 % 0.27 s GPU @ 2.5 Ghz (Python)
55 DFlow-test 4.65 % 6.60 % 4.98 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
56 CVM code 4.71 % 6.34 % 4.98 % 100.00 % 0.20 s GPU @ 2.5 Ghz (C/C++)
57 GMA-RECLoss 4.64 % 6.75 % 4.99 % 100.00 % 0.11 s 1 core @ 2.5 Ghz (Python)
58 IIN code 4.90 % 5.50 % 5.00 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
59 CSFlow code 4.71 % 6.46 % 5.00 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
H. Shi, Y. Zhou, K. Yang, X. Yin and K. Wang: CSFlow: Learning Optical Flow via Cross Strip Correlation for Autonomous Driving. arXiv preprint arXiv:2202.00909 2022.
60 opl 4.69 % 6.61 % 5.01 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
61 SSTM 4.58 % 7.20 % 5.02 % 100.00 % 0.5 s GPU @ 2.5 Ghz (Python)
62 newmfc 4.75 % 6.40 % 5.02 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
63 RAFT-LC 4.67 % 6.88 % 5.04 % 100.00 % 0.24 s 1 core @ 2.5 Ghz (Python)
64 SSTM+[mv] 4.64 % 7.04 % 5.04 % 100.00 % 0.4 s GPU @ 2.5 Ghz (Python)
65 INN code 4.71 % 6.73 % 5.05 % 100.00 % 0.27 s 1 core @ 2.5 Ghz (C/C++)
66 RAFT-RECLoss 4.73 % 6.61 % 5.05 % 100.00 % 0.13 s 1 core @ 2.5 Ghz (Python)
67 FCTR 4.65 % 7.07 % 5.05 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
68 RAFT+AOIR 4.68 % 6.99 % 5.07 % 100.00 % 10 s GPU @ 2.5 Ghz (Python + C/C++)
L. Mehl, C. Beschle, A. Barth and A. Bruhn: An Anisotropic Selection Scheme for Variational Optical Flow Methods with Order-Adaptive Regularisation. SSVM 2021.
69 RAFT code 4.74 % 6.87 % 5.10 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
Z. Teed and J. Deng: RAFT: Recurrent All-Pairs Field Transforms for Optical Flow. ECCV 2020.
70 GMA+LCT-Flow code 4.75 % 6.86 % 5.10 % 100.00 % 0.65 s 1 core @ 2.5 Ghz (C/C++)
71 CGCV code 4.82 % 6.55 % 5.10 % 100.00 % 0.3 s 1 core @ 2.5 Ghz (Python)
72 RAFT-Original code 4.76 % 6.85 % 5.11 % 100.00 % 0.45 s 1 core @ 2.5 Ghz (C/C++)
73 RAFT-LC+ 4.76 % 6.97 % 5.13 % 100.00 % 2.4 s 1 core @ 2.5 Ghz (C/C++)
74 test 4.89 % 6.47 % 5.15 % 100.00 % 1 s 1 core @ 2.5 Ghz (Python)
75 raft_acn 4.80 % 6.93 % 5.15 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (Python)
76 GMA-test 4.78 % 7.03 % 5.15 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
77 raft_test 4.78 % 7.03 % 5.15 % 100.00 % 0.02 s 1 core @ 2.5 Ghz (C/C++)
78 SFG
This method uses stereo information.
4.84 % 6.85 % 5.17 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
79 raft 4.84 % 7.05 % 5.21 % 100.00 % 0.03 s 1 core @ 2.5 Ghz (Python)
80 GMISF
This method uses stereo information.
4.92 % 6.79 % 5.23 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python + C/C++)
81 GMA-base code 4.90 % 7.00 % 5.25 % 100.00 % 0.5 s GPU @ 1.5 Ghz (Python + C/C++)
82 S_RAFT 5.03 % 6.58 % 5.28 % 100.00 % 2.1 s 1 core @ 2.5 Ghz (Python)
83 Scale flow
This method uses stereo information.
5.24 % 5.71 % 5.32 % 100.00 % 0.8 s GPU @ 2.5 Ghz (Python)
84 PRAFlow_RVC 5.08 % 7.21 % 5.43 % 100.00 % 0.5 s GPU @ NVIDIA RTX 2080Ti (Python)
Z. Wan, Y. Mao and Y. Dai: PRAFlow_RVC: Pyramid Recurrent All-Pairs Field Transforms for Optical Flow Estimation in Robust Vision Challenge 2020. 2020.
85 RAFT-TF_RVC 5.32 % 6.75 % 5.56 % 100.00 % 0.7 s GPU @ 2.5 Ghz (Python)
D. Sun, C. Herrmann, V. Jampani, M. Krainin, F. Cole, A. Stone, R. Jonschkowski, R. Zabih, W. Freeman and C. Liu: A TensorFlow implementation of RAFT. 2020.
86 MRRN 5.38 % 6.51 % 5.57 % 100.00 % 0.05 s 1 core @ 2.5 Ghz (Python)
87 RAFT-Illumination 5.34 % 7.51 % 5.70 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (C/C++)
88 ACOSF
This method uses stereo information.
4.56 % 12.00 % 5.79 % 100.00 % 5 min 1 core @ 3.0 Ghz (Matlab + C/C++)
C. Li, H. Ma and Q. Liao: Two-Stage Adaptive Object Scene Flow Using Hybrid CNN-CRF Model. International Conference on Pattern Recognition (ICPR) 2020.
89 Scale-flow-split
This method uses stereo information.
5.62 % 6.93 % 5.84 % 100.00 % 1.6 s GPU @ 2.5 Ghz
90 PPAC-HD3 code 5.78 % 7.48 % 6.06 % 100.00 % 0.19 s NVIDIA GTX 1080 Ti
A. Wannenwetsch and S. Roth: Probabilistic Pixel-Adaptive Refinement Networks. CVPR 2020.
91 GMA-base2 code 5.47 % 9.16 % 6.09 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (C/C++)
92 MaskFlownet code 5.79 % 7.70 % 6.11 % 100.00 % 0.06 s NVIDIA TITAN Xp
S. Zhao, Y. Sheng, Y. Dong, E. Chang and Y. Xu: MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
93 RAFT+LCT-Flow code 5.49 % 9.19 % 6.11 % 100.00 % 0.65 s GPU @ 1.5 Ghz (Python + C/C++)
J. Chen: Motion Estimation with L0 norm Regularization (Extended Version). IEEE 7th International Conference on Virtual Reality (ICVR) 2021.
94 vcn_finetune_245999 5.72 % 8.64 % 6.21 % 100.00 % 0.01 s 1 core @ 2.5 Ghz (Python)
95 ISF
This method uses stereo information.
5.40 % 10.29 % 6.22 % 100.00 % 10 min 1 core @ 3 Ghz (C/C++)
A. Behl, O. Jafari, S. Mustikovela, H. Alhaija, C. Rother and A. Geiger: Bounding Boxes, Segmentations and Object Coordinates: How Important is Recognition for 3D Scene Flow Estimation in Autonomous Driving Scenarios? International Conference on Computer Vision (ICCV) 2017.
96 VCN+LCV code 5.75 % 8.80 % 6.25 % 100.00 % 0.26 s 1 core @ 2.5 Ghz (Python)
T. Xiao, J. Yuan, D. Sun, Q. Wang, X. Zhang, K. Xu and M. Yang: Learnable Cost Volume using the Cayley Representation. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
97 RAFT+LCV code 5.73 % 8.90 % 6.26 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (C/C++)
T. Xiao, J. Yuan, D. Sun, Q. Wang, X. Zhang, K. Xu and M. Yang: Learnable Cost Volume using the Cayley Representation. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
98 RAFT-base code 5.69 % 9.26 % 6.28 % 100.00 % 0.5 s 1 core @ 2.5 Ghz (C/C++)
99 PRichFlow 6.18 % 6.89 % 6.30 % 100.00 % 0.1 s TITAN X MAXWELL
X. Wang, D. Zhu, J. Song, Y. Liu, J. Li and X. Zhang: Richer Aggregated Features for Optical Flow Estimation with Edge-aware Refinement.
100 VCN code 5.83 % 8.66 % 6.30 % 100.00 % 0.18 s Titan X Pascal
G. Yang and D. Ramanan: Volumetric Correspondence Networks for Optical Flow. NeurIPS 2019.
101 Stereo expansion
This method uses stereo information.
code 5.83 % 8.66 % 6.30 % 100.00 % 2 s GPU @ 2.5 Ghz (Python)
G. Yang and D. Ramanan: Upgrading Optical Flow to 3D Scene Flow through Optical Expansion. CVPR 2020.
102 Binary TTC
This method uses stereo information.
5.84 % 8.67 % 6.31 % 100.00 % 2 s GPU @ 1.0 Ghz (Python)
A. Badki, O. Gallo, J. Kautz and P. Sen: Binary TTC: A Temporal Geofence for Autonomous Navigation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
103 MonoComb
This method uses stereo information.
5.84 % 8.67 % 6.31 % 100.00 % 0.58 s RTX 2080 Ti
R. Schuster, C. Unger and D. Stricker: MonoComb: A Sparse-to-Dense Combination Approach for Monocular Scene Flow. ACM Computer Science in Cars Symposium (CSCS) 2020.
104 HD^3-Flow code 6.05 % 9.02 % 6.55 % 100.00 % 0.10 s NVIDIA Pascal Titan XP
Z. Yin, T. Darrell and F. Yu: Hierarchical Discrete Distribution Decomposition for Match Density Estimation. CVPR 2019.
105 PRSM
This method uses stereo information.
This method makes use of multiple (>2) views.
code 5.33 % 13.40 % 6.68 % 100.00 % 300 s 1 core @ 2.5 Ghz (C/C++)
C. Vogel, K. Schindler and S. Roth: 3D Scene Flow Estimation with a Piecewise Rigid Scene Model. IJCV 2015.
106 RAFT-SA code 5.90 % 11.09 % 6.76 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
107 MaskFlownet-S code 6.53 % 8.21 % 6.81 % 100.00 % 0.03 s NVIDIA TITAN Xp
S. Zhao, Y. Sheng, Y. Dong, E. Chang and Y. Xu: MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
108 ScopeFlow code 6.72 % 7.36 % 6.82 % 100.00 % -1 s Nvidia GPU
A. Bar-Haim and L. Wolf: ScopeFlow: Dynamic Scene Scoping for Optical Flow. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
109 SMURF code 6.04 % 10.75 % 6.83 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
A. Stone, D. Maurer, A. Ayvaci, A. Angelova and R. Jonschkowski: SMURF: Self-Teaching Multi-Frame Unsupervised RAFT With Full-Image Warping. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021.
110 RAFT-VM 6.49 % 8.65 % 6.85 % 100.00 % 0.4 s GPU @ 2.5 Ghz (C/C++)
111 OSF+TC
This method uses stereo information.
This method makes use of multiple (>2) views.
5.76 % 13.31 % 7.02 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Neoral and J. Šochman: Object Scene Flow with Temporal Consistency. 22nd Computer Vision Winter Workshop (CVWW) 2017.
112 IRR-full 6.99 % 7.57 % 7.09 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
113 DPCTF-F 7.22 % 6.47 % 7.09 % 100.00 % 0.07 s GPU @ 2.5 Ghz (C/C++)
Y. Deng, J. Xiao, S. Zhou and J. Feng: Detail Preserving Coarse-to-Fine Matching for Stereo Matching and Optical Flow. IEEE Transactions on Image Processing 2021.
114 SSF
This method uses stereo information.
5.63 % 14.71 % 7.14 % 100.00 % 5 min 1 core @ 2.5 Ghz (Matlab + C/C++)
Z. Ren, D. Sun, J. Kautz and E. Sudderth: Cascaded Scene Flow Prediction using Semantic Segmentation. International Conference on 3D Vision (3DV) 2017.
115 MFF
This method makes use of multiple (>2) views.
7.15 % 7.25 % 7.17 % 100.00 % 0.05 s NVIDIA Pascal Titan X (Python)
Z. Ren, O. Gallo, D. Sun, M. Yang, E. Sudderth and J. Kautz: A Fusion Approach for Multi-Frame Optical Flow Estimation. IEEE Winter Conference on Applications of Computer Vision 2019.
116 LiteFlowNet3-S code 7.27 % 6.96 % 7.22 % 100.00 % 0.07s GTX 1080 (slower than Titan X Pascal)
T. Hui and C. Loy: LiteFlowNet3: Resolving Correspondence Ambiguity for More Accurate Optical Flow Estimation. European Conference on Computer Vision (ECCV) 2020.
117 PMC-PWC code 7.27 % 6.94 % 7.22 % 100.00 % TBD s GPU @ 2.5 Ghz (Python)
C. Zhang, C. Feng, Z. Chen, W. Hu and M. Li: Parallel multiscale context-based edge- preserving optical flow estimation with occlusion detection. Signal Processing: Image Communication 2022.
118 SwiftFlow 6.85 % 9.11 % 7.23 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python)
H. Wang, Y. Liu, H. Huang, Y. Pan, W. Yu, J. Jiang, D. Lyu, M. Bocus, M. Liu, I. Pitas and others: ATG-PVD: Ticketing parking violations on a drone. European Conference on Computer Vision 2020.
119 IRR-deconv code 7.28 % 7.30 % 7.29 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
120 IRR-docs code 7.43 % 6.65 % 7.30 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
121 LiteFlowNet3 code 7.26 % 7.75 % 7.34 % 100.00 % 0.07s GTX 1080 (slower than Titan X Pascal)
T. Hui and C. Loy: LiteFlowNet3: Resolving Correspondence Ambiguity for More Accurate Optical Flow Estimation. European Conference on Computer Vision (ECCV) 2020.
122 OSF 2018
This method uses stereo information.
code 5.38 % 17.61 % 7.41 % 100.00 % 390 s 1 core @ 2.5 Ghz (Matlab + C/C++)
M. Menze, C. Heipke and A. Geiger: Object Scene Flow. ISPRS Journal of Photogrammetry and Remote Sensing (JPRS) 2018.
123 IRR-CS-full code 7.58 % 7.56 % 7.58 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
124 LiteFlowNet2 code 7.62 % 7.64 % 7.62 % 100.00 % 0.0486 s GTX 1080 (slower than Titan X Pascal)
T. Hui, X. Tang and C. Loy: A Lightweight Optical Flow CNN - Revisiting Data Fidelity and Regularization. TPAMI 2020.
125 SENSE
This method uses stereo information.
code 7.30 % 9.33 % 7.64 % 100.00 % 0.32s GPU, GTX 2080Ti
H. Jiang, D. Sun, V. Jampani, Z. Lv, E. Learned-Miller and J. Kautz: SENSE: A Shared Encoder Network for Scene-Flow Estimation. The IEEE International Conference on Computer Vision (ICCV) 2019.
126 IRR-PWC code 7.68 % 7.52 % 7.65 % 100.00 % 0.18 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation. CVPR 2019.
127 STaRFlow code 7.51 % 8.35 % 7.65 % 100.00 % 0.24 s GPU @ 2.0 Ghz (Python)
P. Godet, A. Boulch, A. Plyer and G. Besnerais: STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame Optical Flow Estimation. ICPR 2020.
128 DTF_SENSE
This method uses stereo information.
This method makes use of multiple (>2) views.
7.31 % 9.48 % 7.67 % 100.00 % 0.76 s 1 core @ 2.5 Ghz (C/C++)
R. Schuster, C. Unger and D. Stricker: A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions. IEEE Winter Conference on Applications of Computer Vision (WACV) 2021.
129 IRR-CS0829 code 7.74 % 7.58 % 7.71 % 100.00 % 0.2 s GPU @ 2.5 Ghz (Python)
130 PWC-Net+ code 7.69 % 7.88 % 7.72 % 100.00 % 0.03 s NVIDIA Pascal Titan X
D. Sun, X. Yang, M. Liu and J. Kautz: Models Matter, So Does Training: An Empirical Study of CNNs for Optical Flow Estimation. arXiv preprint arXiv:1809.05571 2018.
131 IRR-CC 7.79 % 7.92 % 7.81 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
132 OSF
This method uses stereo information.
code 5.62 % 18.92 % 7.83 % 100.00 % 50 min 1 core @ 2.5 Ghz (C/C++)
M. Menze and A. Geiger: Object Scene Flow for Autonomous Vehicles. Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
133 Separable-Sim2real 7.30 % 11.01 % 7.92 % 100.00 % 0.25 s GPU
F. Zhang, O. Woodford, V. Prisacariu and P. Torr: Separable Flow: Learning Motion Cost Volumes for Optical Flow Estimation. Proceedings of the IEEE/CVF International Conference on Computer Vision 2021.
134 pwc_test 7.55 % 10.65 % 8.06 % 100.00 % 0.09 s 1 core @ 2.5 Ghz (Python)
135 pwc_another 7.57 % 10.65 % 8.09 % 100.00 % 0.09 s 1 core @ 2.5 Ghz (Python)
136 BSF
This method uses stereo information.
5.80 % 20.56 % 8.25 % 100.00 % 162 s 1 core @ 2.5 Ghz (Matlab)
137 IRR-CS code 8.45 % 7.39 % 8.27 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
138 LSM_FLOW_RVC code 7.33 % 13.06 % 8.28 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (C/C++)
C. Tang, L. Yuan and P. Tan: LSM: Learning Subspace Minimization for Low-Level Vision. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
139 AL-OF_r0.2 code 7.25 % 13.53 % 8.30 % 100.00 % 0.1 s 1 core @ 2.5 Ghz (Python)
S. Yuan, X. Sun, H. Kim, S. Yu and C. Tomasi: Optical Flow Training Under Limited Label Budget via Active Learning. ECCV 2022.
140 IRR-PWC_RVC code 7.61 % 12.22 % 8.38 % 100.00 % 0.18 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Iterative Residual Refinement for Joint Optical Flow and Occlusion Estimation. CVPR 2019.
141 SelFlow
This method makes use of multiple (>2) views.
7.61 % 12.48 % 8.42 % 100.00 % 0.09 s GPU @ 2.5 Ghz (Python)
P. Liu, M. Lyu, I. King and J. Xu: SelFlow: Self-Supervised Learning of Optical Flow. CVPR 2019.
142 RAFT-MSF-ft
This method uses stereo information.
8.35 % 11.02 % 8.80 % 100.00 % 0.18 s 1 core @ 2.5 Ghz (Python)
143 MDFlow 8.14 % 12.80 % 8.91 % 100.00 % 0.03 s NVIDIA GTX 1080 Ti
L. Kong and J. Yang: MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation. IEEE Transactions on Circuits and Systems for Video Technology 2022.
144 ULDENet 7.81 % 15.74 % 9.13 % 100.00 % 0.05 s GPU @ >3.5 Ghz (Python)
145 GMFlow code 9.67 % 7.57 % 9.32 % 100.00 % 0.071 s A100 GPU (Python)
H. Xu, J. Zhang, J. Cai, H. Rezatofighi and D. Tao: GMFlow: Learning Optical Flow via Global Matching. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022.
146 FDFlowNet 9.31 % 9.71 % 9.38 % 100.00 % 0.02 s NVIDIA GTX 1080 Ti
L. Kong and J. Yang: FDFlowNet: Fast Optical Flow Estimation using a Deep Lightweight Network. IEEE International Conference on Image Processing (ICIP) 2020.
147 LiteFlowNet code 9.66 % 7.99 % 9.38 % 100.00 % 0.0885 s GTX 1080 (slower than Titan X Pascal)
T. Hui, X. Tang and C. Loy: LiteFlowNet: A Lightweight Convolutional Neural Network for Optical Flow Estimation. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018.
148 PWC-Net code 9.66 % 9.31 % 9.60 % 100.00 % 0.03 s NVIDIA Pascal Titan X
D. Sun, X. Yang, M. Liu and J. Kautz: PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. CVPR 2018.
149 DCVNet 9.08 % 12.33 % 9.62 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python + C/C++)
150 ContinualFlow_ROB
This method makes use of multiple (>2) views.
8.54 % 17.48 % 10.03 % 100.00 % 0.15 s GPU - NVidia 1080Ti
M. Neoral, J. Šochman and J. Matas: Continual Occlusions and Optical Flow Estimation. 14th Asian Conference on Computer Vision (ACCV) 2018.
151 VCN_RVC code 8.53 % 18.30 % 10.15 % 100.00 % 0.36 s GPU @ 2.5 Ghz (Python)
G. Yang and D. Ramanan: Volumetric Correspondence Networks for Optical Flow. NeurIPS 2019.
152 NccFLow 8.81 % 17.36 % 10.24 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (C/C++)
G. Wang, S. Ren and H. Wang: NccFlow: Unsupervised Learning of Optical Flow With Non-occlusion from Geometry. arXiv preprint arXiv:2107.03610 2021.
153 MirrorFlow code 8.93 % 17.07 % 10.29 % 100.00 % 11 min 4 cores @ 2.2 Ghz (C/C++)
J. Hur and S. Roth: MirrorFlow: Exploiting Symmetries in Joint Optical Flow and Occlusion Estimation. ICCV 2017.
154 CoT-AMFlow 10.02 % 11.95 % 10.34 % 100.00 % 0.06 s GPU @ 2.5 Ghz (Python)
H. Wang, R. Fan and M. Liu: CoT-AMFlow: Adaptive Modulation Network with Co-Teaching Strategy for Unsupervised Optical Flow Estimation. Conference on Robot Learning (CoRL) 2020.
155 DWARF
This method uses stereo information.
9.80 % 13.37 % 10.39 % 100.00 % 0.14s - 1.43s TitanXP - JetsonTX2
F. Aleotti, M. Poggi, F. Tosi and S. Mattoccia: Learning end-to-end scene flow by distilling single tasks knowledge. Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) 2020.
156 FlowNet2 code 10.75 % 8.75 % 10.41 % 100.00 % 0.1 s GPU @ 2.5 Ghz (C/C++)
E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy and T. Brox: FlowNet 2.0: Evolution of Optical Flow Estimation with Deep Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017.
157 sub_pnp 9.77 % 16.46 % 10.88 % 100.00 % 0.02 s 1 core @ 2.5 Ghz (C/C++)
158 SDF 8.61 % 23.01 % 11.01 % 100.00 % TBA 1 core @ 2.5 Ghz (C/C++)
M. Bai*, W. Luo*, K. Kundu and R. Urtasun: Exploiting Semantic Information and Deep Matching for Optical Flow. ECCV 2016.
159 Flow2Stereo 9.99 % 16.67 % 11.10 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
P. Liu, I. King, M. Lyu and J. Xu: Flow2Stereo: Effective Self-Supervised Learning of Optical Flow and Stereo Matching. CVPR 2020.
160 UnFlow code 10.15 % 15.93 % 11.11 % 100.00 % 0.12 s GPU @ 1.5 Ghz (Python + C/C++)
S. Meister, J. Hur and S. Roth: UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss. AAAI 2018.
161 UFlow code 9.78 % 17.87 % 11.13 % 100.00 % 0.04 s 1 core @ 2.5 Ghz (C/C++)
R. Jonschkowski, A. Stone, J. Barron, A. Gordon, K. Konolige and A. Angelova: What Matters in Unsupervised Optical Flow. ECCV 2020.
162 trail1 11.38 % 10.10 % 11.17 % 100.00 % 0.56 s 1 core @ 2.5 Ghz (C/C++)
163 FastFlowNet code 11.20 % 11.30 % 11.22 % 100.00 % 0.01 s NVIDIA GTX 1080 Ti
L. Kong, C. Shen and J. Yang: FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation. 2021 IEEE International Conference on Robotics and Automation (ICRA) 2021.
164 FSF+MS
This method uses stereo information.
This method makes use of the epipolar geometry.
This method makes use of multiple (>2) views.
8.48 % 25.43 % 11.30 % 100.00 % 2.7 s 4 cores @ 3.5 Ghz (C/C++)
T. Taniai, S. Sinha and Y. Sato: Fast Multi-frame Stereo Scene Flow with Motion Segmentation. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017) 2017.
165 MDFlow-Fast 10.75 % 14.81 % 11.43 % 100.00 % 0.01 s NVIDIA GTX 1080 Ti
L. Kong and J. Yang: MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation. IEEE Transactions on Circuits and Systems for Video Technology 2022.
166 CNNF+PMBP 10.08 % 18.56 % 11.49 % 100.00 % 45 min 1 core @ 3.5 Ghz (C/C++)
F. Zhang and B. Wah: Fundamental Principles on Learning New Features for Effective Dense Matching. IEEE Transactions on Image Processing 2018.
167 PWC-Net_RVC code 11.22 % 13.69 % 11.63 % 100.00 % 0.03 s NVIDIA Pascal Titan X
D. Sun, X. Yang, M. Liu and J. Kautz: PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume. CVPR 2018.
168 SFF++
This method uses stereo information.
This method makes use of multiple (>2) views.
10.63 % 17.48 % 11.77 % 100.00 % 78 s 4 cores @ 3.5 Ghz (C/C++)
R. Schuster, O. Wasenmüller, C. Unger, G. Kuschk and D. Stricker: SceneFlowFields++: Multi-frame Matching, Visibility Prediction, and Robust Interpolation for Scene Flow Estimation. International Journal of Computer Vision (IJCV) 2019.
169 SfM-PM
This method makes use of multiple (>2) views.
9.66 % 22.73 % 11.83 % 100.00 % 69 s 3 cores @ 3.6 Ghz (C/C++)
D. Maurer, N. Marniok, B. Goldluecke and A. Bruhn: Structure-from-Motion-Aware PatchMatch for Adaptive Optical Flow Estimation. ECCV 2018.
170 Self-SuperFlow-ft
This method uses stereo information.
10.65 % 19.44 % 12.12 % 100.00 % 0.13 s GTX 1080 Ti
K. Bendig, R. Schuster and D. Stricker: Self-SuperFlow: Self-supervised Scene Flow Prediction in Stereo Sequences. International Conference on Image Processing (ICIP) 2022.
171 MR-Flow
This method makes use of multiple (>2) views.
code 10.13 % 22.51 % 12.19 % 100.00 % 8 min 1 core @ 2.5 Ghz (Python + C/C++)
J. Wulff, L. Sevilla-Lara and M. Black: Optical Flow in Mostly Rigid Scenes. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 2017.
172 DTF_PWOC
This method uses stereo information.
This method makes use of multiple (>2) views.
10.78 % 19.99 % 12.31 % 100.00 % 0.38 s RTX 2080 Ti
R. Schuster, C. Unger and D. Stricker: A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions. IEEE Winter Conference on Applications of Computer Vision (WACV) 2021.
173 pnp 11.47 % 19.19 % 12.75 % 100.00 % 0.02 s 1 core @ 2.5 Ghz (Python)
174 Mono-SF
This method uses stereo information.
11.40 % 19.64 % 12.77 % 100.00 % 41 s 1 core @ 3.5 Ghz (Matlab + C/C++)
F. Brickwedde, S. Abraham and R. Mester: Mono-SF: Multi-View Geometry meets Single-View Depth for Monocular Scene Flow Estimation of Dynamic Traffic Scenes. Proc. of International Conference on Computer Vision (ICCV) 2019.
175 SceneFFields
This method uses stereo information.
10.58 % 24.41 % 12.88 % 100.00 % 65 s 4 cores @ 3.7 Ghz (C/C++)
R. Schuster, O. Wasenmüller, G. Kuschk, C. Bailer and D. Stricker: SceneFlowFields: Dense Interpolation of Sparse Scene Flow Correspondences. IEEE Winter Conference on Applications of Computer Vision (WACV) 2018.
176 CSF
This method uses stereo information.
10.40 % 25.78 % 12.96 % 100.00 % 80 s 1 core @ 2.5 Ghz (C/C++)
Z. Lv, C. Beall, P. Alcantarilla, F. Li, Z. Kira and F. Dellaert: A Continuous Optimization Approach for Efficient and Accurate Scene Flow. European Conf. on Computer Vision (ECCV) 2016.
177 PWOC-3D
This method uses stereo information.
code 12.40 % 15.78 % 12.96 % 100.00 % 0.13 s GTX 1080 Ti
R. Saxena, R. Schuster, O. Wasenmüller and D. Stricker: PWOC-3D: Deep Occlusion-Aware End-to-End Scene Flow Estimation. Intelligent Vehicles Symposium (IV) 2019.
178 Multi-Mono-SF-ft
This method uses stereo information.
This method makes use of multiple (>2) views.
code 12.41 % 18.20 % 13.37 % 100.00 % 0.06 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Multi-Frame Monocular Scene Flow. CVPR 2021.
179 UnsupSimFlow code 12.60 % 17.27 % 13.38 % 100.00 % 0.03 s 8 cores @ 3.0 Ghz (Python + C/C++)
W. Im, T. Kim and S. Yoon: Unsupervised Learning of Optical Flow with Deep Feature Similarity. The European Conference on Computer Vision (ECCV) 2020.
180 PR-Sceneflow
This method uses stereo information.
code 11.73 % 24.33 % 13.83 % 100.00 % 150 s 4 cores @ 3.0 Ghz (Matlab + C/C++)
C. Vogel, K. Schindler and S. Roth: Piecewise Rigid Scene Flow. ICCV 2013.
181 DDFlow+LCV 12.98 % 19.83 % 14.12 % 100.00 % 0.1 s GPU @ 2.5 Ghz (Python)
T. Xiao, J. Yuan, D. Sun, Q. Wang, X. Zhang, K. Xu and M. Yang: Learnable Cost Volume using the Cayley Representation. Proceedings of the European Conference on Computer Vision (ECCV) 2020.
182 SelFlow
This method makes use of multiple (>2) views.
12.68 % 21.74 % 14.19 % 100.00 % 0.09 s GPU @ 2.5 Ghz (Python)
P. Liu, M. Lyu, I. King and J. Xu: SelFlow: Self-Supervised Learning of Optical Flow. CVPR 2019.
183 DDFlow 13.08 % 20.40 % 14.29 % 100.00 % 0.06 s GPU @ >3.5 Ghz (Python + C/C++)
P. Liu, I. King, M. Lyu and J. Xu: DDFlow: Learning Optical Flow with Unlabeled Data Distillation. AAAI 2019.
184 F-s 13.31 % 19.34 % 14.32 % 100.00 % 0.2 s 1 core @ 2.5 Ghz (Python)
185 DCFlow code 13.10 % 23.70 % 14.86 % 100.00 % 8.6 s GPU @ 3.0 Ghz (Matlab + C/C++)
J. Xu, R. Ranftl and V. Koltun: Accurate Optical Flow via Direct Cost Volume Processing. CVPR 2017.
186 ProFlow
This method makes use of multiple (>2) views.
13.86 % 20.91 % 15.04 % 100.00 % 112 s GPU+CPU @ 3.6 Ghz (Python + C/C++)
D. Maurer and A. Bruhn: ProFlow: Learning to Predict Optical Flow. BMVC 2018.
187 FlowFields++ code 14.82 % 17.77 % 15.31 % 100.00 % 29 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, C. Bailer, O. Wasenmüller and D. Stricker: FlowFields++: Accurate Optical Flow Correspondences Meet Robust Interpolation. International Conference on Image Processing (ICIP) 2018.
188 ProFlow_ROB
This method makes use of multiple (>2) views.
14.15 % 21.82 % 15.42 % 100.00 % 112 s GPU+CPU @ 3.6 Ghz (Python + C/C++)
D. Maurer and A. Bruhn: ProFlow: Learning to Predict Optical Flow. BMVC 2018.
189 Self-Mono-SF-ft
This method uses stereo information.
code 15.51 % 17.96 % 15.91 % 100.00 % 0.09 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Monocular Scene Flow Estimation. CVPR 2020.
190 FF++_ROB 15.32 % 19.27 % 15.97 % 100.00 % 29 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, C. Bailer, O. Wasenmüller and D. Stricker: FlowFields++: Accurate Optical Flow Correspondences Meet Robust Interpolation. International Conference on Image Processing (ICIP) 2018.
191 SOF code 14.63 % 22.83 % 15.99 % 100.00 % 6 min 1 core @ 2.5 Ghz (Matlab)
L. Sevilla-Lara, D. Sun, V. Jampani and M. Black: Optical Flow with Semantic Segmentation and Localized Layers. CVPR 2016.
192 DIP-Flow-DF
This method makes use of multiple (>2) views.
14.93 % 23.37 % 16.33 % 100.00 % 104 s 2 cores @ 3.6 Ghz (C/C++)
D. Maurer, M. Stoll and A. Bruhn: Directional Priors for Multi-Frame Optical Flow. BMVC 2018.
193 JFS
This method makes use of the epipolar geometry.
15.90 % 19.31 % 16.47 % 100.00 % 13 min 1 core @ 3.2 Ghz (C/C++)
J. Hur and S. Roth: Joint Optical Flow and Temporally Consistent Semantic Segmentation. ECCV Workshops 2016.
194 DF+OIR 15.11 % 23.45 % 16.50 % 100.00 % 3 min 1 core @ 3.5 Ghz (Matlab + C/C++)
D. Maurer, M. Stoll and A. Bruhn: Order-Adaptive and Illumination Aware Variational Optical Flow Refinement. BMVC 2017.
195 SPS+FF++
This method uses stereo information.
code 15.91 % 20.27 % 16.64 % 100.00 % 36 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, O. Wasenmüller and D. Stricker: Dense Scene Flow from Stereo Disparity and Optical Flow. ACM Computer Science in Cars Symposium (CSCS) 2018.
196 DIP-Flow-CPM
This method makes use of multiple (>2) views.
15.57 % 23.84 % 16.95 % 100.00 % 52 s 2 cores @ 3.6 Ghz (C/C++)
D. Maurer, M. Stoll and A. Bruhn: Directional Priors for Multi-Frame Optical Flow. BMVC 2018.
197 ImpPB+SPCI code 17.25 % 20.44 % 17.78 % 100.00 % 60 s GPU @ 2.5 Ghz (Python)
T. Schuster, L. Wolf and D. Gadot: Optical Flow Requires Multiple Strategies (but only one network). CVPR 2017.
198 PCOF-LDOF
This method uses stereo information.
14.34 % 38.32 % 18.33 % 100.00 % 50 s 1 core @ 3.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
199 RAFT-MSF
This method uses stereo information.
17.98 % 20.33 % 18.37 % 100.00 % 0.18 s NVIDIA GTX 1080 Ti
200 FlowFieldCNN 18.33 % 20.42 % 18.68 % 100.00 % 23 s GPU/CPU 4 core @ 3.5 Ghz (C/C++)
C. Bailer, K. Varanasi and D. Stricker: CNN-based Patch Matching for Optical Flow with Thresholded Hinge Embedding Loss. CVPR 2017.
201 RicFlow 18.73 % 19.09 % 18.79 % 100.00 % 5 s 1 core @ 3.5 Ghz (C/C++)
Y. Hu, Y. Li and R. Song: Robust Interpolation of Correspondences for Large Displacement Optical Flow. CVPR 2017.
202 selfmono
This method uses stereo information.
17.73 % 26.08 % 19.12 % 100.00 % 0.05 s 1 core @ 2.5 Ghz (C/C++)
203 HCSH 18.05 % 26.23 % 19.41 % 100.00 % 3.5 s 1 core @ 3.0 Ghz (C/C++)
J. Fan, Y. Wang and L. Guo: Hierarchical coherency sensitive hashing and interpolation with RANSAC for large displacement optical flow. Computer Vision and Image Understanding 2018.
204 OmegaNet 17.43 % 29.69 % 19.47 % 100.00 % 0.01 s GPU @ 1.5 Ghz (Python)
F. Tosi, F. Aleotti, P. Ramirez, M. Poggi, S. Salti, L. Di Stefano and S. Mattoccia: Distilled semantics for comprehensive scene understanding from videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2020.
205 UJG code 18.57 % 24.02 % 19.48 % 100.00 % 0.03 s GPU @ 2.5 Ghz (Python)
J. Li, J. Zhao, S. Song and T. Feng: Unsupervised Joint Learning of Depth, Optical Flow, Ego-motion from Video. arXiv preprint arXiv:2105.14520 2021.
206 Multi-Mono-SF
This method uses stereo information.
This method makes use of multiple (>2) views.
code 18.13 % 26.59 % 19.54 % 100.00 % 0.06 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Multi-Frame Monocular Scene Flow. CVPR 2021.
207 PGM-G 18.90 % 23.43 % 19.66 % 100.00 % 5.05 s 1 core @ 3.1 Ghz (C/C++)
Y. Li: Pyramidal Gradient Matching for Optical Flow Estimation. CoRR 2017.
208 FlowFields+ 19.51 % 21.26 % 19.80 % 100.00 % 28 s 1 core @ 3.5 Ghz (C/C++)
C. Bailer, B. Taetz and D. Stricker: Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation. ICCV 2015.
209 EPC++ (stereo)
This method uses stereo information.
19.24 % 26.93 % 20.52 % 100.00 % 0.05 s GPU @ 2.5 Ghz (Python)
C. Luo, Z. Yang, P. Wang, Y. Wang, W. Xu, R. Nevatia and A. Yuille: Every Pixel Counts ++: Joint Learning of Geometry and Motion with 3D Holistic Understanding. IEEE transactions on pattern analysis and machine intelligence 2019.
210 PatchBatch code 19.98 % 26.50 % 21.07 % 100.00 % 50 s GPU @ 2.5 Ghz (Python)
D. Gadot and L. Wolf: PatchBatch: a Batch Augmented Loss for Optical Flow. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016.
211 DDF code 20.36 % 25.19 % 21.17 % 100.00 % ~1 min GPU @ 2.5 Ghz (C/C++)
F. Güney and A. Geiger: Deep Discrete Flow. Asian Conference on Computer Vision (ACCV) 2016.
212 SODA-Flow 20.01 % 29.14 % 21.53 % 100.00 % 96 s 2 cores @ 3.5 Ghz (C/C++)
D. Maurer, M. Stoll, S. Volz, P. Gairing and A. Bruhn: A Comparison of Isotropic and Anisotropic Second Order Regularisers for Optical Flow. SSVM 2017.
213 DiscreteFlow code 21.53 % 21.76 % 21.57 % 100.00 % 3 min 1 core @ 2.5 Ghz (Matlab + C/C++)
M. Menze, C. Heipke and A. Geiger: Discrete Optimization for Optical Flow. German Conference on Pattern Recognition (GCPR) 2015.
214 SGM+SF
This method uses stereo information.
20.91 % 25.50 % 21.67 % 100.00 % 45 min 16 cores @ 3.2 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
M. Hornacek, A. Fitzgibbon and C. Rother: SphereFlow: 6 DoF Scene Flow from RGB-D Pairs. CVPR 2014.
215 OAR-Flow 20.62 % 27.67 % 21.79 % 100.00 % 100 s 2 cores @ 3.5 Ghz (C/C++)
D. Maurer, M. Stoll and A. Bruhn: Order-Adaptive Regularisation for Variational Optical Flow: Global, Local and in Between. SSVM 2017.
216 CPM-Flow code 22.32 % 22.81 % 22.40 % 100.00 % 4.2 s 1 core @ 3.5 Ghz (C/C++)
Y. Hu, R. Song and Y. Li: Efficient Coarse-to-Fine PatchMatch for Large Displacement Optical Flow. CVPR 2016.
217 PCOF + ACTF
This method uses stereo information.
14.89 % 60.15 % 22.43 % 100.00 % 0.08 s GPU @ 2.0 Ghz (C/C++)
M. Derome, A. Plyer, M. Sanfourche and G. Le Besnerais: A Prediction-Correction Approach for Real-Time Optical Flow Computation Using Stereo. German Conference on Pattern Recognition 2016.
218 SegFlow(d0=3) 22.21 % 23.72 % 22.46 % 100.00 % 6.6 s 1 core @ >3.5 Ghz (C/C++)
J. Chen, Z. Cai, J. Lai and X. Xie: Efficient Segmentation-based PatchMatch for Large displacement Optical Flow Estimation. IEEE TCSVT 2018.
219 IntrpNt-df code 22.15 % 26.03 % 22.80 % 100.00 % 3 min GPU @ 2.5 Ghz (Python)
S. Zweig and L. Wolf: InterpoNet, a Brain Inspired Neural Network for Optical Flow Dense Interpolation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017.
220 SGM&FlowFie+
This method uses stereo information.
22.83 % 22.75 % 22.82 % 81.24 % 29 s 1 core @ 3.5 Ghz (C/C++)
R. Schuster, C. Bailer, O. Wasenmüller and D. Stricker: Combining Stereo Disparity and Optical Flow for Basic Scene Flow. Commercial Vehicle Technology Symposium (CVTS) 2018.
221 Back2FutureFlow(UFO)
This method makes use of multiple (>2) views.
code 22.67 % 24.27 % 22.94 % 100.00 % 0.12 s GPU @ 2.5 Ghz (LUA/Torch)
J. Janai, F. Güney, A. Ranjan, M. Black and A. Geiger: Unsupervised Learning of Multi-Frame Optical Flow with Occlusions. Proc. of the European Conf. on Computer Vision (ECCV) 2018.
222 MotionSLIC
This method makes use of the epipolar geometry.
code 14.86 % 64.44 % 23.11 % 100.00 % 30 s 4 cores @ 2.5 Ghz (C/C++)
K. Yamaguchi, D. McAllester and R. Urtasun: Robust Monocular Epipolar Flow Estimation. CVPR 2013.
223 IntrpNt-cpm code 22.51 % 26.54 % 23.18 % 100.00 % 5.6 s GPU @ 2.5 Ghz (Python)
S. Zweig and L. Wolf: InterpoNet, a Brain Inspired Neural Network for Optical Flow Dense Interpolation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017.
224 FullFlow 23.09 % 24.79 % 23.37 % 100.00 % 4 min 4 cores @ >3.5 Ghz (Matlab + C/C++)
Q. Chen and V. Koltun: Full Flow: Optical Flow Estimation By Global Optimization over Regular Grids. CVPR 2016.
225 HiLM code 23.73 % 21.79 % 23.41 % 100.00 % 8 s P6000 (C/C++)
M. Fathy, Q. Tran, M. Zia, P. Vernaza and M. Chandraker: Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences. European Conference on Computer Vision (ECCV) 2018.
226 Self-Mono-SF
This method uses stereo information.
code 23.26 % 24.93 % 23.54 % 100.00 % 0.09 s NVIDIA GTX 1080 Ti
J. Hur and S. Roth: Self-Supervised Monocular Scene Flow Estimation. CVPR 2020.
227 Self-SuperFlow
This method uses stereo information.
22.70 % 28.55 % 23.67 % 100.00 % 0.13 s GTX 1080 Ti
K. Bendig, R. Schuster and D. Stricker: Self-SuperFlow: Self-supervised Scene Flow Prediction in Stereo Sequences. International Conference on Image Processing (ICIP) 2022.
228 IntrpNt-dm code 23.46 % 26.27 % 23.93 % 100.00 % 15 s GPU @ 2.5 Ghz (Python)
S. Zweig and L. Wolf: InterpoNet, a Brain Inspired Neural Network for Optical Flow Dense Interpolation. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017.
229 SPM-BP 24.06 % 24.97 % 24.21 % 100.00 % 10 s 2 cores @ 2.5 Ghz (C/C++)
Y. Li, D. Min, M. Brown, M. Do and J. Lu: SPM-BP: Sped-up PatchMatch Belief Propagation for Continuous MRFs. Proceedings of the IEEE International Conference on Computer Vision 2015.
230 PPM code 25.87 % 23.67 % 25.50 % 100.00 % 17.3 s 1 core @ 2.5 Ghz (C/C++)
F. Kuang: PatchMatch algorithms for motion estimation and stereo reconstruction. 2017.
231 3DFlow 25.56 % 29.33 % 26.19 % 100.00 % 448 s Matlab with embedded C++ code
J. Chen, Z. Cai, J. Lai and X. Xie: A Filtering Based Framework for Optical Flow Estimation. IEEE TCSVT 2018.
232 EpicFlow code 25.81 % 28.69 % 26.29 % 100.00 % 15 s 1 core @ >3.5 Ghz (C/C++)
J. Revaud, P. Weinzaepfel, Z. Harchaoui and C. Schmid: EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2015.
233 SegFlow(d0=11) 28.97 % 22.64 % 27.91 % 100.00 % 4.5 s 1 core @ 3.5 Ghz (C/C++)
J. Chen, Z. Cai, J. Lai and X. Xie: Efficient Segmentation-based PatchMatch for Large displacement Optical Flow Estimation. IEEE TCSVT 2018.
234 DeepFlow code 27.96 % 31.06 % 28.48 % 100.00 % 17 s 1 core @ >3.5 Ghz (Python + C/C++)
P. Weinzaepfel, J. Revaud, Z. Harchaoui and C. Schmid: DeepFlow: Large displacement optical flow with deep matching. IEEE International Conference on Computer Vision (ICCV) 2013.
235 CPNFlow 31.05 % 27.16 % 30.40 % 100.00 % 0.1 s GPU @ 1.5 Ghz (Python)
Y. Yang and S. Soatto: Conditional prior networks for optical flow. Proceedings of the European Conference on Computer Vision (ECCV) 2018.
236 IIOF-NLDP 30.23 % 32.44 % 30.60 % 100.00 % 350 s 4 cores @ 3.5 Ghz (Matlab + C/C++)
D. Trinh, W. Blondel and C. Daul: A General Form of Illumination-Invariant Descriptors in Variational Optical Flow Estimation. IEEE Int. Conf. on Image Processing (ICIP) 2017.
237 DMF_ROB code 30.74 % 30.07 % 30.63 % 100.00 % 150 s 1 core @ 2.5 Ghz (C/C++)
P. Weinzaepfel, J. Revaud, Z. Harchaoui and C. Schmid: DeepFlow: Large displacement optical flow with deep matching. IEEE International Conference on Computer Vision (ICCV) 2013.
238 SPyNet code 33.36 % 43.62 % 35.07 % 100.00 % 0.16 s 1 core @ 2.5 Ghz (C/C++)
A. Ranjan and M. Black: Optical Flow Estimation using a Spatial Pyramid Network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017.
239 SGM+C+NL
This method uses stereo information.
code 34.24 % 42.46 % 35.61 % 93.83 % 4.5 min 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
D. Sun, S. Roth and M. Black: A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. IJCV 2013.
240 DWBSF
This method uses stereo information.
40.74 % 31.16 % 39.14 % 100.00 % 7 min 4 cores @ 3.5 Ghz (C/C++)
C. Richardt, H. Kim, L. Valgaerts and C. Theobalt: Dense Wide-Baseline Scene Flow From Two Handheld Video Cameras. 3DV 2016.
241 SGM+LDOF
This method uses stereo information.
code 40.81 % 31.92 % 39.33 % 95.89 % 86 s 1 core @ 2.5 Ghz (C/C++)
H. Hirschmüller: Stereo Processing by Semiglobal Matching and Mutual Information. PAMI 2008.
T. Brox and J. Malik: Large Displacement Optical Flow: Descriptor Matching in Variational Motion Estimation. PAMI 2011.
242 HS code 39.90 % 51.39 % 41.81 % 100.00 % 2.6 min 1 core @ 3.0 Ghz (Matlab)
D. Sun, S. Roth and M. Black: A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them. IJCV 2014.
243 GCSF
This method uses stereo information.
code 47.38 % 41.50 % 46.40 % 100.00 % 2.4 s 1 core @ 2.5 Ghz (C/C++)
J. Cech, J. Sanchez-Riera and R. Horaud: Scene Flow Estimation by growing Correspondence Seeds. CVPR 2011.
244 DB-TV-L1 code 47.52 % 48.27 % 47.64 % 100.00 % 16 s 1 core @ 2.5 Ghz (Matlab)
C. Zach, T. Pock and H. Bischof: A Duality Based Approach for Realtime TV-L1 Optical Flow. DAGM 2007.
245 VSF
This method uses stereo information.
code 50.06 % 45.40 % 49.28 % 100.00 % 125 min 1 core @ 2.5 Ghz (C/C++)
F. Huguet and F. Devernay: A Variational Method for Scene Flow Estimation from Stereo Sequences. ICCV 2007.
246 HAOF code 49.89 % 50.74 % 50.04 % 100.00 % 16.2 s 1 core @ 2.5 Ghz (C/C++)
T. Brox, A. Bruhn, N. Papenberg and J. Weickert: High accuracy optical flow estimation based on a theory for warping. ECCV 2004.
247 TVL1_ROB code 51.15 % 51.12 % 51.14 % 100.00 % 3 s 4 cores @ 2.5 Ghz (C/C++)
J. Sánchez Pérez, E. Meinhardt-Llopis and G. Facciolo: TV-L1 Optical Flow Estimation. Image Processing On Line 2013.
248 PolyExpand 52.00 % 58.56 % 53.09 % 100.00 % 1 s 1 core @ 2.5 Ghz (C/C++)
G. Farnebäck: Two-Frame Motion Estimation Based on Polynomial Expansion. SCIA 2003.
249 uh
This method uses stereo information.
59.10 % 52.99 % 58.08 % 16.51 % 1.2 s 8 cores @ 3.2 Ghz (Matlab)
250 H+S_ROB code 68.22 % 76.49 % 69.60 % 100.00 % 8 s 4 cores @ 2.5 Ghz (C/C++)
E. Meinhardt-Llopis, J. Sánchez Pérez and D. Kondermann: Horn-Schunck Optical Flow with a Multi-Scale Strategy. Image Processing On Line 2013.
251 FRLPSSF
This method uses stereo information.
70.68 % 73.60 % 71.17 % 9.26 % 2.5 s 8 cores @ 2.5 Ghz (Matlab)
A. Erfan Salehi and R. Hoseuni: Real-time Low Complexity Precision Sparse Scene-flow. 2022.
252 Pyramid-LK code 71.84 % 76.82 % 72.67 % 100.00 % 1.5 min 1 core @ 2.5 Ghz (Matlab)
J. Bouguet: Pyramidal implementation of the Lucas Kanade feature tracker. Intel 2000.
253 iu
This method uses stereo information.
73.05 % 76.85 % 73.68 % 6.98 % 1.5 s 8 cores @ 2.5 Ghz (Matlab + C/C++)
254 MEDIAN 87.37 % 92.80 % 88.27 % 99.86 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
255 AVERAGE 88.47 % 92.08 % 89.07 % 99.86 % 0.01 s 1 core @ 2.5 Ghz (C/C++)
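The Fl numbers above follow the outlier criterion stated in the benchmark description: a pixel is correctly estimated if its flow end-point error is <3px or <5% of the ground-truth flow magnitude. A minimal NumPy sketch of that criterion (function name and array layout are illustrative, not the devkit's API; use the official development kit utilities for actual submissions):

```python
import numpy as np

def flow_outlier_rate(flow_est, flow_gt, valid):
    """KITTI 'Fl' outlier rate: a pixel counts as an outlier when its
    end-point error is both > 3 px and > 5% of the ground-truth flow
    magnitude (i.e., it fails the <3px-or-<5% correctness test).

    flow_est, flow_gt: (H, W, 2) arrays of (u, v) flow vectors.
    valid:             (H, W) boolean mask of pixels with ground truth.
    """
    epe = np.linalg.norm(flow_est - flow_gt, axis=-1)  # end-point error
    mag = np.linalg.norm(flow_gt, axis=-1)             # GT flow magnitude
    outlier = (epe > 3.0) & (epe > 0.05 * mag)
    return outlier[valid].mean()                       # fraction in [0, 1]
```

The leaderboard reports this fraction as a percentage, averaged over background (bg), foreground (fg), or all ground-truth pixels.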




Related Datasets

  • HCI/Bosch Robust Vision Challenge: Optical flow and stereo vision challenge on high resolution imagery recorded at a high frame rate under diverse weather conditions (e.g., sunny, cloudy, rainy). The Robert Bosch AG provides a prize for the best performing method.
  • Image Sequence Analysis Test Site (EISATS): Synthetic image sequences with ground truth information provided by UoA and Daimler AG. Some of the images come with 3D range sensor information.
  • Middlebury Stereo Evaluation: The classic stereo evaluation benchmark, featuring four test images in version 2 of the benchmark, with very accurate ground truth from a structured light system. 38 image pairs are provided in total.
  • Daimler Stereo Dataset: Stereo bad weather highway scenes with partial ground truth for freespace.
  • Make3D Range Image Data: Images with small-resolution ground truth used to learn and evaluate depth from single monocular images.
  • Lubor Ladicky's Stereo Dataset: Stereo Images with manually labeled ground truth based on polygonal areas.
  • Middlebury Optical Flow Evaluation: The classic optical flow evaluation benchmark, featuring eight test images, with very accurate ground truth from a shape from UV light pattern system. 24 image pairs are provided in total.

Citation

When using this dataset in your research, we will be happy if you cite us:
@ARTICLE{Menze2018JPRS,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Object Scene Flow},
  journal = {ISPRS Journal of Photogrammetry and Remote Sensing (JPRS)},
  year = {2018}
}
@INPROCEEDINGS{Menze2015ISA,
  author = {Moritz Menze and Christian Heipke and Andreas Geiger},
  title = {Joint 3D Estimation of Vehicles and Scene Flow},
  booktitle = {ISPRS Workshop on Image Sequence Analysis (ISA)},
  year = {2015}
}


