PointINet: Point Cloud Frame Interpolation Network
Fan Lu, Guang Chen*, Sanqing Qu, Zhijun Li, Yinlong Liu, Alois Knoll
Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021
LiDAR point cloud streams are usually sparse in the time dimension due to hardware limitations. The frame rates of mechanical LiDAR sensors are typically 10 to 20 Hz, much lower than those of other commonly used sensors such as cameras. To overcome this temporal limitation of LiDAR sensors, we study a novel task named Point Cloud Frame Interpolation in this paper: given two consecutive point cloud frames, the goal is to generate intermediate frame(s) between them.
To this end, we propose a novel framework, the Point Cloud Frame Interpolation Network (PointINet), with which low-frame-rate point cloud streams can be upsampled to higher frame rates. We first estimate bi-directional
3D scene flow between the two point clouds and then warp both frames to the given time step according to the estimated flow. To
fuse the two warped frames and generate the intermediate point cloud(s), we propose a novel learning-based points fusion
module that takes both warped point clouds into consideration simultaneously.
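The warping step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the bi-directional scene flow has already been estimated (e.g., by a pretrained scene flow network), represents point clouds as NumPy arrays of shape (N, 3), and replaces the learned points fusion module with simple concatenation of the two warped frames.

```python
import numpy as np

def warp_points(points, flow, t):
    """Linearly warp points along their scene flow, scaled by time step t in [0, 1]."""
    return points + t * flow

def interpolate_frame(pc1, pc2, flow_fw, flow_bw, t):
    """Sketch of intermediate-frame generation between two point cloud frames.

    pc1, pc2:        consecutive point clouds, shape (N, 3) each
    flow_fw:         scene flow from pc1 to pc2 (assumed precomputed)
    flow_bw:         scene flow from pc2 to pc1 (assumed precomputed)
    t:               target time step between the frames, in [0, 1]
    """
    # Warp frame 1 forward by t and frame 2 backward by (1 - t)
    warped1 = warp_points(pc1, flow_fw, t)
    warped2 = warp_points(pc2, flow_bw, 1.0 - t)
    # Placeholder fusion: the paper uses a learned points fusion module;
    # here the two warped frames are simply concatenated for illustration.
    return np.concatenate([warped1, warped2], axis=0)
```

In the actual method, the naive concatenation above would be replaced by the proposed learning-based fusion module, which weighs the contributions of the two warped point clouds when producing the intermediate frame.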
We design both quantitative and qualitative experiments to evaluate the performance of point
cloud frame interpolation, and extensive experiments on two large-scale outdoor LiDAR datasets demonstrate the
effectiveness of the proposed PointINet.
The illustration of point cloud frame interpolation is shown below.
The network architecture is shown below.