Open Theses

Uncertainty Quantification for Deep Learning-based Point Cloud Registration

Keywords:
Uncertainty Quantification, Point Cloud Registration, Bayesian Inference, Deep Learning

Description

The problem of registering point clouds can be reduced to estimating the Euclidean transformation between two sets of 3D points [1]. Once estimated, the transformation aligns the two point clouds in a common coordinate system.
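For illustration, the closed-form estimate of such a Euclidean transformation from known point correspondences is given by the standard Kabsch/SVD solution. The snippet below is a minimal NumPy sketch on synthetic data; it is not taken from the cited frameworks, and all names and values are illustrative.

  # Kabsch/SVD: estimate rotation R and translation t such that R @ src_i + t ≈ dst_i.
  import numpy as np

  def estimate_rigid_transform(src, dst):
      src_center, dst_center = src.mean(axis=0), dst.mean(axis=0)
      H = (src - src_center).T @ (dst - dst_center)                # 3x3 cross-covariance
      U, _, Vt = np.linalg.svd(H)
      D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
      R = Vt.T @ D @ U.T
      t = dst_center - R @ src_center
      return R, t

  # Synthetic check: rotate and translate a random cloud, then recover the transform.
  rng = np.random.default_rng(0)
  src = rng.normal(size=(100, 3))
  a = np.pi / 6
  R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
  t_true = np.array([0.5, -0.2, 1.0])
  dst = src @ R_true.T + t_true
  R_est, t_est = estimate_rigid_transform(src, dst)
  print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True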

Applications of point cloud registration include 3D reconstruction, localization, and change detection. However, these applications rely on a high similarity between the point clouds and do not account for disturbances such as noise, occlusions, or outliers. Such defects degrade the quality of the point cloud and thus the accuracy of the registration-dependent application. One approach to dealing with these effects is to quantify the registration uncertainty. The general idea is to use uncertainty as a proxy for registration quality: if the uncertainty is too high, a new registration iteration or a re-scan is needed.

In this project, we investigate uncertainty quantification for current learning-based approaches to point cloud registration [1, 2, 3]. First, several methods for uncertainty quantification are selected [4]; of particular interest are approaches based on Bayesian inference. These methods are then adapted to current point cloud registration frameworks and evaluated on benchmark datasets such as ModelNet or ShapeNet. The evaluation must cover different types of scan perturbations.
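As a concrete example of the Bayesian-inspired techniques surveyed in [4], the sketch below applies Monte Carlo dropout: dropout stays active at test time, and repeated stochastic forward passes yield a predictive mean and a variance-based uncertainty for the predicted transformation parameters. The tiny regression head is an untrained placeholder, not the actual DeepGMR or PREDATOR architecture, and all sizes are illustrative.

  import torch
  import torch.nn as nn

  class ToyRegistrationHead(nn.Module):
      """Placeholder head mapping fused features to 3 translation + 4 quaternion values."""
      def __init__(self, feat_dim=256):
          super().__init__()
          self.net = nn.Sequential(
              nn.Linear(feat_dim, 128), nn.ReLU(), nn.Dropout(p=0.2),
              nn.Linear(128, 7),
          )

      def forward(self, x):
          return self.net(x)

  def mc_dropout_predict(model, features, n_samples=50):
      model.train()                                    # keep dropout active at inference
      with torch.no_grad():
          samples = torch.stack([model(features) for _ in range(n_samples)])
      return samples.mean(dim=0), samples.var(dim=0)   # predictive mean and variance

  features = torch.randn(1, 256)                       # stand-in for fused point cloud features
  mean_pose, pose_var = mc_dropout_predict(ToyRegistrationHead(), features)
  print(mean_pose.shape, pose_var.max())               # high variance -> re-register or re-scan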

References

[1] Huang, Xiaoshui, et al. A Comprehensive Survey on Point Cloud Registration. arXiv:2103.02690, arXiv, 5 Mar. 2021. arXiv.org, http://arxiv.org/abs/2103.02690.

[2] Yuan, Wentao, et al. DeepGMR: Learning Latent Gaussian Mixture Models for Registration. arXiv:2008.09088, arXiv, 20 Aug. 2020. arXiv.org, http://arxiv.org/abs/2008.09088.

[3] Huang, Shengyu, et al. “PREDATOR: Registration of 3D Point Clouds with Low Overlap.” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE, 2021, pp. 4265–74. DOI.org (Crossref), https://doi.org/10.1109/CVPR46437.2021.00425.

[4] Abdar, Moloud, et al. “A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges.” Information Fusion, vol. 76, Dec. 2021, pp. 243–97. ScienceDirect, https://doi.org/10.1016/j.inffus.2021.05.008.

Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de


Supervisor:

Adam Misik

Removal of Dynamic Objects from Indoor Point Clouds

Keywords:
Static Maps, Point Clouds, Deep Learning

Description

The accuracy of point cloud-based indoor localization can be improved by using static maps. One step in the creation of such maps is the removal of dynamic objects and their corresponding traces. In this project, we aim to investigate approaches for dynamic object removal based on either (a) occupancy grid analysis [1, 2] or (b) semantic segmentation [3].
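As a rough illustration of option (a), the NumPy sketch below voxelizes a sequence of already-registered scans, counts in how many scans each voxel is occupied, and keeps only persistently occupied voxels. It is a deliberate simplification of the visibility- and ratio-based tests in [1, 2]; the voxel size and threshold are illustrative.

  import numpy as np

  def voxel_keys(points, voxel_size):
      return set(map(tuple, np.floor(points / voxel_size).astype(int)))

  def remove_dynamic(scans, voxel_size=0.2, min_ratio=0.8):
      """scans: list of (N_i, 3) arrays already registered in a common frame."""
      counts = {}
      for scan in scans:
          for key in voxel_keys(scan, voxel_size):
              counts[key] = counts.get(key, 0) + 1
      static_voxels = {k for k, c in counts.items() if c / len(scans) >= min_ratio}
      merged = np.vstack(scans)
      keys = map(tuple, np.floor(merged / voxel_size).astype(int))
      mask = np.array([k in static_voxels for k in keys])
      return merged[mask]                              # points in persistently occupied voxels

  # Toy example: a static wall seen in every scan plus one point that moves between scans.
  rng = np.random.default_rng(1)
  wall = rng.uniform([0, 0, 0], [5, 0.1, 3], size=(500, 3))
  scans = [np.vstack([wall, [[1.0 + i, 2.0, 1.0]]]) for i in range(5)]
  print(remove_dynamic(scans).shape)                   # moving point removed, wall retained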


References

[1] S. Pagad, D. Agarwal, S. Narayanan, K. Rangan, H. Kim, and G. Yalla, "Robust Method for Removing Dynamic Objects from Point Clouds," 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 10765-10771, doi: 10.1109/ICRA40945.2020.9197168

[2] H. Lim, S. Hwang, and H. Myung, "ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2272-2279, 2021

[3] M. Arora, L. Wiesmann, X. Chen, and C. Stachniss, "Mapping the Static Parts of Dynamic Scenes from 3D LiDAR Point Clouds Exploiting Ground Segmentation," 2021 European Conference on Mobile Robots (ECMR), 2021, pp. 1-6, doi: 10.1109/ECMR50962.2021.9568799

Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik

Ongoing Theses

Research Internships (Forschungspraxis)

Improving Point Cloud-based Place Recognition and Re-localization using Point Cloud Completion Approaches

Keywords:
Point Cloud Completion, Place Recognition, Relocalization, Deep Learning

Description

The field of point cloud-based global place recognition and re-localization has become an active research area, driven by recent advances in geometric deep learning [1, 2, 3]. The main advantages of point cloud-based place recognition and re-localization are robustness to photometric perturbations (day/night, weather conditions) and the direct availability of depth information. However, if the queried point cloud is sparse (e.g., due to occlusions) and thus of lower quality, such localization approaches fail.

One approach to dealing with the sparsity of point clouds is point cloud completion. In this research internship, we will investigate the potential of current point cloud completion methods for improving point cloud-based place recognition and re-localization. The completion models investigated fall into the following categories: generative adversarial networks, probabilistic diffusion models, and variational autoencoders [4, 5].
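To illustrate where completion slots into retrieval, the sketch below densifies a sparse query cloud before a global descriptor is extracted and matched against a database by cosine similarity. Both networks are untrained placeholders: the completion model stands in for a GAN-, diffusion-, or VAE-based method [4, 5], and the descriptor network for a place-recognition backbone such as PointNetVLAD [3].

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class ToyCompletion(nn.Module):
      """Placeholder for a learned completion model: upsamples and perturbs the input."""
      def __init__(self, n_out=2048):
          super().__init__()
          self.n_out = n_out
          self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

      def forward(self, partial):                      # (B, N, 3) -> (B, n_out, 3)
          idx = torch.randint(partial.shape[1], (self.n_out,))
          dense = partial[:, idx, :]
          return dense + 0.01 * self.mlp(dense)

  class ToyGlobalDescriptor(nn.Module):
      """Placeholder for a global place-recognition descriptor network."""
      def __init__(self, dim=256):
          super().__init__()
          self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, dim))

      def forward(self, cloud):                        # (B, N, 3) -> (B, dim)
          return F.normalize(self.mlp(cloud).max(dim=1).values, dim=-1)

  completion, descriptor = ToyCompletion(), ToyGlobalDescriptor()
  query_sparse = torch.randn(1, 256, 3)                # occluded / sparse query scan
  database = F.normalize(torch.randn(100, 256), dim=-1)

  with torch.no_grad():
      query_desc = descriptor(completion(query_sparse))   # complete, then describe
      best_match = torch.argmax(database @ query_desc.T)  # cosine-similarity retrieval
  print(int(best_match))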

References

[1] Du, Juan, et al. “DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization.” Computer Vision – ECCV 2020, edited by Andrea Vedaldi et al., vol. 12349, Springer International Publishing, 2020, pp. 744–62. DOI.org (Crossref), https://doi.org/10.1007/978-3-030-58548-8_43.

[2] Komorowski, Jacek, et al. EgoNN: Egocentric Neural Network for Point Cloud Based 6DoF Relocalization at the City Scale. arXiv:2110.12486, arXiv, 24 Oct. 2021. arXiv.org, http://arxiv.org/abs/2110.12486.

[3] Uy, Mikaela Angelina, and Gim Hee Lee. “PointNetVLAD: Deep Point Cloud Based Retrieval for Large-Scale Place Recognition.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, 2018, pp. 4470–79. DOI.org (Crossref), https://doi.org/10.1109/CVPR.2018.00470.

[4] Zhang, Junzhe, et al. Unsupervised 3D Shape Completion through GAN Inversion. arXiv:2104.13366, arXiv, 29 Apr. 2021. arXiv.org, http://arxiv.org/abs/2104.13366.

[5] Fei, Ben, et al. Comprehensive Review of Deep Learning-Based 3D Point Cloud Completion Processing and Analysis. arXiv:2203.03311, arXiv, 9 Mar. 2022. arXiv.org, http://arxiv.org/abs/2203.03311.


Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik

Learning the Layout of Large-Scale Indoor Point Clouds

Keywords:
Semantic Segmentation, Point Cloud, Layout Understanding

Description

Indoor localization approaches based on point clouds rely on global 3D maps of the target environment. Learning the layout of the 3D map by generating submaps representing scenes or rooms simplifies the localization process. In this work, we aim to develop an approach for the layout understanding of large-scale indoor point clouds. The approach can utilize methods from 3D semantic segmentation and structural element detection [1, 2]. 

The developed pipeline will be evaluated on indoor point clouds from different sources, e.g., LiDAR and SfM point clouds.
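One possible first stage of such a pipeline is sketched below: dominant structural planes (floor, walls) are peeled off with RANSAC, and the remaining points are clustered into candidate room submaps. The snippet assumes Open3D is available; all thresholds are illustrative, and a learned semantic segmentation model [1] would replace or refine these purely geometric steps.

  import numpy as np
  import open3d as o3d

  def split_into_submaps(pcd, n_planes=4, dist_thresh=0.05, eps=0.5, min_points=50):
      rest, planes = pcd, []
      for _ in range(n_planes):                        # peel off dominant planes (floor, walls)
          _, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                          ransac_n=3, num_iterations=1000)
          planes.append(rest.select_by_index(inliers))
          rest = rest.select_by_index(inliers, invert=True)
          if len(rest.points) < min_points:
              break
      labels = np.array(rest.cluster_dbscan(eps=eps, min_points=min_points))
      submaps = [rest.select_by_index(np.where(labels == l)[0])
                 for l in range(labels.max() + 1)]     # label -1 marks noise and is dropped
      return planes, submaps

  # Usage with a synthetic cloud; in practice, load a registered LiDAR or SfM scan instead.
  pts = np.random.uniform([0, 0, 0], [10, 10, 3], size=(5000, 3))
  pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
  planes, submaps = split_into_submaps(pcd)
  print(len(planes), len(submaps))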

References

[1] I. Armeni et al., "3D Semantic Parsing of Large-Scale Indoor Spaces," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1534-1543, doi: 10.1109/CVPR.2016.170

[2] T. Zheng et al., "Building Fusion: Semantic-Aware Structural Building-Scale 3D Reconstruction," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020

Prerequisites

  • Python and Git
  • Experience with a deep learning framework (PyTorch, TensorFlow)
  • Interest in Computer Vision and Machine Learning

Contact

Please send your CV and Transcript of Records to:

adam.misik@tum.de

Supervisor:

Adam Misik