Efficient rendering of massive and dynamic point cloud data in state-of-the-art graphics engines

This thesis deals with point cloud rendering: a plugin for the Unreal Engine is developed that is capable of rendering massive and dynamic point cloud data in real time.

Description / Abstract

Point clouds have lately gained much popularity since professional laser scanners and consumer devices like the Microsoft Kinect have become available to a broad audience. Nowadays, point clouds are used in a multitude of industries, such as the 3D industry, architecture, and robotics. At the same time, these industries increasingly rely on popular 3D graphics engines such as Unity3D or Epic's Unreal Engine for their real-time applications. However, very little software and research is available on how to efficiently include or implement a high-quality point cloud renderer in these complex, polygon-based, state-of-the-art engines.
In this thesis, I present an efficient way to implement a GPU-based point cloud renderer that is capable of rendering huge point clouds, both static and fully dynamic, in high quality and in real time inside the Unreal Engine. To do so, a novel approach to order-independent transparency (OIT) is developed that sorts the point cloud with a massively parallel bitonic sort in a compute shader.
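The pass structure of such a bitonic sorting network can be illustrated on the CPU. The snippet below is a minimal sketch with hypothetical names, not the plugin's actual shader code; in the plugin the inner loop over all points runs in parallel on the GPU, with each (k, j) pass corresponding to one compute-shader dispatch:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// CPU illustration of a bitonic sorting network (hypothetical names).
// Sorts (camera distance, point index) pairs back-to-front, i.e. by
// descending distance, as needed for transparency blending.
// The point count n must be a power of two (GPU versions typically pad).
void bitonicSortByDistance(std::vector<std::pair<float, int>>& points) {
    const std::size_t n = points.size();
    for (std::size_t k = 2; k <= n; k <<= 1) {         // bitonic block size
        for (std::size_t j = k >> 1; j > 0; j >>= 1) { // compare distance
            // On the GPU, this loop is one thread per element i.
            for (std::size_t i = 0; i < n; ++i) {
                std::size_t partner = i ^ j;
                if (partner > i) {
                    // Blocks with (i & k) == 0 sort descending, so the
                    // final merge yields a globally descending order.
                    bool descending = (i & k) == 0;
                    bool outOfOrder = descending
                        ? points[i].first < points[partner].first
                        : points[i].first > points[partner].first;
                    if (outOfOrder)
                        std::swap(points[i], points[partner]);
                }
            }
        }
    }
}
```

Because every compare-and-swap pass touches each element exactly once and is independent of the others, the network maps naturally onto compute-shader thread groups, which is what makes it attractive for sorting the point cloud every frame.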
The presented renderer can be applied in various fields, such as collaborative virtual environments (CVEs) or dynamic, on-the-fly environment scanning, which is relevant, for instance, in robotics.
The point cloud renderer will be published as a publicly available, open-source plugin for the Unreal Engine.

Results

First, a way to implement an efficient, GPU-based point cloud renderer in a polygon-based pipeline was presented. The ability to render large and dynamically changing point clouds is particularly noteworthy, since very few approaches focus on the rendering of dynamic point clouds. In this context, an alternative method for order-independent transparency was proposed that sorts the point cloud's point-positions texture with a parallel bitonic sort according to the distance to the camera. Moreover, it was shown that the combination of a scanning device (in this case the Microsoft Kinect) and a positional tracking system (in this case the HTC Vive) allows for easy and efficient environment scanning and can make the computationally expensive matching or registration of the individual point cloud parts superfluous, or at least simplify it. This is particularly interesting for the mapping of smaller indoor environments, for instance in robotics, where computationally expensive matching/registration algorithms have been dominant until now.
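To illustrate why the tracked pose can replace registration: if every captured frame is transformed by the tracking system's rigid pose at capture time, all frames already land in one common world coordinate system. The following is a minimal sketch under that assumption, with hypothetical names and a row-major 4x4 rigid transform; it is not the plugin's actual code:

```cpp
#include <array>

// Hypothetical minimal types for the sketch.
using Vec3 = std::array<float, 3>;
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major rigid transform

// Transform a point from sensor (e.g. Kinect) space into world space
// using the pose reported by the tracking system at capture time.
// Only the upper 3x4 part of the rigid transform is needed.
Vec3 toWorld(const Mat4& pose, const Vec3& p) {
    Vec3 out{};
    for (int r = 0; r < 3; ++r) {
        out[r] = pose[r][0] * p[0]
               + pose[r][1] * p[1]
               + pose[r][2] * p[2]
               + pose[r][3]; // translation column
    }
    return out;
}
```

Applying this per point, per frame, is a single matrix multiply on the GPU, which is far cheaper than iterative registration such as ICP over overlapping scans.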

In practical terms, a GPU-based point cloud renderer was implemented that is capable of processing and rendering huge and fully dynamic point clouds in high quality inside the Unreal Engine.

Files

Read the full version of the master's thesis.

Download the full project with source code for UE4.21.

The current version of the Point Cloud Renderer plugin for Unreal can also be accessed in its GitHub repository.

The current version of the Compute Shader Sorting plugin for Unreal can also be accessed in its GitHub repository.

License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit creativecommons.org.
If you would like to use the software commercially, please get in touch with us (zach at cs.uni-bremen dot de) to obtain the assets under a different license (a.k.a. multi-licensing).
If you use any of the assets or software from this web page to produce a publication, you must give credit and include a reference in your publication.

Here is a video showing the point cloud renderer visualising both static and dynamic point clouds in real time:

Collaborations

This work was performed in collaboration with the Institute of Artificial Intelligence of the University of Bremen.