Learning to Reconstruct Compact Building Models from Point Clouds

Abstract

Three-dimensional building models play a pivotal role in shaping the digital twin of our world. With the advance of sensing technologies, unprecedented capabilities for capturing the built environment have emerged, with photogrammetry and light detection and ranging being two important sources, both of which can acquire point clouds of buildings. A point cloud is anisotropically distributed in space and, though it conveys spatial information itself, has to be converted into a surface model for a wider spectrum of uses. This conversion is often referred to as reconstruction. Despite the enhanced availability of point cloud data in the built environment, reconstructing high-quality building surface models remains non-trivial in remote sensing, computer vision, and computer graphics. Most reconstruction methods are dedicated to smooth surfaces represented by dense triangles, irrespective of the piecewise planarity that dominates the geometry of real-world buildings. Although some works claim the possibility of reconstructing piecewise-planar shapes from point clouds, they either struggle to comply with specific geometric constraints or suffer from serious scalability issues. There is no versatile solution yet for building reconstruction. In this thesis, we propose a novel framework for reconstructing compact, watertight, polygonal building models from point clouds. Our approach comprises three functional blocks: (a) a cell complex is generated via adaptive space partitioning, providing a polyhedral embedding as the candidate set; (b) an implicit field is learnt by a deep neural network to facilitate building occupancy estimation; (c) a Markov random field is formulated for surface extraction via combinatorial optimisation, solved with an efficient graph-cut solver. We extensively evaluate the proposed method against state-of-the-art methods in shape reconstruction, surface approximation and geometry simplification. Experimental results reveal that, with our neural-guided strategy, high-quality building models can be obtained with significant advantages in fidelity, compactness and computational efficiency. The method is robust to noise and to insufficient measurements caused by occlusions, and generalises reasonably well from synthetic scans to real-world measurements. Moreover, our method is generic: it applies not only to buildings but to any piecewise-planar object.
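To make the surface-extraction step more concrete, the LaTeX sketch below gives one plausible form of the Markov random field energy referred to in the abstract: a binary labelling of the cells of the polyhedral complex, with a data term driven by the learnt occupancy and a pairwise term that favours compact surfaces. The notation (cell labels x_i, predicted occupancy p_i, face areas A_ij, weight lambda) is an illustrative assumption, not the exact formulation used in the thesis.

% Hedged sketch of a binary MRF over the cells of the polyhedral complex.
% Each cell i gets a label x_i in {0, 1} (outside / inside); the unary
% term follows the occupancy p_i predicted by the network, and the
% pairwise term penalises the area A_ij of candidate faces separating
% cells with different labels, encouraging a compact surface. The exact
% terms and weights in the thesis may differ from this illustration.
\begin{equation}
E(\mathbf{x}) \;=\; \sum_{i} \underbrace{\lvert x_i - p_i \rvert}_{\text{fidelity}}
\;+\; \lambda \sum_{(i,j)\in\mathcal{E}}
\underbrace{A_{ij}\,[\,x_i \neq x_j\,]}_{\text{complexity}},
\qquad x_i \in \{0,1\}.
\end{equation}

Because the pairwise term of such an energy is submodular, it can be minimised exactly with a single graph cut, which is consistent with the efficient graph-cut solver mentioned above; the extracted surface then consists of the candidate faces lying between differently labelled cells.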

Files

ZChen_Thesis.pdf
(pdf | 83.3 MB)
Unknown license
ZChen_Slides.pdf
(pdf | 9.01 MB)
Unknown license