Biography

My expertise lies in machine learning, where I work on point cloud segmentation models. My primary focus is on the challenges posed by sparse regions of point clouds, particularly those that matter for forestry applications, such as the areas near the base of tree trunks. While data from drones and airplanes is readily available, processing it with high semantic accuracy remains difficult. New methods for point cloud instance and semantic segmentation are therefore needed.
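
To make the sparsity issue concrete, here is a minimal, hypothetical sketch (not taken from any of the publications on this page) that counts points in vertical slices above the ground; in under-sampled trunk-base regions the lowest slices contain far fewer points than the crown. The function name and the assumption of flat ground at a known elevation are illustrative only.

```python
import numpy as np

def density_by_height_slice(points, ground_z=0.0, slice_height=0.5, max_height=5.0):
    """Count points per vertical slice above the ground.

    points: (N, 3) array of x, y, z coordinates in metres.
    Assumption for this sketch: flat ground at elevation ground_z
    (a real pipeline would normalise heights with a digital terrain model).
    Returns (slice_lower_bound, point_count) pairs.
    """
    heights = points[:, 2] - ground_z
    edges = np.arange(0.0, max_height + slice_height, slice_height)
    counts, _ = np.histogram(heights, bins=edges)
    return list(zip(edges[:-1], counts))

# Synthetic example: few points near the ground, many in the crown.
rng = np.random.default_rng(0)
z = np.concatenate([rng.uniform(0.0, 2.0, 50), rng.uniform(2.0, 5.0, 950)])
cloud = np.column_stack([rng.uniform(0, 10, 1000), rng.uniform(0, 10, 1000), z])
for lower, count in density_by_height_slice(cloud):
    print(f"{lower:.1f}-{lower + 0.5:.1f} m: {count} points")
```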

Abstract

Detailed forest inventories are critical for sustainable and flexible management of forest resources and for conserving ecosystem services. Modern airborne laser scanners deliver high-density point clouds with great potential for fine-scale forest inventory and analysis, but automatically partitioning those point clouds into meaningful entities such as individual trees or tree components remains a challenge. The present study aims to fill this gap and introduces a deep learning framework, termed ForAINet, that performs such a segmentation across diverse forest types and geographic regions. From the segmented data, we then derive relevant biophysical parameters of individual trees as well as stands. The system has been tested on FOR-Instance, a dataset of point clouds acquired in five different countries using surveying drones. The segmentation back-end achieves an F-score above 85% for individual trees and a mean IoU above 73% across five semantic categories: ground, low vegetation, stems, live branches and dead branches. Building on the segmentation results, our pipeline then computes biophysical features of each individual tree (height, crown diameter, crown volume, DBH and location) as well as properties per stand (digital terrain model and stand density). Crown-related features in particular are retrieved with high accuracy in most cases, whereas the estimates for DBH and location are less reliable owing to the airborne scanning setup.
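
As an illustration of how such tree-level features can be derived once an instance segmentation is available, the sketch below computes height and a rough crown diameter from the points of a single segmented tree. It is a simplified stand-in, not the ForAINet pipeline; the function names, the bounding-box crown measure, and the "top half of the tree" crown definition are assumptions made for this example.

```python
import numpy as np

def tree_height(points):
    """Height as the vertical extent of the tree's points (metres)."""
    return points[:, 2].max() - points[:, 2].min()

def crown_diameter(points, crown_base_fraction=0.5):
    """Rough crown diameter: maximum horizontal extent of points in the
    upper part of the tree (here, the top half by height).
    Illustrative only: operational crown metrics use more robust
    definitions than a bounding box."""
    z = points[:, 2]
    crown = points[z >= z.min() + crown_base_fraction * (z.max() - z.min())]
    dx = crown[:, 0].max() - crown[:, 0].min()
    dy = crown[:, 1].max() - crown[:, 1].min()
    return max(dx, dy)

# Synthetic stand-in for one segmented tree instance.
rng = np.random.default_rng(42)
tree = np.column_stack([
    rng.normal(0, 1.5, 2000),   # x spread of the crown
    rng.normal(0, 1.5, 2000),   # y spread of the crown
    rng.uniform(0, 18, 2000),   # heights from ground to tree top
])
print(f"height ~ {tree_height(tree):.1f} m, crown diameter ~ {crown_diameter(tree):.1f} m")
```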

Abstract

This study focuses on advancing individual tree crown (ITC) segmentation in lidar data, developing a sensor- and platform-agnostic deep learning model that transfers across a spectrum of dense laser scanning datasets, from drone (ULS) to terrestrial (TLS) and mobile (MLS) laser scanning data. In a field where transferability across different data characteristics has been a longstanding challenge, this research marks a step towards versatile, efficient, and comprehensive 3D forest scene analysis. Central to this study is an evaluation of model performance by platform type (ULS vs. MLS) and data density. This involved five distinct scenarios, each combining different input training data (ULS, MLS, and versions augmented through random subsampling) to assess the model's transferability to varying resolutions and its efficacy across different canopy layers. The core of the model, inspired by the PointGroup architecture, is a 3D convolutional neural network (CNN) with dedicated prediction heads for semantic and instance segmentation. The model underwent comprehensive validation on publicly available, machine learning-ready point cloud datasets. Additional analyses assessed model adaptability to different resolutions and performance across canopy layers. Our results show that random subsampling of point clouds is an effective augmentation strategy that improves model performance and transferability. The model trained with the most aggressive augmentation, including point clouds as sparse as 10 points m⁻², performed best, transferred well to sparse lidar data, and improved the detection and segmentation of codominant and dominated trees. Notably, the model showed consistent performance for point clouds with densities >50 points m⁻² but exhibited a drop in performance at the sparsest level (10 points m⁻²), mainly due to increased omission rates. Benchmarking against current state-of-the-art methods revealed gains of up to 20% in detection rates, indicating superior performance on multiple open benchmark datasets. Our experiments also set new performance baselines for the other public datasets. The comparison highlights the model's superior segmentation quality, mainly due to better detection and segmentation of understory trees below the canopy, achieved with lower computational demands than other recent methods. In conclusion, this study demonstrates that it is feasible to train a sensor-agnostic model that handles diverse laser scanning data, going beyond current sensor-specific methodologies. It also sets a new baseline for tree segmentation, especially in complex forest structures. By advancing the state of the art in forest lidar analysis, our work lays the foundation for future innovations in ecological modeling and forest management.
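
The random-subsampling augmentation described above can be pictured with the following hypothetical sketch, which thins a point cloud to a target density in points per m². It is not the authors' implementation; the density estimate from the horizontal bounding box and the function name are simplifying assumptions made for this example.

```python
import numpy as np

def subsample_to_density(points, target_density, rng=None):
    """Randomly subsample a point cloud to roughly target_density points per m².

    Simplification: density is estimated from the horizontal (x, y)
    bounding-box area; a real plot may be delineated differently.
    """
    if rng is None:
        rng = np.random.default_rng()
    extent = points[:, :2].max(axis=0) - points[:, :2].min(axis=0)
    area = extent[0] * extent[1]
    current_density = len(points) / area
    if current_density <= target_density:
        return points  # already at or below the target density
    keep = int(target_density * area)
    idx = rng.choice(len(points), size=keep, replace=False)
    return points[idx]

# Example: thin a dense synthetic cloud to 10 points per m².
rng = np.random.default_rng(1)
dense = np.column_stack([rng.uniform(0, 30, 200_000),
                         rng.uniform(0, 30, 200_000),
                         rng.uniform(0, 25, 200_000)])
sparse = subsample_to_density(dense, target_density=10, rng=rng)
print(len(dense), "->", len(sparse), "points")
```

Applied repeatedly with different target densities during training, such thinning exposes the network to a range of effective resolutions, which is the intuition behind the augmentation strategy evaluated in the study.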