GEOMETRIC FEATURES AND THEIR RELEVANCE FOR 3D POINT CLOUD CLASSIFICATION

In this paper, we focus on the automatic interpretation of 3D point cloud data in terms of associating a class label to each 3D point. While much effort has recently been spent on this research topic, little attention has been paid to the influencing factors that affect the quality of the derived classification results. For this reason, we investigate fundamental influencing factors making geometric features more or less relevant with respect to the classification task. We present a framework which consists of five components addressing point sampling, neighborhood recovery, feature extraction, classification and feature relevance assessment. To analyze the impact of the main influencing factors which are represented by the given point sampling and the selected neighborhood type, we present the results derived with different configurations of our framework for a commonly used benchmark dataset for which a reference labeling with respect to three structural classes (linear structures, planar structures and volumetric structures) as well as a reference labeling with respect to five semantic classes (Wire, Pole/Trunk, Façade, Ground and Vegetation) is available.


INTRODUCTION
Modern scanning devices allow acquiring 3D data in the form of densely sampled point clouds comprising millions of 3D points. Based on such point clouds, a variety of tasks can be performed, of which many rely on an initial point cloud interpretation. Such an initial interpretation is often derived via point cloud classification, where the objective consists in automatically labeling the 3D points of a given point cloud with respect to pre-defined class labels. In this regard, the main challenges are given by the irregular point sampling with typically strongly varying point density, different types of objects in the scene and a high complexity of the observed scene.
Interestingly, the visualization of the spatial arrangement of acquired 3D points (and thus only geometric cues) is already sufficient for us humans to reason about specific structures in the scene (Figure 1). For this reason, we follow a variety of investigations on point cloud classification and focus on the use of geometric features. In this regard, the standard processing pipeline starts with the recovery of a local neighborhood for each 3D point of the given point cloud. Subsequently, geometric features are extracted based on the consideration of the spatial arrangement of 3D points within the local neighborhoods, and these features are finally provided as input to a classifier that has been trained on representative training data and is therefore able to generalize to unseen data. Such a standard processing pipeline already reveals that the derived classification results might strongly depend on the given point sampling, the selected neighborhood type, the extracted geometric features themselves and the involved classifier. This paper is dedicated to a detailed analysis of fundamental influencing factors regarding point cloud classification and feature relevance with respect to the classification task. In contrast to previous work, we aim at quantifying the impact of the given point sampling and the selected neighborhood type on geometric features and their relevance with respect to the classification task. For this purpose, we consider
• the original point sampling and a point sampling derived via voxel-grid filtering,
• four conceptually different neighborhood types (a spherical neighborhood with a radius of 1m, a cylindrical neighborhood with a radius of 1m, a spherical neighborhood formed by the 50 nearest neighbors and a spherical neighborhood formed by the "optimal" number of nearest neighbors),
• a set of 18 low-level geometric 3D and 2D features,
• a classification with respect to structural classes and a classification with respect to semantic classes,
• three classifiers relying on different learning principles (instance-based learning, probabilistic learning and ensemble learning), and
• a classifier-independent relevance metric taking into account seven different intrinsic properties of the given training data.
For performance evaluation, we use a benchmark dataset for which reference labelings with respect to three structural classes (linear structures, planar structures and volumetric structures) and five semantic classes (Wire, Pole/Trunk, Façade, Ground and Vegetation) are available, as shown in Figure 1.
After briefly summarizing related work (Section 2), we present the proposed framework for 3D point cloud classification and feature relevance assessment (Section 3). We demonstrate the performance of this framework on a benchmark dataset (Section 4) and discuss the derived results with respect to different aspects (Section 5). Finally, we provide concluding remarks and suggestions for future work (Section 6).

RELATED WORK
In the following, we briefly summarize related work and thereby address a typical processing pipeline for point cloud classification that involves the steps of neighborhood recovery (Section 2.1), feature extraction (Section 2.2) and classification (Section 2.3).
In addition, we address previous work on feature relevance assessment (Section 2.4).

Neighborhood Recovery
In general, different strategies may be applied to recover local neighborhoods for the points of a 3D point cloud. In particular, neighborhood types in the form of a spherical neighborhood parameterized by a radius (Lee and Schenk, 2002), a cylindrical neighborhood parameterized by a radius (Filin and Pfeifer, 2005), a spherical neighborhood parameterized by the number of nearest neighbors with respect to the Euclidean distance in 3D space (Linsen and Prautzsch, 2001) or a cylindrical neighborhood parameterized by the number of nearest neighbors with respect to the Euclidean distance in 2D space (Niemeyer et al., 2014) are commonly used. These neighborhood types are parameterized with a single scale parameter, which is represented by either a radius or the number of nearest neighbors, and they allow describing the local 3D structure at a specific scale. To select an appropriate value for the scale parameter, prior knowledge about the scene and/or the data is typically involved. Furthermore, identical values for the scale parameter are typically selected for all points of the 3D point cloud. Recent investigations, however, revealed that structures related to different classes may favor a different neighborhood size (Weinmann et al., 2015a; Weinmann, 2016), and it therefore seems favorable to allow for more variability by using data-driven approaches for optimal neighborhood size selection (Mitra and Nguyen, 2003; Lalonde et al., 2005; Demantké et al., 2011; Weinmann et al., 2015a).
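The three single-scale recovery strategies most relevant to this work can all be implemented efficiently with a k-d tree. The following sketch uses SciPy's `cKDTree` on synthetic points; all variable names and the data are illustrative, and the cylindrical neighborhood is left unbounded along the vertical axis (distances are taken in the XY plane only):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.random((1000, 3)) * 10.0   # synthetic 3D point cloud
tree_3d = cKDTree(points)               # index over full 3D coordinates
tree_2d = cKDTree(points[:, :2])        # index over the XY projection

p = points[0]

# spherical neighborhood: all points within radius r in 3D
sphere_idx = tree_3d.query_ball_point(p, r=1.0)

# cylindrical neighborhood: all points within radius r of the
# vertical axis through p (Euclidean distance in 2D)
cyl_idx = tree_2d.query_ball_point(p[:2], r=1.0)

# k-nearest-neighbor neighborhood in 3D (here k = 50)
_, knn_idx = tree_3d.query(p, k=50)
```

Note that a spherical neighborhood is always contained in the cylindrical one of the same radius, since the 2D distance never exceeds the 3D distance.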
Instead of using a single neighborhood to describe the local 3D structure at a specific scale, multiple neighborhoods can be used to describe the local 3D structure at different scales and thus also take into account how the local 3D geometry behaves across these scales. The commonly used multi-scale neighborhoods typically focus on the combination of spherical neighborhoods with different radii (Brodu and Lague, 2012) or the combination of cylindrical neighborhoods with different radii (Niemeyer et al., 2014; Schmidt et al., 2014). Furthermore, it has been proposed to use multi-type neighborhoods by combining neighborhoods based on different entities such as voxels, blocks and pillars (Hu et al., 2013), or multi-scale, multi-type neighborhoods which result from a combination of both spherical and cylindrical neighborhoods with different scale parameters (Blomley et al., 2016).
In the scope of this work, we intend to analyze the impact of the neighborhood type on the derived classification results and on the relevance of single features with respect to the classification task. Accordingly, we integrate several of the commonly used single-scale neighborhood types into our framework.

Feature Extraction
After the recovery of local neighborhoods, geometric features can be extracted by considering the spatial arrangement of neighboring points. In this regard, the spatial coordinates of neighboring points are often used to derive the 3D structure tensor whose eigenvalues can be used to detect specific shape primitives (Jutzi and Gross, 2009). The eigenvalues of the 3D structure tensor can also be used to derive local 3D shape features (West et al., 2004; Pauly et al., 2003) which allow a rather intuitive description of the local 3D structure with one value per feature and are therefore widely used for point cloud classification. In addition to these local 3D shape features, other features can be used to account for further characteristics of the local 3D structure, e.g. angular characteristics (Munoz et al., 2009), height and plane characteristics (Mallet et al., 2011), a variety of low-level geometric 3D and 2D features (Weinmann et al., 2015a; Weinmann, 2016), moments and height features (Hackel et al., 2016), or specific descriptors addressing surface properties, slope, height characteristics, vertical profiles and 2D projections (Guo et al., 2015). For the sake of clarity, we also mention that there are more complex features, e.g. sampled features in the form of spin images (Johnson and Hebert, 1999), shape distributions (Osada et al., 2002; Blomley et al., 2016), or point feature histograms (Rusu et al., 2009).
In the scope of this work, we do not intend to address feature design or feature learning. Instead, we aim at generally evaluating the relevance of standard features with respect to the classification task. Hence, we focus on the extraction of geometric features that are rather intuitive and represented by a single value per feature. The features presented in (Weinmann et al., 2015a; Weinmann, 2016) satisfy these constraints and are therefore used.

Classification
The extracted features are provided as input to a classifier that has been trained on representative training data and is therefore able to generalize to unseen data. In general, different strategies may be applied for classification. On the one hand, standard classifiers such as a Random Forest classifier (Chehata et al., 2009), a Support Vector Machine classifier (Mallet et al., 2011) or Bayesian Discriminant Analysis classifiers (Khoshelham and Oude Elberink, 2012) can be used, which are easy to use and meanwhile available in a variety of software tools. A recent comparison of several respective classifiers relying on different learning principles reveals that a Random Forest classifier provides a good trade-off between classification accuracy and computational efficiency (Weinmann et al., 2015a; Weinmann, 2016).
On the other hand, it might be desirable to avoid a "noisy" behavior of the derived labeling due to treating each point individually and therefore use a classifier that enforces a spatially regular labeling. In this regard, statistical models of context are typically involved, e.g. in the form of Associative Markov Networks (Munoz et al., 2009), non-Associative Markov Networks (Shapovalov et al., 2010), or Conditional Random Fields (Niemeyer et al., 2014; Schmidt et al., 2014; Weinmann et al., 2015b).
In the scope of this work, we intend to consider the classification results derived with a standard classifier in order to evaluate the relevance of features with respect to the classification task. To be able to draw more general conclusions, we involve respective classifiers relying on different learning principles. Involving contextual information to derive a spatially regular labeling would also have an impact on the derived results, but it would then be hard to draw decoupled conclusions about the impact of the involved features and the impact of contextual information.

Feature Relevance Assessment
Due to a lack of knowledge about the scene and/or the data, often as many features as possible are defined and provided as input to a classifier. However, some features may be more relevant, whereas others may be less suitable or even irrelevant. Although, in theory, many classifiers are considered to be insensitive to the given dimensionality, redundant or irrelevant information has been proven to influence their performance in practice. In particular for high-dimensional data representations, the Hughes phenomenon (Hughes, 1968) can often be observed, according to which an increase of the number of features over a certain threshold results in a decrease in classification accuracy, given a constant number of training examples. As a consequence, attention has been paid to feature selection with the objectives of gaining predictive accuracy, improving computational efficiency with respect to both time and memory consumption, and retaining meaningful features (Guyon and Elisseeff, 2003). Some feature selection methods allow assessing the relevance of single features and thus ranking these features according to their relevance with respect to the classification task. In the context of point cloud classification, it has for instance been proposed to use an embedded method in the form of a Random Forest classifier which internally evaluates feature relevance (Chehata et al., 2009). Furthermore, wrapper-based methods interacting with a classifier and performing either sequential forward selection or sequential backward elimination have been used (Mallet et al., 2011; Khoshelham and Oude Elberink, 2012). However, both embedded methods and wrapper-based methods evaluate feature relevance with respect to the involved classifier, thus introducing a dependency on a classifier and its settings (e.g., for the case of a Random Forest classifier, the number of involved weak learners, their type and the (ideally high) number of considered choices per variable). In contrast, filter-based methods are classifier-independent and only exploit a score function directly based on the training data, which, in turn, results in simplicity and efficiency (Weinmann, 2016).
In the scope of this work, we intend to evaluate the relevance of single features with respect to the classification task. For this purpose, we use a filter-based method taking into account different characteristics of the given training data via a general relevance metric presented in (Weinmann, 2016). Instead of using the feature ranking for a sequential forward selection coupled with classification, we consider the ranking itself with respect to different reference labelings to draw general conclusions about generally relevant features, generally irrelevant features and features that vary in their relevance with respect to the classification task.

METHODOLOGY
The proposed framework for point cloud classification and feature relevance assessment comprises five components addressing point sampling (Section 3.1), neighborhood recovery (Section 3.2), feature extraction (Section 3.3), classification (Section 3.4) and feature relevance assessment (Section 3.5). An overview of the different components is provided in Figure 2.

Point Sampling
To investigate the influence of the point sampling on point cloud classification and feature relevance assessment, we take into account two different options for the point sampling. On the one hand, we consider the original point cloud. On the other hand, we consider a downsampling of the original point cloud via a voxel-grid filter (Theiler et al., 2014; Hackel et al., 2016) to roughly even out a varying point density as e.g. expected when using terrestrial or mobile laser scanning systems for data acquisition.
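The voxel-grid filter described above can be sketched in a few lines of NumPy: each point is assigned to a voxel by integer division of its coordinates, and all points sharing a voxel are replaced by their centroid. The function name is illustrative:

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Replace all points falling into one voxel by their centroid."""
    # integer voxel index of each point
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points by voxel key
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    # accumulate coordinate sums per voxel, then divide by occupancy
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

With the 10 cm voxels used later in Section 4, this reduces a dense cloud while keeping one representative point per occupied voxel.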

Neighborhood Recovery
A crucial prerequisite for the extraction of geometric features is an appropriate neighborhood definition. As we intend to investigate the impact of the neighborhood type on the classification results and on the relevance of single features, we integrate different options to recover the local neighborhood of each considered point Xi into our framework:
• a spherical neighborhood Ns,1m, where the sphere is centered at Xi and has a radius of 1m,
• a cylindrical neighborhood Nc,1m, where the cylinder is centered at Xi, has a radius of 1m and is oriented along the vertical direction,
• a spherical neighborhood Nk=50 comprising the k = 50 nearest neighbors of Xi with respect to the Euclidean distance in 3D space, and
• a spherical neighborhood Nkopt comprising the optimal number kopt,i of nearest neighbors of Xi with respect to the Euclidean distance in 3D space, where kopt,i is selected for each 3D point individually via eigenentropy-based scale selection (Weinmann et al., 2015a; Weinmann, 2016).
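The eigenentropy-based scale selection used for the last neighborhood type can be sketched as follows: for each candidate k, the Shannon entropy of the normalized eigenvalues of the 3D structure tensor is computed, and the k minimizing this entropy is kept (Weinmann et al., 2015a). The candidate range and step size below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.spatial import cKDTree

def eigenentropy(neighbors):
    """Shannon entropy of the normalized eigenvalues of the 3D structure tensor."""
    cov = np.cov(neighbors.T)                         # 3x3 covariance matrix
    ev = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    e = ev / ev.sum()                                 # normalized eigenvalues
    return float(-(e * np.log(e)).sum())

def optimal_k(points, tree, p, k_min=10, k_max=100, step=1):
    """Select the k minimizing the eigenentropy (candidate range is illustrative)."""
    _, idx = tree.query(p, k=k_max)                   # neighbors sorted by distance
    ks = range(k_min, k_max + 1, step)
    return min(ks, key=lambda k: eigenentropy(points[idx[:k]]))
```

The entropy is bounded by ln 3, attained when all three eigenvalues are equal (fully isotropic neighborhoods).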

Feature Extraction
For each neighborhood type, we extract a set of geometric features describing the spatial arrangement of points within the local neighborhood of each considered point Xi. More specifically, we calculate the features presented in (Weinmann et al., 2015a; Weinmann, 2016) as these features are rather intuitive and represented by only a single value per feature. This feature set comprises 14 geometric 3D features and four geometric 2D features.
Among the 3D features, some are represented by the local 3D shape features that rely on the eigenvalues λj with j = 1, 2, 3 of the 3D structure tensor derived from Xi and its neighboring points. These features are given by linearity Lλ, planarity Pλ, sphericity Sλ, omnivariance Oλ, anisotropy Aλ, eigenentropy Eλ, sum of eigenvalues Σλ and change of curvature Cλ (West et al., 2004; Pauly et al., 2003). Other 3D features are defined in terms of geometric 3D properties that are represented by the height H of Xi, the distance D3D between Xi and the farthest point in the local neighborhood, the local point density ρ3D, the verticality V, and the maximum difference ∆H as well as the standard deviation σH of the height values corresponding to those points within the local neighborhood.
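The eigenvalue-based 3D shape features can be computed directly from the covariance of the neighborhood, assuming eigenvalues ordered as λ1 ≥ λ2 ≥ λ3 and the definitions commonly attributed to West et al. (2004) and Pauly et al. (2003). Verticality V is omitted here since it additionally requires the eigenvector of the smallest eigenvalue; the function name is illustrative:

```python
import numpy as np

def shape_features_3d(neighbors):
    """Eigenvalue-based 3D shape features of a local neighborhood (n x 3 array)."""
    cov = np.cov(neighbors.T)
    # eigvalsh returns ascending order; unpack so that l1 >= l2 >= l3
    l3, l2, l1 = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)
    e = np.array([l1, l2, l3]) / (l1 + l2 + l3)   # normalized eigenvalues
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "sphericity": l3 / l1,
        "omnivariance": (l1 * l2 * l3) ** (1.0 / 3.0),
        "anisotropy": (l1 - l3) / l1,
        "eigenentropy": float(-(e * np.log(e)).sum()),
        "sum_eigenvalues": l1 + l2 + l3,
        "change_of_curvature": l3 / (l1 + l2 + l3),
    }
```

On a perfectly linear neighborhood, linearity approaches 1; on a planar one, planarity approaches 1.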
The additional use of 2D features is motivated by the fact that specific assumptions about the point distribution can be made. Urban areas are for instance characterized by a variety of man-made objects of which many are characterized by almost perfectly vertical structures. To encode such characteristics with geometric features, a 2D projection of Xi and all other points within the local neighborhood onto a horizontally oriented plane is introduced. Based on these projections, local 2D shape features are defined by the sum Σξ and the ratio Rξ of the eigenvalues ξj with j = 1, 2 of the 2D structure tensor. Furthermore, geometric 2D properties are defined by the distance D2D between the projection of Xi and the farthest point in the local 2D neighborhood and the local point density ρ2D in 2D space.
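The 2D features can be sketched analogously after dropping the height coordinate. The density definition used below (points per circle of radius D2D) is an assumption for illustration; the function name is likewise illustrative:

```python
import numpy as np

def features_2d(p, neighbors):
    """2D features after projecting the neighborhood onto the horizontal plane."""
    proj = neighbors[:, :2]                    # drop the height coordinate
    cov = np.cov(proj.T)                       # 2x2 structure tensor
    xi2, xi1 = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)  # xi1 >= xi2
    d2d = np.linalg.norm(proj - p[:2], axis=1).max()
    # assumed density definition: points per circle area of radius D2D
    rho2d = len(proj) / (np.pi * d2d ** 2)
    return {"sum_eig_2d": xi1 + xi2,
            "ratio_eig_2d": xi2 / xi1,
            "distance_2d": d2d,
            "density_2d": rho2d}
```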

Classification
To be able to draw more general conclusions, we integrate three different classifiers into our framework: a Nearest Neighbor (NN) classifier relying on instance-based learning, a Linear Discriminant Analysis (LDA) classifier relying on probabilistic learning and a Random Forest (RF) classifier (Breiman, 2001) relying on ensemble learning. Whereas the NN classifier (with respect to Euclidean distances in the feature space) and the LDA classifier do not involve a classifier-specific setting, the RF classifier involves several parameters that have to be selected appropriately based on the given training data via parameter tuning on a suitable subspace spanned by the considered parameters.
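With scikit-learn, the three classifiers can be instantiated as follows. The synthetic data and the RF hyperparameters are illustrative only; in the paper, the RF parameters are tuned on the given training data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X_train = rng.random((300, 18))        # 18 geometric features per point
y_train = rng.integers(0, 3, 300)      # e.g. three structural classes
X_test = rng.random((50, 18))

classifiers = {
    "NN": KNeighborsClassifier(n_neighbors=1),    # instance-based learning
    "LDA": LinearDiscriminantAnalysis(),          # probabilistic learning
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),  # ensemble
}
predictions = {name: clf.fit(X_train, y_train).predict(X_test)
               for name, clf in classifiers.items()}
```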

Feature Relevance Assessment
Among a variety of techniques for feature selection, as for instance reviewed in (Saeys et al., 2007), filter-based feature selection methods have the advantage that they are classifier-independent and therefore relatively simple and efficient. More specifically, such methods evaluate relations between features and classes to identify relevant features, and partially also relations among features to identify and discard redundant features (Weinmann, 2016). This is done based on the given training data by concatenating the values of a feature for all considered data points to a vector and comparing that vector with the vector containing the corresponding class labels. Thereby, the comparison is typically performed with a metric that delivers a single value as a score, thus allowing us to rank features with respect to their relevance to the considered classification task. Such metrics can easily be implemented, and some of them are also available in software packages (Zhao et al., 2010). As different metrics may address different intrinsic properties of the given training data (e.g. correlation, information, dependence or consistency), a common consideration of several metrics seems to be desirable. For this reason, we focus on a two-step approach for feature ranking. In the first step, different metrics are applied to derive separate rankings for the features fi with i = 1, ..., Nf (with Nf = 18 for the considered set of low-level geometric features) with respect to different criteria (Weinmann, 2016):
• The degree to which a feature is correlated with the class labels is described with Pearson's correlation coefficient (Pearson, 1896).
• A statistical measure of dispersion and thus an inequality measure quantifying a feature's ability to distinguish between classes is given by the Gini Index (Gini, 1912).
• The ratio between inter-class and intra-class variance is represented by the Fisher score (Fisher, 1936).
• The dependence between a feature and the class labels is described with the Information Gain (Quinlan, 1986).
• To derive the contribution of a feature to the separation of samples from different classes, the ReliefF measure (Kononenko, 1994) is used.
• To assess whether a class label is independent of a particular feature, a χ²-test is used.
• To analyze the effectiveness of a feature regarding the separation of classes, a t-test on each feature is used.
Based on these criteria, we define metrics mj with j = 1, ..., Nm and Nm = 7. In our implementation, smaller values for the rank reveal features of higher relevance when considering the respective metric, whereas higher values reveal less suitable features. In the second step, the separate rankings are combined by selecting the average rank $\bar{r}$ per feature fi according to

$$\bar{r}(f_i) = \frac{1}{N_m} \sum_{j=1}^{N_m} r(f_i \mid m_j)$$

where r(fi|mj) indicates the rank of a feature fi given the metric mj and hence r(fi|mj) ∈ [1, Nf]. Finally, we map the derived average ranks to the interval [0, 1] in order to interpret the result as relevance R of the feature fi (Weinmann, 2016):

$$R(f_i) = \frac{N_f - \bar{r}(f_i)}{N_f - 1}$$
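The two-step combination of rankings can be sketched as follows, assuming a linear mapping of average ranks from [1, Nf] to [0, 1] in which rank 1 maps to relevance 1 (one plausible realization of the mapping described in Weinmann, 2016):

```python
import numpy as np

def combine_rankings(ranks):
    """ranks: (N_m, N_f) array, ranks[j, i] = rank of feature i under metric j,
    with 1 = most relevant. Returns average ranks and relevance values in [0, 1]."""
    n_f = ranks.shape[1]
    avg = ranks.mean(axis=0)                 # average rank per feature
    relevance = (n_f - avg) / (n_f - 1)      # rank 1 -> 1, rank N_f -> 0
    return avg, relevance
```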

EXPERIMENTAL RESULTS
In the following, we first describe the dataset used for our experiments (Section 4.1). Subsequently, we focus on the impact of the selected neighborhood type on feature extraction (Section 4.2), before we present the derived classification results (Section 4.3) and the results of feature relevance assessment (Section 4.4).

Dataset
For our experiments, we use the Oakland 3D Point Cloud Dataset (Munoz et al., 2009). This dataset has been acquired with a mobile laser scanning system in the vicinity of the CMU campus in Oakland, USA. According to the provided specifications (Munoz et al., 2008; Munoz et al., 2009), the mobile laser scanning system was represented by a vehicle equipped with a side-looking Sick laser scanner used in push-broom mode, and the vehicle drove in an urban environment with a speed of up to 20 km/h. Accordingly, significant variations in point density can be expected. To evaluate the performance of an approach for point cloud classification on this dataset, a split of the dataset into a training set comprising about 36.9k points, a validation set comprising about 91.5k points and a test set comprising about 1.3M points is provided. For each point, a reference labeling with respect to three structural classes represented by linear structures, planar structures and volumetric structures is available, as well as a reference labeling with respect to five semantic classes represented by Wire, Pole/Trunk, Façade, Ground and Vegetation.
Both reference labelings are visualized in Figure 1 for the validation set.To distinguish between the two classification tasks, we refer to Oakland-3C and Oakland-5C, respectively.

Impact of Neighborhood Type on Geometric Features
It can be expected that low-level geometric features reveal a different structural behavior for the different neighborhood types. Indeed, this can also be observed in our experiments for the involved geometric features. In Figure 3, we exemplarily consider the behavior of the three dimensionality features of linearity Lλ, planarity Pλ and sphericity Sλ. A visualization of the number of considered points within the local neighborhood is provided in Figure 4 and indicates different characteristics as well. The neighborhoods Ns,1m and Nc,1m tend towards a larger number of points within the local neighborhood, whereas the number of neighboring points is by definition constant for Nk=50 and the neighborhood Nkopt tends towards a smaller number of points within the local neighborhood.

Classification Results
Due to their significantly different impact on feature extraction, it may be expected that the different neighborhood types will also significantly differ in their suitability with respect to the classification task. To verify this, we use each of the four presented neighborhood types Ns,1m, Nc,1m, Nk=50 and Nkopt to extract the 18 geometric features which, in turn, are provided as input to three classifiers relying on different learning principles. Thereby, we take into account that the number of training examples per class varies significantly, which might have a detrimental effect on the classification results (Criminisi and Shotton, 2013). To avoid such issues, we reduce the training data by randomly sampling an identical number of 1,000 training examples per class, i.e. the reduced training set comprises 3k training samples for Oakland-3C and 5k training samples for Oakland-5C, respectively. Once a respective classifier has been trained on the reduced training data, we perform a prediction of the labels for the validation data and compare the derived labeling to the reference labeling on a per-point basis. Thereby, we consider the global evaluation metrics represented by overall accuracy (OA) and Cohen's kappa coefficient (κ). Note that considering OA as the only indicator might not be sufficient if the number of examples per class is very inhomogeneous. For this reason, we also consider the κ value, which allows judging the separability of classes.
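Both evaluation metrics can be derived from the per-point confusion matrix: OA is the observed agreement, and κ corrects it for the agreement expected by chance. A minimal sketch (function name illustrative):

```python
import numpy as np

def overall_accuracy_and_kappa(y_true, y_pred, n_classes):
    """Overall accuracy and Cohen's kappa from per-point label comparison."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    cm = np.zeros((n_classes, n_classes))
    np.add.at(cm, (y_true, y_pred), 1.0)        # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return oa, (oa - pe) / (1.0 - pe)
```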
First, we consider the classification of the original point cloud.
For Oakland-3C, the derived classification results are provided in Table 1. The overall accuracy is between 67% and 94%, while the κ value is between 28% and 76%. For Oakland-5C, the derived classification results are provided in Table 2. The overall accuracy is between 68% and 96%, while the κ value is between 49% and 90%. For the results obtained with the Random Forest classifier for Oakland-3C and Oakland-5C, a visualization of the derived labeling is depicted in Figure 5.
Furthermore, we consider the classification of the point cloud that results from a downsampling of the original point cloud with a voxel-grid filter. Thereby, the side length of the voxels is exemplarily selected as 10 cm, and all points inside a voxel are replaced by their centroid. For the considered validation data, only 59,787 of 91,515 points (i.e. 65.33%) are kept for the subsequent neighborhood recovery, feature extraction and classification. Finally, the derived labeling is transferred back to the original point cloud by associating each point of the original point cloud with the label derived for the closest point in the downsampled point cloud.
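The back-transfer of labels to the original point cloud reduces to a nearest-neighbor query against the downsampled cloud; a sketch with SciPy (function name illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_labels(downsampled, labels, original):
    """Assign each original point the label of its nearest downsampled point."""
    labels = np.asarray(labels)
    tree = cKDTree(downsampled)
    _, nearest = tree.query(original, k=1)   # index of the closest centroid
    return labels[nearest]
```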
The respective classification results corresponding to the complete validation data are provided in Table 3 for Oakland-3C and in Table 4 for Oakland-5C, respectively.

Feature Relevance Assessment
To assess feature relevance with respect to a given classification task, we evaluate the general relevance metric based on the reduced training data. The derived results are provided in Figure 6 for Oakland-3C and Oakland-5C when using different neighborhood types, and the respective result of feature relevance assessment based on a subset of the voxel-grid-filtered training data is provided in Figure 7.

DISCUSSION
The derived results reveal that, in comparison to the classification of the original point cloud (Tables 1 and 2), the classification of a voxel-grid-filtered point cloud and a subsequent transfer of the classification results to the original point cloud seem to be able to better cope with the varying point density (Tables 3 and 4). Furthermore, it can be seen that different neighborhood types have a different impact on geometric features (Figure 3). This might also be due to their different behavior, since the spherical and cylindrical neighborhood types parameterized by a radius tend towards a larger number of points within the local neighborhood, whereas the neighborhoods derived via eigenentropy-based scale selection tend to be comparably small (Figure 4). As the latter neighborhood type provides a data-driven neighborhood size selection for each individual point of a point cloud, it takes into account that structures related to different classes might favor a different neighborhood size. This is not taken into account with the other neighborhood types, which rely on a heuristically selected value for the scale parameter that is kept identical for all points of the point cloud. The derived classification results (Tables 1-4, Figure 5) also reveal that the cylindrical neighborhood type is less suitable for classifying terrestrial or mobile laser scanning data, whereas using the other neighborhood types yields appropriate classification results in almost all cases. For classification, the LDA classifier and the RF classifier outperform the NN classifier. Due to the simplifying assumption of Gaussian distributions in the feature space, which cannot be guaranteed for the acquired data, the LDA classifier has a conceptual limitation. Hence, we consider the RF classifier as the favorable option for classification. Finally, as expected, it becomes obvious that the relevance of single features varies depending on the classification task, the point sampling and the selected neighborhood type (Figures 6 and 7). In this regard, the most relevant features are represented by Oλ, Eλ, Cλ, ∆H, σH and Rξ.

CONCLUSIONS
In this paper, we have presented a framework for point cloud classification which consists of five components addressing point sampling, neighborhood recovery, feature extraction, classification and feature relevance assessment. Using different configurations of the framework, i.e. different methods for some of its components, we have analyzed influencing factors regarding point cloud classification and feature relevance with respect to two different classification tasks. Concerning the point sampling, the downsampling of a point cloud via a voxel-grid filter, a subsequent classification and the transfer of the classification results to the original data tend to slightly improve the quality of the derived classification results in comparison to performing a classification on the original data. Among the considered neighborhood types, the cylindrical neighborhood type clearly reveals less suitability for classifying terrestrial or mobile laser scanning data, whereas the spherical neighborhoods (parameterized by either a radius or the number of nearest neighbors) have proven to be favorable. For classification, the LDA classifier and the RF classifier have delivered appropriate classification results. Furthermore, the relevance of features varies depending on the classification task, the point sampling and the selected neighborhood type. Among the most relevant features are the omnivariance Oλ, the eigenentropy Eλ, the change of curvature Cλ, the maximum difference ∆H as well as the standard deviation σH of height values, and the ratio Rξ of the eigenvalues of the 2D structure tensor.
In future work, a more comprehensive analysis of influencing factors regarding point cloud classification and feature relevance with respect to the classification task is desirable. This certainly includes other types of features, but also a consideration of the behavior of multi-scale neighborhoods. The latter considerably increase the computational burden with respect to both processing time and memory consumption, so that more sophisticated approaches are required when dealing with larger datasets. A respective approach towards data-intensive processing has recently been presented and relies on a scale pyramid created by repeatedly downsampling a given point cloud via a voxel-grid filter (Hackel et al., 2016). Furthermore, it seems worthwhile to analyze different approaches to impose spatial regularity on the derived classification results, e.g. via statistical models of context.
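A scale pyramid of the kind referred to above can be sketched as repeated voxel-grid filtering with a voxel size that is doubled per level; each voxel is replaced by the centroid of the points it contains. The function names, the base voxel size and the doubling factor are illustrative assumptions, not the parameters used by Hackel et al. (2016).

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Downsample a point cloud by replacing all points in a voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    sums = np.zeros((inv.max() + 1, 3))
    np.add.at(sums, inv, points)                  # accumulate points per voxel
    counts = np.bincount(inv).reshape(-1, 1)
    return sums / counts                          # one centroid per occupied voxel

def scale_pyramid(points, base_size=0.05, levels=4):
    """Multi-scale pyramid obtained by repeatedly doubling the voxel size."""
    pyramid = [points]
    size = base_size
    for _ in range(levels - 1):
        pyramid.append(voxel_grid_filter(pyramid[-1], size))
        size *= 2.0
    return pyramid
```

Since each level is computed from the previous (already reduced) one, the cost per level decreases rapidly, which is what makes such pyramids attractive for multi-scale neighborhoods on large datasets.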

Figure 1. Visualization of a point cloud (left) and two reference labelings: the labeling in the center refers to three structural classes that are represented by linear structures (red), planar structures (gray) and volumetric structures (green); the labeling on the right refers to five semantic classes that are represented by Wire (blue), Pole/Trunk (red), Façade (gray), Ground (orange) and Vegetation (green).

Figure 2. Overview of the proposed methodology: the original point cloud is either kept or downsampled via voxel-grid filtering; local neighborhoods are subsequently recovered to extract geometric features which are provided as input to a classifier; the training data is furthermore used to assess feature relevance with respect to the given classification task.

Figure 3. Behavior of the three dimensionality features of linearity Lλ (top row), planarity Pλ (center row) and sphericity Sλ (bottom row) for the neighborhood types Ns,1m, Nc,1m, Nk=50 and Nkopt (from left to right): the color encoding indicates high values close to 1 in red and reaches via yellow, green, cyan and blue to violet for low values close to 0.

Figure 4. Number of points within the local neighborhood when using the neighborhood types Ns,1m, Nc,1m, Nk=50 and Nkopt (from left to right): the color encoding indicates neighborhoods with 10 or less points in red and reaches via yellow, green, cyan and blue to violet for 100 and more points.

Figure 5. Classification results obtained for Oakland-3C (top row) and Oakland-5C (bottom row) when using the original point cloud, the neighborhood types Ns,1m, Nc,1m, Nk=50 and Nkopt (from left to right) and a Random Forest classifier.

Figure 6. Feature relevance for Oakland-3C (top) and Oakland-5C (bottom) when using the reduced version of the original training data (1,000 examples per class) and the neighborhood types Ns,1m (blue), Nc,1m (green), Nk=50 (yellow) and Nkopt (red).

Figure 7. Feature relevance for Oakland-3C (top) and Oakland-5C (bottom) when using a reduced version of the voxel-grid-filtered training data (1,000 examples per class) and the neighborhood types Ns,1m (blue), Nc,1m (green), Nk=50 (yellow) and Nkopt (red).

Table 1. Classification results obtained for Oakland-3C when using the original point cloud, four different neighborhood types and three different classifiers.

Table 2. Classification results obtained for Oakland-5C when using the original point cloud, four different neighborhood types and three different classifiers.

Table 3. Classification results obtained for Oakland-3C when using the downsampled point cloud, four different neighborhood types and three different classifiers.

Table 4. Classification results obtained for Oakland-5C when using the downsampled point cloud, four different neighborhood types and three different classifiers.