AN EVALUATION PIPELINE FOR INDOOR LASER SCANNING POINT CLOUDS

The necessity for the modelling of building interiors has encouraged researchers in recent years to focus on improving the capturing and modelling techniques for such environments. State-of-the-art indoor mobile mapping systems use a combination of laser scanners and/or cameras mounted on movable platforms and allow for capturing 3D data of buildings' interiors. As GNSS positioning does not work inside buildings, the extensively investigated Simultaneous Localisation and Mapping (SLAM) algorithms seem to offer a suitable solution for the problem. Because of the dead-reckoning nature of SLAM approaches, their results usually suffer from registration errors. Therefore, indoor data acquisition has remained a challenge and the accuracy of the captured data has to be analysed. In this paper, we propose to use architectural constraints to partly evaluate the quality of the acquired point cloud in the absence of any ground truth model. The internal consistency of walls is utilized to check the accuracy and correctness of indoor models. In addition, we use a floor plan (if available) as an external information source to check the quality of the generated indoor model. The proposed evaluation method provides an overall impression of the reconstruction accuracy. Our results show that perpendicularity, parallelism, and thickness of walls are important cues in buildings and can be used for an internal consistency check.


INTRODUCTION
During the last years, the scope of indoor mapping has widened to include many important applications such as mapping hazardous sites, indoor navigation and positioning, virtual reality, etc. Consequently, indoor mapping is an active research topic. Since digital maps of public buildings (airports, hospitals, train stations, etc.) are a prerequisite for navigation inside them, there will be a trend towards the development of indoor applications using geospatial data (Norris, 2013). While outdoor maps are widely available, indoor maps are difficult to generate, and indoor mapping has remained a challenge as GNSS positioning does not work inside buildings. Approaches for indoor map generation from the group of Simultaneous Localisation and Mapping (SLAM) algorithms have been investigated extensively.
Most traditional methods to map building interiors fundamentally depended on manual drawings, total stations, or terrestrial laser scanning (TLS). However, those methods are no longer practical for complex indoor environments, since they would require setting up the total station or laser scanner at many different positions, which is labour- and time-intensive. In order to map an indoor environment with complex structures, several indoor mapping systems mounted on movable platforms (pushcart, robot, or human) have been developed (Bosse et al., 2012; Viametris, 2014; Wen et al., 2016). Among these systems, a number utilize RGBD cameras, such as Microsoft Kinect and Google Tango, a few use laser scanners, such as Google Cartographer, and some integrate laser scanners and cameras on pushcarts, such as TIMMS (Trimble, 2014) and i-MMS (Viametris, 2014). Unfortunately, the latter systems assume that the platform moves in 2D space and are not applicable for applications that require 3D navigation. Nowadays, 3D SLAM-based systems have been designed to explore the 3D space for cases like human navigation, rescue operations, and environment mapping (Mahon and Williams, 2003; Baglietto et al., 2011).
The accuracy of indoor mobile mapping point clouds is of particular importance, as SLAM-based point clouds usually suffer from registration problems. Point cloud evaluation techniques usually compare against reference data obtained by TLS or another indoor mobile mapping system (IMMS), or against a Building Information Model (BIM) (see section 2.2). Providing such reference data is difficult and requires a large effort. Therefore, this paper investigates the possibility of a partial evaluation in the absence of any ground truth model. In addition, we utilize an outdated map, which is available for many buildings nowadays, in the accuracy analysis.
The paper is organized as follows: section 2 describes previously developed indoor mobile mapping systems and the evaluation criteria for generated maps. Section 3 presents the proposed methodology for both the internal map consistency check and the external consistency check against other sources, such as an available floor plan. Section 4 explains the experiments, whose results are discussed in section 5. Section 6 concludes the paper and gives suggestions for future work.

Indoor Mobile Mapping Systems
An IMMS is generally a kinematic platform composed of integrated and synchronized sensors suitable for localizing the system and mapping the environment simultaneously. Commonly used sensors can be classified into navigation sensors, such as Inertial Measurement Units (IMUs), and sensors that collect information about the system's environment, such as laser scanners and cameras. The usual outputs of IMMSs are images and/or 3D point clouds as well as a trajectory of the system's motion in a local coordinate system. In order to retrieve the location of the platform, IMMSs primarily make use of Simultaneous Localisation and Mapping (SLAM) as a typical solution for indoor mapping in the absence of GNSS.
These mapping systems still face many challenges. Some systems cannot access all interior areas; trolley-based systems (i-MMS, NavVis M3 Trolley, and TIMMS), for example, are not able to map staircases. Therefore, they need an alignment process for the point clouds from different floors, which leads to more manual work. The advantage of hand-held and wearable mapping systems over other IMMSs is that they can move more flexibly and acquire data faster. Although the ZEB1 and HeadSLAM designs can access most indoor areas, they have the same difficulty as pushcart systems in recognizing movement in featureless environments, open areas, long corridors, and so on (Cinaz & Kenn, 2008).
Most mapping systems that depend on SLAM for localisation instead of GNSS and IMU have a limited application field. For instance, the Viametris i-MMS can only map environments with small height variations because it relies on 2D SLAM for positioning. Furthermore, the assumptions that algorithms are built on can constrain them. For instance, algorithms that assume a planar floor will only be of use in environments fulfilling that assumption (Chen et al., 2010).

Evaluation Methods
The most common technique to investigate mapping systems and evaluate the generated point clouds is a cloud-to-cloud comparison after transforming both clouds into the same coordinate system (Maboudi et al., 2017; Thomson et al., 2013). Another method compares building information model (BIM) geometry derived from the mobile mapping system's point cloud to geometry derived from a static scanner (Thomson et al., 2013). These works rely on the availability of another mobile mapping system or on setting up a static scanner at different positions in the test areas, which is time-consuming.

METHODOLOGY
We propose two methods for the evaluation of indoor point clouds: either by comparison to an existing ground truth model, if available, or by using architectural constraints like parallelism and perpendicularity. As the permanent structure of man-made indoor environments mainly consists of planar and vertical structures, we build our evaluation method on 2D edges derived from those structures. Figure 1 depicts the workflow of the proposed method; the main steps are described in the following subsections.

Pre-processing
We use a pre-processing step to generate 2D edges from the acquired point cloud. Pre-processing comprises the segmentation of the input point cloud to extract planar segments using surface growing (Vosselman et al., 2004) and the projection of the vertical planar segments onto the XY-plane. In addition, the minimum and maximum height information of the plane is stored per edge.
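As an illustration, the projection of a vertical planar segment to a 2D edge could be sketched as follows. This is a minimal sketch under stated assumptions: each segment is given as an (N, 3) array of points, and a PCA-based plane fit stands in for whatever normal the surface-growing step provides.

```python
import numpy as np

def segment_to_2d_edge(points, vertical_tol_deg=5.0):
    """Project a near-vertical planar segment to a 2D edge in the XY-plane.

    Returns (p0, p1, z_min, z_max) or None if the segment is not vertical.
    `points` is an (N, 3) array belonging to one planar segment.
    """
    pts = np.asarray(points, dtype=float)
    # Plane normal from PCA: the singular vector of the smallest singular value.
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    # A vertical plane has a (near-)horizontal normal.
    tilt = np.degrees(np.arcsin(np.clip(abs(normal[2]), 0.0, 1.0)))
    if tilt > vertical_tol_deg:
        return None
    # Project to XY and fit the dominant direction of the footprint.
    xy = pts[:, :2]
    centre = xy.mean(axis=0)
    _, _, vt2 = np.linalg.svd(xy - centre, full_matrices=False)
    direction = vt2[0]
    t = (xy - centre) @ direction
    p0 = centre + t.min() * direction
    p1 = centre + t.max() * direction
    # Keep the height range per edge, as described in the text.
    return p0, p1, pts[:, 2].min(), pts[:, 2].max()
```

Storing the z-range with each edge is what later allows the room-by-room height filtering of doors and windows.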

Evaluation Using Architectural Constraints
Architectural constraints may play a role in the analysis process if ground truth information in the form of a CAD/BIM model or a reference point cloud is unavailable. We utilize the perpendicularity and parallelism characteristics predominant in indoor man-made environments to evaluate the ability of the mapping system to capture the true geometry of its environment. In addition, we look at wall thickness, which, like parallelism, characterises the ability of the IMMS to maintain a good localisation when moving from one room to another. In particular, passing through a narrow opening such as a door while moving between rooms is a relatively critical issue for SLAM algorithms. Localisation errors will become measurable as variations in the observed wall thicknesses.
Two adjacent walls are often perpendicular in Manhattan World building geometry, and the two sides of a wall are parallel. Therefore, the corresponding planes generated from point clouds should be perpendicular and parallel, respectively. We detect and label the planes that are close to parallel or perpendicular within certain thresholds. For perpendicularity, the algorithm looks for almost perpendicular edges with nearby end points. Two edges are nominated as the two sides of a wall if they are almost parallel and the distance between them is close to a reasonable wall width.
Then, we compute the Root Mean Square Error (RMSE) of the angles to find the deviation from the perfect model. In addition, a histogram of the computed angles between the reconstructed planes is computed in order to get an overall impression of the reconstruction accuracy and to filter outlier edges.
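The labelling and RMSE computation described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: edges are assumed to be given as pairs of 2D endpoints, and the 5° angular tolerance is an assumed parameter.

```python
import numpy as np

def edge_angle_deg(e1, e2):
    """Acute angle between two 2D edges, in degrees (range [0, 90])."""
    d1 = np.asarray(e1[1]) - np.asarray(e1[0])
    d2 = np.asarray(e2[1]) - np.asarray(e2[0])
    cosang = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def label_and_rmse(edges, angle_tol=5.0):
    """Label near-parallel / near-perpendicular edge pairs and return the
    RMSE of their angular deviations from 0° and 90°, respectively."""
    par, perp = [], []
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            a = edge_angle_deg(edges[i], edges[j])
            if a <= angle_tol:
                par.append((i, j, a))            # deviation from 0°
            elif a >= 90.0 - angle_tol:
                perp.append((i, j, 90.0 - a))    # deviation from 90°
    rmse = lambda dev: float(np.sqrt(np.mean(np.square(dev)))) if dev else 0.0
    return par, perp, rmse([d for *_, d in par]), rmse([d for *_, d in perp])
```

The returned deviation lists can be binned directly (e.g. with `numpy.histogram`) to produce the angle-error histograms discussed in section 4.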

Evaluation Using a Floor Plan
In the following, we describe the approach followed to evaluate point clouds using a floor plan. We start the analysis process on edge pairs after transforming the point cloud-based edges to the floor plan coordinate system and matching the corresponding edges.

Transformation
Nowadays, many buildings have (often outdated) floor plans reflecting the as-planned state from before construction. We investigate the feasibility of using a simple 2D floor plan in analysing the accuracy of the point clouds. Since the 2D edges in this floor plan and those extracted from our point clouds are in two different coordinate systems, we have to register them into the same coordinate system for comparison. In order to preserve the geometry of the building, we use a rigid-body transformation. This transformation is only used to identify corresponding edges in the point cloud and floor plan, not to estimate residual distances or angles between an edge in the point cloud and an edge in the floor plan. As such comparisons strongly depend on the chosen bases of the coordinate systems, we only compare angles and distances between edges extracted from the point cloud to the angles and distances between the corresponding edges in the floor plan. The use of these shape characteristics keeps the comparison independent of the chosen coordinate systems.
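A rigid-body transformation (rotation plus translation) between corresponding 2D points can be estimated in closed form. The sketch below uses the standard SVD-based least-squares solution; the paper does not state which estimator was actually used, so this is one common choice rather than the authors' method.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rigid-body transform (rotation R, translation t)
    mapping the (N, 2) source points onto the destination points,
    via the SVD-based closed-form solution with scale fixed to 1."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

With 16 manually picked correspondences, as used in section 5.2.1, the same routine would yield the transform from the PC-based map to the floor plan system.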

Edge Matching
With both sets of edges co-registered in one coordinate system, we can match the corresponding edges. Firstly, we collect all point cloud-based edges that most probably belong to a room in the floor plan using a buffer around the room polygon (Figure 2, a). Secondly, we select which of the collected edges most likely represent a wall in that room. This is done using another buffer around each of the room's edges (Figure 2, b).
However, the buffer might contain edges that belong to the adjacent room and represent the other side of the wall. Therefore, it is necessary to refine the nominated edges and filter out wrongly detected edges. Because the edges are derived from planes whose normal vectors point towards the IMMS trajectory, we carry out the refinement based on the normal vector direction (Figure 2, c).
The remaining edges for each room represent not only the walls but might also represent windows, doors, and clutter. The proposed algorithm works on walls, which are usually reconstructed as larger and more reliable planes in comparison to other objects. Thus, in a further refinement step, only edges most probably corresponding to wall planes are kept, and other edges are discarded. For this purpose, we stored the height information of the 3D reconstructed planes (section 3.1) and process the data room by room, where we can estimate the floor and ceiling height for each room separately and exclude non-wall edges based on their height.
After removing the undesired edges during the aforementioned steps, we obtain edges E_pc that most likely represent walls in the building; they form what we call the PC-based map. The final step of the pipeline is to pair the edges from both sets. The result is a set of tuples of matched edges from E_pc and edges from the floor plan E_f.
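A much-simplified sketch of the buffer-based pairing: the hypothetical helper below matches each point-cloud edge to the nearest floor-plan edge whose 30 cm buffer contains the edge midpoint and whose direction agrees within 10°. The room-polygon, normal-direction, and height filters of the full pipeline are deliberately omitted, so this only illustrates the final pairing step.

```python
import numpy as np

def point_to_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab (all 2D)."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def match_edges(pc_edges, plan_edges, buffer=0.3, angle_tol_deg=10.0):
    """Pair each point-cloud edge with the nearest floor-plan edge whose
    buffer contains the edge midpoint and whose direction agrees within
    the angular tolerance. Returns index pairs (i_pc, i_plan)."""
    pairs = []
    for i, (p0, p1) in enumerate(pc_edges):
        mid = (np.asarray(p0, float) + np.asarray(p1, float)) / 2.0
        d_pc = np.asarray(p1, float) - np.asarray(p0, float)
        best, best_d = None, buffer
        for j, (q0, q1) in enumerate(plan_edges):
            d = point_to_segment_dist(mid, q0, q1)
            if d > best_d:
                continue
            d_fp = np.asarray(q1, float) - np.asarray(q0, float)
            cosang = abs(np.dot(d_pc, d_fp)) / (np.linalg.norm(d_pc) * np.linalg.norm(d_fp))
            ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if ang <= angle_tol_deg:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
    return pairs
```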

Analysis
In this subsection, we describe all computations needed in our analysis method to see how well the rooms are connected in the PC-based map, and thus how well the environment is reconstructed:

a) Error in angle in relation to distance:
We want to study the impact of the distance on the angle errors. Let E_pc and E_f be the edge sets extracted from the point cloud and the floor plan, respectively. Let (e_f, e_pc)_i be the pairs of matched edges, where i = 1, 2, ..., n and n is the number of pairs. We pick the i-th pair of edges (e_f, e_pc)_i and compute the angles (α_f, α_pc)_ij and distances (d_mf)_ij with respect to all other pairs of edges (e_f, e_pc)_j, where (α_f)_ij is the angle between (e_f)_i and (e_f)_j, (α_pc)_ij is the angle between (e_pc)_i and (e_pc)_j, (d_mf)_ij is the distance between the midpoints of (e_f)_i and (e_f)_j, and j = i+1, i+2, ..., n. For each pair of edges we compute the difference between the angle in the point cloud and the angle in the floor plan: (Δα = α_f - α_pc)_ij. Hence, we obtain n(n-1)/2 angle differences and corresponding distances between the edges (Δα, d_mf). We plot these values to investigate whether the distance between edges has an impact on the error in the angle between them.
Since the perpendicular and parallel edges are already labelled (section 3.2), we also compute these values for each type of edge pair separately, to see whether the error in angle over distance depends on the relation between the edges.
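The pairwise computation above can be sketched directly; this is an illustrative sketch assuming edges are pairs of 2D endpoints, with relative angles folded into [0°, 90°] so that edge orientation (which has no sign) does not affect the differences.

```python
import numpy as np

def angle_deg(edge):
    """Direction of a 2D edge in degrees, folded into [0, 180)."""
    (x0, y0), (x1, y1) = edge
    return np.degrees(np.arctan2(y1 - y0, x1 - x0)) % 180.0

def angle_error_vs_distance(pc_edges, plan_edges):
    """For every pair (i, j) of matched edges, compute the angle difference
    Δα = α_f - α_pc and the distance between the floor-plan edge midpoints.
    Returns the n(n-1)/2 samples (Δα, d_mf)."""
    n = len(pc_edges)
    samples = []
    # Fold a relative angle into [0, 90], the acute angle between two lines.
    fold = lambda a: min(abs(a) % 180.0, 180.0 - abs(a) % 180.0)
    for i in range(n):
        for j in range(i + 1, n):
            a_f = angle_deg(plan_edges[i]) - angle_deg(plan_edges[j])
            a_pc = angle_deg(pc_edges[i]) - angle_deg(pc_edges[j])
            delta = fold(a_f) - fold(a_pc)
            m_i = (np.asarray(plan_edges[i][0]) + np.asarray(plan_edges[i][1])) / 2.0
            m_j = (np.asarray(plan_edges[j][0]) + np.asarray(plan_edges[j][1])) / 2.0
            samples.append((delta, float(np.linalg.norm(m_i - m_j))))
    return samples
```

Plotting the second component against the first reproduces the error-over-distance plots of Figure 13, and restricting the loops to labelled parallel or perpendicular pairs gives the per-group analysis.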

b) Error in distance in relation to distance:
Besides the pairs of edges, we also want to look into pairs of their end points. However, because the point cloud-based map usually suffers from a completeness problem, we first need to find corner points by intersecting neighbouring edges. We use the topology of the floor plan and intersect edges from E_pc if their matched floor plan edges in E_f are connected.
Let P_f and P_pc be the intersection points obtained from the floor plan and the point cloud, respectively. Let (p_f, p_pc)_i be the pairs of points, where i = 1, 2, ..., n and n is the number of pairs. We pick the i-th pair of points (p_f, p_pc)_i and compute the distances (d_f, d_pc)_ij with respect to all other pairs of points (p_f, p_pc)_j, where (d_f)_ij is the distance between (p_f)_i and (p_f)_j, (d_pc)_ij is the distance between (p_pc)_i and (p_pc)_j, and j = i+1, i+2, ..., n. Then, the error in the distances (Δd = d_f - d_pc)_ij is plotted against the distances (d_f)_ij to check whether the error in distance depends on the distance between the floor plan points. Computing the error in distance in this way removes the systematic error resulting from errors in the transformation.
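The corner construction and the pairwise distance errors can be sketched as follows (an illustrative sketch: corners come from intersecting the infinite lines through two edges, which tolerates the incompleteness mentioned above, and distances are differenced pairwise so the rigid-transform error cancels).

```python
import numpy as np

def intersect_lines(e1, e2):
    """Intersection of the infinite lines through two 2D edges,
    or None if they are (numerically) parallel."""
    p = np.asarray(e1[0], float)
    r = np.asarray(e1[1], float) - p
    q = np.asarray(e2[0], float)
    s = np.asarray(e2[1], float) - q
    cross = r[0] * s[1] - r[1] * s[0]
    if abs(cross) < 1e-12:
        return None
    t = ((q - p)[0] * s[1] - (q - p)[1] * s[0]) / cross
    return p + t * r

def pairwise_distance_errors(plan_pts, pc_pts):
    """Δd = d_f - d_pc for every pair of matched corner points, together
    with the floor-plan distance d_f. Differencing distances (rather than
    point positions) cancels the error of the rigid-body transformation."""
    n = len(plan_pts)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            d_f = float(np.linalg.norm(np.asarray(plan_pts[i], float) - np.asarray(plan_pts[j], float)))
            d_pc = float(np.linalg.norm(np.asarray(pc_pts[i], float) - np.asarray(pc_pts[j], float)))
            out.append((d_f - d_pc, d_f))
    return out
```

With the 128 corners of section 5.2.3 this yields 128·127/2 = 8128 (Δd, d_f) samples, matching the count reported there.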

EXPERIMENTS
The proposed analysis method has been applied to a data set acquired at the University of Braunschweig, Germany. The surveyed floor is a typical office environment, and the rooms were nearly empty due to ongoing remodelling. The dataset was collected using our backpack indoor mobile mapping system (Vosselman, 2014).

Backpack Indoor Mobile Mapping System (BIMMS)
We have developed a novel indoor mapping system that allows accurate 3D data acquisition based on a feature-based 6DOF SLAM method. Using three 2D laser scanners provides sufficiently strong geometry to enable the estimation of the 3D pose and plane parameters. The current configuration of our backpack scanning system consists of three TOF laser range finders (Hokuyo UTM-30LX), as shown in Figure 3. LRF S0 is mounted on top of the backpack system. When worn by a person, S0 is approximately horizontal and located above the top of the user's head. The other two LRFs, S1 and S2, are mounted to the right and left of the top one and tilted by 45°, not only with respect to the moving direction but also with respect to the axis running through the operator's shoulders.
Figure 3. The used laptop and the current backpack system with four sensors mounted: three scanners S0, S1, and S2, and an Xsens IMU (below S0)

In the proposed SLAM procedure, the range observations of all scanners contribute to an accurate 6DOF pose estimation of the system. Our SLAM algorithm is based on planes, which constitute the map, and linear segments, which are detected in the single scanlines and matched to the map planes. The IMU observations have not been used in the current method.
In addition to the point clouds and the trajectory, the SLAM outputs the 3D reconstructed planes (rectangular faces, see Figure 4), therefore the pre-processing is simplified to the projection of the vertical planes to 2D.

Evaluation Using Architectural Constraints
Here, we look at the parallelism and perpendicularity constraints for the accuracy analysis process as described in section 3.2.

Parallelism
The main goal is to find all pairs of edges that most likely represent the two sides of walls. In order to do this, the thresholds are set such that the angle between the edges is within ±5° of 0° and the distance between them does not exceed 30 cm. Edges that meet these conditions are labelled (Figure 5), and the Root Mean Square Error (RMSE) of the angles is computed to estimate the deviation from perfect parallelism (Table 2). Moreover, we build a histogram of the angle errors with 0.5° bins, as shown in Figure 6 and Table 2. We want to analyse the ability of BIMMS to capture the true geometry of the environment and to maintain a good localisation when moving from one room to another.

Perpendicularity
We look for almost perpendicular edges (angles between 85° and 95°) (Figure 7) with endpoints that are within a proximity threshold (30 cm). The Root Mean Square Error (RMSE) of the deviation of the angles from 90° is computed (Table 2). Similar to the parallel edges, we build a histogram with 0.5° bins, as shown in Figure 8 and Table 2. Looking at the values listed in Table 2, we see that the errors for edges which are supposed to be parallel are larger. This shows that the angle between the two sides of a wall is determined more weakly than the angle between two perpendicular planes in the same room. This is consistent with the expected performance of SLAM algorithms (Vosselman, 2014), as the two wall sides are not seen at the same point in time.
However, we note in this experiment that the RMSE values in Table 2 are affected by wrongly reconstructed planes, which cause the high percentages in bins 6 and 7 of the above histograms, where the angles deviate more than 2.5° from the expected values of 0° and 90°. Therefore, it is hard to draw a conclusion from the RMSE values regarding the quality of the reconstructed planes unless we exclude these planes. By tracking the source of these high percentages, we found that they mainly come from open doors, where the planes are reconstructed from short segments. For example, planes 1 and 10 in Figure 9 might not be excluded by the used constraints.

Wall Thickness
In order to check the quality of the mapping system's positioning when moving from one room to another, we look at the wall thickness. We compute the thickness as the shortest distance between edges labelled as parallel and assumed to represent the two sides of a wall. The results in Figure 10 look reasonable, because they show that there are different types of walls in the mapped building, which is also evident in the digitized floor plan (Figure 11). They also show that there are two standard types of walls, and that the standard deviation of the thickness is around 1 cm, which gives confidence that the precision of the derived wall widths is in the range of 1 to 2 cm.
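The thickness measure could be sketched as below; this is a simplifying assumption rather than the authors' exact definition, taking the mean perpendicular distance of one edge's endpoints from the line through the other edge (for near-parallel edges this approximates the shortest distance between the two wall sides).

```python
import numpy as np

def wall_thickness(e1, e2):
    """Approximate thickness of a wall whose two sides are the
    near-parallel 2D edges e1 and e2: mean perpendicular distance of
    e2's endpoints from the (infinite) line through e1."""
    a = np.asarray(e1[0], float)
    b = np.asarray(e1[1], float)
    d = (b - a) / np.linalg.norm(b - a)
    n = np.array([-d[1], d[0]])  # unit normal of e1
    return float(np.mean([abs(np.dot(np.asarray(p, float) - a, n)) for p in e2]))
```

Collecting these values over all labelled wall pairs and binning them yields the thickness histogram of Figure 10.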
Figure 10. Percentages of the wall thicknesses

Transformation
The floor plan of the captured floor was digitized from a PDF document (Figure 11). Since E_pc and the digitized floor plan are in different coordinate systems, we estimate the rigid-body transformation parameters using 16 points to transform E_pc to the floor plan system.

Edge Matching
As mentioned in the methodology, we nominate all possible PC edges as candidates for the final analysis process using polygon and edge buffers (width 30 cm). Then, height and normal vector constraints are applied to exclude the undesired edges and keep the long edges that most likely represent the walls. Since there is not much clutter in the building, we set the normal vector threshold to 10° to be able to detect wrongly reconstructed walls. An edge is classified as a door and removed if the corresponding plane is connected to the floor and its height is less than 2.2 m. Similarly, an edge is considered to be related to a window and removed if the corresponding plane is not connected to the floor and ceiling and its height is less than 2 m. Figure 13 shows that the errors in angle between point cloud edges are small; approximately 81% are in the range [-1°, 1°].
There are two notable small peaks around ±3°. All these errors might belong to only a few planes that are not reconstructed properly. Overall, it seems that the errors in angle do not depend on the distance (as can be seen in Figure 13, b). The decrease in the density of the points over distance occurs because we have fewer edges over long distances (see Figure 12).
It is already clear from Figure 12 that there are a few poorly reconstructed planes that are likely the reason for the small peaks around ±3° in Figure 13, c. In order to identify the poorly estimated edges, we construct Figure 14, a, which visualizes all edge pairs with an angle error of 3° or more. This figure shows a pattern indicating which edges are involved in pairs with large angle errors, so-called outlier edges. This enables us to have a closer look into the problematic areas and see whether they are related to the system configuration, the algorithm, the environment, or the way of capturing the data. In order to get a better picture of the potential quality of the system, we exclude these five outlier edges from the computations (Figure 14, b and c). Table 3 shows the standard deviation values and the number of edge pairs involved in the computations for both cases, before and after excluding the outlier edges. We can see that the removal of the outlier edges leads to a 25% decrease in the estimated standard deviation. In order to see whether the angle errors over distance are related to the relation between edges (labelled perpendicular or parallel), we use the labels to compute the errors for each group separately. We find that both groups show a fairly similar pattern of errors over distance.
Since the BIMMS planes are reconstructed through the SLAM algorithm over time, we inspect the relation between the error in angle and time. The results show that the error does not grow over time. This is explained by the fact that the operator returned to the same corridor after visiting each room. The SLAM therefore implicitly resulted in frequent loop closures, preventing the errors from accumulating (Vosselman, 2014).

Pairs of points:
After computing the corner points (128 points) by intersecting the neighbouring walls' edges, 8128 pairs of points are involved in finding the error in distance in relation to distance (Figure 15). The errors in distance are sometimes quite large (~40 cm). The source of such errors is not only our system or the SLAM algorithm but also the outdated floor plan used. There are differences in the width of some walls between the floor plan used in the analysis and the actual construction (Figure 16). We carried out an analysis similar to the one in Figure 14, a to identify the poorly reconstructed corners (outlier points) using the distance between p_f and p_pc. The comparison of the results before and after excluding these outlier points does not show a significant improvement. It is not possible to conclude whether these errors are caused by the used ground truth model or by the mapping system.

CONCLUSIONS AND FUTURE WORK
This paper describes an evaluation pipeline for indoor laser scanning point clouds. The proposed method has the particular advantage that it is applicable even when there is no ground truth model or only an outdated map available. Moreover, the methodology is not limited to the presented indoor mobile mapping system. Although we do not consider the errors in the construction, the outdated map, and the SLAM algorithm, this evaluation method provides an overall impression of the reconstruction accuracy. We found that the connection between the two sides of a wall is weaker than between two perpendicular planes in a room, because they are not seen at the same point in time during SLAM. The statistics of the angle errors show that the rooms are nevertheless connected well.
Nevertheless, the proposed height constraints are not the best technique to exclude doors and windows, because the existence of, e.g., a long curtain next to a window may lead to the reconstruction of a large plane that extends from the floor up to nearly the ceiling. Such a plane is incorrectly labelled as a wall by the current constraints and will affect the resulting angle errors. In the future, we plan to use more sophisticated techniques to decrease the uncertainty of the labelling. In addition, the statistics on the distance errors at the corners or between matched edges are still very much affected by errors in the floor plan or by incorrectly identified correspondences between floor plan edges and point cloud-based edges. Therefore, these statistics do not provide insight into the accuracy of the reconstructed point cloud unless we include outlier detection in the correspondence identification and obtain a more reliable reference floor plan.
Preliminary results show the successful application of the current configuration of our mobile mapping system (BIMMS) in a Manhattan World building.In our immediate future work, we will use the presented evaluation approach to determine the optimal sensor configuration of our system.Different configurations of the sensors will be selected and some parts of indoor environments with different levels of complexity will be chosen carefully to serve as test areas.We also want to investigate the application of BIMMS as a mapping system and assess its performance in more complicated environments.

Figure 1. Workflow of the proposed methodology

Figure 2. The concept of the edge matching process. (a) Polygon buffer around a room in the floor plan (blue) with the nominated point cloud-based edges (red). (b) Edge buffers of the room's walls (blue) with the nominated point cloud-based edges (red). (c) Zoom-in (green circle) on the filtering of wrongly oriented edges (>10°) based on normal vector directions (blue arrow). [best viewed in colour]

Figure 4. The generated point cloud (colours show plane association) and the reconstructed planes

Figure 12 shows the 144 point cloud-based edges matched to the floor plan edges. All the statistical values required in the analysis process are computed based on these final matched edges.

Figure 12. The final point cloud edges (blue) that match the floor plan edges (red)

Analysis

Pairs of edges: For computing the error in angle in relation to distance, 10296 pairs of edges are involved in producing the results depicted in Figure 13, b and c. Figure 13, a shows, as an example, all edges (e_pc)_j that are compared to the first edge.
Figure 13. (a) All edge pairs that the first edge is involved in. (b) Errors in angle in relation to distance. (c) Histogram of the percentages of errors.
Figure 14. (a) All edge pairs with an angle error of 3° or more. (b) Errors in angle in relation to distance. (c) Histogram of the percentages of errors.

Figure 15. (a) Errors in distance in relation to distance. (b) Histogram of the percentages of errors.

Figure 16. Floor plan edges (blue) and point cloud edges (black), with a clearly wrong floor plan edge inside the red circle.

Table 2. Values of the mean, standard deviation, and number of edge pairs involved in the computations for both cases, before and after excluding outlier edges