ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Articles | Volume II-3/W2
ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-3/W2, 7–11, 2013

02 Oct 2013

A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

I.M. Creusen1,2, L. Hazelhoff1,2, and P.H.N. De With1,2
  • 1Eindhoven University of Technology, Eindhoven, The Netherlands
  • 2Cyclomedia Technology, Zaltbommel, The Netherlands

Keywords: Traffic Sign Detection, Sliding Window, Object Detection, Frequency Domain

Abstract. In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated in the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm, in particular, which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain, for kernels larger than a certain size. We show that by carefully reordering the sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we propose using the overlap-add method to keep memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g. chi-squared kernel approximations and sub-class splitting algorithms, can be applied more easily and at a lower performance penalty because of the improved scalability.
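The core observation in the abstract is that scoring a linear SVM over every window position is mathematically a cross-correlation of the feature map with the SVM weights, which can therefore be computed in the frequency domain. The sketch below illustrates this equivalence on a single-channel feature map; all sizes, names and the use of plain NumPy FFTs are illustrative assumptions, not the paper's actual implementation (which additionally reorders transforms and uses overlap-add for memory efficiency).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one feature channel and one linear-SVM weight window.
H, W = 64, 64    # feature map size
kh, kw = 8, 8    # sliding-window (SVM kernel) size

feat = rng.standard_normal((H, W))      # a single feature map
svm_w = rng.standard_normal((kh, kw))   # linear SVM weights, one per cell

def score_direct(feat, w):
    """Naive sliding window: dot product of the weights with each window."""
    H, W = feat.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feat[y:y + kh, x:x + kw] * w)
    return out

def score_fft(feat, w):
    """Same scores via the frequency domain.

    Cross-correlation equals pointwise multiplication of the feature map's
    spectrum with the conjugated spectrum of the (zero-padded) kernel.
    Cropping to the 'valid' region discards the circular wrap-around.
    """
    H, W = feat.shape
    kh, kw = w.shape
    F = np.fft.fft2(feat)
    Wf = np.fft.fft2(w, s=(H, W))          # zero-pad kernel to map size
    corr = np.fft.ifft2(F * np.conj(Wf)).real
    return corr[:H - kh + 1, :W - kw + 1]

# Both paths produce the same detection score map.
assert np.allclose(score_direct(feat, svm_w), score_fft(feat, svm_w))
```

With multiple feature channels, the per-channel spectra are multiplied and summed before the single inverse transform, which is where the paper's reordering pays off: the feature-map FFTs are computed once and reused for every kernel and, via shared transformed kernels, for every pyramid scale.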