A Novel Building Boundary Extraction Method for High-resolution Aerial Image

Jihui Tu 

Electronics & Information School of Yangtze University, Jingzhou, Hubei 434023, China

Corresponding Author Email: green666@126.com

Page: 19-22 | DOI: http://dx.doi.org/10.18280/rces.010205

Abstract: 

In this paper, a novel method is presented for extracting buildings from high-resolution aerial images using invariant color features and shadow information. The framework of the proposed method has three main steps. First, the shadow regions are extracted using invariant color features. Second, the suspected building area is obtained from the sun angle and illumination direction, according to the relationship between a building and its cast shadow. Finally, the building boundary is extracted using the level set method. The results, evaluated on real data, show that this method is feasible and effective for building extraction. They also show that the proposed method is easily applicable and well suited to building boundary extraction from high-resolution images.

Keywords: 

Building boundary extraction, Invariant color features, Shadows extraction

1. Introduction

Buildings are essential shelters for humans, and extracting them from high-resolution aerial images is becoming increasingly important for many remote sensing applications, such as automated map making, urban planning and damage detection. Unfortunately, it is tedious and time-consuming for a human expert to manually label buildings in a given aerial image. At the same time, automatic building extraction is difficult to achieve with generic algorithms because of diverse texture information, complex environments and occlusion by trees. Shadows exist in most high-resolution aerial remote sensing images, and their presence provides useful cues for extracting buildings. Extracting buildings with the help of shadow information is therefore a challenging but rewarding problem.

In the last two decades, researchers have developed many automated building detection methods that use shadow information from aerial images. McKeown et al. [1] detected shadows in aerial images by thresholding and showed that building shadows and their boundaries contain important information about building heights and roof shapes. Zimmermann [2] integrated basic models and multiple cues for building detection, using color, texture, edge information, shadow, and elevation data; his algorithm extracts buildings using blob detection. Tsai [3] compared several invariant color spaces, including the HSI, HSV, HCV, YIQ and YCbCr models, for shadow detection and compensation in aerial images. Huertas and Nevatia [4] exploited the relationship between buildings and shadows: they first extracted corners from the image, labeled them as either bright or shadow corners, used the bright corners to form rectangles, and used the shadow corners to confirm building hypotheses for these rectangles. Matsuoka et al. [5] also used shadows to model buildings and showed that shadow information can be used to estimate damage and changes in buildings. Neuenschwander et al. [6] and Rüther et al. [7] used snake models to determine exact building shapes from edge information.

In this study, we propose a novel method for extracting building boundaries from high-resolution images. Our method first extracts the shadow regions using improved invariant color features; the suspected building region is then obtained from the sun angle and illumination direction, according to the relationship between a building and its cast shadow; finally, the building boundary is extracted using the level set method. The proposed method is tested in our experiments on high-resolution aerial images. Theoretical analysis and experimental results show that buildings can be extracted effectively by means of the proposed algorithm.

2. Methodology

2.1 Shadow region identification

Due to the absence of an infrared band, the red, green and blue bands were combined following the method suggested by Sibiryakov [8]. Color invariants, originally proposed by Gevers et al. [9], are a set of color models that are independent of the viewpoint, surface orientation, illumination direction, illumination intensity, and highlights. Shadows are identified through the use of color invariant features [10]. We detect shadows in the aerial image with the following equation:

$S C(i, j)=\frac{4}{\pi} \times \arctan \left(\frac{R(i, j)-\sqrt{R(i, j)^{2}+G(i, j)^{2}+B(i, j)^{2}}}{R(i, j)+\sqrt{R(i, j)^{2}+G(i, j)^{2}+B(i, j)^{2}}}\right)$             (1)

where SC(i, j) is the value of the color invariant feature, and R(i, j), G(i, j) and B(i, j) are the red, green and blue band values at pixel (i, j). The resulting image is thresholded with Otsu's method, and all pixels below the threshold are considered shadow. In order to detect individual buildings, the shadow mask is segmented into connected regions using the 8-connected neighborhood method.
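
As an illustration, the following Python sketch (not the authors' original implementation) carries out this step, assuming the image is available as an H×W×3 NumPy array; scikit-image is used only for Otsu thresholding and 8-connected labelling, and the small epsilon in the denominator is an added safeguard against division by zero.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_shadow_regions(rgb):
    """Label candidate shadow regions using Eq. (1) and Otsu thresholding."""
    R, G, B = (rgb[..., k].astype(np.float64) for k in range(3))
    norm = np.sqrt(R**2 + G**2 + B**2) + 1e-6                # avoid division by zero
    sc = (4.0 / np.pi) * np.arctan((R - norm) / (R + norm))  # color invariant SC(i, j)

    t = threshold_otsu(sc)                                   # automatic threshold
    shadow_mask = sc < t                                     # pixels below t -> shadow

    # split the shadow mask into per-building shadows (8-connected neighborhood)
    labels = label(shadow_mask, connectivity=2)
    centroids = [r.centroid for r in regionprops(labels)]
    return labels, centroids
```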

2.2 Estimating the shadow direction

With increased spatial resolution, shadows cast by buildings or trees become an apparent class that cannot be ignored. Shadow geometry is closely related to building shape, the solar altitude angle, the solar azimuth angle, the luminous intensity, and so on. For a building of height H, the shadow length L is mainly determined by the solar altitude angle α through the following relation [11]:

$L=H \times \cot \alpha$   (2)

When the remote sensing image is acquired, the sun's height and position relative to the buildings remain constant, so all building shadows point in the same direction. Let the center of the shadow region be at (xs, ys) and the center of the rooftop region be at (xb, yb); the shadow direction is then

$\theta=\arctan \left(\frac{y_{b}-y_{s}}{x_{b}-x_{s}}\right)$       (3)

The quadrant in which the shadow direction $\theta$ lies is very important. We adjust $\theta$ according to its actual angle as

$\theta=\left\{\begin{array}{lcc}\theta & \text { if } & x_{s}>x_{b}, y_{s}<y_{b} \\ \pi-\theta & \text { if } & x_{s}<x_{b}, y_{s}<y_{b} \\ \pi+\theta & \text { if } & x_{s}<x_{b}, y_{s}>y_{b} \\ 2 \pi-\theta & \text { if } & x_{s}>x_{b}, y_{s}>y_{b}\end{array}\right.$    (4)
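
For concreteness, a small Python sketch of Eqs. (3) and (4) is given below; it is not the authors' code. Here Eq. (3) is interpreted as the magnitude of the angle between the two centroids, which Eq. (4) then places in the correct quadrant, and the centroids are assumed to be given in ordinary Cartesian (x, y) coordinates.

```python
import math

def shadow_direction(xs, ys, xb, yb):
    """Estimate the shadow direction from the shadow centroid (xs, ys)
    and the rooftop centroid (xb, yb), following Eqs. (3)-(4)."""
    dx, dy = abs(xb - xs), abs(yb - ys)
    theta = math.atan2(dy, dx)        # magnitude of Eq. (3); robust when dx == 0

    if xs > xb and ys <= yb:          # case 1 of Eq. (4)
        return theta
    elif xs <= xb and ys <= yb:       # case 2
        return math.pi - theta
    elif xs <= xb and ys > yb:        # case 3
        return math.pi + theta
    else:                             # case 4: xs > xb and ys > yb
        return 2.0 * math.pi - theta
```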

2.3 Extracting the building boundary 

From the estimated shadow direction and equations (3) and (4), we compute the center of the building boundary. Let d denote the distance between the center of the building boundary and the center of the shadow boundary, and let (xs, ys) be the center of the shadow region; the rooftop center (xb, yb) is then obtained as

$\left(x_{b}, y_{b}\right)=\left(x_{s}+d \cos \theta, y_{s}+d \sin \theta\right)$        (5)

where d is 17 pixels in our test set. As shown in Fig. 1, we then define a candidate building region as a buffer area: the buffer is a rectangle centered at the point (xb, yb) whose area is twice that of the shadow region. Finally, we segment the building boundary within this buffer using the level set method [12].

Figure 1. Building boundary extraction
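
A minimal sketch of this localisation step is shown below, assuming the shadow centroid, the direction theta from Eq. (4) and the shadow area are already known. The paper fixes only the buffer's centre and its area (twice the shadow area); the square shape used here is an assumption made for simplicity, and d defaults to the 17 pixels reported above.

```python
import math

def building_buffer(xs, ys, theta, shadow_area, d=17.0):
    """Candidate building region: centre from Eq. (5), rectangular buffer
    with twice the shadow area (square shape assumed in this sketch)."""
    xb = xs + d * math.cos(theta)                 # Eq. (5)
    yb = ys + d * math.sin(theta)

    side = math.sqrt(2.0 * shadow_area)           # buffer area = 2 * shadow area
    half = side / 2.0

    # bounding box (x_min, y_min, x_max, y_max) of the candidate region;
    # the level set segmentation [12] would then be run inside this window
    return (xb, yb), (xb - half, yb - half, xb + half, yb + half)
```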

3. Experiment Results

In this experimental part, we tested our building detection method on 10 aerial images taken under the same illumination conditions. All test images are in RGB color format with a 0.2 m/pixel resolution and were acquired over Yangjiang, Guangdong, in April 2012. The building detection results can be seen in Figure 2 and show that our method works fairly well on the test images.

Figure 2. Building extraction example

To further validate the proposed method, the DP (detection performance) and BF (branching factor) were used to evaluate its performance. Many researchers use these measures to report the efficiency of their methods. They are defined as

$D P=\frac{T P}{T P+T N} \times 100$            (6)

$B F=\frac{F P}{T P+F P} \times 100$       (7)

where TP (true positive) is the number of buildings detected both manually and by the automatic approach, FP (false positive) is the number of buildings detected by the automatic approach but not manually, and TN (true negative) is the number of buildings detected manually but not by the automatic approach. Table 1 shows the DP and BF values for the dataset of 10 aerial images. As can be seen, 153 of the 166 buildings are correctly detected in the test images. As a result, we obtain a DP of 92.5% and a BF of 12.6% for the 166 test buildings. With respect to the DP and BF values, these figures confirm quantitatively that our method yields good detection results.
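
Written out directly, Eqs. (6) and (7) correspond to the following small helper functions (a straightforward transcription, using the paper's definitions of TP, TN and FP given above):

```python
def detection_performance(tp, tn):
    """DP of Eq. (6): percentage of manually identified buildings that are detected."""
    return 100.0 * tp / (tp + tn)

def branching_factor(tp, fp):
    """BF of Eq. (7): percentage of detections that have no manual counterpart."""
    return 100.0 * fp / (tp + fp)
```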

Table 1. Evaluation of the detection results in the test image set

Test image    Buildings    TP     FP
No.1          20           18     2
No.2          9            9      1
No.3          6            6      0
No.4          32           30     3
No.5          25           23     3
No.6          10           9      2
No.7          21           20     3
No.8          7            7      0
No.9          31           29     3
No.10         35           32     5
Total         166          153    22

4. Conclusions

This paper has presented a novel building extraction method for high-resolution aerial imagery. We use color invariant features to detect shadows, then estimate the suspected building zone from the shadow regions and their direction. Finally, the building boundary is extracted using the level set method. The results, evaluated on real data, demonstrate that the method is feasible and effective for building extraction. In the future we intend to address occlusion by trees and overlapping shadows between buildings, which would further improve the quality of building extraction.

References

1. R. B. Irvin and D. M. McKeown, Methods for Exploiting the Relationship between Buildings and Their Shadows in Aerial Imagery, IEEE Transactions on Systems, Man and Cybernetics, vol. 19, no. 1, pp.1564–1575, 1989.

2. P. Zimmermann, A New Framework for Automatic Building Detection Analyzing Multiple Cue Data, IAPRS, vol. 33, 2000, pp.1063–1070.

3. V. J. D. Tsai, A Comparative Study on Shadow Compensation of Color Aerial Images in Invariant Color Models, IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 6, pp. 1661–1671, 2006.

4. A. Huertas and R. Nevatia, Detecting Buildings in Aerial Images, Computer Vision, Graphics and Image Processing, vol. 41, pp. 131–152, 1988.

5. T. Vu, M. Matsuoka and F. Yamazaki, Shadow Analysis in Assisting Damage Detection Due to Earthquake from Quickbird Imagery, Proceedings of the 10th International Society for Photogrammetry and Remote Sensing Congress, pp. 607–611, 2004.

6. W. Neuenschwander, P. Fua, G. Szekely and O. Kubler, Automatic Extraction of Man-Made Objects from Aerial and Space Images, Birkhäuser Verlag, 1995.

7. H. Ruther, H. M. Martine and E. G. Mtalo, Application of Snakes and Dynamic Programming Optimization Technique in Modelling of Buildings in Informal Settlement Areas, ISPRS Journal of Photogrammetry and Remote Sensing, vol. 56, no. 4, pp. 269–282, July, 2002.

8. A. Sibiryakov, House Detection from Aerial Color Images, Internal Report, Institute of Geodesy and Photogrammetry, ETH Zurich, Switzerland, 1996.

9. T. Gevers and A. W. M. Smeulders, PicToSeek: Combining Color and Shape Invariant Features for Image Retrieval, IEEE Transactions on Image Processing, vol. 9, no. 5, pp. 102–119, 2000.

10. N. Shorter and T. Kasparis, Automatic Vegetation Identification and Building Detection from a Single Nadir Aerial Image, Remote Sensing, vol. 1, no. 2, pp. 731–757, 2009.

11. P. M. Dare, Shadow Analysis in High-Resolution Satellite Imagery of Urban Areas, Photogrammetric Engineering & Remote Sensing, vol. 71, no. 2, pp. 169–177, 2005.

12. M. Rousson and N. Paragios, Prior Knowledge, Level Set Representations & Visual Grouping, International Journal of Computer Vision, vol.76, no.3, pp.231-243, 2008.