An Object-Oriented Remote Sensing Image Segmentation Approach Based on Edge Detection
TAN Yu-min1, HUAI Jian-zhu1, TANG Zhong-shi2
1. School of Transportation Science & Engineering, Beihang University, Beijing 100191, China; 2. Department of Civil Engineering, Tsinghua University, Beijing 100084, China
Abstract: Advances in satellite sensor technology have enabled better discrimination of various landscape objects, and a wide variety of image segmentation approaches for extracting conceptual objects and patterns have accordingly been explored. In order to effectively utilize the edge and topological information in high-resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The SUSAN edge filter is first applied to the panchromatic band of QuickBird imagery, with a spatial resolution of 0.61 m, to obtain an edge map. Guided by this edge map, a two-phase region-based segmentation method operates on the image fused from the panchromatic and multispectral QuickBird bands to produce the final partition. In the first phase, a quad-tree grid consisting of squares with sides parallel to the left and top image borders recursively aggregates square subsets wherever a uniformity measure is satisfied, deriving image object primitives. Before the merging of the second phase, the contextual and spatial information of the resulting squares (e.g., neighbor relationships and boundary coding) is retrieved efficiently by means of the quad-tree structure. A region merging operation is then performed on these primitives, with a merging criterion that integrates the edge map and region-based features. The approach has been tested on QuickBird images of a site in the Sanxia (Three Gorges) area, and the result is compared with those of ENVI Zoom and Definiens. A quantitative evaluation of segmentation quality is also presented. Experimental results demonstrate stable convergence and efficiency.
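The two-phase scheme described above can be illustrated with a minimal sketch. The code below is not the paper's implementation; the function names, the variance-based uniformity test, the threshold values, and the edge-weighted merging cost are illustrative assumptions standing in for the SUSAN-derived edge map and the authors' actual criteria.

```python
import numpy as np


def quadtree_split(img, x, y, size, var_thresh, min_size, leaves):
    """Phase 1 (sketch): recursively split a square block into four
    quadrants until it is spectrally uniform (variance <= var_thresh)
    or reaches min_size; uniform blocks become object primitives."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        leaves.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_split(img, x + dx, y + dy, half,
                           var_thresh, min_size, leaves)


def merge_cost(mean_a, mean_b, edge_strengths, w_edge=1.0):
    """Phase 2 (sketch): merging criterion combining the spectral
    difference of two primitives with the mean edge-map response
    along their shared boundary; a strong edge between regions
    raises the cost and thus inhibits merging."""
    return abs(mean_a - mean_b) + w_edge * float(np.mean(edge_strengths))


# Demo: an 8x8 image whose left half is 0 and right half is 1
# splits into exactly four uniform 4x4 primitives.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
leaves = []
quadtree_split(img, 0, 0, 8, var_thresh=0.01, min_size=2, leaves=leaves)

# Two primitives separated by strong edge evidence cost more to merge
# than two identical primitives with no edge between them.
cost_across_edge = merge_cost(0.0, 1.0, np.array([0.9, 0.8]))
cost_uniform = merge_cost(0.5, 0.5, np.array([0.0]))
```

In a full implementation the quad-tree nodes would also supply the neighbor relationships and boundary coding mentioned in the abstract, and the merge loop would repeatedly fuse the adjacent pair of lowest cost until no cost falls below a stopping threshold.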