Targeting the characteristics of high-resolution remote sensing imagery, a building-outline vectorization method based on multi-class segmentation and template matching is proposed. First, multi-scale SVM segmentation is applied to the image to obtain candidate building points. Then, after the building's main orientations are obtained via the Radon transform combined with principal-axis analysis, the idea of multi-class segmentation is introduced: an energy function is constructed and solved with the α-expansion algorithm, partitioning the outline into three classes of edge segments. Next, shape-prior edge templates are constructed and precisely positioned to locate the actual edge lines, removing jagged deformations and reducing the influence of extraction errors. Finally, corner points are obtained by intersecting adjacent, mutually orthogonal edge lines, and connecting the corner points in sequence yields a regularized building outline. Compared with similar methods, this approach exploits the orientation information of edge points and the prior knowledge that neighboring edge points tend to belong to the same class, so it yields an approximately globally optimal edge-segment partition and avoids the difficulty and adverse effects of choosing an initial point and a processing order during regularization; at the same time, it makes full use of image features to precisely position the edge segments, mitigating the influence of building-extraction errors. Experiments on different images show that the method produces regularized building outlines; excluding buildings omitted during extraction, the vectorization results achieve an average correctness of 89%, completeness of 98%, shape similarity of 87%, and overall quality of 85%.
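The final step above, obtaining corner points by intersecting adjacent orthogonal edge lines, can be sketched as follows. This is a minimal illustration, assuming each fitted edge is expressed in normal form x·cos θ + y·sin θ = ρ; `corner_point` is a hypothetical helper, not the paper's implementation:

```python
import numpy as np

def corner_point(theta1, rho1, theta2, rho2):
    """Intersect two edge lines given in normal form
    x*cos(theta) + y*sin(theta) = rho. For a building with
    principal orientations theta and theta + 90 deg, adjacent
    edges are orthogonal, so the 2x2 system is well conditioned."""
    A = np.array([[np.cos(theta1), np.sin(theta1)],
                  [np.cos(theta2), np.sin(theta2)]])
    b = np.array([rho1, rho2])
    return np.linalg.solve(A, b)  # (x, y) coordinates of the corner

# Adjacent edges of an axis-aligned building: the lines x = 2 and y = 3.
corner = corner_point(0.0, 2.0, np.pi / 2, 3.0)  # -> approximately (2, 3)
```

Chaining this intersection over each pair of adjacent edge lines, in clockwise order, reproduces the regularized polygon.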
The extraction of building outlines from high spatial resolution imagery is a key element of numerous geospatial applications and has been addressed by various approaches. However, the final extraction results are often irregular or inaccurate owing to the boundary regularization algorithm and the variability of building shapes. This paper proposes a new method for the regularization and vectorization of two-dimensional building outlines from high spatial resolution imagery. To accomplish this task, we use image segmentation to obtain building blobs and detect their two main orientations (θ and θ + 90°). To effectively refine the boundaries, we then divide the boundary points, obtained in clockwise order, into the first principal, second principal, and unknown orientation classes. Next, we use least squares template matching to precisely position each edge and reduce accuracy loss. Finally, the building outlines are generated by connecting the corner points obtained by intersecting adjacent lines. Excluding omitted buildings, experimental results confirm the ability of the presented system to effectively and reliably extract building outlines with an overall average correctness of 89%, completeness of 98%, shape accuracy of 87%, and quality of 85%. This method can be widely used in various applications. Specifically, our method can work with relatively low-accuracy image segmentation, so it can be applied to the vectorization of building outlines over large areas. However, our method focuses only on building outlines and does not consider the internal structure of the building. In the future, more attention should be given to solving this issue.
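The division of clockwise boundary points into first-principal, second-principal, and unknown orientation classes can be illustrated with a simplified local rule. This is only a sketch: it labels each point by the direction of its outgoing segment relative to the main orientation θ, whereas the paper minimizes an energy function with the α-expansion algorithm so that neighboring points prefer the same class; `classify_boundary_points` and the tolerance parameter are illustrative assumptions:

```python
import numpy as np

def classify_boundary_points(points, theta, tol_deg=20.0):
    """Assign each point of a closed, clockwise boundary to one of
    three classes: 0 = first principal orientation (theta),
    1 = second principal orientation (theta + 90 deg), 2 = unknown."""
    pts = np.asarray(points, dtype=float)
    # Direction of the segment leaving each point (boundary is closed).
    d = np.roll(pts, -1, axis=0) - pts
    ang = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 180.0
    t = np.degrees(theta) % 180.0

    def near(a, b):
        # Angular distance on the half-circle of undirected lines.
        diff = np.abs(a - b)
        return np.minimum(diff, 180.0 - diff) < tol_deg

    labels = np.full(len(pts), 2, dtype=int)  # default: unknown
    labels[near(ang, t)] = 0
    labels[near(ang, (t + 90.0) % 180.0)] = 1
    return labels

# Axis-aligned square boundary: alternating vertical/horizontal edges.
square = [(0, 0), (0, 2), (2, 2), (2, 0)]
labels = classify_boundary_points(square, theta=0.0)  # -> [1, 0, 1, 0]
```

Points left in the unknown class (e.g., on jagged or oblique stretches) are exactly those the subsequent template-matching and regularization stages must resolve.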