Journal of Robotics, Volume 2017, 2017-01-11
Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching
Research Article
Tianyang Cao 1,2, Haoyuan Cai 1, Dongming Fang 1, Hui Huang 1, Chang Liu 1
DOI: 10.1155/2017/1646095
Received 2016-07-18; accepted for publication 2016-11-15; published 2016-11-15.
Abstract

Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization that is especially suited to indoor floor-cleaning robots. Common methods such as SLAM are easily defeated when the robot is kidnapped after a collision, or confused by similar-looking objects. A keyframes global map establishing method is therefore needed for robot localization across multiple rooms and corridors. Content-based image matching is the core of this method, which handles such situations by establishing keyframes that contain both floor and distorted wall images. The image distortion caused by the robot's viewing angle and movement is analyzed and derived, and an image matching solution is presented, consisting of extracting the overlap regions between keyframes and rebuilding each overlap region through subblock matching. To improve accuracy, ceiling point detection and mismatched-subblock checking are incorporated. This matching method processes environment video effectively: in experiments, fewer than 5% of frames are extracted as keyframes to build the global map, and these keyframes are widely spaced yet mutually overlapping. Using this map, the robot can localize itself by matching its real-time camera frames against the keyframes. Even with many similar objects or backgrounds in the environment, or after the robot is kidnapped, localization is achieved with a position RMSE below 0.5 m.
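
To make the keyframe selection idea above concrete, here is a minimal sketch of thinning a video stream into sparse, mutually overlapping keyframes. It assumes Python with OpenCV; the ORB-matching overlap proxy and every threshold are our assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a frame becomes a new keyframe
# when its estimated overlap with the previous keyframe drops below a
# threshold, keeping keyframes sparse but still mutually overlapping.
# The ORB-match ratio as an overlap proxy and all thresholds are assumed.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def overlap_ratio(img_a, img_b):
    """Fraction of features in img_a that find a good match in img_b."""
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    good = [m for m in matcher.match(des_a, des_b) if m.distance < 40]
    return len(good) / max(len(kp_a), 1)

def select_keyframes(video_path, min_overlap=0.3):
    cap = cv2.VideoCapture(video_path)
    keyframes = []
    ok, frame = cap.read()
    while ok:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # New keyframe once overlap with the last one has become small,
        # so neighbouring keyframes are far apart yet still overlap.
        if not keyframes or overlap_ratio(gray, keyframes[-1]) < min_overlap:
            keyframes.append(gray)
        ok, frame = cap.read()
    cap.release()
    return keyframes
```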

License

Copyright © 2017 Tianyang Cao et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figures

The relationship between the wall and the upward-looking camera. (a) Robot and camera. (b) Relationship between the front wall, the side wall, and the camera.

The image processing of content-based image matching.

The image processing of rotation and translation.

The overlap region extraction process. (a) Frame A, (b) Frame B, (c) the translation result, (d) the rotation result, (e) the overlap between the two frames, and (f) the overlap region mask.
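
The step the panels above depict can be sketched as follows, assuming the inter-frame rotation angle and translation have already been estimated (e.g., from odometry or feature matching). The OpenCV warp-and-mask approach is our illustration, not necessarily the paper's exact procedure.

```python
# Hedged sketch: align frame B to frame A with a known rotation and
# translation, then take the overlap mask as the pixels the warped
# frame B actually covers inside frame A. How the rigid transform is
# obtained (odometry vs. feature matching) is an assumption here.
import cv2
import numpy as np

def extract_overlap(frame_a, frame_b, angle_deg, tx, ty):
    h, w = frame_a.shape[:2]
    # 2x3 rigid transform: rotate about the image centre, then translate.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    M[0, 2] += tx
    M[1, 2] += ty
    warped_b = cv2.warpAffine(frame_b, M, (w, h))
    # Warp an all-white image the same way to find the valid-pixel mask.
    valid = cv2.warpAffine(np.full((h, w), 255, np.uint8), M, (w, h))
    overlap_mask = (valid > 0).astype(np.uint8) * 255
    overlap_a = cv2.bitwise_and(frame_a, frame_a, mask=overlap_mask)
    return overlap_a, warped_b, overlap_mask
```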

The connecting lines on the ceiling. (a) The connecting lines on the original ceiling. (b) The connecting lines on the ceiling after rotation and translation.

The image processing of ceiling feature point detection (red points are the feature points). (a) Original feature points in frame A, (b) ceiling point detection result in frame A, (c) original feature points in frame B, and (d) ceiling point detection result in frame B.
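
One plausible reading of this ceiling point detection step is a rigid-motion consistency check: ceiling points all lie on one plane and therefore move rigidly between frames, while wall points distort with viewpoint. The sketch below substitutes OpenCV's RANSAC rigid fit for the paper's own check; the residual threshold is an assumption.

```python
# Speculative sketch: keep matched feature points whose motion between two
# frames fits a single rigid (rotation + translation) model, treating the
# RANSAC inliers as ceiling points. cv2.estimateAffinePartial2D stands in
# for the paper's own consistency test; max_residual is assumed.
import cv2
import numpy as np

def ceiling_point_mask(pts_a, pts_b, max_residual=3.0):
    """pts_a, pts_b: Nx2 float32 arrays of matched points in frames A/B."""
    model, inliers = cv2.estimateAffinePartial2D(
        pts_a, pts_b, method=cv2.RANSAC, ransacReprojThreshold=max_residual)
    if model is None:
        return np.zeros(len(pts_a), dtype=bool)
    return inliers.ravel().astype(bool)  # True where the rigid model fits
```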

The overlap region rebuilding process.

The overlap region rebuilt from two similar frames. (a) The overlap region in frame A, (b) the overlap region in frame B, and (c) the rebuilt overlap region in frame A.

The overlap region rebuilt from two dissimilar frames. (a) Frame A, (b) frame B, (c) the overlap region in frame A, and (d) the rebuilt overlap region in frame A.
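
The rebuilding step shown in the last two figures can be approximated by per-subblock template matching, with low-scoring blocks rejected as mismatching subblocks. The sketch assumes grayscale images; the block size and score threshold are our choices, not values from the paper.

```python
# Hedged sketch of subblock matching: split frame A's overlap region into
# fixed-size subblocks, locate each in frame B by normalised cross-
# correlation, and skip low-scoring blocks as mismatches. Grayscale
# images, block size, and min_score are all assumptions.
import cv2
import numpy as np

def rebuild_overlap(overlap_a, frame_b, block=32, min_score=0.6):
    h, w = overlap_a.shape[:2]
    rebuilt = np.zeros_like(overlap_a)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            sub = overlap_a[y:y + block, x:x + block]
            if cv2.countNonZero(sub) == 0:   # block lies outside the mask
                continue
            res = cv2.matchTemplate(frame_b, sub, cv2.TM_CCOEFF_NORMED)
            _, score, _, (bx, by) = cv2.minMaxLoc(res)
            if score < min_score:            # mismatching subblock: reject
                continue
            rebuilt[y:y + block, x:x + block] = frame_b[by:by + block,
                                                        bx:bx + block]
    return rebuilt
```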

The image processing of keyframes global map establishing.

The image processing of robot localization.
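
As a rough illustration of the localization step, the sketch below scores the live camera frame against every keyframe in the global map and adopts the stored position of the best match; a real system would further refine the pose from the relative transform. The ORB-based similarity score is our assumption, not the paper's matching function.

```python
# Hypothetical localization sketch: the live frame is matched against each
# keyframe and the best-matching keyframe's stored map position is adopted.
# ORB descriptor matching as the similarity measure is an assumption.
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def similarity(img_a, img_b):
    """Number of good ORB matches between two grayscale images."""
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    return sum(1 for m in bf.match(des_a, des_b) if m.distance < 40)

def localize(live_frame, keyframe_map):
    """keyframe_map: list of (keyframe_image, (x, y) position) pairs."""
    image, position = max(keyframe_map,
                          key=lambda kf: similarity(live_frame, kf[0]))
    return position  # position of the best-matching keyframe
```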

The experiment site. (a) The robot moving in the experiment site. (b) The ceiling in the robot vision.

The global position relationship of the keyframe sequence.

Indoor environment described by the keyframe sequence.

The robot route resolved through the keyframes global map and content-based image matching.

The global position relationship of the keyframe sequence; “∗” marks the position of each keyframe.

The global map for the kidnap test, described by the keyframe sequence.

The kidnap experiment: eight consecutive kidnaps were executed by the experimenters; the sign “o” marks the position fix obtained by the robot after each landing.

Corresponding Authors

1. Haoyuan Cai. State Key Laboratory of Transducer Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China. hycai@mail.ie.ac.cn
2. Chang Liu. State Key Laboratory of Transducer Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China. tuengineer@qq.com

Recommended Citation

Tianyang Cao, Haoyuan Cai, Dongming Fang, Hui Huang, Chang Liu. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching. Journal of Robotics, Vol. 2017 (2017).
