Computational Intelligence and Neuroscience, Volume 2017, 2017-06-27
Object Extraction in Cluttered Environments via a P300-Based IFCE
Research Article
Xiaoqian Mao 1 Wei Li 2 , 3 Huidong He 1 Bin Xian 1 Ming Zeng 1 Huihui Zhou 4 Linwei Niu 5 Genshe Chen 6
DOI:10.1155/2017/5468208
Received 2016-12-14; accepted 2017-05-24; published 2017-05-24
Abstract

One of the fundamental issues for robot navigation is extracting an object of interest from an image. The biggest challenges are modeling, on a machine, the objects a human is interested in and extracting them quickly and reliably under varying illumination conditions. This article develops a novel method for segmenting an object of interest in a cluttered environment by combining a P300-based brain-computer interface (BCI) and an improved fuzzy color extractor (IFCE). The induced P300 potential identifies the corresponding region of interest and supplies the target of interest to the IFCE. The classification results not only reflect the user's intent but also deliver the associated seed pixel and fuzzy parameters for extracting the specific objects the user is interested in. The IFCE is then used to extract the corresponding objects. The results show that the IFCE delivers better performance than a backpropagation (BP) network or the traditional FCE. The P300-based IFCE provides a reliable solution for helping a computer identify an object of interest within images taken under varying illumination intensities.
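To make the pipeline concrete, the sketch below illustrates the general fuzzy-color-extraction idea described in the abstract: a seed pixel (as delivered by the P300 classifier) defines a reference color, a triangular fuzzy membership over the RGB vector angle scores how well neighboring pixels match it, and a 4-neighbor region growing collects the object. The function names, the membership width, and the threshold are illustrative assumptions, not the paper's actual IFCE parameters.

```python
import math

def rgb_angle(p, q):
    """Angle (radians) between two RGB vectors; small angle = similar hue."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    if norm == 0:
        return math.pi / 2  # degenerate (black) pixel: treat as dissimilar
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def triangular_membership(angle, width):
    """Fuzzy degree of color match: 1 at angle 0, falling to 0 at `width`."""
    return max(0.0, 1.0 - angle / width)

def extract_object(image, seed, width=0.2, threshold=0.5):
    """Grow a region from the seed pixel over 4-adjacent neighbors whose
    membership w.r.t. the seed color exceeds the threshold.
    `image` is a list of rows of (R, G, B) tuples; `seed` is (row, col)."""
    h, w = len(image), len(image[0])
    seed_color = image[seed[0]][seed[1]]
    visited, region, stack = {seed}, [seed], [seed]
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in visited:
                visited.add((nr, nc))
                mu = triangular_membership(rgb_angle(image[nr][nc], seed_color), width)
                if mu >= threshold:
                    region.append((nr, nc))
                    stack.append((nr, nc))
    return region
```

Because the angle metric compares color direction rather than magnitude, it is relatively insensitive to uniform brightness changes, which is consistent with the article's goal of robustness under varying illumination.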

License

Copyright © 2017 Xiaoqian Mao et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Figures

RGB space coordinate system.

(a) Membership functions for angles. (b) Membership functions for defuzzification.

The subregion growing towards (a) its 4 adjacent neighbors, (b) its 4 diagonal neighbors, and (c) its 8 surrounding neighbors.

The process of P300-based seed-pixel selection.

3 × 3 P300 speller user interface.

The time sequence of the offline trial, online trial, one repetition, and one flash.

The original images. (a) Early in the morning with sunlight. (b) Late in the evening with the lights on.

Comparison of the results obtained using a BP network, FCE, and IFCE (early in the morning with sunlight).

Comparison of the results obtained using a BP network, FCE, and IFCE (late in the evening with the lights on).

Corresponding Author

Wei Li. Department of Computer & Electrical Engineering and Computer Science, California State University, Bakersfield, CA 93311, USA; State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, Liaoning 110016, China. wli@csub.edu

Recommended Citation

Xiaoqian Mao, Wei Li, Huidong He, Bin Xian, Ming Zeng, Huihui Zhou, Linwei Niu, Genshe Chen. Object Extraction in Cluttered Environments via a P300-Based IFCE. Computational Intelligence and Neuroscience, Vol. 2017 (2017).
