International Journal of Biomedical Imaging, Volume 2017, 2017-02-21
Medical Image Fusion Based on Feature Extraction and Sparse Representation
Research Article
Yin Fei 1 , 2 Gao Wei 2 Song Zongxi 2
Show affiliations
DOI:10.1155/2017/3020461
Received 2016-08-31, accepted for publication 2017-01-10, Published 2017-01-10
Abstract

As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over conventional image representation methods. However, standard sparse representation takes neither the intrinsic structure of images nor its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images, based on sparse representation and a decision map, is proposed to address both problems simultaneously. Three decision maps are designed: a structure information map (SM), an energy information map (EM), and a combined structure and energy map (SEM), which help the fused results preserve more energy and edge information. SM captures the local structure feature with the Laplacian of Gaussian (LoG), and EM captures the energy and its distribution via the local mean square deviation. Adding the decision map to the standard sparse-representation-based method speeds up the algorithm. The proposed approach also improves the quality of the fused results by enhancing contrast and preserving more structure and energy information from the source images. Experimental results on 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
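The two feature maps described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the Gaussian scale, window size, and the weighting between the two maps (`sigma`, `size`, `w`) are assumed parameters, and the patch-wise sparse-coding stage that the decision map accelerates is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, uniform_filter

def structure_map(img, sigma=1.0):
    # SM: local structure feature, magnitude of the Laplacian-of-Gaussian response.
    return np.abs(gaussian_laplace(img.astype(float), sigma=sigma))

def energy_map(img, size=7):
    # EM: local energy feature, the mean square deviation (local standard
    # deviation) computed over a size x size window.
    img = img.astype(float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def sem_decision_map(a, b, w=0.5):
    # SEM: combine structure and energy scores and decide, per pixel,
    # which source image carries the stronger feature response.
    score_a = w * structure_map(a) + (1 - w) * energy_map(a)
    score_b = w * structure_map(b) + (1 - w) * energy_map(b)
    return score_a >= score_b  # True -> take the pixel from image a

def fuse(a, b):
    # Pixel-wise selection driven by the SEM decision map.
    m = sem_decision_map(a, b)
    return np.where(m, a, b)
```

In the paper, the decision map marks regions where one source clearly dominates, so the expensive sparse-coding step only has to be applied where the decision is ambiguous; the sketch above shows the map construction only.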

License

Copyright © 2017 Yin Fei et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Corresponding Author

Yin Fei, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; University of Chinese Academy of Sciences, Beijing 100049, China. Email: yinfei@opt.cn

Recommended Citation

Yin Fei, Gao Wei, Song Zongxi. Medical Image Fusion Based on Feature Extraction and Sparse Representation. International Journal of Biomedical Imaging, Vol. 2017 (2017).
