Image Fusion Techniques: A Review
DOI: https://doi.org/10.61841/zpwm5j51

Keywords: Fusion, IHS, PCA

Abstract
Image fusion gathers important information from an array of input images and combines it into a single output image that is more meaningful and usable than any of the individual inputs. Fusion improves both the quality and the applicability of the data, and the accuracy required of the fused image depends on the application. Image fusion is widely used in smart robotics, audio-camera fusion, photonics, system control and output, construction and inspection of electronic circuits, complex computer and software diagnostics, and smart assembly-line robots. This paper provides a literature review of image fusion techniques in the spatial and frequency domains, such as averaging, min-max, block substitution, Intensity-Hue-Saturation (IHS), Principal Component Analysis (PCA), pyramid-based methods, and transform-based methods. Quality metrics for the quantitative evaluation of these approaches are also discussed.
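The paper itself presents no code; as a rough sketch of the spatial-domain techniques and one of the quality metrics named above, the NumPy snippet below (all function names are my own, and the inputs are assumed to be co-registered grayscale arrays) illustrates simple averaging, max-selection, PCA-weighted fusion, and the Wang-Bovik universal image quality index:

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise mean: the simplest spatial-domain fusion rule."""
    return (a.astype(float) + b.astype(float)) / 2.0

def fuse_max(a, b):
    """Pixel-wise maximum selection (the 'max' half of min-max fusion)."""
    return np.maximum(a, b)

def fuse_pca(a, b):
    """PCA-weighted fusion: weight each source by the components of the
    leading eigenvector of the 2x2 covariance of the flattened images."""
    data = np.stack([a.ravel(), b.ravel()]).astype(float)
    cov = np.cov(data)                      # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()                         # normalize weights to sum to 1
    return w[0] * a.astype(float) + w[1] * b.astype(float)

def uiqi(x, y):
    """Wang-Bovik universal image quality index between two images
    (global, single-window form); 1.0 means x and y are identical."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2))
```

For two registered inputs img1 and img2 (for example, differently focused shots of the same scene), fused = fuse_pca(img1, img2) followed by uiqi(reference, fused) gives one quantitative comparison of the kind discussed in the review: averaging suppresses noise but blurs complementary detail, max-selection keeps the stronger response at each pixel, and the PCA weights favor the source carrying the larger share of the variance.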
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
