Wavelet Deep Neural Network for Stripe Noise Removal

Original article: https://ieeexplore.ieee.org/document/8678750/authors#authors
DOI: 10.1109/ACCESS.2019.2908720
ABSTRACT

In infrared imaging systems, stripe noise severely degrades image quality. Existing destriping algorithms still struggle to balance noise suppression, detail preservation, and real-time performance, which hinders their application in spectral imaging and signal processing. To address this problem, this paper proposes a novel wavelet deep neural network from the transform-domain perspective. It fully exploits the intrinsic properties of stripe noise and the complementary information among different wavelet subband coefficients, estimating the noise accurately at low computational cost. In addition, a dedicated directional regularizer is defined to separate scene details from stripe noise more thoroughly and recover details more accurately. Extensive experiments on both simulated and real data demonstrate that the proposed method outperforms several classical destriping methods in both quantitative and qualitative evaluations.
INTRODUCTION
Infrared images are widely used in remote sensing, medical diagnosis, visual tracking, Internet-of-Things sensing, and other fields [1][2][3][4]. Owing to limitations of the hardware fabrication process, detectors may respond inconsistently to the same irradiance, superimposing fixed stripe noise on the observation and severely degrading the sensitivity of infrared imaging systems [5][6][7]. It is therefore critical to remove stripe noise while preserving the structure of the real scene. The stripe noise degradation model can be expressed as

$$y(i,j) = x(i,j) + n(i,j)$$

where $y(i,j)$, $x(i,j)$, and $n(i,j)$ denote the observed response, the ideal response, and the stripe noise produced by the detector at location $(i,j)$, respectively.
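As a quick illustration of this additive model (a minimal numpy sketch with hypothetical example data, not from the paper), column-wise stripe noise can be simulated by adding one offset per column to a clean image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal response x: a smooth synthetic scene (hypothetical example data).
h, w = 64, 64
x = np.linspace(0.0, 1.0, h * w).reshape(h, w)

# Stripe noise n: one random offset per column, constant along each
# column, modeling the inconsistent photoresponse of the detectors.
n = np.tile(rng.normal(0.0, 0.05, size=(1, w)), (h, 1))

# Observed response y(i, j) = x(i, j) + n(i, j).
y = x + n

# The noise term is constant along the stripe (vertical) direction:
assert np.allclose(y - x, n)
assert np.allclose((y - x).std(axis=0), 0.0)
```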
In recent years, algorithms based on various frameworks have been proposed for stripe noise removal. They can be roughly divided into three categories: (1) prior-based methods; (2) statistics-based methods; (3) deep-learning methods [8][9]. Prior-based methods include block-matching and 3D filtering (BM3D) [10], total variation (TV) [11], guided filtering (GF) [12], non-local means (NLM) filtering [13], and low-rank regularization [14]. However, these methods remove image details indiscriminately along with the stripe noise, which may introduce artifacts into the output. Statistics-based methods, such as midway histogram equalization (MHE) [15], exploit redundant information in adjacent columns to remove stripes, but are only suitable for weak stripe noise. More recently, deep-learning-based methods have been widely applied in image processing and have shown remarkable results. Kuang et al. proposed SNRCNN, a three-layer stripe-noise-removal convolutional neural network that treats destriping directly as joint image denoising and super-resolution [16]; it does not, however, account for the specific characteristics of stripe noise, so it struggles to preserve high-frequency details while removing stripes. To alleviate this limitation, He et al. proposed the DLSNUC model with a larger receptive field [17]. Xiao et al. proposed ICSRN, which combines local and global information within the CNN to better preserve edges [18], yet it still has difficulty with strong stripe noise. In summary, existing deep-learning destriping methods extract information only in the spatial domain and ignore the redundant information available in the frequency domain, which limits their performance.
Transform-domain image processing methods have long been widely used [19][20]. Building on them, Huang et al. proposed a super-resolution network that predicts an image's missing detail information in the wavelet domain [21], and Kang et al. used the contourlet transform to analyze image features, achieving good results in CT image denoising [22]. Inspired by these methods, we propose a stripe noise removal wavelet deep neural network (SNRWDNN), which mines image features in the wavelet domain and uses the information of multiple frequency bands as effective complements to remove stripe noise more thoroughly. The main innovations and contributions of this paper are as follows:
- A wavelet-domain destriping neural network is introduced that adaptively estimates the intensity and distribution of the noise.
- A directional regularizer is proposed to prevent the model from producing irregular stripe artifacts and to recover scene details more faithfully.
- Wavelet decomposition turns the input image into a set of quarter-size coefficients, improving both computational efficiency and denoising performance.
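To make the last point concrete, a one-level Haar decomposition turns an $H \times W$ image into four $H/2 \times W/2$ coefficient maps, so the prediction network operates on quarter-size inputs. Below is a minimal hand-rolled numpy sketch (illustrative only, not the paper's implementation; a wavelet library would serve equally well):

```python
import numpy as np

def hdwt2(x):
    """One-level 2-D Haar DWT: returns four quarter-size subbands
    (cA, cH, cV, cD), with cH taken as high-pass along the horizontal
    axis, matching the paper's subband naming."""
    s = np.sqrt(2.0)
    lo = (x[:, 0::2] + x[:, 1::2]) / s   # low-pass across columns
    hi = (x[:, 0::2] - x[:, 1::2]) / s   # high-pass across columns
    cA = (lo[0::2, :] + lo[1::2, :]) / s
    cV = (lo[0::2, :] - lo[1::2, :]) / s
    cH = (hi[0::2, :] + hi[1::2, :]) / s
    cD = (hi[0::2, :] - hi[1::2, :]) / s
    return cA, cH, cV, cD

img = np.arange(64.0).reshape(8, 8)
cA, cH, cV, cD = hdwt2(img)

# Every subband is half the input size in each dimension, which shrinks
# the feature maps the prediction network must process.
assert all(c.shape == (4, 4) for c in (cA, cH, cV, cD))
```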
The remainder of this paper is organized as follows. Section 2 analyzes how stripe noise manifests in the wavelet domain. Section 3 details the implementation of SNRWDNN. Section 4 presents experiments on both simulated and real data to demonstrate the effectiveness of the proposed method. Section 5 concludes the paper.

PROPERTY ANALYSIS OF STRIPE NOISE

To estimate the stripe noise contained in an image accurately, the key is to uncover the properties of the stripe component and describe them in a reasonable way. As noted in [23], stripe noise has a distinct directional property, which helps separate image details from the noise component during denoising. Fig. 1 shows the horizontal and vertical gradients of an image containing stripe noise. The stripe noise is clearly present in the horizontal gradient and severely contaminates the vertical image details there; by contrast, the stripe component in the vertical gradient is smooth and has little effect on image details. Based on this simple analysis, gradient information in different directions can be used to remove stripe noise from a noisy image while preserving its structural information.
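This directional behavior is easy to verify numerically. The sketch below (an illustrative numpy check, not code from the paper) builds a pure column-stripe pattern and compares its horizontal and vertical finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)

# A pure stripe field: intensity depends only on the column index,
# so every stripe is perfectly smooth along the vertical direction.
h, w = 32, 32
stripes = np.tile(rng.normal(0.0, 1.0, size=(1, w)), (h, 1))

grad_h = np.diff(stripes, axis=1)  # horizontal gradient (across columns)
grad_v = np.diff(stripes, axis=0)  # vertical gradient (along columns)

# The stripes appear entirely in the horizontal gradient ...
assert np.abs(grad_h).max() > 0.0
# ... while the vertical gradient of the stripe component vanishes.
assert np.allclose(grad_v, 0.0)
```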
The Haar discrete wavelet transform (HDWT) can extract an image's high-frequency information in the horizontal, vertical, and diagonal directions while preserving its low-frequency structural information. In this paper, HDWT is used to extract the gradient information of stripe noise and to separate scene details from the stripes more thoroughly. The HDWT result is shown in Fig. 2: the stripe noise responds clearly in the approximation coefficients (cA) and the horizontal coefficients (cH), whereas the vertical coefficients (cV) and diagonal coefficients (cD) mainly describe scene details. In short, the complementary information of the different subbands helps remove stripe noise while preserving image details.

THE PROPOSED SNRWDNN MODEL
In this section, we first describe the architecture of the proposed SNRWDNN model in detail, then discuss the importance of the designed directional loss function for preserving texture details, and finally present the training strategy.

A. NETWORK ARCHITECTURE

The architecture of SNRWDNN is shown in Fig. 3. Unlike existing deep-learning-based destriping methods, we cast stripe noise removal as a coefficient-prediction problem in the wavelet domain. SNRWDNN consists of three steps: HDWT, wavelet-coefficient prediction, and the inverse Haar discrete wavelet transform (IHDWT). First, HDWT produces four subband coefficients that reflect the intrinsic properties of the stripe noise. These coefficients are concatenated into a single input tensor and fed into the wavelet-coefficient prediction network to estimate the stripe component; a skip connection then subtracts the estimated stripe component from the input tensor to produce destriped coefficients. Notably, the concatenation fuses the information across subbands and keeps them consistent. Finally, IHDWT transforms the estimated coefficients back to the spatial domain to reconstruct the destriped result. With this strategy, the proposed network exploits the directional properties of stripes to suppress noise while reducing detail loss.

The wavelet-coefficient prediction network consists of M convolutional layers with a residual connection; all convolutional filters share a $3 \times 3$ kernel with stride 1, and zero padding keeps each feature map the same size as the input tensor. Each convolutional layer has 64 kernels, except the last layer, which outputs the 4-channel stripe component. The output of each preceding convolutional layer is passed through a rectified linear unit (ReLU) for nonlinear mapping. Because the input and output of SNRWDNN are very similar, we adopt residual learning, which makes training more stable, faster, and more accurate. The downsampling inherent in the wavelet decomposition effectively enlarges the receptive field, which benefits the recovery of scene details [25], and at the same time greatly reduces the computational complexity.
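The three-step pipeline above (HDWT, coefficient prediction, IHDWT with a skip connection) can be sketched end to end. In this illustrative numpy version only the data flow follows the paper's description: the trained M-layer CNN is replaced by a toy stand-in that estimates the stripe component as the column-wise means of the stripe-sensitive subbands, which removes exact column-constant stripes but is not the paper's predictor:

```python
import numpy as np

S = np.sqrt(2.0)

def hdwt2(x):
    # Step 1: one-level Haar DWT -> four quarter-size subbands.
    lo = (x[:, 0::2] + x[:, 1::2]) / S
    hi = (x[:, 0::2] - x[:, 1::2]) / S
    cA = (lo[0::2, :] + lo[1::2, :]) / S
    cH = (hi[0::2, :] + hi[1::2, :]) / S
    cV = (lo[0::2, :] - lo[1::2, :]) / S
    cD = (hi[0::2, :] - hi[1::2, :]) / S
    return cA, cH, cV, cD

def ihdwt2(cA, cH, cV, cD):
    # Step 3: inverse Haar DWT (exact reconstruction).
    lo = np.empty((2 * cA.shape[0], cA.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2, :], lo[1::2, :] = (cA + cV) / S, (cA - cV) / S
    hi[0::2, :], hi[1::2, :] = (cH + cD) / S, (cH - cD) / S
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / S, (lo - hi) / S
    return x

def predict_stripes(cA, cH, cV, cD):
    # Step 2 placeholder: the paper uses an M-layer residual CNN here.
    # This toy predictor estimates the stripe component in the
    # stripe-sensitive subbands (cA, cH) as their column-wise means.
    est_cA = np.tile(cA.mean(axis=0) - cA.mean(), (cA.shape[0], 1))
    est_cH = np.tile(cH.mean(axis=0), (cH.shape[0], 1))
    return est_cA, est_cH, np.zeros_like(cV), np.zeros_like(cD)

def destripe(y):
    coeffs = hdwt2(y)                                 # step 1
    stripes = predict_stripes(*coeffs)                # step 2
    clean = [c - s for c, s in zip(coeffs, stripes)]  # skip connection
    return ihdwt2(*clean)                             # step 3

# Toy scene (varies only by row) plus column-constant stripe noise.
rng = np.random.default_rng(3)
h, w = 32, 32
x = np.linspace(0.0, 1.0, h)[:, None] * np.ones((1, w))
y = x + np.tile(rng.normal(0.0, 0.1, (1, w)), (h, 1))

out = destripe(y)
# For this toy case the column-to-column stripe variation is removed.
assert np.abs(np.diff(out, axis=1)).max() < 1e-8
assert np.abs(np.diff(out, axis=1)).max() < np.abs(np.diff(y, axis=1)).max()
```

The skip-connection subtraction mirrors the residual-learning design: the network only has to predict the (simple, directional) stripe component rather than the full clean coefficients.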
B. DIRECTIONAL WAVELET LOSS FUNCTION

Current deep-learning-based image processing tasks usually focus on minimizing a mean-squared-error objective [26]. Following prior work [21][25], we introduce a wavelet mean-squared-error loss to handle stripe noise removal in the wavelet domain, defined as

$$L_{wavelet} = \|cA - \overline{cA}\|_2^2 + \|cH - \overline{cH}\|_2^2 + \|cV - \overline{cV}\|_2^2 + \|cD - \overline{cD}\|_2^2$$

where $\|\cdot\|_2^2$ denotes the squared $L_2$ norm; $cA$, $cH$, $cV$, and $cD$ denote the approximation, horizontal, vertical, and diagonal coefficients of the ground-truth image, and $\overline{cA}$, $\overline{cH}$, $\overline{cV}$, $\overline{cD}$ their predicted counterparts.
From a local perspective, the pixel intensities within a single stripe vary over a narrow range, meaning each stripe is smooth along the stripe direction. To estimate the stripe noise better, we characterize this smoothness by minimizing the directional differences of the stripe component in the stripe-related subbands [27]. To this end, a directional regularizer is constructed as

$$L_{dir} = \|\nabla S_{cA}\|_2^2 + \|\nabla S_{cH}\|_2^2$$

where $\nabla$ denotes the partial differential operator along the stripe direction, and $S_{cA}$ and $S_{cH}$ denote the stripe components in the $cA$ and $cH$ subbands.
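A minimal numpy version of the two loss terms (the coefficient tensors and stripe-component estimates below are random stand-ins for real network outputs, and the $\lambda$ value is a hypothetical choice):

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (16, 16)

# Ground-truth and predicted subband coefficients (random stand-ins).
gt   = {k: rng.normal(size=shape) for k in ("cA", "cH", "cV", "cD")}
pred = {k: gt[k] + 0.01 * rng.normal(size=shape) for k in gt}

# Wavelet MSE term: squared L2 distance summed over the four subbands.
l_wavelet = sum(np.sum((gt[k] - pred[k]) ** 2) for k in gt)

# Directional regularizer: the estimated stripe components in the
# stripe-sensitive subbands (cA, cH) should be smooth along the stripe
# (vertical) direction, so penalize their vertical finite differences.
s_cA = rng.normal(size=shape)  # stand-in stripe-component estimates
s_cH = rng.normal(size=shape)
l_dir = (np.sum(np.diff(s_cA, axis=0) ** 2)
         + np.sum(np.diff(s_cH, axis=0) ** 2))

lam = 0.1  # weight of the directional term (hypothetical value)
loss = l_wavelet + lam * l_dir
assert loss >= 0.0
```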
Finally, the proposed directional wavelet loss function is

$$L = L_{wavelet} + \lambda L_{dir}$$

where $\lambda$ is a constant controlling the weight of the directional loss.

REFERENCES
- [1] A. Jara et al., "Joint de-blurring and nonuniformity correction method for infrared microscopy imaging," Infr. Phys. Technol., vol. 90, pp. 199–206, May 2018.
- [2] Y. Cao and Y. Li, "Strip non-uniformity correction in uncooled long-wave infrared focal plane array based on noise source characterization," Opt. Commun., vol. 339, pp. 236–242, Mar. 2015.
- [3] Y. Huang, C. He, H. Fang, and X. Wang, "Iteratively reweighted unidirectional variational model for stripe non-uniformity correction," Infr. Phys. Technol., vol. 75, pp. 107–116, Mar. 2016.
- [4] C. Chen, X. Liu, H.-H. Chen, M. Li, and L. Zhao, "A rear-end collision risk evaluation and control scheme using a Bayesian network model," IEEE Trans. Intell. Transp. Syst., vol. 20, no. 1, pp. 264–284, Jan. 2019.
- [5] R. Lai, G. Yue, and G. Zhang, "Total variation based neural network regression for nonuniformity correction of infrared images," Symmetry, vol. 10, no. 5, p. 157, 2018.
- [6] R. Lai, J. Guan, Y. Yang, and A. Xiong, "Spatiotemporal adaptive nonuniformity correction based on BTV regularization," IEEE Access, vol. 7, pp. 753–762, 2019.
- [7] K. Liang, C. Yang, L. Peng, and B. Zhou, "Nonuniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter," Appl. Opt., vol. 56, no. 4, pp. 884–889, 2017.
- [8] X. Jian, R. Lu, Q. Guo, and G.-P. Wang, "Single image non-uniformity correction using compressive sensing," Infr. Phys. Technol., vol. 76, pp. 360–364, May 2016.
- [9] S. Rong, H. Zhou, D. Zhao, K. Cheng, K. Qian, and H. Qin, "Infrared fix pattern noise reduction method based on shearlet transform," Infr. Phys. Technol., vol. 91, pp. 243–249, Jun. 2018.
- [10] K. Dabov, A. Foi, and K. Egiazarian, "Video denoising by sparse 3D transform-domain collaborative filtering," in Proc. Eur. Signal Process. Conf., Poznan, Poland, Sep. 2007, pp. 145–149.
- [11] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D, Nonlinear Phenomena, vol. 60, nos. 1–4, pp. 259–268, 1992.
- [12] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
- [13] H. Li and C. Y. Suen, "A novel non-local means image denoising method based on grey theory," Pattern Recognit., vol. 49, pp. 237–248, Jan. 2016.
- [14] Y. Chang, L. Yan, T. Wu, and S. Zhong, "Remote sensing image stripe noise removal: From image decomposition perspective," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 12, pp. 7018–7031, Dec. 2016.
- [15] Y. Tendero, J. Gilles, S. Landeau, and J. M. Morel, "Efficient single image non-uniformity correction algorithm," Proc. SPIE, vol. 7834, Oct. 2010, Art. no. 78340E.
- [16] X. Kuang, X. Sui, Q. Chen, and G. Gu, "Single infrared image stripe noise removal using deep convolutional networks," IEEE Photon. J., vol. 9, no. 4, Aug. 2017, Art. no. 3900913.
- [17] Z. He, Y. Cao, Y. Dong, J. Yang, Y. Cao, and C.-L. Tisse, "Single-image-based nonuniformity correction of uncooled long-wave infrared detectors: A deep-learning approach," Appl. Opt., vol. 57, no. 18, pp. D155–D164, 2018.
- [18] P. Xiao, Y. Guo, and P. Zhuang, "Removing stripe noise from infrared cloud images via deep convolutional networks," IEEE Photon. J., vol. 10, no. 4, Aug. 2018, Art. no. 7801114.
- [19] S. G. Chang, B. Yu, and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression," IEEE Trans. Image Process., vol. 9, no. 9, pp. 1532–1546, Sep. 2000.
- [20] A. L. da Cunha, J. Zhou, and M. N. Do, "The nonsubsampled contourlet transform: Theory, design, and applications," IEEE Trans. Image Process., vol. 15, no. 10, pp. 3089–3101, Oct. 2006.
- [21] H. Huang, R. He, Z. Sun, and T. Tan, "Wavelet-SRNet: A wavelet-based CNN for multi-scale face super resolution," in Proc. IEEE Conf. CVPR, Honolulu, HI, USA, Oct. 2017, pp. 1689–1697.
- [22] E. Kang, J. Min, and J. C. Ye, "A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction," Med. Phys., vol. 44, no. 10, pp. e360–e375, 2017.
- [23] Y. Chen, T.-Z. Huang, L.-J. Deng, X.-L. Zhao, and M. Wang, "Group sparsity based regularization model for remote sensing image stripe noise removal," Neurocomputing, vol. 267, pp. 95–106, Dec. 2017.
- [24] B. L. Lai and L. W. Chang, "Adaptive data hiding for images based on Harr discrete wavelet transform," in Advances in Image and Video Technology (Lecture Notes in Computer Science), vol. 4319. Berlin, Germany: Springer, 2006, pp. 1085–1093.
- [25] H. Chen, X. He, L. Qing, S. Xiong, and T. Q. Nguyen, "DPW-SDNet: Dual pixel-wavelet domain deep CNNs for soft decoding of JPEG-compressed images," presented at the IEEE Conf. CVPR, Salt Lake City, UT, USA, 2018.
- [26] C. Ledig et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proc. IEEE Conf. CVPR, Honolulu, HI, USA, Jul. 2017, pp. 105–114.
- [27] X. Liu, X. Lu, H. Shen, Q. Yuan, Y. Jiao, and L. Zhang, "Stripe noise separation and removal in remote sensing images by consideration of the global sparsity and local variational properties," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 5, pp. 3049–3060, May 2016.
- [28] W. Luo, J. Li, W. Xu, and J. Yang, "Learning sparse features in convolutional neural networks for image classification," in Proc. Int. Conf. Intell. Sci. Big Data Eng., Suzhou, China, 2015, pp. 29–38.
- [29] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," 2014. [Online]. Available: https://arxiv.org/abs/1409.1556
- [30] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, "Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising," IEEE Trans. Image Process., vol. 26, no. 7, pp. 3142–3155, Jul. 2017.
- [31] D. Graziotin and P. Abrahamsson, "A Web-based modeling tool for the SEMAT essence theory of software engineering," J. Open Res. Softw., to be published. doi: 10.5334/jors.ad.
- [32] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," 2014. [Online]. Available: https://arxiv.org/abs/1412.6980
- [33] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," in Proc. IEEE Conf. CVPR, Las Vegas, NV, USA, Jun. 2016, pp. 1646–1654.
- [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
- [35] Y. Cao, M. Y. Yang, and C.-L. Tisse, "Effective strip noise removal for low-textured infrared images based on 1-D guided filtering," IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 12, pp. 2176–2188, Dec. 2016.
- [36] N. Liu, L. Wan, Y. Zhang, T. Zhou, H. Huo, and T. Fang, "Exploiting convolutional neural networks with deeply local description for remote sensing image classification," IEEE Access, vol. 6, pp. 11215–11228, 2018.
- [37] Z. Jin et al., "EEG classification using sparse Bayesian extreme learning machine for brain–computer interface," Neural Comput. Appl., to be published. doi: 10.1007/s00521-018-3735-3.
- [38] G. Zhou et al., "Linked component analysis from matrices to high-order tensors: Applications to biomedical data," Proc. IEEE, vol. 104, no. 2, pp. 310–331, Feb. 2016.
- [39] R. Lai, Y. Mo, Z. Liu, and J. Guan, "Local and nonlocal steering kernel weighted total variation model for image denoising," Symmetry, vol. 11, no. 3, p. 329, 2019.
- [40] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, "MobileNetV2: Inverted residuals and linear bottlenecks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Salt Lake City, UT, USA, Jun. 2018, pp. 4510–4520.
- [41] W.-Q. Lim, "The discrete shearlet transform: A new directional transform and compactly supported shearlet frames," IEEE Trans. Image Process., vol. 19, no. 5, pp. 1166–1180, May 2010.