Sparse Signal Processing

    Signals almost always contain structure, such as smoothness, periodicity, or continuity, which induces an inherent low-dimensional embedding and hence compressibility. Sparsity is one such structure: a signal is sparse if it has only a few non-zero elements, either in its own domain or in some transform domain. Exploiting sparsity leads to a large number of applications in various fields, e.g., medical imaging (MRI), radar, image processing (denoising, inpainting, deblurring, ...), face recognition, and seismology. The collection of methods built on sparsity is called Sparse Signal Processing (SSP).
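
    As a concrete formulation (the canonical one from the compressive sensing literature, quoted here for reference), recovering a sparse x from linear measurements y = Ax is commonly posed as the convex program

        \min_x \|x\|_1 \quad \text{s.t.} \quad y = Ax,
        \qquad \text{or, with noise,} \qquad
        \min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda \|x\|_1 .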

    Over the last decades, driven by the emergence of Compressive Sensing (or Compressed Sensing), the theory behind sparsity has developed tremendously. Beyond sparsity itself, the dependencies among zero (or nonzero) elements have also been considered, and in theory they can further improve the performance of SSP. These techniques are formally known as "Structured Sparsity" or "Group Sparsity", of which block (or cluster) sparsity is an important special case. Correspondingly, methods that recover block (or cluster) sparse signals are appealing in this domain.
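
    Concretely (the standard group-lasso formulation, given for reference), with the indices partitioned into groups g \in \mathcal{G}, block sparsity is promoted by a mixed \ell_2/\ell_1 penalty,

        \min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda \sum_{g \in \mathcal{G}} \|x_g\|_2 ,

    which drives whole groups x_g to zero rather than individual entries.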

    On the other hand, the 2-D analogue of sparsity, i.e., for a MATRIX, is low-rankness: the vector of singular values is sparse (most singular values are zero). Low-rankness can replace sparsity in some applications and yield improvements, for instance in low-rank models for image restoration, background subtraction, and change detection. Moreover, low-rankness and sparsity can be exploited simultaneously to model signals, as in TILT, Robust PCA, and so on.
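
    For reference, the Robust PCA model mentioned above splits an observed matrix M into a low-rank part L and a sparse part S via the standard convex program

        \min_{L, S} \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad M = L + S,

    where the nuclear norm \|L\|_* (the sum of singular values) plays the same role for matrices that \|x\|_1 plays for sparse vectors.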

    Extending from 2-D to higher dimensions, i.e., to sparsity in tensors, requires more sophisticated techniques to describe the property precisely; that is a topic in its own right.


    During the last ten years, our group has worked on many different topics in SSP and proposed the following algorithms and theories. This webpage gives a brief introduction to our previous work.


    Related Talks:

    [1] L. Yu, Sparse Signal Processing, lecture for Master's students at Wuhan University, 2018. [Slides]



    Dynamical Sparse Recovery

    with Prof. Barbot and Prof. Gang Zheng; a collaboration between Wuhan Univ., ECS-Lab, and INRIA

    Even though sparse recovery (SR) has been successfully applied in a wide range of research communities, a barrier to real applications remains because of the inefficiency of state-of-the-art algorithms. In this work, we propose a dynamical approach to SR that is highly efficient and has a finite-time convergence property. First, instead of solving the ℓ1-regularized optimization program, which requires many iterations and is inherently computer-oriented, the SR problem is solved through the evolution of a continuous dynamical system that can be realized by analog circuits. Moreover, the proposed dynamical system is proved to have the finite-time convergence property, and is thus more efficient than the locally competitive algorithm (LCA), a previously developed dynamical system for SR with only exponential convergence. Consequently, the proposed dynamical system is more appropriate than LCA for time-varying situations. Simulations are carried out to demonstrate its superior properties.
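
    For context, the LCA baseline mentioned above evolves an internal state u whose soft-thresholded output a = T_lambda(u) settles onto the \ell_1 solution. A minimal forward-Euler simulation in Python (an illustrative sketch of the standard LCA dynamics, not the finite-time system proposed in the paper) might look like:

        import numpy as np

        def soft_threshold(u, lam):
            # Thresholding activation a = T_lam(u) of the LCA.
            return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

        def lca(y, Phi, lam=0.05, tau=1.0, dt=0.01, n_steps=5000):
            # Euler integration of  tau * du/dt = Phi^T y - u - (Phi^T Phi - I) a.
            n = Phi.shape[1]
            u = np.zeros(n)
            G = Phi.T @ Phi - np.eye(n)   # lateral inhibition between atoms
            b = Phi.T @ y                 # constant driving input
            for _ in range(n_steps):
                a = soft_threshold(u, lam)
                u += (dt / tau) * (b - u - G @ a)
            return soft_threshold(u, lam)

        # Toy run: recover a 5-sparse signal from 40 random measurements.
        rng = np.random.default_rng(0)
        n, m, k = 100, 40, 5
        Phi = rng.standard_normal((m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = lca(Phi @ x_true, Phi)
        print("recovery error:", np.linalg.norm(x_hat - x_true))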

    Related Publications:

    [1] L. Yu, G. Zheng, and J. Barbot, "Dynamical Sparse Recovery With Finite-Time Convergence," IEEE Transactions on Signal Processing, vol. 65, no. 23, pp. 6146-6157, Dec. 2017. [IEEE]

    [2] S. Nateghi, Y. Shtessel, J. Barbot, G. Zheng, and L. Yu, "Cyber-Attack Reconstruction via Sliding Mode Differentiation and Sparse Recovery Algorithm: Electrical Power Networks Application," 2018 15th International Workshop on Variable Structure Systems (VSS), Graz, 2018, pp. 285-290. [IEEE]

    [3] J. Ren, L. Yu, Y. Jiang, J. Barbot, and H. Sun, "A Dynamical System with Fixed Convergence Time for Sparse Recovery," IEEE Access, vol. 7, 2019. [IEEE]

    [4] J. Ren, L. Yu, C. Lyu, G. Zheng, J. Barbot, and H. Sun, "Dynamical Sparse Signal Recovery with Fixed Time Convergence," 2019, accepted. [Elsevier]



    Sparse Bayesian Learning with Structures

    with Prof. Barbot and Prof. Sun; a collaboration between Wuhan Univ. and ECS-Lab.

    In the traditional framework of Compressive Sensing (CS), only a sparsity prior on the signal, in the time or frequency domain, is adopted to guarantee exact recovery. Beyond the sparsity prior, structure on the sparse pattern of the signal can also be used as an additional prior.
    In this work, we exploit a hierarchical Bayesian framework (graphical model) to model both sparsity and structure, and derive an algorithm to solve sparse inverse problems with a structure prior. In particular, the sparse signal is modeled as the Hadamard (element-wise) product of a latent variable and a weight variable, where the latent variable indicates whether each element is nonzero. Statistical models are then built separately for the two: a sparsity-inducing model on the weight variable and a structure-promoting model on the latent variable.
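
    As an illustration of this decomposition (the variable names and transition probabilities below are a hypothetical sketch, not the exact priors of the papers), a cluster-structured sparse signal can be generated by drawing a Markov-chain support pattern s and multiplying it element-wise with Gaussian weights w:

        import numpy as np

        def sample_cluster_sparse(n, p01=0.02, p10=0.2, sigma_w=1.0, seed=None):
            # x = s * w (Hadamard product): s is a binary support drawn from a
            # 2-state Markov chain, so nonzeros appear in clusters; w ~ N(0, sigma_w^2).
            rng = np.random.default_rng(seed)
            s = np.zeros(n, dtype=int)
            state = 0
            for i in range(n):
                if state == 0:
                    state = int(rng.random() < p01)   # enter a nonzero cluster
                else:
                    state = int(rng.random() >= p10)  # stay in or leave the cluster
                s[i] = state
            w = sigma_w * rng.standard_normal(n)
            return s * w, s

        x, support = sample_cluster_sparse(200, seed=0)
        print("nonzeros:", support.sum())   # nonzeros occur in contiguous runs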

    Algorithms:  

    CluSS-MCMC: Bayesian Compressive Sensing (BCS) for cluster-structured sparse signals via Markov chain Monte Carlo (MCMC)

    CluSS-VB: BCS for cluster-structured sparse signals via variational Bayes (VB); same model as CluSS-MCMC

    MBCS-LBP: a new model, simpler than CluSS-VB and faster than CluSS-MCMC

    RNN-OMP: a new algorithm that introduces cluster structure by exploiting a recurrent neural network (RNN)

    Related Publications:

    [1] L. Yu, H. Sun, J.-P. Barbot, and G. Zheng, "Compressive Sensing for Clustered Sparse Signals," ICASSP 2011. [Paper]

    [2] L. Yu, H. Sun, J.-P. Barbot, and G. Zheng, "Bayesian Compressive Sensing for Cluster Structured Sparse Signals," Signal Processing, 2012, in press. [Code]

    [3] L. Yu, J.-P. Barbot, G. Zheng, and H. Sun, "Bayesian Compressive Sensing for Cluster Structured Sparse Signals: Variational Approach." [online]

    [4] L. Yu, H. Sun, G. Zheng, and J.-P. Barbot, "Model-Based Bayesian Compressive Sensing via Local Beta Process," Signal Processing, vol. 108, pp. 259-271, March 2015. [Elsevier] [CODE] [Paper]

    [5] C. Lyu, Z. Liu, and L. Yu, "Block-Sparsity Recovery via Recurrent Neural Network," Signal Processing, vol. 154, pp. 129-135, 2019. [Elsevier]



    Compressive Sensing

    with Prof. Barbot and Prof. Sun; a collaboration between Wuhan Univ. and ECS-Lab.
    Compressive sensing is a methodology for capturing signals at a sub-Nyquist rate. To guarantee exact recovery from the compressed measurements, one should implement the sensing procedure with a matrix that satisfies the Restricted Isometry Property (RIP).
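
    For reference, a matrix A satisfies the RIP of order k with constant \delta_k \in (0, 1) if, for every k-sparse vector x,

        (1 - \delta_k) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k) \|x\|_2^2 .
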
    In this work, we propose to construct the sensing matrix from a chaotic sequence via a simple procedure, and we prove that with overwhelming probability the RIP of this kind of matrix is guaranteed. Experimental comparisons with the Gaussian random matrix, the Bernoulli random matrix, and sparse random matrices show that the performance of these sensing matrices is almost equal.
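
    A rough illustration of such a construction in Python (an assumed sketch: the exact map, initial condition, downsampling distance, and normalization used in the papers may differ):

        import numpy as np

        def chaotic_sensing_matrix(m, n, x0=0.3, mu=4.0, d=5):
            # Iterate the logistic map x_{k+1} = mu * x_k * (1 - x_k), keep every
            # d-th sample to reduce correlation, then center and scale the entries.
            x = x0
            seq = np.empty(m * n * d)
            for k in range(seq.size):
                x = mu * x * (1.0 - x)
                seq[k] = x
            samples = seq[::d][: m * n]
            samples = 2.0 * samples - 1.0              # map (0, 1) roughly to (-1, 1)
            return samples.reshape(m, n) / np.sqrt(m)  # normalize column energy

        A = chaotic_sensing_matrix(40, 100)
        print(A.shape, float(A.mean()), float(A.std()))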


    Related Publications:   

    [1] L. Yu, J.-P. Barbot, G. Zheng, and H. Sun, "Compressive Sensing With Chaotic Sequence," IEEE Signal Processing Letters, vol. 17, no. 8, pp. 731-734, Aug. 2010.

    [2] L. Yu, J.-P. Barbot, G. Zheng, and H. Sun, "Toeplitz-Structured Chaotic Sensing Matrix for Compressive Sensing," 2010 7th International Symposium on Communication Systems, Networks and Digital Signal Processing (CSNDSP), pp. 229-233, July 2010.

    [3] M. Frunzete, L. Yu, J.-P. Barbot, and A. Vlad, "Compressive Sensing Matrix Designed by Tent Map, for Secure Data Transmission," 2011 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), pp. 1-6, Sept. 2011.

     

    Image Restoration + Non-Local, Sparsity, Structured Sparsity, Low-rank, and Smoothness (TV)

    Algorithms:

    NLS-BPFA: Nonparametric Bayesian dictionary learning has shown powerful potential in image restoration, but it has yet to exploit image structure to improve performance. In this work, we propose a sparse Bayesian dictionary learning framework with a structure prior, called nonlocal structured beta process factor analysis (NLS-BPFA), which connects nonlocal self-similarity with sparse Bayesian dictionary learning. A nonlocal structured beta process is proposed to introduce nonlocal self-similarity as a structure prior for image denoising and inpainting. Unlike most existing image denoising methods, the proposed method is unsupervised and does not need to know the noise variance in advance. Experimental results demonstrate the effectiveness of the proposed model.
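
    To make "nonlocal self-similarity" concrete (a generic block-matching sketch, not the exact grouping procedure of NLS-BPFA):

        import numpy as np

        def group_similar_patches(img, ref_yx, patch=8, search=20, k=16):
            # Collect the k patches closest (in Euclidean distance) to the
            # reference patch within a local search window; a structured prior
            # can then tie the sparse codes of such a group together.
            H, W = img.shape
            y0, x0 = ref_yx
            ref = img[y0:y0 + patch, x0:x0 + patch].ravel()
            cands = []
            for y in range(max(0, y0 - search), min(H - patch, y0 + search) + 1):
                for x in range(max(0, x0 - search), min(W - patch, x0 + search) + 1):
                    p = img[y:y + patch, x:x + patch].ravel()
                    cands.append((np.sum((p - ref) ** 2), y, x))
            cands.sort(key=lambda t: t[0])
            return np.stack([img[y:y + patch, x:x + patch] for _, y, x in cands[:k]])

        rng = np.random.default_rng(0)
        img = rng.standard_normal((64, 64))
        print(group_similar_patches(img, (20, 20)).shape)   # (16, 8, 8)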

    Homo-SPARSE: We establish an image denoising model and propose a new denoising algorithm based on sparse representation and dictionary learning theory. The homotopy method is used to learn the dictionary, offering fast convergence and high accuracy in signal recovery. The OMP algorithm then computes the sparse representation of the noisy image over the learned dictionary, and combining this with the sparse denoising model yields the denoised image. Experimental results show that the proposed algorithm achieves good performance under different noise environments, and in convergence-speed comparisons it runs faster than K-SVD, which demonstrates the advantage of using the homotopy method for dictionary learning.
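
    For reference, a minimal OMP routine over a dictionary D (a generic textbook implementation, not the code used in the paper):

        import numpy as np

        def omp(D, y, k):
            # Orthogonal Matching Pursuit: greedily pick the atom most correlated
            # with the residual, then re-fit all selected coefficients by least squares.
            coef = np.zeros(D.shape[1])
            support, residual = [], y.copy()
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))
                if j not in support:
                    support.append(j)
                z, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ z
            coef[support] = z
            return coef

        # Usage: sparse-code a synthetic signal over a normalized random dictionary.
        rng = np.random.default_rng(1)
        D = rng.standard_normal((32, 64))
        D /= np.linalg.norm(D, axis=0)
        y = D[:, [3, 17, 40]] @ np.array([1.0, -0.5, 2.0])
        print(np.nonzero(omp(D, y, 3))[0])   # expect [ 3 17 40]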

    SBL-TILT: Compared to low-level local features, transform invariant low-rank textures (TILT) can globally rectify a large class of low-rank textures in 2D images, and are thus more accurate and robust. However, the existing algorithms, based on the alternating direction method (ADM) and the linearized alternating direction method with adaptive penalty (LADMAP), suffer from weak robustness and local minima, especially under heavy corruptions and occlusions. In this work, instead of exploiting optimization methods, we build a hierarchical Bayesian model for TILT and implement a variational method for Bayesian inference. Rather than point estimation, the proposed Bayesian approach accounts for the uncertainty of the parameters, which has been shown to suffer from far fewer local minima. Experimental results on both synthetic and real data indicate that the new algorithm outperforms the existing ones, especially in cases with corruptions and occlusions.
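
    For reference, the TILT problem being modeled is the standard formulation

        \min_{A, E, \tau} \|A\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad D \circ \tau = A + E,

    where D is the observed image window, \tau the domain transformation to be recovered, A the rectified low-rank texture, and E the sparse error.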


    Related Publications:

    [1] Z. Liu, L. Yu, and H. Sun, "Image Restoration via Bayesian Dictionary Learning with Nonlocal Structured Beta Process," Journal of Visual Communication and Image Representation, vol. 52, pp. 159-169, 2018. [Elsevier]

    [2] Z. Liu, L. Yu, and H. Sun, "Image Denoising via Nonlocal Low Rank Approximation with Local Structure Preserving," vol. 7, 2019. [IEEE]

    [3] M. Zhang, Z. Liu, J. Ren, and L. Yu, "Application of the Homotopy Method in Image Sparse Denoising," Journal of Signal Processing, vol. 34, no. 1, pp. 89-97, 2018. (in Chinese)

    [4] S. Hu, Z. Liu, L. Yu, and H. Sun, "Sparse Bayesian Learning for Image Rectification with Transform Invariant Low-Rank Textures," Signal Processing, vol. 137, pp. 298-308, 2017.

