This webpage describes the Matlab files used in our improved work on universal compressed sensing signal estimation, described in the following paper.

- J. Zhu, D. Baron, and M. Duarte, "Recovery from Linear Measurements with Complexity-Matching Universal Signal Estimation," to appear in IEEE Trans. Signal Process., December 2014 (ArXiv, pdf).

The current implementation applies the Markov chain Monte Carlo (MCMC) approach pioneered by Jalali and Weissman to the compressed sensing setting: instead of approximating a known input, we approximate an unknown input from linear measurements. The algorithm consists of four stages and employs an adaptive framework that matches the number of quantization levels to the complexity of the signal.
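To give a feel for the per-symbol sampling step at the heart of such MCMC estimators, here is a scaled-down sketch in Python/NumPy. All names, sizes, and the pure ell2 energy are illustrative: the actual SLA-MCMC energy also includes a complexity (coding-length) term, and the package updates the error incrementally rather than recomputing Phi*w for every candidate.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(w, y, Phi, levels, temp=0.05):
    """One Gibbs sweep: resample each entry of the quantized estimate w.

    For every position i, evaluate the squared ell2 error for each
    candidate level beta, then draw the new symbol from the Boltzmann
    (Gibbs) distribution exp(-error / temp).  Toy version: the real
    algorithm also penalizes coding complexity and avoids the full
    recomputation of Phi @ trial done here.
    """
    w = w.copy()
    for i in range(len(w)):
        errs = np.empty(len(levels))
        for j, beta in enumerate(levels):
            trial = w.copy()
            trial[i] = beta
            errs[j] = np.sum((y - Phi @ trial) ** 2)
        p = np.exp(-(errs - errs.min()) / temp)  # shift for stability
        p /= p.sum()
        w[i] = levels[rng.choice(len(levels), p=p)]
    return w

# Tiny demo: 3-level signal, 2x undersampling.
levels = np.array([-1.0, 0.0, 1.0])
x = rng.choice(levels, size=20)
Phi = rng.standard_normal((10, 20)) / np.sqrt(10)
y = Phi @ x
w = np.zeros(20)
for _ in range(50):
    w = gibbs_sweep(w, y, Phi, levels)
```

After the sweeps, w lies on the reproduction alphabet and its residual ||y - Phi*w||^2 is far below that of the all-zero initialization.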

Below is a brief description of files used in our implementation. Comments will be appreciated.

Feel free to also browse through other software packages developed by our group.

Junan Zhu, Dror Baron, and Marco Duarte, December 2014

- **SLAM.m** - main function of size- and level-adaptive MCMC (SLA-MCMC).
- **add_inside.m** - adds levels inside the current reproduction range; used to increase the size of the reproduction alphabet.
- **level_emerge.m** - adds one randomly populated level inside the current reproduction range. Used in add_inside.m.
- **add_outside.m** - adds levels outside the current reproduction range; used to increase the size as well as the range of the reproduction alphabet.
- **level_outside.m** - adds one or two empty level(s) outside the current reproduction range. Used in add_outside.m.
- **remove_inside.m** - removes levels inside the current reproduction range; used to decrease the size of the reproduction alphabet.
- **level_eliminate.m** - removes one level inside the current reproduction range by merging the two closest levels. Used in remove_inside.m.
- **count_update_all_compact.m** - evaluates all possible symbols and computes the Gibbs distribution; then generates a new symbol and updates the data structures accordingly. Instead of working directly on the context counts data structure, only the locations of changes are saved; this accelerates the algorithm when the context count matrix is large.
- **ell2_update_all_compact.m** - examines all possible characters beta to replace the current character in the sequence. For each such beta, we compute the optimal reproduction levels qZ that minimize the ell2 error between the measurements y and Phi*qZ(z'), where qZ(z') is the current estimate of x. Knowing ||y - Phi*qZ(z')||^2 allows us to compute the change (delta) in the ell2 error when swapping the character. Computing the optimal reproduction levels requires solving mldivide(mu'*mu, mu'*y), where mu'*mu and mu'*y are data structures related to computing the ell2 error. However, computing the Gram matrix mu'*mu is computationally intensive, and to a lesser extent so is computing mu'*y. To reduce computation, we compute the Gram matrix and mu'*y once, and then update them each time we evaluate a different character:
  1. Gram update: when one column of mu changes, we only need to change one row and one column of the Gram matrix. Each is changed in the same way, because the Gram matrix is symmetric.
  2. Updating mu'*y only requires changing one entry of the mu_y vector, because the other columns of mu, which correspond to the other entries of mu_y, are unchanged.
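As a concrete illustration of updates (1) and (2), here is a small NumPy sketch; the dimensions and variable names are made up for the example, but the update pattern is the one described above.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 30, 12
mu = rng.standard_normal((M, N))
y = rng.standard_normal(M)

gram = mu.T @ mu        # Gram matrix, computed once: O(M N^2)
mu_y = mu.T @ y         # computed once: O(M N)

# Suppose swapping one character changes column j of mu.
j = 4
new_col = rng.standard_normal(M)
mu[:, j] = new_col

# (1) Gram update: only row j and column j change, and they change
#     identically because the Gram matrix is symmetric.
row_j = mu.T @ new_col          # O(M N) instead of recomputing O(M N^2)
gram[j, :] = row_j
gram[:, j] = row_j

# (2) mu'*y update: only entry j changes, since the other columns of mu
#     (which produce the other entries) are untouched.
mu_y[j] = new_col @ y
```

Both incrementally updated structures match a full recomputation, at a fraction of the cost when N is large.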

- **MCMC_compact_mat.m** - the signal estimation algorithm from 2011. Sets up data structures for contexts and optimization of the ell2 error; runs d_update and count_update_all each iteration to update the data structures.

- **example.m** - a simple example showing how to run SLA-MCMC.
- **alph2int.m** - converts a string over a finite alphabet into an integer.
- **compute_mu.m** - mu is a set of auxiliary variables used to set up the linear system that optimizes the distortion levels in compute_levels.m.
- **count.m** - creates depth-k context counts for a finite alphabet.
- **ell2_error.m** - computes the squared ell2 error between the measurements y and Phi*x for the estimated signal x.
- **ent.m** - calculates the initial entropy.
- **sort_level.m** - sorts the current reproduction levels (book-keeping).
- **last_siter_mat.m** - runs the last super-iteration: performs a mixture over the Gibbs distribution for each entry of the reconstructed signal.
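To illustrate what the depth-k context counts built by count.m look like, here is a minimal Python sketch. The function name and interface are illustrative, not the package's actual code; encoding the length-k context as a base-|alphabet| integer plays the role that alph2int plays in the package.

```python
import numpy as np

def context_counts(seq, alphabet_size, k):
    """Depth-k context counts for a sequence over {0, ..., alphabet_size-1}.

    counts[c, s] = number of times symbol s follows context c, where the
    context c is the previous k symbols encoded as a base-alphabet_size
    integer (the role of alph2int).  Illustrative sketch, not the
    package's actual implementation.
    """
    counts = np.zeros((alphabet_size ** k, alphabet_size), dtype=int)
    for i in range(k, len(seq)):
        ctx = 0
        for s in seq[i - k:i]:
            ctx = ctx * alphabet_size + s
        counts[ctx, seq[i]] += 1
    return counts

# Alternating binary sequence: every 0 is followed by 1 and vice versa.
seq = [0, 1, 0, 1, 0, 1, 0]
counts = context_counts(seq, alphabet_size=2, k=1)
```

For this alternating sequence the count matrix is purely off-diagonal: symbol 1 follows context 0 three times, and symbol 0 follows context 1 three times.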

Here is a demo of the reconstruction quality of size- and level-adaptive MCMC (SLAM). We selected 9600 samples of the "Chirp" sound clip from Matlab (denoted by x). We took compressed measurements through the vector channel y = Phi*x + z, where Phi is a matrix with M = 70% x 9600 = 6720 rows and N = 9600 columns, and each entry follows an i.i.d. Gaussian distribution. The channel noise z is additive white Gaussian noise, and we set the variance of z such that the input SNR is 15 dB.
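This measurement setup can be sketched in Python/NumPy as follows, scaled down so the matrix fits comfortably in memory (N = 960 instead of 9600), with a synthetic chirp standing in for the Matlab sound clip:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 960                       # scaled down from 9600 for the sketch
M = int(0.7 * N)              # 70% measurement rate -> 672 rows

t = np.arange(N)
x = np.cos(2 * np.pi * (0.001 + 1e-6 * t) * t)   # synthetic chirp stand-in
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian entries

clean = Phi @ x
snr_db = 15.0
# Choose the noise variance so that mean(clean^2) / var = 10^(SNR/10).
noise_var = np.mean(clean ** 2) / 10 ** (snr_db / 10)
z = np.sqrt(noise_var) * rng.standard_normal(M)
y = clean + z                 # noisy compressed measurements
```

The empirical SNR of y computed from clean and z lands within a fraction of a dB of the 15 dB target.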

Instead of reconstructing x, we reconstruct the short-time discrete cosine transform (DCT) coefficients theta of length 9600. Then, we carry out an inverse short-time DCT on theta to obtain the reconstructed "Chirp" in the time domain.
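The short-time DCT round trip can be sketched with an orthonormal blockwise DCT-II; the block length and function names below are chosen for the example, and the package may use a different block size or windowing.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix C with C @ C.T = I."""
    k = np.arange(n)[:, None]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2)     # scale the DC row for orthonormality
    return C

def short_time_dct(x, block=64):
    """Blockwise DCT: transform consecutive length-`block` segments."""
    C = dct_matrix(block)
    return (C @ x.reshape(-1, block).T).T.ravel()

def inv_short_time_dct(theta, block=64):
    """Inverse blockwise DCT: C is orthonormal, so the inverse is C.T."""
    C = dct_matrix(block)
    return (C.T @ theta.reshape(-1, block).T).T.ravel()

rng = np.random.default_rng(3)
x = rng.standard_normal(256)          # length must be a multiple of block
theta = short_time_dct(x)             # coefficients, same length as x
x_rec = inv_short_time_dct(theta)     # perfect reconstruction
```

Because the transform is orthonormal, estimating theta and inverting recovers the time-domain signal exactly when theta is exact; reconstruction error in theta maps one-to-one to ell2 error in the time domain.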

The time-domain reconstructions of "Chirp" by SLAM and by the well-known EM-GM-AMP-MOS algorithm can be downloaded here. The ell2 error of EM-GM-AMP-MOS is 0.0023, and the ell2 error of SLAM is 0.0015. You can hear the difference in reconstruction quality by running the demo:

In Matlab,

load demo_chirp; %load the demo file.

wavplay(x,Fs); %play the original Chirp sound clip.

wavplay(xhat_MOS,Fs); %play the reconstructed Chirp by EM-GM-AMP-MOS.

wavplay(xhat_SLAM,Fs); %play the reconstructed Chirp by SLAM.

(Note: wavplay has been removed from recent Matlab releases; sound(x,Fs) plays the clips instead.)