MMSE Estimation for Sparse Representation Modeling
Michael Elad
The Computer Science Department
The Technion - Israel Institute of Technology, Haifa 32000, Israel
Joint work with Irad Yavneh & Matan Protter, The CS Department, The Technion
April 6th, 2009

Noise Removal?
In this talk we focus on signal/image denoising - removing additive noise from a given signal/image. This problem is important because (i) it is a practical application, and (ii) it is a convenient platform for testing basic ideas in signal/image processing. Many directions have been considered:
- Partial differential equations
- Statistical estimators
- Adaptive filters
- Inverse problems & regularization
- Wavelets
- Example-based techniques
- Sparse representations
Main message today: several sparse representations can be found and used for better denoising performance. We introduce, motivate, discuss, demonstrate, and explain this new idea.

Agenda
- Part I: Background on Denoising with Sparse Representations
- Part II: Using More than One Representation: Intuition
- Part III: Using More than One Representation: Theory
- Part IV: A Closer Look at the Unitary Case
- Part V: Summary and Conclusions

Part I: Background on Denoising with Sparse Representations

Denoising by Energy Minimization
Many of the proposed signal denoising algorithms are related to the minimization of an energy function of the form
    f(x) = \tfrac{1}{2}\|x - y\|_2^2 + \Pr(x),
where the first term expresses the relation to the measurements and the second is the prior (regularization); y is the given measurement and x is the unknown to be recovered. This is in fact a Bayesian point of view, adopting the Maximum A-posteriori Probability (MAP) estimation. Clearly, the wisdom in such an approach lies in the choice of the prior - modeling the signals of interest. [Portrait: Thomas Bayes, 1702-1761.]

Sparse Representation Modeling
- D is a fixed dictionary of size N x K: every column in D is a prototype signal (an atom).
- The vector \alpha is sparse and random: it is generated with few (say L, for now) non-zeros at random locations and with random values.
- The signal is modeled as x = D\alpha.

Back to Our MAP Energy Function
The L_0 "norm" \|\alpha\|_0 effectively counts the number of non-zeros in \alpha. The vector \alpha is the (sparse/redundant) representation of the signal. Bottom line: denoising of y is done by minimizing
    \hat{\alpha} = \arg\min_\alpha \|\alpha\|_0  subject to  \|D\alpha - y\|_2^2 \le \varepsilon^2,
or an equivalent penalized form, and recovering \hat{x} = D\hat{\alpha}.

The Solver We Use: Greed-Based
The Matching Pursuit (MP) is one of the greedy algorithms that find one atom at a time [Mallat & Zhang (93)]:
- Step 1: find the one atom that best matches the signal.
- Next steps: given the previously found atoms, find the next one that best fits the residual.
- The algorithm stops when the error falls below a prescribed threshold.
The Orthogonal MP (OMP) is an improved version that re-evaluates the coefficients by least squares after each round.

Orthogonal Matching Pursuit
OMP finds one atom at a time for approximating the solution of \arg\min_\alpha \|\alpha\|_0 subject to \|D\alpha - y\|_2 \le \varepsilon.
[Flowchart: initialization, then a main iteration of five steps, repeated until the stopping test is met.]
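Since the flowchart itself is not reproduced here, the following is a minimal NumPy sketch of the OMP loop as just described: match the residual, enlarge the support, re-fit by least squares, stop at the error threshold. It assumes unit-norm atoms; the function name and interface are illustrative, not from the talk.

```python
import numpy as np

def omp(D, y, eps):
    """Minimal OMP sketch: greedily pick the atom most correlated with
    the residual, re-fit the coefficients on the chosen support by
    least squares, and stop when the residual error drops below eps.
    Assumes the columns of D are normalized to unit length."""
    N, K = D.shape
    support = []
    alpha_s = np.zeros(0)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < N:
        corr = np.abs(D.T @ residual)      # match of every atom to the residual
        corr[support] = 0.0                # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        Ds = D[:, support]                 # atoms chosen so far
        alpha_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ alpha_s        # least-squares re-fit: the "O" in OMP
    alpha = np.zeros(K)
    alpha[support] = alpha_s
    return alpha
```

The least-squares re-evaluation inside each round is exactly what distinguishes OMP from plain MP.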
Part II: Using More than One Representation: Intuition

Back to the Beginning: What If...
Consider the denoising problem \arg\min_\alpha \|\alpha\|_0 subject to \|D\alpha - y\|_2^2 \le \varepsilon^2, and suppose that we can find a group of J candidate solutions \{\hat{\alpha}_j\}_{j=1}^J, each sparse and each satisfying \|D\hat{\alpha}_j - y\|_2^2 \le \varepsilon^2. Basic questions:
- What could we do with such a set of competing solutions in order to better denoise y?
- Why should this help?
- How shall we practically find such a set of solutions?
Relevant work: Larsson & Selen (07); Schniter et al. (08); Elad & Yavneh (08).

Motivation - General
Why bother with such a set?
- Because each representation conveys a different story about the desired signal.
- Because pursuit algorithms are often wrong in finding the sparsest representation, and relying on their solution alone is too sensitive.
- Maybe there are "deeper" reasons?

Our Motivation
There is an intriguing relationship between this idea and the common practice in example-based techniques, where several examples are merged. Consider the Non-Local-Means [Buades, Coll & Morel (05)]: (i) it uses a local dictionary (the neighborhood patches), (ii) it builds several sparse representations (of cardinality 1), and (iii) it merges them. Why not take this further and use general sparse representations?

Generating Many Representations
Our answer: randomizing the OMP.* The loop is the same as OMP's, but instead of deterministically choosing the atom that best matches the residual, the next atom is drawn at random, with a probability that grows with the quality of its match, controlled by a parameter c. For now, let us set c manually for best performance; later we shall define a way to set it automatically. (A code sketch follows at the end of this part.)
* Larsson and Schniter propose a more complicated, deterministic tree-pruning method.

Let's Try
Proposed experiment:
- Form a random dictionary D of size 100 x 200.
- Multiply it by a sparse vector \alpha_0.
- Add Gaussian iid noise v with \sigma = 1 and obtain y = D\alpha_0 + v.
- Solve the problem \arg\min_\alpha \|\alpha\|_0 subject to \|D\alpha - y\|_2 \le \varepsilon using OMP, and obtain \hat{\alpha}_{OMP}.
- Use Random-OMP and obtain \{\hat{\alpha}_j^{RandOMP}\}_{j=1}^J.
- Let's look at the obtained representations.

Some Observations
We see that:
- OMP gives the sparsest solution.
- Nevertheless, it is not the most effective for denoising.
- The cardinality of a representation does not reveal its efficiency.

The Surprise (at least for us)
Let us propose the average \hat{\alpha} = (1/J) \sum_{j=1}^J \hat{\alpha}_j^{RandOMP} as our representation. This representation IS NOT SPARSE AT ALL, but it gives a smaller denoising error.

Is It Consistent? Yes!
Here are the results of 1000 trials with the same parameters: the averaged Random-OMP consistently outperforms the OMP solution. [Scatter plot of the denoising errors over the 1000 trials; a few points correspond to cases of a zero solution.]
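Here is a hedged sketch of Random-OMP and of the experiment above. The only change to OMP is that the next atom is drawn at random, here with probability proportional to exp(c * (d_j^T r)^2), in the spirit of the weight c discussed in Part III; the cardinality of \alpha_0, the value of c, and the stopping rule below are illustrative placeholders, since the slides do not fix them.

```python
import numpy as np

def random_omp(D, y, eps, c, rng):
    """One Random-OMP run: same loop as OMP, except that the next atom
    is DRAWN at random with probability ~ exp(c * (d_j^T r)^2), rather
    than taken as the argmax. c is set manually, as in the talk."""
    N, K = D.shape
    support, alpha_s = [], np.zeros(0)
    residual = y.astype(float).copy()
    while np.linalg.norm(residual) > eps and len(support) < N:
        score = (D.T @ residual) ** 2
        score[support] = -np.inf                       # exclude chosen atoms
        w = np.exp(c * (score - score.max()))          # numerically stabilized weights
        support.append(int(rng.choice(K, p=w / w.sum())))
        Ds = D[:, support]
        alpha_s, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ alpha_s
    alpha = np.zeros(K)                                # all-zero output = the "zero
    alpha[support] = alpha_s                           # solution" cases on the slide
    return alpha

# The slide's experiment in miniature. The cardinality of alpha_0 and
# the value of c are my placeholders, not values from the talk.
rng = np.random.default_rng(0)
N, K, sigma, J, c = 100, 200, 1.0, 10, 0.5
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
alpha0 = np.zeros(K)
alpha0[rng.choice(K, size=10, replace=False)] = rng.standard_normal(10)
y = D @ alpha0 + sigma * rng.standard_normal(N)
eps = sigma * np.sqrt(N)                               # stop near the noise level

runs = [random_omp(D, y, eps, c, rng) for _ in range(J)]
alpha_avg = np.mean(runs, axis=0)                      # dense, yet a better denoiser
print("one run :", np.linalg.norm(D @ runs[0] - D @ alpha0))
print("average :", np.linalg.norm(D @ alpha_avg - D @ alpha0))
```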
Part III: Using More than One Representation: Theory

Our Signal Model
- D is a fixed and known dictionary of size N x K.
- The vector \alpha is built by:
  - choosing the support s with probability P(s) from all the 2^K possibilities; for simplicity, assume that |s| = k is fixed and known;
  - choosing the coefficients \alpha_s as iid Gaussian entries N(0, \sigma_x^2).
- The ideal signal is x = D\alpha = D_s \alpha_s.
- The p.d.f.'s P(\alpha) and P(x) are clear and known.

Adding Noise
We measure y = x + v, where the noise v is an additive white Gaussian vector with p.d.f. P_v(v). The conditional p.d.f.'s P(y|s), P(s|y), and even P(x|y) are all clear and well-defined (although they may appear nasty).

The Key: The Posterior P(x|y)
We have access to the posterior, and two natural estimators follow:
- MAP: \hat{x}_{MAP} = \arg\max_x P(x|y);
- MMSE: \hat{x}_{MMSE} = E(x|y).
Estimating \alpha and multiplying by D is equivalent to the above. Both estimators are impossible to compute in general, as we show next. A third reference point is the oracle, which knows the support s.

Let's Start with the Oracle
When s is known, the Gaussian model gives the estimate
    \hat{\alpha}_s = ( D_s^T D_s + (\sigma^2/\sigma_x^2) I )^{-1} D_s^T y.
Comments:
- Given the support, this estimate is both the MAP and the MMSE.
- The oracle estimate of x is obtained by multiplication by D_s.

The MAP Estimation
The MAP estimate maximizes the posterior probability of the support, \hat{s}_{MAP} = \arg\max_s P(s|y), an expression we have already met as the oracle's probability for the support s. Implications:
- The MAP estimator requires testing all the possible supports for the maximization. In typical problems this is impossible, as there is a combinatorial set of possibilities.
- This is why we rarely use the exact MAP, and typically replace it with approximation algorithms (e.g., OMP).

The MMSE Estimation
The MMSE estimate is a weighted average over all supports,
    \hat{x}_{MMSE} = E(x|y) = \sum_s P(s|y) \, E(x|s,y),
where E(x|s,y) is exactly the oracle estimate for s, as we have seen before. Implications:
- The best estimator (in terms of L_2 error) is a weighted average of many sparse representations!
- As in the MAP case, in typical problems one cannot compute this expression, as the summation is over a combinatorial set of possibilities. We should propose approximations here as well.

The Case of |s| = k = 1
When the support contains a single atom d_k (the k-th atom in D), the weights P(s|y) take a simple exponential form in the inner product d_k^T y; the constant in that exponent is our c in the Random-OMP. Based on this we can propose a greedy algorithm for both MAP and MMSE:
- MAP: choose the atom with the largest inner product (out of K), one at a time, while freezing the previous ones (almost OMP).
- MMSE: draw an atom at random in a greedy fashion, based on the above probability set, getting close to P(s|y) in the overall draw.

Bottom Line
- The MMSE estimation we got requires a sweep through all supports (a combinatorial search) - impractical.
- Similarly, an explicit expression for P(x|y) can be derived and maximized - this is the MAP estimation, and it too requires a sweep through all possible supports - impractical as well.
- The OMP is a (good) approximation of the MAP estimate.
- The Random-OMP is a (good) approximation of the MMSE estimate; it is close to a Gibbs sampler of the probability P(s|y), from which we should draw the weights.
- Back to the beginning: why use several representations? Because their average leads to provably better noise suppression.

Comparative Results
The following results correspond to a small dictionary (20 x 30), where the combinatorial formulas can be evaluated as well. Parameters: N = 20, K = 30, true support size 3, \sigma_x = 1, J = 10 (RandOMP), averaged over 1000 experiments. [Plot: relative mean-squared error versus the noise level \sigma, for (1) empirical oracle, (2) theoretical oracle, (3) empirical MMSE, (4) theoretical MMSE, (5) empirical MAP, (6) theoretical MAP, (7) OMP, and (8) RandOMP; the oracle curves assume a known support.] A sketch of the exact combinatorial evaluation appears below.
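For tiny problems like the 20 x 30 one above, the "impractical" combinatorial sweep can be coded directly. The sketch below assumes exactly the Part III model - a uniform prior over size-k supports, \alpha_s ~ N(0, \sigma_x^2 I), white Gaussian noise - under which P(y|s) is a zero-mean Gaussian with covariance \sigma_x^2 D_s D_s^T + \sigma^2 I; the function name is illustrative.

```python
import numpy as np
from itertools import combinations

def exact_mmse(D, y, k, sigma, sigma_x):
    """Exact MMSE denoising by sweeping ALL size-k supports; feasible
    only for tiny dictionaries such as the 20x30 one on the slide.
    Model as in Part III: uniform support prior, alpha_s ~ N(0, sigma_x^2 I),
    white Gaussian noise of standard deviation sigma."""
    N, K = D.shape
    log_w, x_oracle = [], []
    for s in combinations(range(K), k):
        Ds = D[:, list(s)]
        # Oracle (posterior-mean) estimate of x for this support
        A = Ds.T @ Ds + (sigma / sigma_x) ** 2 * np.eye(k)
        x_oracle.append(Ds @ np.linalg.solve(A, Ds.T @ y))
        # log P(y|s): y|s ~ N(0, sigma_x^2 Ds Ds^T + sigma^2 I)
        C = sigma_x**2 * (Ds @ Ds.T) + sigma**2 * np.eye(N)
        _, logdet = np.linalg.slogdet(C)
        log_w.append(-0.5 * (logdet + y @ np.linalg.solve(C, y)))
    log_w = np.asarray(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                         # = P(s|y) under the uniform prior
    # (The exact MAP picks the single support np.argmax(w) instead.)
    return w @ np.asarray(x_oracle)      # the MMSE weighted average
```

With the slide's parameters (N = 20, K = 30, k = 3) this enumerates C(30,3) = 4060 supports, which is precisely why the exact MMSE/MAP curves can be computed only at this scale.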
Part IV: A Closer Look at the Unitary Case

A Few Basic Observations
Let us denote \beta = D^T y, the vector of inner products between the atoms and the noisy signal; with D unitary, the oracle's formulas above simplify accordingly. [The slide lists these simplified expressions.]

Back to the MAP Estimation
We assume |s| = k is fixed, with equal probabilities over the supports, so the support-prior part of P(s|y) becomes a constant and can be discarded. This means that MAP estimation can be easily evaluated by computing \beta = D^T y, sorting its entries by magnitude in descending order, and choosing the k leading ones.

Closed-Form Estimation
It is well known that MAP enjoys a closed-form and simple solution in the case of a unitary dictionary D. This closed-form solution takes the structure of thresholding or shrinkage; the specific structure depends on the fine details of the model assumed. It is also known that OMP becomes exact in this case. What about the MMSE? Could it have a simple closed-form solution too?

The MMSE Again
This is the formula we got: \hat{x}_{MMSE} = \sum_s P(s|y) E(x|s,y). We combine linearly many sparse representations (with proper weights); the result is one effective representation (not sparse anymore). In the unitary case, the above summation over supports can be rearranged into K contributions - one per atom - to be found and used. [The slide gives this rearranged expression.] We have developed a closed-form recursive formula for computing the q coefficients that weight these contributions.

Towards a Recursive Formula
We have seen that the governing probability for the weighted averaging is P(s|y); in the unitary case it factors into per-atom terms through an indicator function stating whether j is in s. [The slide derives this factorization explicitly.]

The Recursive Formula
[The slide states the closed-form recursion for the q coefficients; the equations are not reproduced here.]

An Example
This is a synthetic experiment resembling the previous one, but with a few important changes:
- D is unitary.
- The representations' cardinality is 5 (the higher it is, the weaker the Random-OMP becomes).
- The dimensions are different: N = K = 64.
- J = 20 (RandOMP runs).
[Plot: relative mean-squared error versus the noise level \sigma, comparing the oracle (known support), theoretical MAP, OMP, theoretical MMSE, the recursive MMSE formula, and RandOMP.]
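To make the unitary-case MAP concrete, here is a minimal sketch of the sort-and-select rule described above. The linear shrink \sigma_x^2/(\sigma_x^2 + \sigma^2) applied to the kept coefficients is my addition, derived from the oracle formula with D_s^T D_s = I; the recursive MMSE is deliberately not sketched, since its formula is not reproduced in this text.

```python
import numpy as np

def unitary_map(D, y, k, sigma, sigma_x):
    """MAP estimate for a unitary dictionary D (D^T D = I), per the
    slides: compute beta = D^T y, sort by magnitude, keep the k leading
    entries. The linear shrink on the kept coefficients follows from
    the Gaussian model's oracle with D_s^T D_s = I (an assumption
    stated in the lead-in, not printed on the slide)."""
    beta = D.T @ y                               # inner products with all atoms
    s = np.argsort(-np.abs(beta))[:k]            # k largest-magnitude entries
    shrink = sigma_x**2 / (sigma_x**2 + sigma**2)
    alpha = np.zeros_like(beta)
    alpha[s] = shrink * beta[s]                  # keep-and-shrink: a thresholding rule
    return D @ alpha                             # the denoised signal
```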
Part V: Summary and Conclusions

Today We Have Seen That...
Sparsity and redundancy are used for denoising signals/images. How? By finding the sparsest representation and using it to recover the clean signal. Can we do better? Today we have shown that averaging several sparse representations of a signal leads to better denoising, as it approximates the MMSE estimator. More on this (including the slides and the relevant papers) can be found at www.cs.technion.ac.il/~elad