Disjoint sparsity for signal separation and applications to hybrid imaging inverse problems

Disjoint sparsity for signal separation and applications to hybrid imaging inverse problems
Giovanni S. Alberti (joint work with H. Ammari)
DMA, École Normale Supérieure, Paris. June 16, 2015.

Photoacoustic tomography (illustration from Wikipedia).
1. The measured image is H(x) = Γ(x)µ(x)u(x), where Γ is the Grüneisen parameter, µ the optical absorption and u the light intensity.
2. How can the unknowns be extracted from H?

PDE-based methods.
Works by Arridge, Bal, Beard, Cox, Gao, Jollivet, Jugnon, Kaipio, Köstli, Laufer, Naetar, Pulkkinen, Ren, Scherzer, Tarvainen, Uhlmann, Zhao. A possible way to obtain Γ and µ from H = Γµu is based on the PDE satisfied by u. In the diffusive regime for light propagation there holds

    −div(D ∇u) + µu = 0 in Ω,

where D is the diffusion parameter. This approach is sometimes very successful. However, it has some drawbacks:
- the PDE model may be inaccurate (e.g. in the transport regime for light);
- there are too many unknowns (e.g. if Γ ≢ 1 above);
- several derivatives of the data are needed (a problem in the presence of noise);
- several measurements u_i are needed, satisfying certain independence conditions.
The focus of this talk is a new approach to this issue, based on the separation of the unknowns from the fields via sparsity conditions: h = log H = log(Γµ) + log u = f + g. A small forward-model sketch follows.
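To fix ideas, here is a minimal 1D finite-difference sketch of the diffusive forward model (not from the slides; the grid size, the values of D and µ, and the boundary condition u = 1 are illustrative assumptions). It computes u and then the internal datum H = Γµu.

```python
# Minimal 1D sketch: solve -(D u')' + mu u = 0 on (0,1) with u = 1 on the
# boundary by finite differences (constant D), then form H = Gamma * mu * u.
import numpy as np

n, D = 200, 0.05                       # grid size and diffusion parameter
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

mu = np.full(n, 0.2)                   # background absorption
mu[(x > 0.4) & (x < 0.6)] = 1.0        # an absorbing inclusion

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = A[-1, -1] = 1.0              # Dirichlet rows: u(0) = u(1) = 1
b[0] = b[-1] = 1.0
for i in range(1, n - 1):              # interior rows of -D u'' + mu u = 0
    A[i, i - 1] = A[i, i + 1] = -D / dx**2
    A[i, i] = 2.0 * D / dx**2 + mu[i]

u = np.linalg.solve(A, b)
Gamma = np.ones(n)                     # Gruneisen parameter (here = 1)
H = Gamma * mu * u                     # the photoacoustic internal datum
```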

Introduction to sparse representations.
Let h ∈ R^n be a column vector (n = d × d is the resolution of the image). Let A ∈ R^{n×m} be a dictionary of m atoms, which are used as building blocks:

    (1)  h = Ay, for some coefficient vector y ∈ R^m (the weights).

If m > n then (1) is in general underdetermined and has many solutions y. Select the sparsest one, i.e. the one with fewest non-zero entries:

    min_{y ∈ R^m} ‖y‖₀ subject to h = Ay,

where ‖y‖₀ := #supp y = #{α ∈ {1, ..., m} : y(α) ≠ 0}. If the dictionary A is well chosen, it is possible to represent an n-dimensional vector with far fewer coefficients. In practice, we minimise

    min_{y ∈ R^m} ‖y‖₀ + λ‖h − Ay‖₂²

for some λ > 0 (a hard problem, but many algorithms are available; a greedy sketch follows).
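As a concrete instance of such an algorithm, here is a minimal orthogonal matching pursuit (OMP) sketch, hand-rolled in NumPy (the stopping tolerance and the test sizes are illustrative assumptions): it greedily picks the atom most correlated with the residual and re-fits by least squares on the selected support.

```python
# Greedy sparse coding of h against the dictionary A: an OMP sketch.
import numpy as np

def omp(A, h, n_atoms):
    """Return y with at most n_atoms non-zeros such that A @ y ~ h."""
    support = []
    y = np.zeros(A.shape[1])
    residual = h.copy()
    for _ in range(n_atoms):
        alpha = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if alpha not in support:
            support.append(alpha)
        coef, *_ = np.linalg.lstsq(A[:, support], h, rcond=None)
        y[:] = 0.0
        y[support] = coef                                # re-fit on the support
        residual = h - A @ y
        if np.linalg.norm(residual) < 1e-10:             # illustrative tolerance
            break
    return y

# Usage: a 2-sparse vector w.r.t. a random normalised dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
A /= np.linalg.norm(A, axis=0)
h = 2.0 * A[:, 3] - 1.5 * A[:, 100]
print(np.nonzero(omp(A, h, n_atoms=2))[0])               # typically [3, 100]
```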

Morphological component analysis (Starck, Elad, Donoho, 2004, ...).
Back to the signal separation problem. Let h = f + g ∈ R^n be the sum of two components, and let A_f ∈ R^{n×m_f} and A_g ∈ R^{n×m_g} be two dictionaries such that:
- f can be sparsely represented w.r.t. A_f but not w.r.t. A_g;
- g can be sparsely represented w.r.t. A_g but not w.r.t. A_f.
Decompose h w.r.t. the concatenated dictionary A = [A_f, A_g]:

    min_{y ∈ R^{m_f+m_g}} ‖y‖₀ subject to [A_f, A_g] [y_f; y_g] = h.

Recover f ≈ A_f y_f and g ≈ A_g y_g. In other words, the original components can be recovered provided that they have different features; this is expressed in terms of the sparsity of their decompositions with respect to different dictionaries (see the sketch below).
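A minimal sketch of this separation step, reusing the OMP routine from the previous sketch (repeated here so the block runs on its own); the routine names are mine, not from the talk.

```python
# MCA-style separation: code h against [A_f, A_g], then split the coefficients.
import numpy as np

def omp(A, h, n_atoms):
    support, y, residual = [], np.zeros(A.shape[1]), h.copy()
    for _ in range(n_atoms):
        alpha = int(np.argmax(np.abs(A.T @ residual)))
        if alpha not in support:
            support.append(alpha)
        coef, *_ = np.linalg.lstsq(A[:, support], h, rcond=None)
        y[:] = 0.0
        y[support] = coef
        residual = h - A @ y
    return y

def separate(A_f, A_g, h, n_atoms):
    """Recover f ~ A_f y_f and g ~ A_g y_g from h = f + g."""
    m_f = A_f.shape[1]
    y = omp(np.hstack([A_f, A_g]), h, n_atoms)           # sparsest joint code
    return A_f @ y[:m_f], A_g @ y[m_f:]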

Toy example: h = f + g.
Choose A_f = A_δ (the Dirac/spike dictionary) and A_g = A_F (the Fourier dictionary), and let y = [y_f; y_g] give the sparsest representation of h w.r.t. A = [A_f, A_g]. This clearly provides the right reconstruction: only 4 atoms are used. (Figures: the recovered components f and g; a numerical version is sketched below.)
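Continuing the sketch above (this block assumes the separate routine just defined), here is a numerical version of the toy example: two spikes, sparse in the Dirac dictionary A_δ = I, plus two cosines, sparse in an orthonormal cosine dictionary standing in for A_F. The sizes and atom choices are illustrative assumptions.

```python
# Toy separation: spikes + cosines, 4 atoms in total.
import numpy as np

n = 64
A_delta = np.eye(n)                                  # Dirac dictionary
k = np.arange(n)
A_F = np.cos(np.pi * np.outer(k + 0.5, k) / n)       # DCT-like cosine atoms
A_F /= np.linalg.norm(A_F, axis=0)

f = np.zeros(n)
f[10], f[40] = 1.0, -0.7                             # two spikes
g = 0.8 * A_F[:, 2] + 0.5 * A_F[:, 5]                # two cosine atoms

f_rec, g_rec = separate(A_delta, A_F, f + g, n_atoms=4)
print(np.linalg.norm(f - f_rec), np.linalg.norm(g - g_rec))  # both ~ 0
```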

Theoretical justification (Elad & Bruckstein, 2002).
Why does writing f = A_f y_f and g = A_g y_g give the correct reconstruction? (Uncertainty principle) If h ≠ 0 has the representations h = A y_A = B y_B w.r.t. two orthonormal bases A = [a_1, ..., a_n] and B = [b_1, ..., b_n], then

    ‖y_A‖₀ + ‖y_B‖₀ ≥ 2/M,

where M = max_{i,j} |⟨a_i, b_j⟩| is the mutual coherence. If f and g have representations y_f and y_g satisfying ‖y_f‖₀ + ‖y_g‖₀ < 1/M, then the reconstruction is correct. In practice, the assumption ‖y_f‖₀ + ‖y_g‖₀ < 1/M is almost never satisfied, and so the above argument remains only a theoretical speculation. Focus of this new approach: extension to multiple measurements (many other generalisations by Georgiev, Theis, Cichocki, Bobin, Moudden, Starck, ...).
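A quick numerical check of these quantities for the Dirac/cosine pair used above (sizes are illustrative; for these bases M ≈ √(2/n), so the bound 2/M grows like √n):

```python
# Mutual coherence M = max_{i,j} |<a_i, b_j>| and the uncertainty bound 2/M.
import numpy as np

n = 64
A = np.eye(n)                                        # Dirac basis
k = np.arange(n)
B = np.cos(np.pi * np.outer(k + 0.5, k) / n)
B /= np.linalg.norm(B, axis=0)                       # orthonormal cosine basis

M = np.max(np.abs(A.T @ B))
print(M, 2.0 / M)

# A single spike is 1-sparse in A but dense in B, so the bound is respected:
h = A[:, 0]
y_B = B.T @ h                                        # exact coefficients in B
print(1 + np.count_nonzero(np.abs(y_B) > 1e-12), ">=", 2.0 / M)
```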

Multi-measurement case.
Let h_i = f + g_i ∈ R^n, i = 1, ..., N, be N measurements. The problem is to recover f and the g_i's. Let A_f ∈ R^{n×m_f} and A_g ∈ R^{n×m_g} be two dictionaries as before, and assume that A_f and A_g are left invertible. The reconstruction method applied here consists in the minimisation of

    min_{y ∈ R^{m_f+N m_g}} ‖y‖₀ + λ ∑_{i=1}^N ‖[A_f, A_g] [y_f; y_g^i] − h_i‖₂²,

where y = (y_f^T, (y_g^1)^T, ..., (y_g^N)^T)^T. The noisy case: h_i = f + g_i + n_i with ‖n_i‖₂ ≤ ε for some ε > 0. Why does this provide a better reconstruction? (A stacked-system sketch follows.)
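A sketch of how this joint minimisation can be set up in practice (assuming the omp routine from the sparse-representation sketch above): since the blocks y_g^i enter independent residuals while y_f is shared, the N systems can be stacked into one tall linear system and coded greedily. The stacked layout and routine name are my assumptions, not a prescription from the talk.

```python
# Stack h_1, ..., h_N into one system with shared y_f and per-measurement y_g^i.
import numpy as np

def separate_multi(A_f, A_g, H, n_atoms):
    """H: (N, n) array whose rows are the measurements h_i = f + g_i (+ noise)."""
    N, n = H.shape
    m_f, m_g = A_f.shape[1], A_g.shape[1]
    big = np.zeros((N * n, m_f + N * m_g))
    for i in range(N):
        big[i * n:(i + 1) * n, :m_f] = A_f               # shared block column
        big[i * n:(i + 1) * n, m_f + i * m_g:m_f + (i + 1) * m_g] = A_g
    y = omp(big, H.reshape(-1), n_atoms)                 # greedy joint code
    f = A_f @ y[:m_f]
    G = np.stack([A_g @ y[m_f + i * m_g:m_f + (i + 1) * m_g] for i in range(N)])
    return f, G                                          # f and the g_i's
```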

Assumptions.
In the applications we have in mind, we shall have (indirect) control on the g_i's. We need enough incoherent data: this is measured by their disjoint sparsity. Write f ≈ A_f ỹ_f and g_i ≈ A_g ỹ_g^i.

Definition. Take β > 0, N ∈ N, ỹ_f ∈ R^{m_f} and ỹ_g^1, ..., ỹ_g^N ∈ R^{m_g}. We say that {ỹ_g^1, ..., ỹ_g^N} is a complete set of measurements if these two conditions hold true:
1. if |ỹ_g^i(α)| ≥ β and |ỹ_g^j(α)| ≥ β for some α ∈ {1, ..., m_g}, then i = j;
2. for every p ∈ R^n such that {α : |(A_g^{−1} p)(α)| ≥ ε} ≠ ∅ there holds

    #(supp(A_f^{−1} p) \ supp ỹ_f) + ∑_{i=1}^N #({α : |(A_g^{−1} p)(α)| ≥ ε} \ supp ỹ_g^i) > 6 + #(⋃_{i=1}^N supp ỹ_g^i) + ‖ỹ_f‖₀.

Condition 1 is mainly a technical assumption. The uncertainty principle ‖A_f^{−1} p‖₀ + ‖A_g^{−1} p‖₀ ≥ 2/M and a big enough N make condition 2 easy to satisfy, provided that the g_i's are disjointly sparse. The bigger the noise level ε is, the bigger N needs to be.
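Condition 1, as reconstructed above, says that no coefficient index carries an entry of size ≥ β in two different measurements. A small checker (function name and threshold are illustrative):

```python
# Check the disjoint-sparsity condition on the coefficients ytilde_g^i.
import numpy as np

def disjointly_sparse(Y_g, beta):
    """Y_g: (N, m_g) array, one coefficient vector per measurement."""
    big = np.abs(Y_g) >= beta                    # 'large' entries per measurement
    return bool(np.all(big.sum(axis=0) <= 1))    # each index large in <= 1 of them

Y = np.array([[1.0, 0.0, 0.2],
              [0.0, 0.9, 0.1]])
print(disjointly_sparse(Y, beta=0.5))            # True: large supports are disjoint
```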

Main result.
Theorem. Take β > 0, N ∈ N, ỹ_f ∈ R^{m_f} and ỹ_g^1, ..., ỹ_g^N ∈ R^{m_g}, and assume that {ỹ_g^1, ..., ỹ_g^N} is complete. There exists ε̄ > 0, depending only on N, A_g, ‖ỹ_f‖₀ and β, such that for every ε ≤ ε̄ the following is true. Assume that f, g_i, n_i ∈ R^n satisfy ‖n_i‖₂ ≤ ε and

    ‖A_f ỹ_f − f‖₂ ≤ ε,  ‖A_g ỹ_g^i − g_i‖₂ ≤ ε,  i = 1, ..., N,

and let y_f ∈ R^{m_f} and y_g^i ∈ R^{m_g} realise the minimum of

    min_{y ∈ R^{m_f+N m_g}} ‖y‖₀ + (1/(εN)) ∑_{i=1}^N ‖A_f y_f + A_g y_g^i − h_i‖₂²,

where h_i = f + g_i + n_i. Then

    ‖A_f y_f − f‖₂ ≤ C√ε,  ‖A_g y_g^i − g_i‖₂ ≤ C√ε,  i = 1, ..., N,

for some C > 0 depending only on A_g.

Quantitative photoacoustic tomography, Γ = 1.
Photoacoustic tomography gives us the image H(x) = Γ(x)µ(x)u(x) = µ(x)u(x); we need to find µ. In the general case Γ ≢ 1, apply the same approach to find Γµ and u; then, by using the PDE approach, all the unknowns can be reconstructed.

Quantitative photoacoustic tomography, Γ = 1.
Multi-measurement data: H_i(x) = µ(x) u_i(x), where

    −Δu_i + µu_i = 0 in Ω,  u_i = ϕ_i on ∂Ω.

Taking a logarithm: h_i = log H_i = log µ + log u_i. The above method may be used if log µ and the log u_i can be sparsely represented w.r.t. two different dictionaries. Following [Rosenthal, Razansky, Ntziachristos, 2009], observe that:
- the light absorption µ is a constitutive parameter of the tissue, and as such is discontinuous; its discontinuities are typically the inclusions we are looking for;
- the light intensity u_i is a solution of a PDE, and as such enjoys higher regularity properties.
This motivates the following choices for the dictionaries:
- A_f: wavelets, curvelets, ridgelets, shearlets, ... (for simplicity, we shall use Haar wavelets);
- A_g: low-frequency trigonometric polynomials.
The disjoint sparsity signal separation method may then be used to recover µ; an end-to-end sketch follows.
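An end-to-end 1D sketch of this separation step (assuming the omp and separate_multi routines from the sketches above; the Haar construction, grid, and the synthetic smooth fields u_i are illustrative assumptions; in particular, the u_i below are plausible smooth stand-ins, not PDE solutions):

```python
# QPAT separation with Gamma = 1: h_i = log mu + log u_i, split with
# A_f = Haar wavelets and A_g = low-frequency cosines, then mu = exp(f).
import numpy as np

def haar_dictionary(n):
    """Orthonormal Haar synthesis matrix (columns = atoms), n a power of 2."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H.T

n, N, K = 64, 3, 8
x = np.linspace(0.0, 1.0, n)
A_f = haar_dictionary(n)
A_g = np.cos(np.pi * np.outer(x, np.arange(K)))      # low-frequency cosines
A_g /= np.linalg.norm(A_g, axis=0)

mu = np.full(n, 0.2)
mu[20:35] = 1.0                                      # piecewise-constant absorption
U = np.exp(np.stack([0.3 * np.cos(np.pi * (i + 1) * x) for i in range(N)]))
h = np.log(mu * U)                                   # h_i = log mu + log u_i

f_rec, G_rec = separate_multi(A_f, A_g, h, n_atoms=20)
mu_rec = np.exp(f_rec)                               # recovered absorption
print(np.linalg.norm(mu - mu_rec) / np.linalg.norm(mu))  # small if support found
```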

Example 1: noise-free case, N = 1.
(Figures: (a) the true absorption µ; (b) the true intensity ũ_1; (c) the datum H_1; (d) the reconstructed µ. Videos: N = 1 and N = 2.)

Example 1: noisy case, N = 1, 3, 5.
(Figures: (a)-(c) the data H_1, H_3, H_5; (d)-(f) the reconstructed µ for N = 1, 3, 5.)

Example 2: the Shepp-Logan phantom.
(Figures: (a) µ; (b) H_2; (c) H_4; (d)-(f) the reconstructed µ for N = 1, 3, 5.)

Example 2: the Shepp-Logan phantom with noise.
(Figures: (a) H_3; (b) H_5; (c) the reconstructed µ for N = 5.)

Example 3: piecewise smooth absorption.
(Figures: (a) µ; (b)-(c) the reconstructed µ for N = 1, 2.)

Conclusions.
Past: the reconstruction in many hybrid imaging inverse problems requires the separation of several signals h_i = f + g_i; PDE techniques are often powerful, but sometimes they are not applicable.
Present: multiple measurements and disjoint sparsity can be used to find f and the g_i; uniqueness and stability have been proven; orthogonal matching pursuit performs well in many numerical simulations related to quantitative photoacoustic tomography.
Future: how can we ensure that the solutions u_i of the PDE yield the necessary incoherence, measured in terms of their disjoint sparsity? Random boundary conditions may help. Replace the ℓ⁰ norm with the ℓ¹ norm.


Bag of Pursuits and Neural Gas for Improved Sparse Coding

Bag of Pursuits and Neural Gas for Improved Sparse Coding Bag of Pursuits and Neural Gas for Improved Sparse Coding Kai Labusch, Erhardt Barth, and Thomas Martinetz University of Lübec Institute for Neuro- and Bioinformatics Ratzeburger Allee 6 23562 Lübec, Germany

More information

Enhancement of scanned documents in Besov spaces using wavelet domain representations

Enhancement of scanned documents in Besov spaces using wavelet domain representations Enhancement of scanned documents in Besov spaces using wavelet domain representations Kathrin Berkner 1 Ricoh Innovations, Inc., 2882 Sand Hill Road, Suite 115, Menlo Park, CA 94025 ABSTRACT After scanning,

More information

1 Review of Least Squares Solutions to Overdetermined Systems

1 Review of Least Squares Solutions to Overdetermined Systems cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

IEEE COMPUTING IN SCIENCE AND ENGINEERING, VOL. XX, NO. XX, XXX 2014 1

IEEE COMPUTING IN SCIENCE AND ENGINEERING, VOL. XX, NO. XX, XXX 2014 1 IEEE COMPUTING IN SCIENCE AND ENGINEERING, VOL. XX, NO. XX, XXX 2014 1 Towards IK-SVD: Dictionary Learning for Spatial Big Data via Incremental Atom Update Lizhe Wang 1, Ke Lu 2, Peng Liu 1, Rajiv Ranjan

More information

1 The Brownian bridge construction

1 The Brownian bridge construction The Brownian bridge construction The Brownian bridge construction is a way to build a Brownian motion path by successively adding finer scale detail. This construction leads to a relatively easy proof

More information

Computational Optical Imaging - Optique Numerique. -- Deconvolution --

Computational Optical Imaging - Optique Numerique. -- Deconvolution -- Computational Optical Imaging - Optique Numerique -- Deconvolution -- Winter 2014 Ivo Ihrke Deconvolution Ivo Ihrke Outline Deconvolution Theory example 1D deconvolution Fourier method Algebraic method

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

A VARIANT OF THE EMD METHOD FOR MULTI-SCALE DATA

A VARIANT OF THE EMD METHOD FOR MULTI-SCALE DATA Advances in Adaptive Data Analysis Vol. 1, No. 4 (2009) 483 516 c World Scientific Publishing Company A VARIANT OF THE EMD METHOD FOR MULTI-SCALE DATA THOMAS Y. HOU and MIKE P. YAN Applied and Computational

More information

1 Solving LPs: The Simplex Algorithm of George Dantzig

1 Solving LPs: The Simplex Algorithm of George Dantzig Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

More information

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

More information

Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

More information

POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS

POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS POLYNOMIAL HISTOPOLATION, SUPERCONVERGENT DEGREES OF FREEDOM, AND PSEUDOSPECTRAL DISCRETE HODGE OPERATORS N. ROBIDOUX Abstract. We show that, given a histogram with n bins possibly non-contiguous or consisting

More information

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix

October 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,

More information

EECS 556 Image Processing W 09. Interpolation. Interpolation techniques B splines

EECS 556 Image Processing W 09. Interpolation. Interpolation techniques B splines EECS 556 Image Processing W 09 Interpolation Interpolation techniques B splines What is image processing? Image processing is the application of 2D signal processing methods to images Image representation

More information

3. Regression & Exponential Smoothing

3. Regression & Exponential Smoothing 3. Regression & Exponential Smoothing 3.1 Forecasting a Single Time Series Two main approaches are traditionally used to model a single time series z 1, z 2,..., z n 1. Models the observation z t as a

More information

Cyber-Security Analysis of State Estimators in Power Systems

Cyber-Security Analysis of State Estimators in Power Systems Cyber-Security Analysis of State Estimators in Electric Power Systems André Teixeira 1, Saurabh Amin 2, Henrik Sandberg 1, Karl H. Johansson 1, and Shankar Sastry 2 ACCESS Linnaeus Centre, KTH-Royal Institute

More information

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function

17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function 17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):

More information

9 More on differentiation

9 More on differentiation Tel Aviv University, 2013 Measure and category 75 9 More on differentiation 9a Finite Taylor expansion............... 75 9b Continuous and nowhere differentiable..... 78 9c Differentiable and nowhere monotone......

More information

CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen

CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen CS 591.03 Introduction to Data Mining Instructor: Abdullah Mueen LECTURE 3: DATA TRANSFORMATION AND DIMENSIONALITY REDUCTION Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major

More information

Big Data Interpolation: An Effcient Sampling Alternative for Sensor Data Aggregation

Big Data Interpolation: An Effcient Sampling Alternative for Sensor Data Aggregation Big Data Interpolation: An Effcient Sampling Alternative for Sensor Data Aggregation Hadassa Daltrophe, Shlomi Dolev, Zvi Lotker Ben-Gurion University Outline Introduction Motivation Problem definition

More information

Introduction to Hypothesis Testing. Hypothesis Testing. Step 1: State the Hypotheses

Introduction to Hypothesis Testing. Hypothesis Testing. Step 1: State the Hypotheses Introduction to Hypothesis Testing 1 Hypothesis Testing A hypothesis test is a statistical procedure that uses sample data to evaluate a hypothesis about a population Hypothesis is stated in terms of the

More information

An Additive Neumann-Neumann Method for Mortar Finite Element for 4th Order Problems

An Additive Neumann-Neumann Method for Mortar Finite Element for 4th Order Problems An Additive eumann-eumann Method for Mortar Finite Element for 4th Order Problems Leszek Marcinkowski Department of Mathematics, University of Warsaw, Banacha 2, 02-097 Warszawa, Poland, Leszek.Marcinkowski@mimuw.edu.pl

More information

Image Super-Resolution via Sparse Representation

Image Super-Resolution via Sparse Representation 1 Image Super-Resolution via Sparse Representation Jianchao Yang, Student Member, IEEE, John Wright, Student Member, IEEE Thomas Huang, Life Fellow, IEEE and Yi Ma, Senior Member, IEEE Abstract This paper

More information

Topologically Massive Gravity with a Cosmological Constant

Topologically Massive Gravity with a Cosmological Constant Topologically Massive Gravity with a Cosmological Constant Derek K. Wise Joint work with S. Carlip, S. Deser, A. Waldron Details and references at arxiv:0803.3998 [hep-th] (or for the short story, 0807.0486,

More information

CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss

More information

Fitting Subject-specific Curves to Grouped Longitudinal Data

Fitting Subject-specific Curves to Grouped Longitudinal Data Fitting Subject-specific Curves to Grouped Longitudinal Data Djeundje, Viani Heriot-Watt University, Department of Actuarial Mathematics & Statistics Edinburgh, EH14 4AS, UK E-mail: vad5@hw.ac.uk Currie,

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

NMR Measurement of T1-T2 Spectra with Partial Measurements using Compressive Sensing

NMR Measurement of T1-T2 Spectra with Partial Measurements using Compressive Sensing NMR Measurement of T1-T2 Spectra with Partial Measurements using Compressive Sensing Alex Cloninger Norbert Wiener Center Department of Mathematics University of Maryland, College Park http://www.norbertwiener.umd.edu

More information

Non-parametric seismic data recovery with curvelet frames

Non-parametric seismic data recovery with curvelet frames Non-parametric seismic data recovery with curvelet frames Felix J. Herrmann and Gilles Hennenfent (January 2, 2007) ABSTRACT Seismic data recovery from data with missing traces on otherwise regular acquisition

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

minimal polyonomial Example

minimal polyonomial Example Minimal Polynomials Definition Let α be an element in GF(p e ). We call the monic polynomial of smallest degree which has coefficients in GF(p) and α as a root, the minimal polyonomial of α. Example: We

More information

An Introduction to Neural Networks

An Introduction to Neural Networks An Introduction to Vincent Cheung Kevin Cannons Signal & Data Compression Laboratory Electrical & Computer Engineering University of Manitoba Winnipeg, Manitoba, Canada Advisor: Dr. W. Kinsner May 27,

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

How To Teach A Dictionary To Learn For Big Data Via Atom Update

How To Teach A Dictionary To Learn For Big Data Via Atom Update IEEE COMPUTING IN SCIENCE AND ENGINEERING, VOL. XX, NO. XX, XXX 2014 1 IK-SVD: Dictionary Learning for Spatial Big Data via Incremental Atom Update Lizhe Wang 1,KeLu 2, Peng Liu 1, Rajiv Ranjan 3, Lajiao

More information

Social Media Aided Stock Market Predictions by Sparsity Induced Regression

Social Media Aided Stock Market Predictions by Sparsity Induced Regression Social Media Aided Stock Market Predictions by Sparsity Induced Regression Delft Center for Systems and Control Social Media Aided Stock Market Predictions by Sparsity Induced Regression For the degree

More information

How To Understand The Problem Of Decoding By Linear Programming

How To Understand The Problem Of Decoding By Linear Programming Decoding by Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles, CA 90095

More information

0.1 Phase Estimation Technique

0.1 Phase Estimation Technique Phase Estimation In this lecture we will describe Kitaev s phase estimation algorithm, and use it to obtain an alternate derivation of a quantum factoring algorithm We will also use this technique to design

More information

The Heat Equation. Lectures INF2320 p. 1/88

The Heat Equation. Lectures INF2320 p. 1/88 The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)

More information

Understanding the Impact of Weights Constraints in Portfolio Theory

Understanding the Impact of Weights Constraints in Portfolio Theory Understanding the Impact of Weights Constraints in Portfolio Theory Thierry Roncalli Research & Development Lyxor Asset Management, Paris thierry.roncalli@lyxor.com January 2010 Abstract In this article,

More information

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015

ECON 459 Game Theory. Lecture Notes Auctions. Luca Anderlini Spring 2015 ECON 459 Game Theory Lecture Notes Auctions Luca Anderlini Spring 2015 These notes have been used before. If you can still spot any errors or have any suggestions for improvement, please let me know. 1

More information

Math 333 - Practice Exam 2 with Some Solutions

Math 333 - Practice Exam 2 with Some Solutions Math 333 - Practice Exam 2 with Some Solutions (Note that the exam will NOT be this long) Definitions (0 points) Let T : V W be a transformation Let A be a square matrix (a) Define T is linear (b) Define

More information

IN THIS paper, we address the classic image denoising

IN THIS paper, we address the classic image denoising 3736 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 12, DECEMBER 2006 Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries Michael Elad and Michal Aharon Abstract We

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

Reflection & Transmission of EM Waves

Reflection & Transmission of EM Waves Reflection & Transmission of EM Waves Reading Shen and Kong Ch. 4 Outline Everyday Reflection Reflection & Transmission (Normal Incidence) Reflected & Transmitted Power Optical Materials, Perfect Conductors,

More information

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d.

These axioms must hold for all vectors ū, v, and w in V and all scalars c and d. DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms

More information

Inner products on R n, and more

Inner products on R n, and more Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

Week 1: Introduction to Online Learning

Week 1: Introduction to Online Learning Week 1: Introduction to Online Learning 1 Introduction This is written based on Prediction, Learning, and Games (ISBN: 2184189 / -21-8418-9 Cesa-Bianchi, Nicolo; Lugosi, Gabor 1.1 A Gentle Start Consider

More information

A Negative Result Concerning Explicit Matrices With The Restricted Isometry Property

A Negative Result Concerning Explicit Matrices With The Restricted Isometry Property A Negative Result Concerning Explicit Matrices With The Restricted Isometry Property Venkat Chandar March 1, 2008 Abstract In this note, we prove that matrices whose entries are all 0 or 1 cannot achieve

More information

Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with Balanced Constraints Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

More information

Combining an Alternating Sequential Filter (ASF) and Curvelet for Denoising Coronal MRI Images

Combining an Alternating Sequential Filter (ASF) and Curvelet for Denoising Coronal MRI Images Contemporary Engineering Sciences, Vol. 5, 2012, no. 2, 85-90 Combining an Alternating Sequential Filter (ASF) and Curvelet for Denoising Coronal MRI Images Mohamed Ali HAMDI Ecole Nationale d Ingénieur

More information

Inner product. Definition of inner product

Inner product. Definition of inner product Math 20F Linear Algebra Lecture 25 1 Inner product Review: Definition of inner product. Slide 1 Norm and distance. Orthogonal vectors. Orthogonal complement. Orthogonal basis. Definition of inner product

More information

Subspace Analysis and Optimization for AAM Based Face Alignment

Subspace Analysis and Optimization for AAM Based Face Alignment Subspace Analysis and Optimization for AAM Based Face Alignment Ming Zhao Chun Chen College of Computer Science Zhejiang University Hangzhou, 310027, P.R.China zhaoming1999@zju.edu.cn Stan Z. Li Microsoft

More information

MATH 4552 Cubic equations and Cardano s formulae

MATH 4552 Cubic equations and Cardano s formulae MATH 455 Cubic equations and Cardano s formulae Consider a cubic equation with the unknown z and fixed complex coefficients a, b, c, d (where a 0): (1) az 3 + bz + cz + d = 0. To solve (1), it is convenient

More information

Name: Section Registered In:

Name: Section Registered In: Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are

More information

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #18: Dimensionality Reduc7on

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #18: Dimensionality Reduc7on CS 5614: (Big) Data Management Systems B. Aditya Prakash Lecture #18: Dimensionality Reduc7on Dimensionality Reduc=on Assump=on: Data lies on or near a low d- dimensional subspace Axes of this subspace

More information