Variational approach to restore point-like and curve-like singularities in imaging

Daniele Graziani
joint work with Gilles Aubert and Laure Blanc-Féraud

Roma, 12/06/2012
Outline

1 Classical problem: edge restoration/detection
2 New problem: thin structures restoration/detection
3 Building up the energy
4 Variational analysis of the energy
5 Discrete setting and discrete energies
6 Nesterov's algorithm
7 Application to our minimization problem
8 Results on synthetic and real images
9 Nothing else but points
Classical problem

$$I_0 = I + \underbrace{b}_{\text{Gaussian noise}}$$

$$I_0(x, y) = \begin{cases} 1 & \text{inside the edge} \\ 0 & \text{outside the edge} \end{cases} \; + \; \text{noise}$$
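As a small illustration, here is this degradation model in code; the image size, the square geometry, and the noise level sigma are illustrative assumptions, not values from the talk.

```python
# Degradation model I_0 = I + b: a binary "edge" image (1 inside,
# 0 outside) plus Gaussian noise b. All numbers here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d1 = d2 = 256
I = np.zeros((d1, d2))
I[64:192, 64:192] = 1.0                 # 1 inside the edge, 0 outside
sigma = 0.1                             # assumed noise level
I0 = I + sigma * rng.standard_normal((d1, d2))    # observed image
```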
ROF's functional

Jump across the edge: the distributional gradient $DI$ is a Radon measure concentrated on the edge. For $I \in BV(\Omega)$,

$$|DI|(\Omega) = \sup_{\substack{\Phi \in C_0^1(\Omega;\mathbb{R}^2) \\ \|\Phi\|_\infty \le 1}} \int_\Omega I \,\mathrm{div}\,\Phi \, dx = \mathcal{H}^1(S_I)$$

$$R(I) = \underbrace{|DI|(\Omega)}_{\text{prior term}} + \underbrace{\frac{\lambda}{2}\|I - I_0\|_2^2}_{\text{fidelity term}}$$

$$R_{\mathrm{discrete}}(I) = \|\nabla I\|_1 + \frac{\lambda}{2}\|I - I_0\|_2^2$$
Numerical minimization

Main difficulty: the $\ell^1$ term is not differentiable.

$$\min_{I \in X} \|\nabla I\|_1 + \frac{\lambda}{2}\|I - I_0\|_2^2, \qquad X \text{ convex set}$$

Dual approach (Chambolle's algorithm):
$$\min_{w \in K} \int_\Omega |w - I_0|^2, \qquad K := \{\, w = \mathrm{div}\,\varphi : \|\varphi\|_\infty \le 1 \,\}.$$

Primal approach (Nesterov's algorithm): smoothing of the $\ell^1$-norm and a fast gradient descent scheme.
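To make the dual approach concrete, here is a minimal sketch of Chambolle's projection algorithm (2004) for the discrete ROF problem, with the standard forward-difference operators; the step size tau <= 1/8 is the one from Chambolle's convergence proof, and lambda and the iteration count are illustrative.

```python
# Chambolle's fixed-point projection for
#   min_I ||grad I||_1 + (lam/2) ||I - I0||_2^2.
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # forward differences, zero on the last row
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # zero on the last column
    return gx, gy

def div(px, py):
    # div = -grad^T: <-div p, u> = <p, grad u>
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_rof(I0, lam, n_iter=200, tau=0.125):
    theta = 1.0 / lam                   # rescaling to Chambolle's formulation
    px = np.zeros_like(I0); py = np.zeros_like(I0)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - I0 / theta)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / denom    # the update keeps |p| <= 1
        py = (py + tau * gy) / denom
    return I0 - theta * div(px, py)     # denoised image
```

Applied to the noisy square above, e.g. `chambolle_rof(I0, lam=8.0)`, the noise is removed while the jump across the edge stays sharp.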
Classical result

Figure: classical denoising with total variation. Edge preservation.
Thin structures

Motivation: biomedical images (viral particles, filaments).

What are thin structures? A thin structure $S$ is the support of a Radon measure concentrated on open curves and/or isolated points:
- charge $\delta_S$
- potential $I$
- $I_0$ noisy

The gradient is not relevant in this case.
Thin structures

Suppose $S = \Gamma$, a line inside $\Omega$, and consider
$$\begin{cases} -\Delta I = \delta_\Gamma & x \in \Gamma_\delta \\ I = 0 & x \in \partial\Gamma_\delta, \end{cases}$$
where $\Gamma_\delta$ is a small tubular neighborhood of $\Gamma$ ($\delta$ small).

Then $I \in W^{1,p}_0(\Gamma_\delta)$ with $\int_{\Gamma_\delta} |\nabla I|^p\,dx \to 0$ whenever $\delta \to 0$, while looking at the total variation of the distributional Laplacian:
$$|\Delta I|(\Gamma_\delta) = \sup_{\substack{\varphi \in C_0^\infty(\Gamma_\delta) \\ \|\varphi\|_\infty \le 1}} \int_{\Gamma_\delta} I\, \Delta\varphi\, dx = \mathcal{H}^1(\Gamma).$$
The energy we consider

$$I_0 = I + \underbrace{b}_{\text{Gaussian noise}}$$

We wish to solve
$$\min_{I \in M_p(\Omega)} F(I), \qquad F(I) = |\Delta I|(\Omega) + \frac{\lambda}{2}\|I - I_0\|_2^2,$$
where
$$M_p(\Omega) := \{\, I \in W^{1,p}_0(\Omega) : |\Delta I|(\Omega) < +\infty \,\},$$
with $p < 2$ to allow concentration on isolated points.
Variational properties

Compactness: Stampacchia's a priori estimates for weak solutions of the Dirichlet problem with measure data.
Lower semicontinuity: weak $W^{1,p}_0$-lower semicontinuity of $|\Delta I|(\Omega)$ + strong $L^2$-continuity of the $L^2$-norm.
Uniqueness: strict convexity of $F$.
Relaxation: $F(I) = SC^-(\tilde F)(I)$ (with respect to the strong $W^{1,1}$ convergence), where
$$\tilde F(I) := \begin{cases} \int_\Omega |\Delta I|\,dx & \text{on } C^2_0(\Omega), \\ +\infty & \text{on } M_p(\Omega) \setminus C^2_0(\Omega). \end{cases}$$
Discrete model

We define the discrete rectangular domain $\Omega$ of step size $\delta x = 1$:
$$\Omega = \{1,\dots,d_1\} \times \{1,\dots,d_2\} \subset \mathbb{Z}^2.$$
If $I \in \mathbb{R}^{d_1 d_2}$ is the matrix image, the gradient $\nabla I \in \mathbb{R}^{d_1 d_2} \times \mathbb{R}^{d_1 d_2}$ is
$$(\nabla I)_{i,j} = \big((\nabla I)^1_{i,j}, (\nabla I)^2_{i,j}\big),$$
where
$$(\nabla I)^1_{i,j} = \begin{cases} I_{i+1,j} - I_{i,j} & \text{if } i < d_1 \\ 0 & \text{if } i = d_1, \end{cases} \qquad (\nabla I)^2_{i,j} = \begin{cases} I_{i,j+1} - I_{i,j} & \text{if } j < d_2 \\ 0 & \text{if } j = d_2. \end{cases}$$
The divergence operator is minus the adjoint of the gradient, $\mathrm{div} = -\nabla^*$, and we can define the Laplacian as $\Delta I = \mathrm{div}(\nabla I)$.
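A sketch of these operators in code, reusing `grad` and `div` from the Chambolle sketch above; the assertion checks the adjoint relation numerically.

```python
# Discrete Laplacian Delta I = div(grad I); since div = -grad^T,
# Delta^T = grad^T div^T = div grad = Delta (self-adjoint).
import numpy as np

def laplacian(u):
    gx, gy = grad(u)
    return div(gx, gy)

rng = np.random.default_rng(1)
u = rng.standard_normal((8, 8))
px = rng.standard_normal((8, 8)); py = rng.standard_normal((8, 8))
gx, gy = grad(u)
assert np.isclose(-(div(px, py) * u).sum(), (px * gx + py * gy).sum())
```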
Discrete functional

We wish to solve
$$\min_{I \in X_0} F(I), \qquad F(I) = \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |(\Delta I)_{i,j}| + \frac{\lambda}{2} \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |I_{i,j} - (I_0)_{i,j}|^2,$$
where $X_0 := \{\, I \in \mathbb{R}^{d_1 d_2} : I_{1,j} = I_{d_1,j} = I_{i,1} = I_{i,d_2} = 0 \,\}$.
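The discrete functional is then a direct transcription of the sums above (a sketch reusing `laplacian` from the previous slide):

```python
# F(I) = sum |(Delta I)_{i,j}| + (lam/2) sum |I_{i,j} - (I0)_{i,j}|^2
def F_discrete(I, I0, lam):
    return np.abs(laplacian(I)).sum() + 0.5 * lam * ((I - I0) ** 2).sum()
```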
Classical Nesterov's minimization algorithm

General structure:
$$\min_{x \in X} F(x) = \min_{x \in X} \{ f(x) + \Psi(x) \}$$
- $f$ convex with $L$-Lipschitz differential;
- $\Psi$ a simple function: one can compute exactly
$$\mathrm{prox}_\Psi(x) = \arg\min_y \Psi(y) + \tfrac{1}{2}\|y - x\|_X^2.$$
Then
$$0 \le F(x_k) - F(x^*) \le \frac{L\|x^* - x_0\|_X^2}{k^2},$$
where $x^*$ is a minimum point of $F$ and $x_0$ is an initial datum.
Mechanics of the algorithm

1 a minimizing sequence $\{x_k\}$;
2 an increasing sequence of coefficients $\{A_k\}$ such that $A_0 = 0$, $A_{k+1} = A_k + a_{k+1}$, with $a_{k+1} > 0$;
3 an approximating sequence $\{\psi_k\}$ of $A_k F$:
$$\psi_k(x) = \sum_{i=1}^k a_i\big( f(x_i) + \langle \nabla f(x_i), x - x_i \rangle_X \big) + A_k \Psi(x) + \tfrac{1}{2}\|x - x_0\|_X^2.$$

Trick:
$$A_k F(x_k) \le \min_{x \in X} \psi_k(x) \le A_k F(x) + \tfrac{1}{2}\|x - x_0\|_X^2, \qquad A_k \ge \frac{k^2}{2L}$$
$$\Longrightarrow \qquad 0 \le F(x_k) - F(x^*) \le \frac{L\|x^* - x_0\|_X^2}{k^2}.$$
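For illustration, here is a runnable sketch of an accelerated scheme of this type; it is the FISTA variant of Beck and Teboulle rather than the exact estimating-sequence construction above, but it attains the same $O(L/k^2)$ rate.

```python
# Accelerated proximal gradient for F = f + Psi, with prox_psi(x, step).
import numpy as np

def fista(x0, grad_f, prox_psi, L, n_iter):
    x = y = x0
    t = 1.0
    for _ in range(n_iter):
        x_new = prox_psi(y - grad_f(y) / L, 1.0 / L)    # forward-backward step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # extrapolation
        x, t = x_new, t_new
    return x
```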
Application to our minimization problem

Smoothed version of $F$:
$$F_\epsilon(I) = \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} w_\epsilon\big(|(\Delta I)_{i,j}|\big) + \frac{\lambda}{2}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |I_{i,j} - (I_0)_{i,j}|^2,$$
where
$$w_\epsilon(x) = \begin{cases} |x| & \text{if } |x| \ge \epsilon \\ \dfrac{x^2}{2\epsilon} + \dfrac{\epsilon}{2} & \text{otherwise.} \end{cases}$$
Here $f = F_\epsilon$ and $\Psi \equiv 0$.
Lipschitz constant of $dF_\epsilon$

By direct computation, $dF_\epsilon(I) = \Delta\psi + \lambda(I - I_0)$, where
$$\psi_{i,j} = \begin{cases} \dfrac{(\Delta I)_{i,j}}{|(\Delta I)_{i,j}|} & \text{if } |(\Delta I)_{i,j}| \ge \epsilon \\ \dfrac{(\Delta I)_{i,j}}{\epsilon} & \text{otherwise.} \end{cases}$$
Then we have
$$\|dF_\epsilon(I_1) - dF_\epsilon(I_2)\|_2 \le \Big( \frac{\|\Delta\|_2^2}{\epsilon} + \lambda \Big) \|I_1 - I_2\|_2 \le \Big( \frac{64}{\epsilon} + \lambda \Big) \|I_1 - I_2\|_2.$$
So the final estimate is
$$0 \le F_\epsilon(I_k) - F_\epsilon(I^*_\epsilon) \le \frac{(64/\epsilon + \lambda)\, \|I^*_\epsilon - I_0\|_2^2}{k^2},$$
where $I^*_\epsilon$ is a minimum of $F_\epsilon$.
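Putting the pieces together: a sketch of $dF_\epsilon$ as computed above, run through the accelerated scheme (here $\Psi = 0$, so the prox step reduces to enforcing the $X_0$ boundary condition). `laplacian`, `fista`, and `I0` are from the sketches on earlier slides; the parameter values are illustrative assumptions.

```python
# Gradient of the smoothed functional: dF_eps(I) = Delta psi + lam (I - I0).
import numpy as np

def dF_eps(I, I0, lam, eps):
    LI = laplacian(I)
    psi = np.where(np.abs(LI) >= eps, np.sign(LI), LI / eps)   # w_eps'(Delta I)
    return laplacian(psi) + lam * (I - I0)

def project_X0(I, _step):
    I = I.copy()
    I[0, :] = I[-1, :] = I[:, 0] = I[:, -1] = 0.0   # zero boundary values
    return I

eps, lam = 1e-5, 1.0                                # assumed parameters
L = 64.0 / eps + lam                                # Lipschitz estimate from this slide
I_rest = fista(I0, lambda I: dF_eps(I, I0, lam, eps), project_X0, L, n_iter=3000)
```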
How to choose $\epsilon$ and the number of iterations

$\delta$-solution of $F$: $F(I_\delta) - F(I^*) \le \delta$, where $I^*$ is a minimum of $F$.

Problem: for a given $\delta$, choose $\epsilon$ and the number of iterations $K$ to obtain a $\delta$-solution.

By direct estimation,
$$F(I_k) - F(I^*) \le \frac{(64/\epsilon + \lambda)\,\|I^*_\epsilon - I_0\|_2^2}{k^2} + d_1 d_2\, \epsilon,$$
so the worst-case precision is
$$F(I_K) - F(I^*) = \frac{(64/\epsilon + \lambda)\,\|I^*_\epsilon - I_0\|_2^2}{K^2} + d_1 d_2\, \epsilon.$$
Then the optimal choices are
$$\epsilon = \frac{\delta}{d_1 d_2}, \qquad K = \Big\lceil \sqrt{\frac{\big( 64\, d_1 d_2/\delta + \lambda \big)\, C}{\delta}} \Big\rceil + 1,$$
where $C := \max_I \|I - I_0\|_2^2$.

Practically: $\delta = 1$, $d_1 = d_2 = 256$, $\epsilon \approx 10^{-5}$, $K = 3000$.

Figure: convergence
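In code, the parameter choice reads as follows (a sketch; the constant C is problem-dependent, and the value below is an assumption chosen only to land in the range reported on this slide):

```python
# Choose eps and K for a delta-solution from the worst-case bound.
import numpy as np

def choose_parameters(delta, d1, d2, lam, C):
    eps = delta / (d1 * d2)                                # controls the smoothing error
    K = int(np.sqrt((64.0 / eps + lam) * C / delta)) + 1   # controls the optimization error
    return eps, K

eps, K = choose_parameters(delta=1.0, d1=256, d2=256, lam=1.0, C=2.0)
# With these (assumed) values, eps ~ 1.5e-5 and K ~ 2900, in the range
# of the figures reported on this slide.
```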
Numerical tests

Figure: noisy image

Numerical tests

Figure: restored image

Numerical tests

Figure: $\Delta I$

Numerical tests

Figure: noisy image

Numerical tests

Figure: restored image

Numerical tests

Figure: noisy image

Numerical tests

Figure: restored image
Link between discrete and continuous setting?

- mesh of size $\delta x = h \to 0$
- we should define the sequence $F_{\epsilon(h)}$ in the continuous setting via interpolation (?)
- $F_{\epsilon(h)} \to SC^-\tilde F = F$ (in the sense of $\Gamma$-convergence) (?)
- $\min F_{\epsilon(h)} \to \min F$
Nothing else but points

In this framework the energy to minimize is given by
$$F(I, P) = \int_{\Omega \setminus P} |\nabla I|^2\, dx + \int_\Omega |I - I_0|^2\, dx + \mathcal{H}^0(P),$$
where $P$ is the singular set of the initial datum $I_0$.

Penalization: the term $\mathcal{H}^0$ forces $I$ to have singularities given by a finite set of points $P \subset \Omega$.
The variational approximation

Let $\{F_\epsilon\}$ be a family of functionals defined on a metric space $X$. $\{F_\epsilon\}$ $\Gamma$-converges to $F$ as $\epsilon \to 0$ if, for every $x \in X$:
1 for every $x_\epsilon \to x$: $\liminf_{\epsilon \to 0} F_\epsilon(x_\epsilon) \ge F(x)$;
2 there exists $x_\epsilon \to x$ such that $\limsup_{\epsilon \to 0} F_\epsilon(x_\epsilon) \le F(x)$.
The functional $F$ is the $\Gamma$-limit of $\{F_\epsilon\}$.

Variational property: if $\{x_\epsilon\}$ is a sequence of minimizers of $F_\epsilon$ and $x_\epsilon \to x$, then $x$ minimizes $F$.
$P$ an atomic set of $N$ points, i.e. $P = \{x_1, \dots, x_N\}$.

Main difficulty: the presence of objects of dimension 0.

Braides-March's approach: replace $\mathcal{H}^0(P)$ by
$$G_\epsilon(D) = \frac{1}{4\pi} \int_{\partial D} \Big( \frac{1}{\epsilon} + \epsilon\, k^2(x) \Big)\, d\mathcal{H}^1,$$
where $D$ is a proper regular set and $k$ denotes the curvature of its boundary.

The minimum of $G_\epsilon$ is attained by $\bigcup_i B_\epsilon(x_i)$: each ball contributes $\frac{1}{4\pi}\cdot 2\pi\epsilon \cdot \big(\frac{1}{\epsilon} + \frac{\epsilon}{\epsilon^2}\big) = 1 = \mathcal{H}^0(\{x_i\})$. We recover the set $P$ when $\epsilon \to 0$.
The intermediate approximation

$$F_\epsilon(I, D) = \begin{cases} \displaystyle \int_{\Omega \setminus D} |\nabla I|^2\, dx + \int_\Omega |I - I_0|^2\, dx + \frac{1}{4\pi} \int_{\partial D} \Big( \frac{1}{\epsilon} + \epsilon\,\kappa^2 \Big)\, d\mathcal{H}^1 & \text{if } \mathcal{L}^2(D) \le a(\epsilon) \to 0 \\ +\infty & \text{otherwise.} \end{cases}$$

Approximation for the measure $d\mathcal{H}^1 \llcorner \partial D$:
$$d\mathcal{H}^1 \llcorner \partial D \approx \mu_\epsilon(w_\epsilon, \nabla w_\epsilon)\, dx = \Big( \epsilon\, |\nabla w_\epsilon|^2 + \frac{W(w_\epsilon)}{\epsilon} \Big) dx,$$
where $W(t) = t^2(1-t)^2$ and $w_\epsilon \in C^\infty(\Omega; [0,1])$.
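As code, the double-well potential and the Modica-Mortola density are (a sketch; `w` is any smooth phase-field array and eps is illustrative):

```python
# Modica-Mortola integrand approximating dH^1 on the transition set of w.
import numpy as np

def W(t):
    return t**2 * (1 - t)**2                  # double-well potential

def mu_eps(w, eps):
    gx, gy = np.gradient(w)
    return eps * (gx**2 + gy**2) + W(w) / eps
```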
The De Giorgi conjecture

Variation of $\mathcal{H}^1$ gives a curvature term $k$; variation of the gradient energy gives $2\epsilon\Delta w - \frac{W'(w)}{\epsilon}$. Then

$$\Phi_\epsilon(I, w) := \int_\Omega w^2 |\nabla I|^2\, dx + \int_\Omega |I - I_0|^2\, dx + \frac{1}{8\pi b_0}\, \frac{1}{\beta_\epsilon} \int_\Omega \Big( 2\epsilon\Delta w - \frac{W'(w)}{\epsilon} \Big)^2 dx + \frac{1}{8\pi b_0}\, \frac{1}{\beta_\epsilon} \int_\Omega \mu_\epsilon(w, \nabla w)\, dx + \frac{1}{\gamma_\epsilon} \int_\Omega (1 - w)^2\, dx,$$

with
$$\lim_{\epsilon \to 0^+} \frac{\beta_\epsilon}{\gamma_\epsilon} = 0, \qquad \lim_{\epsilon \to 0^+} \frac{\epsilon\, |\log(\epsilon)|}{\beta_\epsilon} = 0.$$
Numerical minimization

Gradient flow
$$(I_t, w_t) = -\nabla \Phi_\epsilon(I, w) + \text{boundary conditions},$$
with small time step $\tau = \frac{1}{L}$, plus simulated annealing.

We obtain a function $w$ whose zeros are given by the set $P$.
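A formal sketch of one step of this flow, under loudly stated assumptions: the variations of $\Phi_\epsilon$ are taken formally, boundaries are handled by `np.gradient` and replicated borders, the simulated-annealing stage is omitted, and all parameter values are placeholders, so this only illustrates the structure of the descent, not the authors' exact scheme.

```python
# One explicit step of gradient descent on Phi_eps in (I, w).
import numpy as np

def W(t):   return t**2 * (1 - t)**2              # double well
def dW(t):  return 2 * t * (1 - t) * (1 - 2 * t)  # W'
def ddW(t): return 12 * t**2 - 12 * t + 2         # W''

def lap(u):
    p = np.pad(u, 1, mode='edge')                 # replicated borders (a choice)
    return p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4 * u

def flow_step(I, w, I0, eps, b0, beta, gamma, tau):
    gIx, gIy = np.gradient(I)
    # variation in I: -2 div(w^2 grad I) + 2 (I - I0)
    dI = (-2 * (np.gradient(w**2 * gIx, axis=0) + np.gradient(w**2 * gIy, axis=1))
          + 2 * (I - I0))
    # variation in w: coupling + De Giorgi term + Modica-Mortola + penalty
    v = 2 * eps * lap(w) - dW(w) / eps
    dw = (2 * w * (gIx**2 + gIy**2)
          + (2 * (2 * eps * lap(v) - ddW(w) * v / eps) - v) / (8 * np.pi * b0 * beta)
          - 2 * (1 - w) / gamma)
    return I - tau * dI, np.clip(w - tau * dw, 0.0, 1.0)
```

Iterating `flow_step` from $w \equiv 1$, the zeros of the resulting $w$ mark the detected points, as on the next slides.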
Computer examples

Figure: biological image

Computer examples

Figure: level set $\{w = 0\}$ = {singular points}

Computer examples

Figure: superposition with the original image
References

- G. Aubert, L. Blanc-Féraud, D. Graziani, Analysis of a variational model to restore point-like and curve-like singularities in imaging, in revision for Applied Mathematics and Optimization.
- D. Graziani, L. Blanc-Féraud, G. Aubert, A formal Γ-convergence approach for the detection of points in 2-D images, SIAM Journal on Imaging Sciences, Vol. 3, No. 3 (2010), 578-594.
- G. Aubert, D. Graziani, Variational approximation for detecting point-like target problems in 2-D images, ESAIM: Control, Optimisation and Calculus of Variations, Vol. 17, No. 4 (2011), 909-930.