Variational approach to restore point-like and curve-like singularities in imaging

Daniele Graziani, joint work with Gilles Aubert and Laure Blanc-Féraud
Roma, 12/06/2012

Outline

1. Classical problem: edge restoration/detection
2. New problem: thin structures restoration/detection
3. Building up the energy
4. Variational analysis of the energy
5. Discrete setting and discrete energies
6. Nesterov's algorithm
7. Application to our minimization problem
8. Results on synthetic and real images
9. Nothing else but points

Classical problem

The observed image is $I_0 = I + b$, where $b$ is Gaussian noise:
$$I_0(x, y) = \begin{cases} 1 & \text{inside the edge} \\ 0 & \text{outside the edge} \end{cases} \; + \; \text{noise}$$

ROF's functional

Jump across the edge: the distributional gradient $DI$ is a Radon measure concentrated on the edge. For $I \in BV(\Omega)$,
$$|DI|(\Omega) = \sup_{\substack{\Phi \in C^1_0(\Omega;\mathbb{R}^2) \\ \|\Phi\|_\infty \le 1}} \int_\Omega I \,\mathrm{div}\,\Phi \, dx = \mathcal{H}^1(S_I).$$
The ROF functional is
$$R(I) = \underbrace{|DI|(\Omega)}_{\text{prior term}} + \underbrace{\frac{\lambda}{2}\|I - I_0\|_2^2}_{\text{fidelity term}},$$
with discrete counterpart
$$R_{\mathrm{discrete}}(I) = \|\nabla I\|_1 + \frac{\lambda}{2}\|I - I_0\|_2^2.$$

Numerical minimization

$$\min_{I \in X} \|\nabla I\|_1 + \frac{\lambda}{2}\|I - I_0\|_2^2, \qquad X \text{ convex set.}$$

Main difficulty: the $\ell^1$-norm is not differentiable. Two remedies:
- dual approach (Chambolle's algorithm): solve $\min_{w \in K} \int_\Omega |w - I_0|^2$ over $K := \{w = \mathrm{div}\,\varphi : \|\varphi\|_\infty \le 1\}$;
- primal approach (Nesterov's algorithm): smooth the $\ell^1$-norm and apply a fast gradient descent scheme.
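
For concreteness, a minimal NumPy sketch of the dual approach: Chambolle's fixed-point projection iteration, written in the parameterization matching the functional above (function names and the test defaults are illustrative, not from the talk; the step bound $\tau \le 1/8$ is Chambolle's):

```python
import numpy as np

def grad(I):
    # forward differences, zero on the last row/column
    g1 = np.zeros_like(I); g2 = np.zeros_like(I)
    g1[:-1, :] = I[1:, :] - I[:-1, :]
    g2[:, :-1] = I[:, 1:] - I[:, :-1]
    return g1, g2

def div(p1, p2):
    # negative adjoint of grad
    d1 = np.zeros_like(p1); d2 = np.zeros_like(p2)
    d1[0, :] = p1[0, :]; d1[1:-1, :] = p1[1:-1, :] - p1[:-2, :]; d1[-1, :] = -p1[-2, :]
    d2[:, 0] = p2[:, 0]; d2[:, 1:-1] = p2[:, 1:-1] - p2[:, :-2]; d2[:, -1] = -p2[:, -2]
    return d1 + d2

def chambolle_tv(I0, lam, n_iter=200, tau=0.125):
    """Solve min_I ||grad I||_1 + (lam/2)||I - I0||_2^2 via the dual variable phi,
    constrained to |phi| <= 1 pointwise; the denoised image is I0 - div(phi)/lam."""
    p1 = np.zeros_like(I0); p2 = np.zeros_like(I0)
    for _ in range(n_iter):
        g1, g2 = grad(div(p1, p2) - lam * I0)
        den = 1.0 + tau * np.sqrt(g1 ** 2 + g2 ** 2)
        p1 = (p1 + tau * g1) / den
        p2 = (p2 + tau * g2) / den
    return I0 - div(p1, p2) / lam
```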

Classical result

[Figure: classical denoising by total variation; edges are preserved.]

Thin structures

Motivation: biomedical images (viral particles, filaments). What is a thin structure? A thin structure $S$ is the support of a Radon measure concentrated on open curves and/or isolated points: the charge is $\delta_S$, $I$ is the associated potential, and $I_0$ is its noisy observation. The gradient is not a relevant descriptor in this case.

Thin structures

Suppose $S = \Gamma$, a line inside $\Omega$, and consider
$$\begin{cases} -\Delta I = \delta_\Gamma & x \in \Gamma_\delta \\ I = 0 & x \in \partial\Gamma_\delta \end{cases}$$
where $\Gamma_\delta$ is a small tubular neighborhood of $\Gamma$. There exists $I \in W^{1,p}_0(\Gamma_\delta)$ such that $\int_{\Gamma_\delta} |\nabla I|^p \, dx \to 0$ whenever $\delta \to 0$, while the total variation of the distributional Laplacian sees the structure:
$$|\Delta I|(\Gamma_\delta) = \sup_{\substack{\varphi \in C^\infty_0(\Gamma_\delta) \\ \|\varphi\|_\infty \le 1}} \int_{\Gamma_\delta} I \, \Delta\varphi \, dx = \mathcal{H}^1(\Gamma).$$

The energy we consider

Again $I_0 = I + b$, with $b$ Gaussian noise. We wish to solve
$$\min_{I \in \mathcal{M}^p(\Omega)} F(I), \qquad F(I) = |\Delta I|(\Omega) + \frac{\lambda}{2}\|I - I_0\|_2^2,$$
where
$$\mathcal{M}^p(\Omega) := \{u \in W^{1,p}_0(\Omega) : |\Delta u|(\Omega) < +\infty\},$$
with $p < 2$ to allow concentration on isolated points.

Variational properties

- Compactness: Stampacchia's a priori estimates for weak solutions of the Dirichlet problem with measure data.
- Lower semicontinuity: weak $W^{1,p}_0$-lower semicontinuity of $|\Delta I|(\Omega)$, plus strong $L^2$-continuity of the $L^2$-norm.
- Uniqueness: strict convexity of $F$.
- Relaxation: $F(I) = SC^-(\tilde F)(I)$ with respect to strong $W^{1,1}$-convergence, where
$$\tilde F(I) := \begin{cases} \int_\Omega |\Delta I| \, dx & \text{on } C^2_0(\Omega), \\ +\infty & \text{on } \mathcal{M}^p(\Omega) \setminus C^2_0(\Omega). \end{cases}$$

Discrete model

We define the discrete rectangular domain $\Omega$ of step size $\delta x = 1$, where $\Omega = \{1,\dots,d_1\} \times \{1,\dots,d_2\} \subset \mathbb{Z}^2$. If $I \in \mathbb{R}^{d_1 d_2}$ is the matrix image, the gradient $\nabla I \in \mathbb{R}^{d_1 d_2} \times \mathbb{R}^{d_1 d_2}$ is $(\nabla I)_{i,j} = ((\nabla I)^1_{i,j}, (\nabla I)^2_{i,j})$, where
$$(\nabla I)^1_{i,j} = \begin{cases} I_{i+1,j} - I_{i,j} & \text{if } i < d_1 \\ 0 & \text{if } i = d_1, \end{cases} \qquad (\nabla I)^2_{i,j} = \begin{cases} I_{i,j+1} - I_{i,j} & \text{if } j < d_2 \\ 0 & \text{if } j = d_2. \end{cases}$$
The divergence operator is defined as the adjoint of the gradient, $\mathrm{div} = -\nabla^*$, and we can define the Laplacian as $\Delta I = \mathrm{div}(\nabla I)$.
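
These definitions translate directly into code; a sketch of the three operators with a numerical check that div is indeed the negative adjoint of grad (variable names are mine):

```python
import numpy as np

def grad(I):
    """(grad I)^1_{i,j} = I_{i+1,j} - I_{i,j} (0 on the last row); (grad I)^2 likewise in j."""
    g1 = np.zeros_like(I); g2 = np.zeros_like(I)
    g1[:-1, :] = I[1:, :] - I[:-1, :]
    g2[:, :-1] = I[:, 1:] - I[:, :-1]
    return g1, g2

def div(p1, p2):
    """div = -grad^*, the discrete adjoint of the gradient above."""
    d1 = np.zeros_like(p1); d2 = np.zeros_like(p2)
    d1[0, :] = p1[0, :]; d1[1:-1, :] = p1[1:-1, :] - p1[:-2, :]; d1[-1, :] = -p1[-2, :]
    d2[:, 0] = p2[:, 0]; d2[:, 1:-1] = p2[:, 1:-1] - p2[:, :-2]; d2[:, -1] = -p2[:, -2]
    return d1 + d2

def laplacian(I):
    """Laplacian as the composition div(grad I)."""
    return div(*grad(I))

# adjointness check: <grad I, p> == <I, -div p> for random I, p
rng = np.random.default_rng(0)
I = rng.standard_normal((6, 7))
p1, p2 = rng.standard_normal((6, 7)), rng.standard_normal((6, 7))
g1, g2 = grad(I)
assert np.isclose((g1 * p1 + g2 * p2).sum(), -(I * div(p1, p2)).sum())
```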

Discrete functional

We consider
$$F(I) = \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |(\Delta I)_{i,j}| + \frac{\lambda}{2} \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |I_{i,j} - (I_0)_{i,j}|^2$$
and wish to solve $\min_{I \in X_0} F(I)$, where
$$X_0 := \{I \in \mathbb{R}^{d_1 d_2} : I_{1,j} = I_{d_1,j} = I_{i,1} = I_{i,d_2} = 0\}.$$
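
A direct transcription of the objective and of the boundary constraint defining $X_0$ (a sketch using the `laplacian` above; names are mine):

```python
import numpy as np

def F(I, I0, lam, laplacian):
    """Sum_{i,j} |(Lap I)_{i,j}| + (lam/2) Sum_{i,j} |I_{i,j} - (I0)_{i,j}|^2."""
    return np.abs(laplacian(I)).sum() + 0.5 * lam * ((I - I0) ** 2).sum()

def project_X0(I):
    """Enforce the zero boundary values that define X0."""
    J = I.copy()
    J[0, :] = J[-1, :] = 0.0
    J[:, 0] = J[:, -1] = 0.0
    return J
```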

Classical Nesterov's minimization algorithm

General structure:
$$\min_{x \in X} F(x) = \min_{x \in X} \{f(x) + \Psi(x)\},$$
with $f$ convex with $L$-Lipschitz differential and $\Psi$ a simple function, meaning one can compute exactly
$$\mathrm{prox}_\Psi(x) = \arg\min_y \Psi(y) + \frac{1}{2}\|y - x\|_X^2.$$
Then
$$0 \le F(x_k) - F(x^*) \le \frac{L \|x^* - x_0\|_X^2}{k^2},$$
where $x^*$ is a minimum point of $F$ and $x_0$ is the initial datum.
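
With $\Psi \equiv 0$ (the case used below), the scheme reduces to the familiar accelerated gradient iteration; a generic sketch of that special case, not the estimate-sequence bookkeeping itself:

```python
def nesterov(grad_f, L, x0, n_iter):
    """Accelerated gradient descent for f convex with L-Lipschitz gradient;
    attains the O(L ||x* - x0||^2 / k^2) rate quoted on the slide."""
    x, y, t = x0, x0, 1.0
    for _ in range(n_iter):
        x_next = y - grad_f(y) / L                        # gradient step at the extrapolated point
        t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x
```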

Mechanics of the algorithm

The method builds three objects:
1. a minimizing sequence $\{x_k\}$;
2. an increasing sequence of coefficients $\{A_k\}$ such that $A_0 = 0$ and $A_{k+1} = A_k + a_{k+1}$, with $a_{k+1} > 0$;
3. an approximating sequence $\{\psi_k\}$ of $A_k F$:
$$\psi_k(x) = \sum_{i=1}^k a_i \big( f(x_i) + \langle \nabla f(x_i), x - x_i \rangle_X \big) + A_k \Psi(x) + \frac{1}{2}\|x - x_0\|_X^2.$$

Trick: one maintains $A_k F(x_k) \le \min_{x \in X} \psi_k(x)$, together with $\psi_k(x) \le A_k F(x) + \frac{1}{2}\|x - x_0\|_X^2$ and $A_k \ge \frac{k^2}{2L}$, which yields
$$0 \le F(x_k) - F(x^*) \le \frac{L\|x^* - x_0\|_X^2}{k^2}.$$

Application to our minimization problem

Smoothed version of $F$:
$$F_\epsilon(I) = \sum_{i=1}^{d_1}\sum_{j=1}^{d_2} w_\epsilon\big(|(\Delta I)_{i,j}|\big) + \frac{\lambda}{2}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} |I_{i,j} - (I_0)_{i,j}|^2,$$
where
$$w_\epsilon(x) = \begin{cases} \dfrac{x^2}{2\epsilon} + \dfrac{\epsilon}{2} & \text{if } |x| \le \epsilon \\ |x| & \text{otherwise.} \end{cases}$$
We then take $f = F_\epsilon$ and $\Psi \equiv 0$.
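
The smoothing $w_\epsilon$ is a Huber-type regularization of the absolute value; as code (names mine):

```python
import numpy as np

def w_eps(x, eps):
    """Quadratic for |x| <= eps (matching |x| at the junction), linear beyond."""
    return np.where(np.abs(x) <= eps, x * x / (2 * eps) + eps / 2, np.abs(x))

def dw_eps(x, eps):
    """Derivative: x/eps inside the quadratic zone, sign(x) outside."""
    return np.where(np.abs(x) <= eps, x / eps, np.sign(x))
```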

Lipschitz constant of $\nabla F_\epsilon$

By direct computation, $\nabla F_\epsilon(I) = \Delta\psi + \lambda(I - I_0)$, where
$$\psi_{i,j} = \begin{cases} \dfrac{(\Delta I)_{i,j}}{|(\Delta I)_{i,j}|} & \text{if } |(\Delta I)_{i,j}| \ge \epsilon \\[4pt] \dfrac{(\Delta I)_{i,j}}{\epsilon} & \text{otherwise.} \end{cases}$$
Then we have
$$\|\nabla F_\epsilon(I_1) - \nabla F_\epsilon(I_2)\|_2 \le \Big(\frac{\|\Delta\|_2^2}{\epsilon} + \lambda\Big)\|I_1 - I_2\|_2 \le \Big(\frac{64}{\epsilon} + \lambda\Big)\|I_1 - I_2\|_2,$$
so the final estimate is
$$0 \le F_\epsilon(I_k) - F_\epsilon(I^*_\epsilon) \le \frac{\big(\frac{64}{\epsilon} + \lambda\big)\|I^*_\epsilon - I_0\|_2^2}{k^2},$$
where $I^*_\epsilon$ is a minimum of $F_\epsilon$.
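
Putting the pieces together, a sketch of the resulting scheme: the gradient formula above (note $\Delta$ is self-adjoint, hence the second application of $\Delta$), the bound $64/\epsilon + \lambda$ as step-size control, and the accelerated iteration from before. Operator implementations mirror the discrete-model slide; the $X_0$ boundary constraint is omitted for brevity, and all names are mine:

```python
import numpy as np

def laplacian(I):
    """Lap = div(grad .) with the forward-difference gradient of the discrete model."""
    g1 = np.zeros_like(I); g2 = np.zeros_like(I)
    g1[:-1, :] = I[1:, :] - I[:-1, :]; g2[:, :-1] = I[:, 1:] - I[:, :-1]
    d = np.zeros_like(I)
    d[0, :] += g1[0, :]; d[1:-1, :] += g1[1:-1, :] - g1[:-2, :]; d[-1, :] -= g1[-2, :]
    d[:, 0] += g2[:, 0]; d[:, 1:-1] += g2[:, 1:-1] - g2[:, :-2]; d[:, -1] -= g2[:, -2]
    return d

def grad_F_eps(I, I0, lam, eps):
    """grad F_eps(I) = Lap(psi) + lam (I - I0), with psi = w_eps'(Lap I)."""
    LI = laplacian(I)
    psi = np.where(np.abs(LI) <= eps, LI / eps, np.sign(LI))
    return laplacian(psi) + lam * (I - I0)

def minimize_F_eps(I0, lam, eps, n_iter):
    Lip = 64.0 / eps + lam                      # the slide's Lipschitz estimate
    I, y, t = np.zeros_like(I0), np.zeros_like(I0), 1.0
    for _ in range(n_iter):
        I_next = y - grad_F_eps(y, I0, lam, eps) / Lip
        t_next = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)
        y = I_next + ((t - 1.0) / t_next) * (I_next - I)
        I, t = I_next, t_next
    return I
```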

How to choose ε and the number of iterations

δ-solution of $F$: an image $I_\delta$ with $F(I_\delta) - F(I^*) \le \delta$, where $I^*$ is a minimum of $F$.

Problem: for given δ, choose ε and the number of iterations $K$ to obtain a δ-solution. By direct estimation,
$$F(I_k) - F(I^*) \le \Big(\frac{64}{\epsilon} + \lambda\Big)\frac{\|I^*_\epsilon - I_0\|_2^2}{k^2} + d_1 d_2 \, \epsilon,$$
and the worst-case precision is exactly this bound.

Then the optimal choices are
$$\epsilon = \frac{\delta}{d_1 d_2}, \qquad K = \Big\lfloor \sqrt{\Big(\frac{64 \, d_1 d_2}{\delta} + \lambda\Big)\frac{C}{\delta}} \Big\rfloor + 1, \qquad \text{where } C := \max_I \|I - I_0\|_2^2.$$
Practically: $\delta = 1$, $d_1 = d_2 = 256$, $\epsilon = 10^{-5}$, $K = 3000$.

[Figure: convergence plot.]
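
Read literally, the formulas above give a small helper; this transcription of ε and K is my reading of the garbled slide, so treat it as indicative:

```python
def choose_eps_K(delta, d1, d2, lam, C):
    """Smoothing parameter and iteration count for a delta-solution;
    C bounds max_I ||I - I0||_2^2 over admissible images."""
    eps = delta / (d1 * d2)
    K = int((((64.0 * d1 * d2) / delta + lam) * C / delta) ** 0.5) + 1
    return eps, K

# e.g. choose_eps_K(1.0, 256, 256, 1.0, C) gives eps ~ 1.5e-5, consistent with
# the practical values quoted on the slide (eps = 1e-5 for d1 = d2 = 256, delta = 1).
```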

Numerical tests

[Figure: noisy image.]

Numerical tests

[Figure: restored image.]

Numerical tests

[Figure: $\Delta I$.]

Numerical tests

[Figure: noisy image.]

Numerical tests

[Figure: restored image.]

Numerical tests

[Figure: noisy image.]

Numerical tests

[Figure: restored image.]

Link between discrete and continuous setting?

On a mesh of size $\delta x = h \to 0$, one should define the sequence $F_{\epsilon(h)}$ in the continuous setting via interpolation (?), and show that $F_{\epsilon(h)}$ converges to $SC^- F = F$ in the sense of Γ-convergence (?), so that $\min F_{\epsilon(h)} \to \min F$.

Nothing else but points

In this framework the energy to minimize is
$$F(I, P) = \int_{\Omega \setminus P} |\nabla I|^2 \, dx + \int_\Omega |I - I_0|^2 \, dx + \mathcal{H}^0(P),$$
where $P$ is the singular set of the initial datum $I_0$.

Penalization: the term $\mathcal{H}^0$ (the counting measure) forces $I$ to have singularities given by a finite set of points $P \subset \Omega$.

The variational approximation

Let $\{F_\epsilon\}$ be a family of functionals defined on a metric space $X$. $\{F_\epsilon\}$ Γ-converges to $F$ as $\epsilon \to 0$ if, for every $x \in X$:
1. for every sequence $x_\epsilon \to x$, $\liminf_{\epsilon \to 0} F_\epsilon(x_\epsilon) \ge F(x)$;
2. there exists a sequence $x_\epsilon \to x$ such that $\limsup_{\epsilon \to 0} F_\epsilon(x_\epsilon) \le F(x)$.
The functional $F$ is the Γ-limit of $\{F_\epsilon\}$.

Variational property: if $\{x_\epsilon\}$ is a sequence of minimizers of $F_\epsilon$ and $x_\epsilon \to x$, then $x$ minimizes $F$.

$P$ is an atomic set of $N$ points, i.e. $P = \{x_1, \dots, x_N\}$. Main difficulty: the presence of objects of dimension 0.

Braides–March's approach: replace $\mathcal{H}^0(P)$ by
$$G_\epsilon(D) = \frac{1}{4\pi} \int_{\partial D} \Big(\frac{1}{\epsilon} + \epsilon \, k^2(x)\Big) \, d\mathcal{H}^1,$$
where $D$ is a proper regular set and $k$ denotes the curvature of its boundary. The minimum of $G_\epsilon$ is attained by unions of balls $B_\epsilon(x_i)$: indeed, for a single ball of radius ε, $G_\epsilon(B_\epsilon) = \frac{1}{4\pi} \cdot 2\pi\epsilon \cdot \big(\frac{1}{\epsilon} + \frac{\epsilon}{\epsilon^2}\big) = 1$, so $G_\epsilon$ counts points. We recover the set $P$ as $\epsilon \to 0$.

The intermediate approximation

$$F_\epsilon(I, D) = \begin{cases} \displaystyle\int_{\Omega \setminus D} |\nabla I|^2 \, dx + \int_\Omega |I - I_0|^2 \, dx + \frac{1}{4\pi}\int_{\partial D}\Big(\frac{1}{\epsilon} + \epsilon \kappa^2\Big) \, d\mathcal{H}^1 & \text{if } \mathcal{L}^2(D) \le a(\epsilon), \; a(\epsilon) \to 0 \\ +\infty & \text{otherwise.} \end{cases}$$

Approximation of the measure $d\mathcal{H}^1 \llcorner \partial D$ (Modica–Mortola):
$$d\mathcal{H}^1 \llcorner \partial D \approx \mu_\epsilon(w_\epsilon, \nabla w_\epsilon) \, dx = \Big(\epsilon |\nabla w_\epsilon|^2 + \frac{W(w_\epsilon)}{\epsilon}\Big) \, dx,$$
where $W(t) = t^2(1 - t)^2$ and $w_\epsilon \in C^\infty(\Omega; [0, 1])$.
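
A quick 1-D numerical check of the Modica–Mortola density (illustrative, not from the talk): for the optimal transition profile $w(x) = \frac{1}{2}\big(1 + \tanh\frac{x}{2\epsilon}\big)$, which solves $\epsilon w' = \sqrt{W(w)}$ for this $W$, the integral of $\mu_\epsilon$ across one interface equals $2\int_0^1 \sqrt{W(t)}\,dt = 1/3$, independently of $\epsilon$:

```python
import numpy as np

eps = 0.01
x = np.linspace(-1.0, 1.0, 40001)
w = 0.5 * (1.0 + np.tanh(x / (2.0 * eps)))    # optimal profile: eps * w' = sqrt(W(w))
W = w ** 2 * (1.0 - w) ** 2
mu = eps * np.gradient(w, x) ** 2 + W / eps   # the density mu_eps(w, grad w)
print(mu.sum() * (x[1] - x[0]))               # ~ 1/3, for any small eps
```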

De Giorgi's conjecture

The variation of $\mathcal{H}^1$ gives a curvature term $k$; the variation of the gradient energy gives $2\epsilon\Delta w - \frac{W'(w)}{\epsilon}$. This leads to
$$\Phi_\epsilon(I, w) := \int_\Omega w^2 |\nabla I|^2 \, dx + \int_\Omega |I - I_0|^2 \, dx + \frac{1}{8\pi b_0}\frac{1}{\beta_\epsilon} \int_\Omega \frac{1}{\epsilon}\Big(2\epsilon\Delta w - \frac{W'(w)}{\epsilon}\Big)^2 dx + \frac{1}{8\pi b_0}\frac{1}{\beta_\epsilon} \int_\Omega \mu_\epsilon(w, \nabla w) \, dx + \frac{1}{\gamma_\epsilon} \int_\Omega (1 - w)^2 \, dx,$$
with
$$\lim_{\epsilon \to 0^+} \frac{\beta_\epsilon}{\gamma_\epsilon} = 0, \qquad \lim_{\epsilon \to 0^+} \frac{\epsilon \log(\epsilon)}{\beta_\epsilon} = 0.$$

Numerical minimization

Gradient flow: $(\partial_t I, \partial_t w) = -\nabla \Phi_\epsilon(I, w)$ plus boundary conditions, with a small time step $\tau = \frac{1}{L}$, combined with simulated annealing. We obtain a function $w$ whose zeros are given by the set $P$.
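
The $w$-equation contains fourth-order terms coming from the $\Delta w$ part of $\Phi_\epsilon$, so as an illustration here is only the $I$-half of an alternating explicit scheme: for fixed $w$, the flow $\partial_t I = 2\,\mathrm{div}(w^2\nabla I) - 2(I - I_0)$ is the gradient flow of the first two terms of $\Phi_\epsilon$. A sketch under my naming; the $w$-update and the annealing schedule are omitted:

```python
import numpy as np

def grad_op(I):
    g1 = np.zeros_like(I); g2 = np.zeros_like(I)
    g1[:-1, :] = I[1:, :] - I[:-1, :]; g2[:, :-1] = I[:, 1:] - I[:, :-1]
    return g1, g2

def div_op(p1, p2):
    d = np.zeros_like(p1)
    d[0, :] += p1[0, :]; d[1:-1, :] += p1[1:-1, :] - p1[:-2, :]; d[-1, :] -= p1[-2, :]
    d[:, 0] += p2[:, 0]; d[:, 1:-1] += p2[:, 1:-1] - p2[:, :-2]; d[:, -1] -= p2[:, -2]
    return d

def descend_I(I, w, I0, tau, n_steps):
    """Explicit steps of dI/dt = 2 div(w^2 grad I) - 2 (I - I0), with w frozen."""
    for _ in range(n_steps):
        g1, g2 = grad_op(I)
        I = I + tau * (2.0 * div_op(w ** 2 * g1, w ** 2 * g2) - 2.0 * (I - I0))
    return I
```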

Computer examples

[Figure: biological image.]

Computer examples

[Figure: level set $\{w = 0\}$ = {singular points}.]

Computer examples

[Figure: superposition with the original image.]

References

- G. Aubert, L. Blanc-Féraud, D. Graziani. Analysis of a variational model to restore point-like and curve-like singularities in imaging. In revision for Applied Mathematics and Optimization.
- D. Graziani, L. Blanc-Féraud, G. Aubert. A formal Γ-convergence approach for the detection of points in 2-D images. SIAM Journal on Imaging Sciences, Vol. 3, No. 3 (2010), 578–594.
- G. Aubert, D. Graziani. Variational approximation for detecting point-like target problems in 2-D images. ESAIM: Control, Optimisation and Calculus of Variations, Vol. 17, No. 4 (2011), 909–930.