2.2 Creaseness operator




Antonio López, a member of our group, studied the differential operators described in this section for his PhD dissertation [72]. He compared the performance of various classical creaseness operators and studied the theoretical reasons for their lack of continuity. In the following we summarise his results for the sake of completeness; for further information on particular points, the cited bibliography should be consulted.

The literature defines a crease (ridge or valley) in many different ways [10]. Intuitively, a crease can be described as:

1. the paths where water gathers downhill when it rains on a landscape (turn the landscape upside-down for a ridge);
2. the points with sharp drops on both sides;
3. when walking uphill, the points where one makes a sharp turn.

Before giving the proper definitions, we need to introduce some notation. We define a discrete image as the sampling of a d-dimensional continuous function

$L : \Omega \subset \mathbb{R}^d \rightarrow \Gamma \subset \mathbb{R}$.   (2.1)

The level set for a constant $l$ consists of the set of points

$S_l = \{\mathbf{x} \in \Omega \mid L(\mathbf{x}) = l\}$.   (2.2)

Figure 2.5 shows the graphical interpretation for d = 2. In the 3-D case, the level sets are surfaces having the same intensity level. A related family of curves are the slope lines, which follow the gradient and are therefore orthogonal to the level lines.

Differential operators over a function are defined as usual:

$\nabla = (\partial/\partial x_1, \ldots, \partial/\partial x_d)^t$   (Gradient)   (2.3)

$\nabla\nabla^t = \left(\partial^2/\partial x_i\,\partial x_j\right)$   (Hessian)   (2.4)

The derivative of $L$ along the vector $v = (v_1, \ldots, v_d)^t$ is

$L_v = \nabla L \cdot (v/\|v\|)$,   (2.5)

and the second-order derivative of $L$ along $v$ and $w = (w_1, \ldots, w_d)^t$ is

$L_{vw} = (v^t/\|v\|)\,(\nabla\nabla^t L)\,(w/\|w\|)$.   (2.6)

Derivatives along the coordinate axes $x_i$ are denoted $L_{x_i}$. In 2-D, useful definitions are $w = (L_x, L_y)^t$ and $v = (L_y, -L_x)^t$.
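To make this notation concrete, the following is a minimal sketch of how the smoothed image derivatives and the $v$, $w$ vector fields could be computed for a 2-D image. It is an illustration only: the function names and the use of Gaussian derivative filters from scipy are assumptions (the thesis itself obtains derivatives as centred differences of the Gaussian-smoothed image, as discussed later in this section).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def image_derivatives(L, sigma=2.0):
    """First- and second-order Gaussian derivatives of a 2-D image L.

    Illustrative sketch; axis 0 is y (rows) and axis 1 is x (columns).
    """
    Lx  = gaussian_filter(L, sigma, order=(0, 1))
    Ly  = gaussian_filter(L, sigma, order=(1, 0))
    Lxx = gaussian_filter(L, sigma, order=(0, 2))
    Lyy = gaussian_filter(L, sigma, order=(2, 0))
    Lxy = gaussian_filter(L, sigma, order=(1, 1))
    return Lx, Ly, Lxx, Lyy, Lxy

def gradient_fields(Lx, Ly):
    """Return w = (L_x, L_y)^t (uphill direction) and the perpendicular
    level-curve tangent v = (L_y, -L_x)^t at every pixel."""
    w = np.stack([Lx, Ly])
    v = np.stack([Ly, -Lx])
    return w, v
```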

Figure 2.5: Graphical interpretation of terms related to level curves: ridges and valleys as lines following the minimum and maximum curvature, and the $v$ and $w$ vectors at any point.

2.2.1 Three classical definitions of creases

Definition 1: paths where water gathers downhill when it rains in a landscape. This definition, due to R. Rothe, was saved from oblivion in [36, 37, 38]. It can be stated as: the parts of slope-lines where other slope-lines converge. Slope-lines are the solutions of the ordinary differential equation

$\nabla L \times d\mathbf{x} = 0$.   (2.7)

Given an initial point, the slope-line through it is the set of consecutive points that solve equation 2.7, obtained for instance by means of Runge-Kutta integration (as implemented in [80]). This can be seen as tracing the path of a rain drop as it runs downhill. López [51] studied the algorithmic implementation and compared it to other operators, but the results were not satisfactory. The main drawback is that the operator is not local, which means that it cannot be described as a function of a neighbourhood: many initial points are needed to properly trace the crests, which has a very high computational cost.

We studied the response of this operator extensively for registration purposes during the initial stages of the research, and the registration scheme based on it achieved some results [44]. In that paper we compared its response to other operators and also carried out preliminary registrations. However, the final accuracy was limited because the segmentation presented gaps where they were not expected, similar to those of the $L_{vv}$ operator explained in the following pages. The first year of my research was dedicated to the study of this operator, and the first two publications [44, 52] were based on those results. Afterwards, another creaseness operator compared favourably to Rothe's, so it replaced Rothe's operator in the registration algorithm. See an example of the response for a heart gammagraphic image in figure 2.6.
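As a concrete illustration of the slope-line tracing described above, the sketch below integrates the normalised gradient field from an initial point with a fixed-step fourth-order Runge-Kutta scheme. It is not the thesis implementation (which relied on the routines of [80]); the bilinear interpolation, step size, iteration limit and stopping rule are assumptions. Integrating with a negative step traces the downhill path of a rain drop.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trace_slope_line(Lx, Ly, x0, y0, step=0.25, n_steps=2000, eps=1e-6):
    """Trace the slope line through (x0, y0) by RK4 integration of the
    normalised gradient field (uphill); use step < 0 to go downhill."""
    def direction(x, y):
        # bilinear interpolation of the gradient at a sub-pixel position
        gx = map_coordinates(Lx, [[y], [x]], order=1)[0]
        gy = map_coordinates(Ly, [[y], [x]], order=1)[0]
        n = np.hypot(gx, gy)
        return (gx / n, gy / n) if n > eps else (0.0, 0.0)

    path = [(x0, y0)]
    x, y = float(x0), float(y0)
    for _ in range(n_steps):
        k1 = direction(x, y)
        k2 = direction(x + 0.5 * step * k1[0], y + 0.5 * step * k1[1])
        k3 = direction(x + 0.5 * step * k2[0], y + 0.5 * step * k2[1])
        k4 = direction(x + step * k3[0], y + step * k3[1])
        dx = (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        dy = (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        if dx == 0.0 and dy == 0.0:   # reached a critical point (or left the image)
            break
        x, y = x + step * dx, y + step * dy
        path.append((x, y))
    return np.array(path)
```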

Figure 2.6: From top to bottom and left to right: (a) Heart gammagraphic image. (b) The image seen as a landscape. (c) Level curves. (d) Some slopelines (black) over the level curves (grey); observe how slopelines crowd along special slopeline segments. (e) Accumulation image, where darker grey levels denote greater accumulation. (f) Selected special slopeline segments: drainage pattern (grey) and inverse drainage pattern (dark).

Figure 2.7: Graphical interpretation of the $L_{vv}$ operator: (a) the surface with a crest line; (b) at each point, the direction of the gradient $w$ in dark and the perpendicular vector $v$ in grey; (c) the profile along each vector at a non-crest point and (d) at a crest point.

Definition 2: points with sharp drops on the right- and left-hand sides. We refer to figure 2.7 to explain the mathematical definition. At any point of an image seen as a landscape, $w$ points uphill and $v$ is perpendicular to $w$. If we look at the profile of the function along a section following $v$, its shape will differ depending on the neighbouring values: for the point in (c) it will be flat, but for a crease point (d) it will clearly show a parabola. Therefore $L_{vv}$, the second derivative of the function along $v$, will take high values at valleys and low values at ridges. The explicit formula for $L_{vv}$ for d = 2 [56] is:

$L_{vv} = \dfrac{L_y^2 L_{xx} + L_x^2 L_{yy} - 2 L_x L_y L_{xy}}{L_x^2 + L_y^2}$   (2.8)

Definition 3: when walking uphill, points where you need to make a sharp turn. This definition recalls that of the level-curve curvature $\kappa$: following a level curve, the points where the curvature is higher (lower) determine the valleys (ridges). It has the following analytical expression:

$\kappa = (2 L_x L_y L_{xy} - L_y^2 L_{xx} - L_x^2 L_{yy})\,(L_x^2 + L_y^2)^{-3/2}$.   (2.9)

Comparing equations 2.8 and 2.9, we see that $L_{vv}$ is, up to sign, $\kappa$ multiplied by the gradient magnitude $L_w$:

$\kappa = -L_{vv}/L_w = -L_{vv}\,L_w^{\alpha}, \quad \alpha = -1$.   (2.10)

For $\alpha$ taking values other than $-1$, with $\alpha \in [-1, 0]$, $\kappa$ is normalised to different degrees with respect to the gradient magnitude.
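As an illustration of equations 2.8–2.10, the following sketch evaluates $L_{vv}$ and the level-curve curvature $\kappa$ from the image derivatives (for example those returned by the image_derivatives helper sketched earlier). The function name and the small eps guard against division by zero are assumptions.

```python
import numpy as np

def lvv_and_kappa(Lx, Ly, Lxx, Lyy, Lxy, alpha=-1.0, eps=1e-12):
    """Second derivative along v (Eq. 2.8) and level-curve curvature
    kappa = -Lvv * Lw**alpha (Eqs. 2.9-2.10); alpha in [-1, 0]."""
    grad_sq = Lx**2 + Ly**2
    Lw = np.sqrt(grad_sq)
    # Eq. 2.8: second derivative of L along the level-curve tangent v
    Lvv = (Ly**2 * Lxx + Lx**2 * Lyy - 2.0 * Lx * Ly * Lxy) / (grad_sq + eps)
    # Eq. 2.10: alpha = -1 gives the curvature kappa; alpha closer to 0
    # normalises less strongly with respect to the gradient magnitude
    kappa = -Lvv * np.power(Lw + eps, alpha)
    return Lvv, kappa
```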

2.2.2 New creaseness operators

The differential operator $\kappa$ and its reduced form $L_{vv}$ are in fact the particular 2-D expressions of the curvature of the level curves, which in 3-D is termed the mean curvature $\kappa_M$ of the level surfaces. In the d-dimensional case we will call it the level-set extrinsic curvature (LSEC). The general formula for LSEC, using the Einstein summation convention, is

$\kappa_d = (L_\alpha L_\beta L_{\alpha\beta} - L_\alpha L_\alpha L_{\beta\beta})\,(L_\gamma L_\gamma)^{-3/2}, \quad \alpha, \beta, \gamma \in X_d$,   (2.11)

where $X_d$ is the d-dimensional (local) coordinate system $\{x_1, \ldots, x_d\}$. In 3-D we have level surfaces, and the straightforward extension of $\kappa$ is two times the mean curvature $\kappa_M$ of the level surfaces:

$\kappa_M = \dfrac{2(L_x L_y L_{xy} + L_x L_z L_{xz} + L_y L_z L_{yz}) - L_x^2(L_{yy}+L_{zz}) - L_y^2(L_{xx}+L_{zz}) - L_z^2(L_{xx}+L_{yy})}{2\,(L_x^2 + L_y^2 + L_z^2)^{3/2}}$   (2.12)

However, the way LSEC is usually computed, by directly discretizing Eq. (2.12) or the equivalent 2-D expression for $\kappa$, gives rise to a number of discontinuities: gaps appear at places where we would not expect any reduction of creaseness, because they lie at the centre of elongated objects and creaseness is a measure of medialness for grey-level elongated objects. We have checked that this happens independently of the scheme used to discretize the derivatives, and even when the gradient magnitude is far from the machine zero, that is, at pixels where Eq. (2.12) is well-defined.

To avoid these discontinuities we present an alternative way of computing creaseness from the level-set extrinsic curvature. For a d-dimensional vector field $\mathbf{u} : \mathbb{R}^d \rightarrow \mathbb{R}^d$, $\mathbf{u}(\mathbf{x}) = (u_1(\mathbf{x}), \ldots, u_d(\mathbf{x}))^t$, the divergence measures the degree of parallelism of its integral lines:

$\mathrm{div}(\mathbf{u}) = \sum_{i=1}^{d} \dfrac{\partial u_i}{\partial x_i}$   (2.13)

Now, if we define the normalised gradient vector field of $L : \mathbb{R}^d \rightarrow \mathbb{R}$, with $w = \nabla L$, as

$\bar{w} = \begin{cases} w/\|w\| & \text{if } \|w\| > 0 \\ \mathbf{0}_d & \text{if } \|w\| = 0 \end{cases}$   (2.14)

it can be shown that

$\kappa_d = -\mathrm{div}(\bar{w})$.   (2.15)

For d = 3, Eqs. (2.12) and (2.15) are equivalent in the continuous domain.
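A minimal 2-D sketch of the alternative computation of equations 2.13–2.15: normalise the gradient field and take minus its divergence. The use of centred differences via np.gradient and the eps threshold standing in for the machine zero are assumptions rather than the thesis code.

```python
import numpy as np

def lsec_from_divergence(Lx, Ly, eps=1e-12):
    """Creaseness kappa_d = -div(w_bar) (Eq. 2.15), where w_bar is the
    normalised gradient field (Eq. 2.14), on a 2-D pixel grid."""
    norm = np.sqrt(Lx**2 + Ly**2)
    # Eq. 2.14: normalised gradient; the zero vector where the gradient vanishes
    wbx = np.where(norm > eps, Lx / np.maximum(norm, eps), 0.0)
    wby = np.where(norm > eps, Ly / np.maximum(norm, eps), 0.0)
    # Eqs. 2.13 and 2.15: divergence by centred finite differences
    div = np.gradient(wbx, axis=1) + np.gradient(wby, axis=0)
    return -div
```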

Figure 2.8: Comparison of the operators: in the left column, the original CT image with the creaseness images for $L_{vv}$ and $C\,\tilde{\kappa}_d$. The middle column zooms in on the window extracted from each image at the location shown in the CT image. It can be seen that, although the bone in the CT image is a consistent ridge, gaps appear in the $L_{vv}$ response. The cause can be seen if the window is depicted as a landscape (third column): the gaps correspond to local minima along the crease, i.e., saddle points. Our operator $C\,\tilde{\kappa}_d$ solves this problem.

However, and this is the key point, the same approximation with discrete derivatives substituted into Eqs. (2.12) and (2.15) gives rise to very different results; namely, Eq. (2.15) avoids the gaps that Eq. (2.12) produces on creases. For example, in the case of ridge-like saddle points, such as those in figure 2.8, $\kappa$ not only decreases but also suffers a change of sign, creating a barrier on the path of the expected ridge-like curve. The reason is that the neighbourhood of the saddle point is composed mainly of concave zones ($\kappa < 0$), whereas the sub-pixel ridge-like curve runs through convex zones ($\kappa > 0$) that are missed by the discretization of Eq. (2.12).

In order to obtain the derivatives of a discrete image $L$ in a well-posed manner, we have approximated the image derivatives by finite centred differences of the Gaussian-smoothed image, instead of by convolution with Gaussian derivatives of a given standard deviation $\sigma_D$. For each pixel, the derivatives of its neighbours are calculated on the fly as needed and stored in buffers for the next pixel. This approach is necessary because otherwise the algorithm would require 10 float images simultaneously. Thus, Eq. (2.15) produces similar results while being less time- and memory-consuming.

The creaseness measure $\kappa_d$ can still be improved by pre-filtering the image gradient vector field in order to increase the degree of attraction/repulsion at ridge-like/valley-like creases, which is what $\kappa_d$ is actually measuring. This can be done by means of the structure tensor analysis ([52], [53]):

1. Compute the gradient vector field $w$ and the structure tensor field $M$,

$M(\mathbf{x}; \sigma_I) = G(\mathbf{x}; \sigma_I) \star (w(\mathbf{x})\, w(\mathbf{x})^t)$,   (2.16)

$\star$ being the element-wise convolution of the matrix $w(\mathbf{x})\,w(\mathbf{x})^t$ with the Gaussian window $G(\mathbf{x}; \sigma_I)$. The standard deviation $\sigma_I$ controls the size of the averaging window.

2. Perform the eigenvalue analysis of $M$. The normalised eigenvector $w'$ corresponding to the highest eigenvalue gives the predominant gradient orientation. In the structure tensor analysis, opposite directions are treated equally; thus, to recover the direction, we put $w'$ in the same quadrant (in 2-D, or octant in 3-D) as $w$. We then obtain the new vector field $\tilde{w}$ and the creaseness measure $\tilde{\kappa}_d$:

$\tilde{w} = \mathrm{sign}(w'^t\, \bar{w})\, w'$   (2.17)

$\tilde{\kappa}_d = -\mathrm{div}(\tilde{w})$   (2.18)

The discretization of $\tilde{\kappa}_d$ results from approximating the partial derivatives of Eq. (2.13) by finite centred differences of the $\tilde{w}$ vector field. When the smallest neighbourhood is taken, namely the 6 pixels neighbouring a pixel $(i, j, k)$, the following expression is obtained:

$\tilde{\kappa}_d[i,j,k] = -\big(\tilde{w}_1[i+1,j,k] - \tilde{w}_1[i-1,j,k] + \tilde{w}_2[i,j+1,k] - \tilde{w}_2[i,j-1,k] + \tilde{w}_3[i,j,k+1] - \tilde{w}_3[i,j,k-1]\big)$   (2.19)

3. Compute a confidence measure $C \in [0, 1]$ to discard creaseness at isotropic areas. As similarity of the eigenvalues $(\lambda_1, \ldots, \lambda_d)$ of the structure tensor implies isotropy, we can base the computation of $C$ on a normalised measure of their differences. A suitable choice is

$C(\mathbf{x}) = 1 - \exp\left(-\dfrac{\left(\sum_{i=1}^{d}\sum_{j=i+1}^{d} (\lambda_i(\mathbf{x}) - \lambda_j(\mathbf{x}))^2\right)^2}{2c^2}\right)$   (2.20)

where the threshold $c$ is chosen experimentally. In this way, $C\,\tilde{\kappa}_d$ has a lower response than $\tilde{\kappa}_d$ at isotropic regions.
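The following is a minimal 2-D sketch of the three steps above: structure tensor (Eq. 2.16), dominant orientation with sign recovery and divergence (Eqs. 2.17–2.19), and the confidence measure (Eq. 2.20). The function name, the use of scipy's gaussian_filter for the window $G(\mathbf{x}; \sigma_I)$ and of np.linalg.eigh for the per-pixel eigenanalysis are illustrative assumptions; the thesis implementation computes derivatives on the fly to save memory and uses the eigenvalue methods mentioned at the end of this section.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_creaseness(Lx, Ly, sigma_i=2.0, c=1e4, eps=1e-12):
    """Enhanced creaseness C * kappa_tilde (Eqs. 2.16-2.20), 2-D sketch."""
    norm = np.sqrt(Lx**2 + Ly**2)
    wbx = np.where(norm > eps, Lx / np.maximum(norm, eps), 0.0)   # w_bar, x
    wby = np.where(norm > eps, Ly / np.maximum(norm, eps), 0.0)   # w_bar, y

    # Step 1 (Eq. 2.16): element-wise Gaussian averaging of w w^t
    Mxx = gaussian_filter(Lx * Lx, sigma_i)
    Mxy = gaussian_filter(Lx * Ly, sigma_i)
    Myy = gaussian_filter(Ly * Ly, sigma_i)

    # Step 2: per-pixel eigenanalysis of the 2x2 tensor; w' is the
    # eigenvector of the largest eigenvalue (dominant gradient orientation)
    M = np.stack([np.stack([Mxx, Mxy], axis=-1),
                  np.stack([Mxy, Myy], axis=-1)], axis=-2)   # shape (..., 2, 2)
    lam, vec = np.linalg.eigh(M)                 # eigenvalues in ascending order
    wpx, wpy = vec[..., 0, 1], vec[..., 1, 1]    # eigenvector of largest eigenvalue

    # Eq. 2.17: flip w' into the same quadrant as the gradient
    s = np.sign(wpx * wbx + wpy * wby)
    wtx, wty = s * wpx, s * wpy

    # Eqs. 2.18-2.19: kappa_tilde = -div(w_tilde) by centred differences
    kappa_t = -(np.gradient(wtx, axis=1) + np.gradient(wty, axis=0))

    # Step 3 (Eq. 2.20): confidence from eigenvalue dissimilarity
    diff_sq = (lam[..., 1] - lam[..., 0]) ** 2
    C = 1.0 - np.exp(-diff_sq ** 2 / (2.0 * c ** 2))
    return C * kappa_t, kappa_t, C
```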

Figure 2.9: From left to right: 3-D creaseness measures of the MR slice in (a) for $\sigma_D = 2.0$ mm: (b) $\kappa$, (c) $\tilde{\kappa}$ for $\sigma_I = 2.0$ mm, (d) $C\,\tilde{\kappa}$ for $c = 10000$. The gaps in (b) do not appear in (c).

We have tested several methods to compute the eigenvalues. If only the highest one is needed, the fastest method is Powell's, applied to an optimised expression from [21]. If we choose to implement step 3, the fastest is the singular value decomposition as implemented in [80]. Figure 2.9 compares $\kappa$, $\tilde{\kappa}$ and $C\,\tilde{\kappa}$ on an MR volume.