2.2 Creaseness operator
Antonio López, a member of our group, studied the differential operators described in this section for his PhD dissertation [72]. He compared the performance of several classical creaseness operators and analysed the theoretical reasons for their lack of continuity. In the following we synthesise his results for the sake of completeness; for further details on particular points, the cited bibliography should be consulted.

Today's literature defines a crease (ridge or valley) in many different ways [10]. Intuitively, a crease can be defined as:

1. paths where water gathers downhill when it rains on a landscape (turn the landscape upside-down for a ridge);
2. points with sharp drops on both sides;
3. when walking uphill, points where you make a sharp turn.

Before giving the proper definitions, we need to introduce some notation. We define a discrete image as the sampling of a d-dimensional continuous function

$$L : \Omega \subset \mathbb{R}^d \rightarrow \Gamma \subset \mathbb{R} \qquad (2.1)$$

The level set for a constant $l$ consists of the set of points

$$S_l = \{ \mathbf{x} \in \Omega \mid L(\mathbf{x}) = l \} \qquad (2.2)$$

Figure 2.5 shows the graphical interpretation for $d = 2$. In the 3-D case, the level sets are surfaces of equal intensity. A related family of curves are the slope lines, which follow the gradient and are therefore orthogonal to the level lines. Operators over a function are defined as usual:

$$\nabla = (\partial/\partial x_1, \ldots, \partial/\partial x_d) \quad \text{(Gradient)} \qquad (2.3)$$

$$\nabla\nabla = \nabla(\nabla^t) \quad \text{(Hessian)} \qquad (2.4)$$

The derivative of $L$ along the vector $\mathbf{v} = (v_1, \ldots, v_d)^t$ is

$$L_v = \nabla L \cdot (\mathbf{v}/\|\mathbf{v}\|) \qquad (2.5)$$

and the second-order derivative of $L$ along $\mathbf{v}$ and $\mathbf{w} = (w_1, \ldots, w_d)^t$ is

$$L_{vw} = (\mathbf{v}^t/\|\mathbf{v}\|)\,\nabla\nabla L\,(\mathbf{w}/\|\mathbf{w}\|) \qquad (2.6)$$

Derivatives along the coordinate axes $x_i$ are written $L_{x_i}$. In 2-D, useful definitions are $\mathbf{w} = (L_x, L_y)^t$ and $\mathbf{v} = (L_y, -L_x)^t$.
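All the quantities above can be evaluated on a discrete 2-D image once its first- and second-order derivatives are available. The following is a minimal sketch, not taken from the thesis: it assumes numpy and scipy are available, and uses Gaussian derivatives via scipy.ndimage.gaussian_filter as one possible well-posed scheme (the thesis itself uses finite centred differences of the Gaussian-smoothed image, as explained later in this section).

```python
# Illustrative sketch (assumptions: numpy, scipy; Gaussian derivatives as the
# derivative scheme). Computes L_x, L_y, L_xx, L_yy, L_xy and the w / v fields.
import numpy as np
from scipy.ndimage import gaussian_filter

def image_derivatives(L, sigma=2.0):
    """Return L_x, L_y, L_xx, L_yy, L_xy of a 2-D image using Gaussian derivatives."""
    Lx  = gaussian_filter(L, sigma, order=(0, 1))  # first derivative along x (axis 1)
    Ly  = gaussian_filter(L, sigma, order=(1, 0))  # first derivative along y (axis 0)
    Lxx = gaussian_filter(L, sigma, order=(0, 2))
    Lyy = gaussian_filter(L, sigma, order=(2, 0))
    Lxy = gaussian_filter(L, sigma, order=(1, 1))
    return Lx, Ly, Lxx, Lyy, Lxy

def w_and_v(Lx, Ly, eps=1e-12):
    """Normalised gradient w = (L_x, L_y)^t and its perpendicular v = (L_y, -L_x)^t."""
    mag = np.sqrt(Lx**2 + Ly**2)
    mag = np.where(mag > eps, mag, 1.0)            # avoid division by zero at flat points
    w = np.stack([Lx, Ly]) / mag
    v = np.stack([Ly, -Lx]) / mag
    return w, v
```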
Figure 2.5: Graphical interpretation of terms related to level curves: ridges and valleys as lines following the minimum and maximum curvature, and the $\mathbf{v}$ and $\mathbf{w}$ vectors (level-curve tangent and slope-line tangent, i.e. the gradient direction) at any point.

Three classical definitions of creases

Definition 1: paths where water gathers downhill when it rains on a landscape. This definition, due to R. Rothe, was saved from oblivion in [36, 37, 38]. It can be stated as: the parts of slope lines where other slope lines converge. Slope lines are solutions of the ordinary differential equation

$$\nabla L \times d\mathbf{x} = 0 \qquad (2.7)$$

Given an initial point, the slope line through it is the set of consecutive points obtained by solving equation 2.7, for instance by means of a Runge-Kutta scheme (as implemented in [80]). This can be seen as tracing the path of a rain drop as it runs downhill. López studied the algorithmic implementation and compared it to other operators in [51], but the results were not satisfactory. The main drawback is that the operator is not local: it cannot be described as a function of a neighbourhood. That is to say, many initial points are needed to trace the crests properly, which has a very high computational cost. We studied the response of this operator extensively for registration purposes during the initial stages of the research, and registration with this scheme achieved some results [44]. In that paper we compared its response to other operators and also carried out preliminary registrations. However, the final accuracy was limited because the segmentation presented gaps where they were not expected, similar to those of the $L_{vv}$ operator explained in the following pages. The first year of my research was dedicated to studying this operator, and the first two publications [44, 52] were based on those results. Afterwards, another creaseness operator compared favourably to Rothe's and replaced it in the registration algorithm. See an example of the response for a heart gammagraphic image in figure 2.6.
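To make the non-locality concrete, the sketch below traces a single slope line by numerically integrating the gradient field from a seed point. It is an illustrative reconstruction, not the thesis code: the thesis relies on a Runge-Kutta routine from [80], whereas here a fixed-step RK4 with bilinear interpolation, an arbitrary step size and a simple stopping rule are assumed; the Lx, Ly fields come from the previous sketch.

```python
# Illustrative sketch: trace one slope line (rain-drop path) by RK4 integration
# of dx/dt = +/- grad L. Step size, stopping rule and interpolation are arbitrary
# choices made for the example, not the thesis implementation.
import numpy as np

def bilinear(F, y, x):
    """Bilinearly interpolate field F at real-valued coordinates (y, x)."""
    y0 = int(np.clip(np.floor(y), 0, F.shape[0] - 2))
    x0 = int(np.clip(np.floor(x), 0, F.shape[1] - 2))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * F[y0, x0] + (1 - dy) * dx * F[y0, x0 + 1]
            + dy * (1 - dx) * F[y0 + 1, x0] + dy * dx * F[y0 + 1, x0 + 1])

def trace_slope_line(Lx, Ly, seed, downhill=True, h=0.5, n_steps=500):
    """Integrate along the gradient from seed = (y, x); downhill follows -grad L."""
    sign = -1.0 if downhill else 1.0

    def f(p):
        g = np.array([bilinear(Ly, p[0], p[1]), bilinear(Lx, p[0], p[1])])
        n = np.linalg.norm(g)
        return sign * g / n if n > 1e-9 else np.zeros(2)

    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = path[-1]
        k1 = f(p)
        k2 = f(p + 0.5 * h * k1)
        k3 = f(p + 0.5 * h * k2)
        k4 = f(p + h * k3)
        step = (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if np.linalg.norm(step) < 1e-6:        # flat point (extremum) reached
            break
        p_new = p + step
        if not (0 <= p_new[0] <= Lx.shape[0] - 1 and 0 <= p_new[1] <= Lx.shape[1] - 1):
            break                              # left the image domain
        path.append(p_new)
    return np.array(path)
```

Covering an image with crease segments in this way requires launching many such traces from many seeds, which is precisely the computational burden noted above.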
Figure 2.6: From top to bottom and left to right: (a) Heart gammagraphic image. (b) The image seen as a landscape. (c) Level curves. (d) Some slope lines (black) over the level curves (grey); observe how slope lines crowd along special slope-line segments. (e) Accumulation image, where darker grey levels denote greater accumulation. (f) Selected special slope-line segments: drainage pattern (grey) and inverse drainage pattern (dark).
Figure 2.7: Graphical interpretation of the $L_{vv}$ operator: (a) the shape function with a crest line; (b) at each point, the gradient direction $\mathbf{w}$ in dark and the perpendicular vector $\mathbf{v}$ in grey; (c) and (d) the profile along each vector at a non-crest point (c) and at a crest point (d).

Definition 2: points with sharp drops on the right- and on the left-hand side. We refer to figure 2.7 to explain the mathematical definition. At any point of an image seen as a landscape, $\mathbf{w}$ points uphill and $\mathbf{v}$ is perpendicular to $\mathbf{w}$. If we look at the function profile of a section along $\mathbf{v}$, its shape depends on the neighbouring values: at a non-crest point (c) it is flat, but at a crease point (d) it clearly shows a parabola. Therefore $L_{vv}$, the second derivative of the function along $\mathbf{v}$, takes high values at valleys and low values at ridges. The explicit formula of $L_{vv}$ for $d = 2$ [56] is

$$L_{vv} = \frac{L_y^2 L_{xx} + L_x^2 L_{yy} - 2 L_x L_y L_{xy}}{L_x^2 + L_y^2} \qquad (2.8)$$

Definition 3: when walking uphill, points where you need to make a sharp turn. This definition recalls that of the level-curve curvature $\kappa$: following a level curve, the points where the curvature is highest (lowest) determine the valleys (ridges). It has the following analytical expression:

$$\kappa = (2 L_x L_y L_{xy} - L_y^2 L_{xx} - L_x^2 L_{yy})(L_x^2 + L_y^2)^{-\frac{3}{2}} \qquad (2.9)$$

Comparing equations 2.8 and 2.9, we see that, up to sign, $L_{vv}$ is $\kappa$ multiplied by the gradient magnitude $L_w$:

$$\kappa = -L_{vv}/L_w = -L_{vv} L_w^{\alpha}, \quad \alpha = -1 \qquad (2.10)$$

For $\alpha$ taking values other than $-1$, $\alpha \in [-1, 0]$, $\kappa$ is normalised to different degrees with respect to the gradient magnitude.
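As a concrete illustration, Eqs. (2.8)-(2.10) translate directly into a few lines operating on the derivative images of the first sketch. This is again a hedged sketch, not thesis code; the small epsilon guarding the denominators is an implementation detail not discussed in the text.

```python
# Sketch of Eqs. (2.8)-(2.10) for d = 2, given the derivative images L_x, L_y,
# L_xx, L_yy, L_xy from the earlier sketch.
import numpy as np

def lvv_and_kappa(Lx, Ly, Lxx, Lyy, Lxy, eps=1e-12):
    grad2 = Lx**2 + Ly**2
    num = Ly**2 * Lxx + Lx**2 * Lyy - 2.0 * Lx * Ly * Lxy
    Lvv = num / np.maximum(grad2, eps)                 # Eq. (2.8)
    kappa = -num / np.maximum(grad2, eps)**1.5         # Eq. (2.9): kappa = -L_vv / L_w
    return Lvv, kappa
```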
New creaseness operators

The differential operator $\kappa$ and its reduced form $L_{vv}$ are in fact the particular 2-D expressions of the curvature of the level curves, which in 3-D becomes the mean curvature $\kappa_M$ of the level surfaces. In the d-dimensional case we call it the level-set extrinsic curvature, LSEC. The general formula for LSEC, using the Einstein summation convention, is

$$\kappa_d = (L_\alpha L_\beta L_{\alpha\beta} - L_\alpha L_\alpha L_{\beta\beta})(L_\gamma L_\gamma)^{-\frac{3}{2}}, \quad \alpha, \beta, \gamma \in X_d \qquad (2.11)$$

where $X_d$ is the d-dimensional (local) coordinate system $\{x_1, \ldots, x_d\}$. In 3-D we have level surfaces, and the straightforward extension of $\kappa$ is twice the mean curvature $\kappa_M$ of the level surfaces:

$$\kappa_M = \frac{2(L_x L_y L_{xy} + L_x L_z L_{xz} + L_y L_z L_{yz}) - L_x^2(L_{yy} + L_{zz}) - L_y^2(L_{xx} + L_{zz}) - L_z^2(L_{xx} + L_{yy})}{2(L_x^2 + L_y^2 + L_z^2)^{\frac{3}{2}}} \qquad (2.12)$$

However, the way LSEC is usually computed, by directly discretising Eq. (2.12) (or the equivalent 2-D expression for $\kappa$), gives rise to a number of discontinuities: gaps appear at places where we would not expect any reduction of creaseness, because they lie at the centre of elongated objects and creaseness is a measure of medialness for elongated grey-level objects. We have checked that this happens independently of the scheme used to discretise the derivatives, and even when the gradient magnitude is far from the machine zero, that is, at pixels where Eq. (2.12) is well defined.

To avoid these discontinuities we present an alternative way of computing creaseness from the level-set extrinsic curvature. For a d-dimensional vector field $\mathbf{u} : \mathbb{R}^d \rightarrow \mathbb{R}^d$, $\mathbf{u}(\mathbf{x}) = (u_1(\mathbf{x}), \ldots, u_d(\mathbf{x}))^t$, the divergence measures the degree of parallelism of its integral lines:

$$\mathrm{div}(\mathbf{u}) = \sum_{i=1}^{d} \frac{\partial u_i}{\partial x_i} \qquad (2.13)$$

Now, if we define the normalised gradient vector field of $L : \mathbb{R}^d \rightarrow \mathbb{R}$ as

$$\bar{\mathbf{w}} = \begin{cases} \mathbf{w}/\|\mathbf{w}\| & \text{if } \|\mathbf{w}\| > 0 \\ \mathbf{0}_d & \text{if } \|\mathbf{w}\| = 0 \end{cases} \qquad (2.14)$$

it can be shown that

$$\kappa_d = -\mathrm{div}(\bar{\mathbf{w}}) \qquad (2.15)$$

For $d = 3$, Eqs. (2.12) and (2.15) are equivalent in the continuous domain.
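A direct transcription of Eqs. (2.13)-(2.15) is sketched below, under the same assumptions as the earlier examples: np.gradient stands in for the centred differences discussed further on, and the Gaussian pre-smoothing is assumed to have been applied to L already.

```python
# Sketch of Eqs. (2.13)-(2.15): creaseness as (minus) the divergence of the
# normalised gradient field, for any dimension d of the input image L.
import numpy as np

def lsec_divergence(L, eps=1e-12):
    grads = np.gradient(L)                          # d arrays: dL/dx_1, ..., dL/dx_d
    mag = np.sqrt(sum(g**2 for g in grads))
    w_bar = [np.where(mag > eps, g / np.maximum(mag, eps), 0.0)   # Eq. (2.14)
             for g in grads]
    div = sum(np.gradient(w_bar[i], axis=i) for i in range(L.ndim))  # Eq. (2.13)
    return -div                                     # Eq. (2.15): kappa_d = -div(w_bar)
```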
Figure 2.8: Comparison of the operators: in the left column, the original CT image with the creaseness images for $L_{vv}$ and $C\tilde{\kappa}_d$. The middle column zooms in on the window extracted from each image at the location shown in the CT image. It can be seen that, although the bone in the CT image is a consistent ridge, gaps appear in the $L_{vv}$ response. The cause can be seen if the window is depicted as a landscape (third column): the gaps correspond to local minima along the crease, i.e., saddle points. Our operator $C\tilde{\kappa}_d$ solves this problem.

However, and this is the key point, substituting discretised derivatives into Eqs. (2.12) and (2.15) gives rise to very different results; namely, Eq. (2.15) avoids the gaps that Eq. (2.12) produces on creases. For example, at ridge-like saddle points, such as those in figure 2.8, $\kappa_d$ not only decreases but also changes sign across the path of the expected ridge-like curve. The reason is that the neighbourhood of the saddle point is composed mainly of concave zones ($\kappa_d < 0$), whereas the sub-pixel ridge-like curve runs through convex zones ($\kappa_d > 0$) that are missed by the discretisation of Eq. (2.12).
In order to obtain the derivatives of a discrete image $L$ in a well-posed manner, we have approximated the image derivatives by finite centred differences of the image smoothed with a Gaussian of standard deviation $\sigma_D$, instead of by convolution with Gaussian derivatives. For each pixel, the derivatives of its neighbours are calculated on the fly as needed and stored in buffers for the next pixel. This approach is necessary because otherwise the algorithm would require 10 float images simultaneously. Thus, Eq. (2.15) produces similar results while consuming less time and memory.

The creaseness measure $\kappa_d$ can still be improved by pre-filtering the image gradient vector field in order to increase the degree of attraction/repulsion at ridge-like/valley-like creases, which is what $\kappa_d$ is actually measuring. This can be done by means of the structure tensor analysis ([52], [53]); a sketch in code is given after the following steps.

1. Compute the gradient vector field $\mathbf{w}$ and the structure tensor field $\mathbf{M}$:

$$\mathbf{M}(\mathbf{x}; \sigma_I) = G(\mathbf{x}; \sigma_I) * (\mathbf{w}(\mathbf{x})\,\mathbf{w}(\mathbf{x})^t) \qquad (2.16)$$

where $*$ denotes the element-wise convolution of the matrix $\mathbf{w}(\mathbf{x})\,\mathbf{w}(\mathbf{x})^t$ with the Gaussian window $G(\mathbf{x}; \sigma_I)$. The standard deviation $\sigma_I$ controls the size of the averaging window.

2. Perform the eigenvalue analysis of $\mathbf{M}$. The normalised eigenvector $\mathbf{w}'$ corresponding to the highest eigenvalue gives the predominant gradient orientation. In the structure tensor analysis, opposite directions are treated equally. Thus, to recover the direction, we put $\mathbf{w}'$ in the same quadrant (in 2-D, or octant in 3-D) as $\mathbf{w}$. Then we obtain the new vector field $\tilde{\mathbf{w}}$ and the creaseness measure $\tilde{\kappa}_d$:

$$\tilde{\mathbf{w}} = \mathrm{sign}(\mathbf{w}^t \mathbf{w}')\,\mathbf{w}' \qquad (2.17)$$

$$\tilde{\kappa}_d = -\mathrm{div}(\tilde{\mathbf{w}}) \qquad (2.18)$$

The discretisation of $\tilde{\kappa}_d$ results from approximating the partial derivatives of Eq. (2.13) by finite centred differences of the $\tilde{\mathbf{w}}$ vector field. When the smallest neighbourhood is taken, namely the 6 pixels neighbouring a pixel $(i, j, k)$, the following expression is obtained:

$$\tilde{\kappa}_d[i, j, k] = -\big(\tilde{w}_1[i+1, j, k] - \tilde{w}_1[i-1, j, k] + \tilde{w}_2[i, j+1, k] - \tilde{w}_2[i, j-1, k] + \tilde{w}_3[i, j, k+1] - \tilde{w}_3[i, j, k-1]\big) \qquad (2.19)$$

3. Compute a confidence measure $C \in [0, 1]$ to discard creaseness at isotropic areas. As similarity of the eigenvalues $(\lambda_1, \ldots, \lambda_d)$ of the structure tensor implies isotropy, we can base the computation of $C$ on a normalised measure of their differences. A suitable choice is

$$C(\mathbf{x}) = 1 - \exp\left( - \frac{\left( \sum_{i=1}^{d} \sum_{j=i+1}^{d} (\lambda_i(\mathbf{x}) - \lambda_j(\mathbf{x}))^2 \right)^2}{2c^2} \right) \qquad (2.20)$$

where the threshold $c$ is chosen experimentally. In this way, $C\tilde{\kappa}_d$ has a lower response than $\tilde{\kappa}_d$ at isotropic regions.
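The following is a hedged sketch of steps 1 and 2 and Eqs. (2.16)-(2.19) for a 3-D image. It is not the thesis code: numpy.linalg.eigh stands in for the optimised eigenvalue routine of [21] and the SVD of [80], np.gradient for the centred differences, and the buffered on-the-fly evaluation described above is not reproduced.

```python
# Sketch of the structure-tensor enhancement (Eqs. 2.16-2.19) for a 3-D image.
# sigma_i controls the Gaussian averaging window of Eq. (2.16).
import numpy as np
from scipy.ndimage import gaussian_filter

def enhanced_creaseness_3d(L, sigma_i=2.0):
    # gradient vector field w (centred differences; Gaussian pre-smoothing assumed done)
    w = np.stack(np.gradient(L), axis=-1)                   # shape (n1, n2, n3, 3)

    # Eq. (2.16): element-wise Gaussian averaging of the outer products w w^t
    M = np.empty(L.shape + (3, 3))
    for a in range(3):
        for b in range(3):
            M[..., a, b] = gaussian_filter(w[..., a] * w[..., b], sigma_i)

    # step 2: dominant orientation = eigenvector of the largest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(M)                    # eigenvalues in ascending order
    w_prime = eigvecs[..., :, -1]                           # unit vector field w'

    # Eq. (2.17): put w' in the same octant as w, then Eqs. (2.18)-(2.19)
    s = np.sign(np.sum(w * w_prime, axis=-1, keepdims=True))
    w_tilde = s * w_prime
    div = sum(np.gradient(w_tilde[..., i], axis=i) for i in range(3))
    return -div, eigvals          # eigenvalues are reused for the confidence C below
```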
Figure 2.9: From left to right: 3-D creaseness measures of the MR slice in (a), for $\sigma_D = 2.0$ mm: (b) $\kappa_d$, (c) $\tilde{\kappa}_d$ for $\sigma_I = 2.0$ mm, (d) $C\tilde{\kappa}_d$ for an experimentally chosen $c$. The gaps in (b) do not appear in (c).

We have tested several methods to compute the eigenvalues. If only the highest one is needed, the fastest method is Powell's, applied to an optimised expression from [21]. If we choose to implement step 3, the fastest is singular value decomposition as implemented in [80]. Figure 2.9 compares $\kappa_d$, $\tilde{\kappa}_d$ and $C\tilde{\kappa}_d$ on an MR volume.
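For completeness, a sketch of step 3, Eq. (2.20), is given below. It reuses the eigenvalue array returned by the previous sketch and is again an illustrative reconstruction; the threshold c must be chosen experimentally, as stated above.

```python
# Sketch of Eq. (2.20): confidence measure C from the structure-tensor eigenvalues,
# used to attenuate the creaseness response in isotropic regions.
import numpy as np

def confidence(eigvals, c):
    d = eigvals.shape[-1]
    diff2 = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            diff2 = diff2 + (eigvals[..., i] - eigvals[..., j]) ** 2
    return 1.0 - np.exp(-diff2**2 / (2.0 * c**2))   # C in [0, 1]

# Usage (hypothetical names from the previous sketch):
# kappa_tilde, eigvals = enhanced_creaseness_3d(L_smoothed, sigma_i=2.0)
# creaseness = confidence(eigvals, c) * kappa_tilde
```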