ROF model on the graph
Department of Mathematics ROF model on the graph Japhet Niyobuhungiro and Eric Setterqvist LiTH-MAT-R--2014/06--SE
Department of Mathematics, Linköping University, SE Linköping, Sweden.
LiTH-MAT-R--2014/06--SE

ROF model on the graph

Japhet Niyobuhungiro, Division of Mathematics and Applied Mathematics, Department of Mathematics, Linköping University, SE Linköping, Sweden, and Department of Mathematics, School of Pure and Applied Sciences, College of Science and Technology, University of Rwanda, P.O. Box 117, Butare, Rwanda.

Eric Setterqvist, Division of Mathematics and Applied Mathematics, Department of Mathematics, Linköping University, SE Linköping, Sweden.

Abstract. In this paper we consider an analogue, on a general finite directed and connected graph, of the Rudin-Osher-Fatemi (ROF) denoising model, well known in image processing. We consider the space BV on the graph and show that the unit ball of its dual space can be described as the image of the unit ball of the space l^∞ on the graph under a divergence operator. Based on this result, we propose a new fast algorithm to find the exact minimizer of the ROF model. Finally we prove convergence of the algorithm and illustrate its performance on some image denoising test examples.

Keywords: ROF model, Directed graph, L-functional, Image processing, Dual of BV, Regularization.

2010 MSC: 46B70, 46E35, 68U10.

1. Classical ROF model and ROF model on the graph

Suppose that we observe a noisy image f_ob ∈ L² defined on the square domain Ω = [0,1]² in R²:

f_ob = f + η,

where f ∈ BV is the original image and η ∈ L² is noise. Denoising is one of the basic problems of image processing: how to recover the image f from the noisy image f_ob? Variational methods using total variation minimization are often employed to solve this problem. The total variation regularization technique was introduced by Rudin, Osher and Fatemi in [1] and is called the ROF model.
It suggests taking as an approximation to the original image f the function f_opt,t ∈ BV which is the exact minimizer of the L_{2,1}-functional for the couple (L², BV):

L_{2,1}(t, f_ob; L², BV) = inf_{g ∈ BV} { (1/2) ||f_ob − g||²_{L²} + t ||g||_{BV} }, for some t > 0,   (1)

i.e., f_opt,t ∈ BV is such that

L_{2,1}(t, f_ob; L², BV) = (1/2) ||f_ob − f_opt,t||²_{L²} + t ||f_opt,t||_{BV}.   (2)

By BV we denote the space of functions of bounded variation defined by the seminorm

||f||_{BV(Ω)} = ∫_0^1 var_x f(x, y) dy + ∫_0^1 var_y f(x, y) dx,

Corresponding author. Email addresses: [email protected] (Japhet Niyobuhungiro), [email protected] (Eric Setterqvist).
where

var_x f(x, y) = sup_{0 ≤ x_1 < ... < x_n ≤ 1} Σ_{j=1}^{n−1} |f(x_{j+1}, y) − f(x_j, y)|

is a function of y equal to the variation of f along the horizontal axis for fixed y, and

var_y f(x, y) = sup_{0 ≤ y_1 < ... < y_n ≤ 1} Σ_{i=1}^{n−1} |f(x, y_{i+1}) − f(x, y_i)|

is a function of x equal to the variation of f along the vertical axis for fixed x. However, the actual calculation of the function f_opt,t (see (1) and (2)) is non-trivial. The standard approach is connected with discretization of the functional (1): we divide Ω into N² square cells and, instead of the space L²(Ω), consider its finite dimensional subspace S which consists of functions that are constant on each cell:

S = { f ∈ L²(Ω) : f = Σ_{i,j=1}^{N} f_ij χ_ij },   (3)

where

χ_ij(x, y) = 1 if (j−1)/N ≤ x < j/N and (i−1)/N ≤ y < i/N; 0 otherwise.   (4)

Throughout, we will consider our discretization grid as a 2D Cartesian coordinate system in computer screen space, i.e., with the origin at the top left corner and the +y axis pointing down, which is the way matrices are represented on the computer, as illustrated in Figure 1.

Figure 1: Illustration of the rectangular grid. The grid contains N² cells referred to as vertices, N(N−1) horizontal edges directed left to right and N(N−1) vertical edges directed top to bottom, for a total of 2N(N−1) edges.

As the BV seminorm of a function g ∈ S is equal to

||g||_{BV(S)} = (1/N) ( Σ_{i=1}^{N} Σ_{j=1}^{N−1} |g_{i,j+1} − g_ij| + Σ_{j=1}^{N} Σ_{i=1}^{N−1} |g_{i+1,j} − g_ij| ),
the discrete analogue of the functional (1) can be written as

L_{2,1}(t, f_ob; L², BV) = inf_{g ∈ S} { (1/(2N²)) Σ_{i,j=1}^{N} |f_ob,ij − g_ij|² + (t/N) ( Σ_{i=1}^{N} Σ_{j=1}^{N−1} |g_{i,j+1} − g_ij| + Σ_{j=1}^{N} Σ_{i=1}^{N−1} |g_{i+1,j} − g_ij| ) }.   (5)

Now we can consider the graph G = (V, E), where the set of vertices V corresponds to the cells and the set of edges E corresponds to the set of pairs of cells which have a common face (see Figure 1), i.e.

V = { v_ij ; i, j = 1, 2, ..., N },

and E = E_H ∪ E_V, where

E_H = { e^H_ij = (v_ij, v_{i,j+1}) ; i = 1, 2, ..., N and j = 1, 2, ..., N−1 }

are the N(N−1) horizontal edges, which we choose to be directed left to right, and

E_V = { e^V_ij = (v_ij, v_{i+1,j}) ; i = 1, 2, ..., N−1 and j = 1, 2, ..., N }

are the N(N−1) vertical edges, which we choose to be directed top to bottom. So we have that dim S_V = N² and dim S_E = 2N(N−1). Let us denote by S_V and S_E the sets of real-valued functions on V and E respectively. Next we consider the analogue of the gradient operator on the graph, i.e., grad : S_V → S_E, which maps a function f ∈ S_V to the function grad f ∈ S_E defined as (see Figure 2)

grad f(e^H_ij) = f(v_{i,j+1}) − f(v_ij), i = 1, 2, ..., N and j = 1, 2, ..., N−1;
grad f(e^V_ij) = f(v_{i+1,j}) − f(v_ij), i = 1, 2, ..., N−1 and j = 1, 2, ..., N.

Figure 2: The neighbors of the vertex v_ij.

In this framework, the observed image f_ob ∈ S can be considered as an element of S_V, and the functional (5) can be written as

L_{2,1}(t, f_ob; L², BV) = inf_{g ∈ S_V} { (1/2) ||f_ob − g||²_{l²(S_V)} + t ||grad g||_{l¹(S_E)} },   (6)
where

||f||²_{l²(S_V)} = (1/N²) Σ_{v ∈ V} |f(v)|²,  ||h||_{l¹(S_E)} = (1/N) Σ_{e ∈ E} |h(e)| = (1/N) ( Σ_{e ∈ E_H} |h(e)| + Σ_{e ∈ E_V} |h(e)| ).

Of course, the exact minimizer of (6) coincides with the exact minimizer of

L_{2,1}(s, f_ob; l²(S_V), BV(S_V)) = inf_{g ∈ S_V} { (1/2) ||f_ob − g||²_{l²(S_V)} + s ||grad g||_{l¹(S_E)} }, s = tN,   (7)

where the norms in (7) are now the standard (unweighted) ones defined in (9) below. This leads to the following analogue of the ROF model on a general finite directed and connected graph G = (V, E).

1.1. Formulation of the problem on a general graph

Let G = (V, E) be a finite directed and connected graph with N vertices V = {v_1, v_2, ..., v_N} and M directed edges E = {e_1, e_2, ..., e_M}, where each edge is determined by a pair of two vertices, e = (v_i, v_j) for some i, j ∈ {1, 2, ..., N}. Let S_V = { f : V → R } denote the set of real-valued functions defined on the vertices and S_E = { g : E → R } denote the set of real-valued functions defined on the edges. Then the analogue of the ROF model on the graph G can be formulated as follows.

Problem 1.1. Suppose that we know the function f_ob ∈ S_V. For given t > 0, find the exact minimizer of the functional

L_{2,1}(t, f_ob; l²(S_V), BV(S_V)) = inf_{g ∈ BV(S_V)} { (1/2) ||f_ob − g||²_{l²(S_V)} + t ||g||_{BV(S_V)} },   (8)

where

||f||_{l²(S_V)} = ( Σ_{v ∈ V} |f(v)|² )^{1/2};  ||f||_{BV(S_V)} = ||grad f||_{l¹(S_E)};  ||h||_{l¹(S_E)} = Σ_{e ∈ E} |h(e)|,   (9)

and the operator grad : S_V → S_E is defined by the formula

grad f(e) = f(v_j) − f(v_i) if e = (v_i, v_j).

In this paper we suggest a fast iterative algorithm that constructs the exact minimizer of (8). We would like to note that the case of a general graph could be particularly useful when, instead of the usually considered rectangular domain, we have some manifold; for example, a map of the Earth is an image on the sphere.

2. Description of the ball of the dual space to BV(S_V)

It was shown in [2] that the exact minimizer of the L_{2,1}-functional

L_{2,1}(t, f_ob; l², X) = inf_{g ∈ X} { (1/2) ||f_ob − g||²_{l²} + t ||g||_X }

for the couple (l², X), where the space l² is defined by the standard Euclidean norm, X is a Banach space on R^n and t is a given positive parameter, is equal to the difference between f_ob and the nearest element to f_ob in the ball of radius t > 0 of the space X.
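For concreteness, the graph gradient of Problem 1.1 can be realized as an incidence matrix. The following Python sketch (our own illustration; the tiny path graph and all numbers are assumptions, not taken from the paper) evaluates the BV seminorm ||grad f||_1 of (9) and the ROF objective of (8):

```python
import numpy as np

# A minimal sketch (our own illustration) of the objects in Problem 1.1:
# the gradient operator as an incidence matrix, the BV seminorm
# ||f||_BV = ||grad f||_1 of (9), and the ROF objective of (8).
edges = [(0, 1), (1, 2), (2, 3)]              # directed edges (tail, head)
n_v = 4
D = np.zeros((len(edges), n_v))               # grad f = D @ f
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = -1.0, 1.0              # grad f(e) = f(v_j) - f(v_i)

def bv(f):
    return np.abs(D @ f).sum()                # ||f||_BV = ||grad f||_1

def rof_objective(g, f_ob, t):
    return 0.5 * ((f_ob - g) ** 2).sum() + t * bv(g)

f_ob = np.array([0.0, 1.0, 3.0, 3.0])
print(bv(f_ob))                               # -> 3.0
print(rof_objective(f_ob, f_ob, t=0.5))       # g = f_ob pays only t*||f_ob||_BV: 1.5
```

Choosing g = f_ob makes the fidelity term vanish, so the objective reduces to t times the BV seminorm of the data; the minimizer trades this penalty against fidelity.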
Note that the proofs in [2] remain true when X is equipped with a seminorm. So to construct the exact minimizer of the L_{2,1}-functional for the couple (l²(S_V), BV(S_V)) we first need to describe the ball of radius t > 0 of the space BV*(S_V), with norm defined by

||h||_{BV*(S_V)} = sup_{||f||_{BV(S_V)} ≤ 1} ⟨h, f⟩_{S_V}, where ⟨h, f⟩_{S_V} = Σ_{v ∈ V} h(v) f(v).   (10)

Remark 1. Note that ||f||_{BV(S_V)} = 0 if and only if f = C for some constant C ∈ R. Therefore

||h||_{BV*(S_V)} = sup_{||f||_{BV(S_V)} ≤ 1} ⟨h, f⟩ is finite if Σ_{v ∈ V} h(v) = 0, and +∞ otherwise.
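Remark 1 can be seen concretely: for h with nonzero sum, testing against constant functions (which have BV seminorm 0) makes the supremum infinite. A small Python sketch (our own illustration, with made-up numbers):

```python
import numpy as np

# Sketch of Remark 1 (our own illustration): if sum_v h(v) != 0, then the
# dual norm sup{ <h, f> : ||f||_BV <= 1 } is +infinity, because every
# constant f = C satisfies ||f||_BV = 0 <= 1 while <h, f> = C * sum(h).
h = np.array([1.0, -0.25, 0.5])     # sum(h) = 1.25 != 0
for C in (1.0, 1e3, 1e6):
    f = np.full(h.size, C)          # a constant test function
    print(np.dot(h, f))             # = C * 1.25, unbounded as C grows
```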
In order to formulate the main result of this section, let us consider the divergence operator on the graph, i.e. the operator div : S_E → S_V defined by

div g(v) = Σ_{i : (v_i, v) ∈ E} g((v_i, v)) − Σ_{j : (v, v_j) ∈ E} g((v, v_j)).   (11)

We can now formulate the result.

Theorem 2.1. The unit ball of the space BV*(S_V) is equal to the image of the unit ball of the space l^∞(S_E) under the operator div, i.e.,

B_{BV*(S_V)} = div(B_{l^∞(S_E)}).

The proof of Theorem 2.1 is based on two elementary lemmas. The first one is

Lemma 1. The operator div : S_E → S_V is conjugate to the operator grad : S_V → S_E, i.e.

⟨div g, f⟩_{S_V} = ⟨g, grad f⟩_{S_E},

where ⟨f_1, f_2⟩_{S_V} = Σ_{v ∈ V} f_1(v) f_2(v) and ⟨g_1, g_2⟩_{S_E} = Σ_{e ∈ E} g_1(e) g_2(e).

Proof. The expression ⟨g, grad f⟩_{S_E} is, by definition of the operator grad, equal to

⟨grad f, g⟩_{S_E} = Σ_{e=(v_i, v_j) ∈ E} [f(v_j) − f(v_i)] g(e) = Σ_{v ∈ V} f(v) ( Σ_{i : (v_i, v) ∈ E} g((v_i, v)) − Σ_{j : (v, v_j) ∈ E} g((v, v_j)) ).

The expression ⟨div g, f⟩_{S_V} is, by definition of the operator div, equal to

⟨f, div g⟩_{S_V} = Σ_{v ∈ V} f(v) div g(v) = Σ_{v ∈ V} f(v) ( Σ_{i : (v_i, v) ∈ E} g((v_i, v)) − Σ_{j : (v, v_j) ∈ E} g((v, v_j)) ).

We conclude that ⟨div g, f⟩_{S_V} = ⟨g, grad f⟩_{S_E}.

For the statement of the second lemma we need the following definition.

Definition 2.1. Let X be a Banach space and let V be a subspace of X. The annihilator of V, denoted Ann V, is the set of bounded linear functionals that vanish on V, that is,

Ann V = { x* ∈ X* : ⟨x*, v⟩ = 0 for all v ∈ V },   (12)

where X* is the dual space of X.

Lemma 2. Let X be a Banach space with dual space X*, let V be a finite dimensional subspace of X and let Ann V be the annihilator of V. Let x_0 ∈ X. Then

inf_{v ∈ V} ||x_0 + v||_X = sup_{x* ∈ B_{X*} ∩ Ann V} ⟨x*, x_0⟩,   (13)

where B_{X*} is the unit ball of X*.
Proof. The case x_0 ∈ V is obvious. From now on we suppose x_0 ∉ V. Let us take an arbitrary x* ∈ B_{X*} ∩ Ann V and v ∈ V. Since x* ∈ Ann V (i.e., ⟨x*, v⟩ = 0 for all v ∈ V) we have

⟨x*, x_0⟩ = ⟨x*, x_0 + v⟩ ≤ ||x*||_{X*} ||x_0 + v||_X, v ∈ V.

Since x* ∈ B_{X*} (i.e., ||x*||_{X*} ≤ 1) it follows that

⟨x*, x_0⟩ ≤ ||x_0 + v||_X, v ∈ V.

Therefore, since x* ∈ B_{X*} ∩ Ann V and v ∈ V were arbitrary, we have that

inf_{v ∈ V} ||x_0 + v||_X ≥ sup_{x* ∈ B_{X*} ∩ Ann V} ⟨x*, x_0⟩.   (14)

In order to prove the reverse inequality, let us consider the space W, which is the algebraic sum of the span of x_0 and the space V:

W = span{x_0} + V = { w ∈ X : w = λx_0 + v, v ∈ V and λ ∈ R },   (15)

and take v_0 ∈ V such that inf_{v ∈ V} ||x_0 + v||_X = ||x_0 + v_0||_X. The existence of such v_0 follows from the assumption that V is a finite dimensional subspace of X. Without loss of generality we can assume that ||x_0 + v_0||_X = 1. Since W is a normed vector space by itself, it is possible to consider its dual space. Moreover V is a linear subspace of W, and since x_0 + v_0 ∈ W and x_0 + v_0 ∉ V, by the Hahn-Banach theorem there exists a bounded linear functional x_0* ∈ W* such that

⟨x_0*, v⟩ = 0 for all v ∈ V, and ⟨x_0*, x_0 + v_0⟩ = 1.   (16)

It follows that

x_0* ∈ Ann V and ⟨x_0*, x_0⟩ = 1.   (17)

Let us now investigate the action of x_0* on W. Let λ ∈ R, v ∈ V and let w = λx_0 + v be an element of W. Then we have

⟨x_0*, w⟩ = ⟨x_0*, λx_0 + v⟩ = ⟨x_0*, λx_0 + λv_0 + v − λv_0⟩ = λ⟨x_0*, x_0 + v_0⟩ + ⟨x_0*, v − λv_0⟩ = λ,   (18)

because ⟨x_0*, x_0 + v_0⟩ = 1 and ⟨x_0*, v − λv_0⟩ = 0 since x_0* ∈ Ann V and v − λv_0 ∈ V. Let us now describe the unit ball B_W of W. Suppose that w = λx_0 + v ∈ B_W where λ ≠ 0. We have that

1 ≥ ||w||_X = ||λx_0 + v||_X = |λ| ||x_0 + v/λ||_X ≥ |λ| ||x_0 + v_0||_X = |λ|.   (19)

Therefore w = λx_0 + v ∈ B_W implies that |λ| ≤ 1. The W*-norm of the functional x_0* is by definition

||x_0*||_{W*} = sup_{w ∈ B_W} ⟨x_0*, w⟩.   (20)

From (18) and (19) it follows that

||x_0*||_{W*} = sup_{w ∈ B_W} λ = sup_{|λ| ≤ 1} λ = 1.

By invoking the Hahn-Banach theorem, we can extend the functional x_0* to a functional x̃* ∈ X* such that x̃*|_W = x_0* and ||x̃*||_{X*} = ||x_0*||_{W*} = 1.   (21)

From (17) and (21) we conclude that x̃* ∈ B_{X*} ∩ Ann V.
It follows that

inf_{v ∈ V} ||x_0 + v||_X = ||x_0 + v_0||_X = ⟨x̃*, x_0 + v_0⟩ = ⟨x̃*, x_0⟩ ≤ sup_{x* ∈ B_{X*} ∩ Ann V} ⟨x*, x_0⟩.   (22)

Putting (14) and (22) together, we obtain

inf_{v ∈ V} ||x_0 + v||_X = sup_{x* ∈ B_{X*} ∩ Ann V} ⟨x*, x_0⟩.

Proof of Theorem 2.1. The operator grad has a kernel which is given by

ker(grad) = { f ∈ S_V : f = C, for some C ∈ R },   (23)

and the annihilator (or orthogonal complement) of ker(grad) is given by

Ann(ker grad) = { F ∈ S_V : Σ_{v ∈ V} F(v) = 0 }.   (24)

Since from Lemma 1, div = grad*, where grad* is the conjugate operator of grad, it is a simple result from linear algebra that

Im(div) = Ann(ker grad) and Im(grad) = Ann(ker div).   (25)

From Remark 1, note that if Σ_v h(v) ≠ 0 then ||h||_{BV*(S_V)} = +∞; to avoid this case we restrict ourselves to functions h ∈ Ann(ker grad). Then from relations (24) and (25) we conclude that BV*(S_V) = Im(div). Therefore for all h ∈ BV*(S_V) there exists at least one g ∈ S_E such that h = div g. So let h ∈ BV*(S_V) and fix g_0 ∈ S_E such that h = div g_0. Now let us consider the infimum of ||g||_{l^∞(S_E)} over all functions g in the fiber of h ∈ BV*(S_V) under the operator div:

inf_{h = div g} ||g||_{l^∞(S_E)} = inf_{u ∈ ker(div)} ||g_0 + u||_{l^∞(S_E)}.

Put U = ker(div). Then we have Ann U = Im(grad). By applying Lemma 2, we have that

inf_{h = div g} ||g||_{l^∞(S_E)} = sup_{ f̃ ∈ B_{l¹(S_E)} ∩ Im(grad) } ⟨f̃, g_0⟩_{S_E} = sup_{ ||grad f||_{l¹(S_E)} ≤ 1 } ⟨grad f, g_0⟩_{S_E}.

From Lemma 1, h = div g_0, (9) and (10) it follows that

inf_{h = div g} ||g||_{l^∞(S_E)} = sup_{ ||grad f||_{l¹(S_E)} ≤ 1 } ⟨f, div g_0⟩_{S_V} = sup_{ ||f||_{BV(S_V)} ≤ 1 } ⟨f, h⟩_{S_V} = ||h||_{BV*(S_V)}.

It means that

||h||_{BV*(S_V)} ≤ 1 if and only if inf_{h = div g} ||g||_{l^∞(S_E)} ≤ 1.   (26)

Note that the infimum is attained because ker(div) is a finite dimensional subspace of S_E. Therefore there exists an element g_h ∈ g_0 + ker(div) such that ||g_h||_{l^∞(S_E)} ≤ 1 and div g_h = h. We conclude that B_{BV*(S_V)} = div(B_{l^∞(S_E)}).
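One direction of Theorem 2.1 is easy to check numerically: every h = div g with ||g||_∞ ≤ 1 satisfies |⟨h, f⟩| ≤ ||f||_BV, since ⟨div g, f⟩ = ⟨g, grad f⟩ by Lemma 1. A Python sketch (our own check, on a small made-up graph):

```python
import numpy as np

# Sketch (our own illustration) of one direction of Theorem 2.1: if
# h = div g with ||g||_inf <= 1, then |<h, f>| <= ||f||_BV for every f,
# because <div g, f> = <g, grad f> <= ||g||_inf * ||grad f||_1 (Lemma 1).
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 0), (1, 3)]   # small directed graph (assumption)
n_v = 4
D = np.zeros((len(edges), n_v))
for k, (i, j) in enumerate(edges):
    D[k, i], D[k, j] = -1.0, 1.0           # grad f = D @ f

g = rng.uniform(-1.0, 1.0, len(edges))     # a point of the unit ball of l_inf(S_E)
h = D.T @ g                                # h = div g lies in B_{BV*(S_V)}
for _ in range(100):
    f = rng.normal(size=n_v)
    assert abs(h @ f) <= np.abs(D @ f).sum() + 1e-12
print("duality bound holds")
```

Note also that div g always sums to zero over the vertices, matching the restriction to Ann(ker grad) in the proof.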
2.1. Corollary for the discrete ROF model

As a corollary of Theorem 2.1 we have the following result for the space S (see expression (3)).

Corollary 1. Let h ∈ S. Then ||h||_{BV*(S)} ≤ 1 if and only if h can be decomposed as h = h_1 + h_2, with h_1, h_2 ∈ S, such that

sup_{0 ≤ x ≤ 1} |∫_0^x h_1(s, y) ds| ≤ 1 and ∫_0^1 h_1(s, y) ds = 0, for all y ∈ [0, 1];

and

sup_{0 ≤ y ≤ 1} |∫_0^y h_2(x, t) dt| ≤ 1 and ∫_0^1 h_2(x, t) dt = 0, for all x ∈ [0, 1].

Proof. Let us take h ∈ S such that ||h||_{BV*(S)} ≤ 1. By the interpretation of the space S on the graph, we have from Theorem 2.1 that ||h||_{BV*(S)} ≤ 1 if and only if h = div g for some g ∈ S_E such that ||g||_{l^∞(S_E)} ≤ N, where

h(v_ij) = div g(v_ij) = [g(e^H_{i,j−1}) − g(e^H_ij)] + [g(e^V_{i−1,j}) − g(e^V_ij)], i, j = 1, ..., N,

with the convention g(e^H_{i,0}) = g(e^H_{i,N}) = g(e^V_{0,j}) = g(e^V_{N,j}) = 0. It follows that

h = h_1 + h_2,

where the functions h_1, h_2 ∈ S_V are defined as follows:

h_1(v_ij) = g(e^H_{i,j−1}) − g(e^H_ij) and h_2(v_ij) = g(e^V_{i−1,j}) − g(e^V_ij).   (27)

From (27) we deduce that

g(e^H_{i1}) = −h_1(v_i1);
g(e^H_{i2}) = −(h_1(v_i1) + h_1(v_i2));
...
g(e^H_{i,N−1}) = −(h_1(v_i1) + h_1(v_i2) + ... + h_1(v_{i,N−1}));
0 = g(e^H_{iN}) = −(h_1(v_i1) + h_1(v_i2) + ... + h_1(v_{i,N−1}) + h_1(v_iN)), i = 1, 2, ..., N.   (28)

From the first N−1 equations of (28), we have that

sup_{1 ≤ k ≤ N−1} |g(e^H_{ik})| = sup_{1 ≤ k ≤ N−1} |Σ_{j=1}^{k} h_1(v_ij)|, i = 1, 2, ..., N.   (29)

From the last N equations of (28), we deduce that

0 = g(e^H_{iN}) = −Σ_{j=1}^{N} h_1(v_ij), i = 1, 2, ..., N.   (30)

From the correspondence between S_V and the space S, the expression (29) implies that

sup_{1 ≤ k ≤ N−1} |g(e^H_{ik})| = sup_{1 ≤ k ≤ N−1} N |∫_0^{k/N} h_1(s, y) ds| = N sup_{0 ≤ x ≤ 1} |∫_0^x h_1(s, y) ds|, y ∈ [0, 1].   (31)

Since ||g||_{l^∞(S_E)} ≤ N, it follows from (31) that

N sup_{0 ≤ x ≤ 1} |∫_0^x h_1(s, y) ds| ≤ N, y ∈ [0, 1].
Therefore

sup_{0 ≤ x ≤ 1} |∫_0^x h_1(s, y) ds| ≤ 1, y ∈ [0, 1].

From (30), we have that

Σ_{j=1}^{N} h_1(v_ij) = N ∫_0^1 h_1(s, y) ds = 0, y ∈ [0, 1].

Therefore

∫_0^1 h_1(s, y) ds = 0, y ∈ [0, 1].

We conclude that

sup_{0 ≤ x ≤ 1} |∫_0^x h_1(s, y) ds| ≤ 1 and ∫_0^1 h_1(s, y) ds = 0, for all y ∈ [0, 1].

Similar arguments for the function h_2 and the function g acting on the edges e^V_ij yield that

sup_{0 ≤ y ≤ 1} |∫_0^y h_2(x, t) dt| ≤ 1 and ∫_0^1 h_2(x, t) dt = 0, for all x ∈ [0, 1].

3. Algorithm for construction of the exact minimizer

Below we present a fast iterative algorithm for the actual construction of the element f_opt,t. This algorithm is a specialized version of an algorithm constructed in [3] for problems concerning a weighted divergence operator on a general finite directed graph. As was mentioned in the previous section, we know from [2] that the exact minimizer f_opt,t of the L_{2,1}-functional

L_{2,1}(t, f_ob; l²(S_V), BV(S_V)) = inf_{g ∈ BV(S_V)} { (1/2) ||f_ob − g||²_{l²(S_V)} + t ||g||_{BV(S_V)} }

is given by

f_opt,t = f_ob − h_0,   (32)

where h_0 is the nearest element to f_ob, in the metric of l²(S_V), in the ball tB_{BV*(S_V)}, i.e.

inf_{h ∈ tB_{BV*(S_V)}} ||f_ob − h||_{l²(S_V)} = ||f_ob − h_0||_{l²(S_V)}.   (33)

Next, from Theorem 2.1 we know that tB_{BV*(S_V)} = t div(B_{l^∞(S_E)}) for t > 0. The proposed algorithm constructs h_0 through a sequence of elements g_n ∈ tB_{l^∞(S_E)} such that div g_n → h_0 as n → +∞ in the metric of l²(S_V).
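The construction of h_0 admits a compact implementation. The following Python sketch (our own code, anticipating the edge-sweep operator T detailed in Section 3.1; the path graph, the data and the value of t are assumptions) updates one edge value at a time, keeps div g current, and returns f_opt,t = f_ob − div g as in (32):

```python
import numpy as np

def rof_graph(f_ob, edges, t, n_sweeps=200):
    """Sketch (our own code) of the edge-sweep scheme of Section 3.1:
    each edge value is reset to the point of [-t, t] nearest to the
    locally optimal value Kg(e_k); at a fixed point div g equals h_0,
    and f_ob - div g is the minimizer of (8)."""
    g = np.zeros(len(edges))
    divg = np.zeros(len(f_ob))                 # div g, kept up to date
    for _ in range(n_sweeps):
        changed = False
        for k, (i, j) in enumerate(edges):     # edge e_k = (v_i, v_j)
            d_i = divg[i] + g[k]               # divergence without e_k at the tail
            d_j = divg[j] - g[k]               # divergence without e_k at the head
            K = ((f_ob[j] - d_j) - (f_ob[i] - d_i)) / 2.0
            new = min(max(K, -t), t)           # clip to [-t, t]
            if new != g[k]:
                divg[i] += g[k] - new
                divg[j] += new - g[k]
                g[k] = new
                changed = True
        if not changed:                        # Tg = g: fixed point reached
            break
    return f_ob - divg

# 1-D example: a noisy step signal on a path graph (all numbers made up)
edges = [(k, k + 1) for k in range(7)]
f_ob = np.array([0.1, -0.1, 0.0, 0.1, 1.0, 0.9, 1.1, 1.0])
f_opt = rof_graph(f_ob, edges, t=0.1)
print(f_opt)
```

On a path graph this reduces to one dimensional total variation denoising; the mean of f_opt equals the mean of f_ob because div g always sums to zero over the vertices.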
3.1. Algorithm

The algorithm consists of several steps, outlined in this section. Let f_ob ∈ S_V on G = (V = {v_1, ..., v_N}, E = {e_1, ..., e_M}), a regularization parameter t and a maximum number of iterations niter be given. Set e_k = (v_i, v_j) ∈ E, k = 1, 2, ..., M, for some i, j ∈ {1, 2, ..., N}. Define the operator T as follows:

T = T_M ∘ T_{M−1} ∘ ... ∘ T_2 ∘ T_1,

where for k = 1, 2, ..., M, the operator T_k : tB_{l^∞(S_E)} → tB_{l^∞(S_E)} is defined by T_k g(e) = g(e) if e ≠ e_k, and

T_k g(e_k) = { Kg(e_k), if Kg(e_k) ∈ [−t, +t]; −t, if Kg(e_k) < −t; +t, if Kg(e_k) > +t },

where

Kg(e_k) = ( [f_ob(v_j) − div_{\e_k} g(v_j)] − [f_ob(v_i) − div_{\e_k} g(v_i)] ) / 2

and

div_{\e_k} g(v_i) = div g(v_i) + g(e_k); div_{\e_k} g(v_j) = div g(v_j) − g(e_k); div_{\e_k} g(v_l) = div g(v_l), l ≠ i, j.

With these notions at our disposal, we now outline the steps of the algorithm.

Step 1. Take g = 0, or choose any g ∈ tB_{l^∞(S_E)}.

Step 2. Calculate g' = Tg, i.e., calculate Tg(e_k) for k = 1, 2, ..., M. If g' = g then take h_0 = div g', otherwise go to Step 3.

Step 3. Put g = g' and go to Step 2.

We continue this process, applying the operator T to the new element g ∈ tB_{l^∞(S_E)} and generating the sequence of elements g_0, g_1 = Tg_0, g_2 = Tg_1, ..., g_n = Tg_{n−1}, with g_n ∈ tB_{l^∞(S_E)}, n = 0, 1, 2, ..., until the maximum number of iterations niter is reached. In the next section we will show that div g_n → h_0 as n → +∞ in the metric of l²(S_V). To prove the convergence of the proposed algorithm we need the following proposition.

Proposition 3.1. Let h_0 be the minimizer defined by (33). The operator T is continuous and satisfies the following two conditions:

(a) for any g ∈ tB_{l^∞(S_E)}, div g = h_0 if and only if Tg = g;

(b) for any g ∈ tB_{l^∞(S_E)}, if div g ≠ h_0 then

||f_ob − div Tg||_{l²(S_V)} < ||f_ob − div g||_{l²(S_V)}.   (34)

Proof. It is clear from the definition that small changes of g ∈ tB_{l^∞(S_E)} lead to small changes of each operator T_k, and therefore the operator T is continuous as a composition of continuous operators. We now prove (a). Assume first that g ∈ tB_{l^∞(S_E)} is such that div g = h_0. Let e_k = (v_i, v_j) ∈ E.
We note that g(e_k) appears only in the following two terms of ||f_ob − div g||²_{l²(S_V)}:

|f_ob(v_j) − div g(v_j)|² + |f_ob(v_i) − div g(v_i)|² =   (35)
[f_ob(v_j) − div_{\e_k} g(v_j) − g(e_k)]² + [f_ob(v_i) − div_{\e_k} g(v_i) + g(e_k)]².
Since div g = h_0, g(e_k) in particular must minimize (35) over the interval [−t, t]. We note that, by Jensen's inequality,

ω(g(e_k)) = [f_ob(v_j) − div_{\e_k} g(v_j) − g(e_k)]² + [f_ob(v_i) − div_{\e_k} g(v_i) + g(e_k)]²
 ≥ (1/2) ( [f_ob(v_j) − div_{\e_k} g(v_j)] + [f_ob(v_i) − div_{\e_k} g(v_i)] )²,

with equality if and only if

f_ob(v_j) − div_{\e_k} g(v_j) − g(e_k) = f_ob(v_i) − div_{\e_k} g(v_i) + g(e_k),

i.e. when

g(e_k) = ( [f_ob(v_j) − div_{\e_k} g(v_j)] − [f_ob(v_i) − div_{\e_k} g(v_i)] ) / 2 = Kg(e_k).

Moreover, ω(x) is strictly convex, and therefore strictly decreasing for x < Kg(e_k) and strictly increasing for x > Kg(e_k). So the minimal value of ω(x) on the interval [−t, t] is attained only at (i) the point Kg(e_k) if Kg(e_k) ∈ [−t, t], (ii) the point −t if Kg(e_k) < −t, and (iii) the point t if Kg(e_k) > t. The assumption div g = h_0 then implies that g(e_k) must be the nearest point in the interval [−t, t] to Kg(e_k), i.e. T_k g(e_k) = g(e_k). Since e_k ∈ E was arbitrary, it follows that T_k g(e_k) = g(e_k) for k = 1, ..., M, which implies T_k g = g, k = 1, ..., M, and therefore Tg = g.

We now show the other direction of (a). Let us assume that g ∈ tB_{l^∞(S_E)} and Tg = g. Then for any edge e ∈ E, g(e) coincides with the point of the interval [−t, t] which is nearest to Kg(e). As ||f_ob − div g||²_{l²(S_V)} is a convex function of g on tB_{l^∞(S_E)}, to show this direction it is enough to show that g minimizes ||f_ob − div g||_{l²(S_V)} locally, i.e. it is enough to show that for some small ε > 0 we have

||f_ob − div g||_{l²(S_V)} = inf_{f ∈ R_ε} ||f_ob − div f||_{l²(S_V)},

where R_ε is the tubular set given by

R_ε = { f ∈ tB_{l^∞(S_E)} : ||g − f||_{l^∞(S_E)} ≤ ε }.

Note that for any f ∈ R_ε and e ∈ E we have

f(e) ∈ [−t, t] ∩ [g(e) − ε, g(e) + ε].

The set R_ε is a closed convex subset of S_E, therefore there exists a function f_ε ∈ R_ε such that

||f_ob − div f_ε||_{l²(S_V)} = inf_{f ∈ R_ε} ||f_ob − div f||_{l²(S_V)},

so we need to prove that

||f_ob − div g||_{l²(S_V)} = ||f_ob − div f_ε||_{l²(S_V)}.
We first note that it follows from the necessity direction proved above that for any edge e ∈ E, f_ε(e) will coincide with the point of the interval [−t, t] ∩ [g(e) − ε, g(e) + ε] which is nearest to Kf_ε(e). Let us now decompose the set of edges E into two parts; the first part, which we denote by Ω_g, consists of the edges for which Kg(e) does not belong to the interval [−t, t], i.e.

Ω_g = { e ∈ E : Kg(e) ∉ [−t, t] }.

As g(e) is the nearest point in the interval [−t, t] to Kg(e), we have

g(e) = { −t, if Kg(e) < −t; +t, if Kg(e) > +t } for edges e ∈ Ω_g.
If the number ε > 0 is small enough, it follows from ||g − f_ε||_{l^∞(S_E)} ≤ ε that on e ∈ Ω_g where we have Kg(e) < −t we will also have Kf_ε(e) < −t and therefore f_ε(e) = −t = g(e). Analogously, if ε > 0 is small enough, on e ∈ Ω_g where Kg(e) > t we will have Kf_ε(e) > t and therefore f_ε(e) = t = g(e). So we have proved that

f_ε(e) = g(e) for all e ∈ Ω_g.   (36)

The next step is to consider the graph G' = (V, E \ Ω_g), i.e. the graph G with the edges in Ω_g removed. G' is the union of several connected components (V_k, E_k), k = 1, ..., l; we have V_1 ∪ ... ∪ V_l = V and E_1 ∪ ... ∪ E_l = E \ Ω_g. Note that some of the graphs (V_k, E_k) can consist of just one single vertex; for these graphs there is nothing to prove, as E_k = ∅. Let us now consider a subgraph (V_k, E_k) where E_k ≠ ∅. On each e ∈ E_k we have Kg(e) ∈ [−t, t] and therefore g(e) = Kg(e), i.e. if e = (v_i, v_j) then

g(e) = Kg(e) = ( [f_ob(v_j) − div_{\e} g(v_j)] − [f_ob(v_i) − div_{\e} g(v_i)] ) / 2,

or equivalently

f_ob(v_j) − div g(v_j) = f_ob(v_i) − div g(v_i).

Here it is important to note that the operators K, div and div_{\e} are considered in the original setting of G = (V, E). Therefore, for all v ∈ V_k the values of

f_ob(v) − div g(v)

are equal. So we have

Σ_{v ∈ V_k} [f_ob(v) − div g(v)]² = (1/|V_k|) ( Σ_{v ∈ V_k} [f_ob(v) − div g(v)] )².   (37)

For f_ε we can with Jensen's inequality derive the corresponding inequality

Σ_{v ∈ V_k} [f_ob(v) − div f_ε(v)]² ≥ (1/|V_k|) ( Σ_{v ∈ V_k} [f_ob(v) − div f_ε(v)] )².   (38)

Now, note that the flow on edges in E_k cancels in the sums

Σ_{v ∈ V_k} [f_ob(v) − div g(v)] and Σ_{v ∈ V_k} [f_ob(v) − div f_ε(v)].   (39)

Therefore, only flows on edges in Ω_g remain in these sums. It then follows from (36) that

Σ_{v ∈ V_k} [f_ob(v) − div f_ε(v)] = Σ_{v ∈ V_k} [f_ob(v) − div g(v)].

Therefore, taking into account (37) and (38), we obtain

Σ_{v ∈ V_k} [f_ob(v) − div f_ε(v)]² ≥ Σ_{v ∈ V_k} [f_ob(v) − div g(v)]².

Summing over all V_k then gives

||f_ob − div f_ε||_{l²(S_V)} ≥ ||f_ob − div g||_{l²(S_V)}.

Hence, from the definition of f_ε, we must have

||f_ob − div f_ε||_{l²(S_V)} = ||f_ob − div g||_{l²(S_V)},

and (a) is therefore established.
To prove (b), note that for g ∈ tB_{l^∞(S_E)} the operators T_k, k = 1, ..., M, by definition (see also the necessity part of (a)) satisfy

||f_ob − div T_k g||_{l²(S_V)} ≤ ||f_ob − div g||_{l²(S_V)},

with equality if and only if T_k g(e_k) = g(e_k). This implies for the operator T = T_M ∘ T_{M−1} ∘ ... ∘ T_1 that

||f_ob − div Tg||_{l²(S_V)} ≤ ||f_ob − div g||_{l²(S_V)},

with equality if and only if Tg = g, which in turn by condition (a) is equivalent to div g = h_0. Hence for any g ∈ tB_{l^∞(S_E)}, if div g ≠ h_0 then

||f_ob − div Tg||_{l²(S_V)} < ||f_ob − div g||_{l²(S_V)},

which establishes (b) and completes the proof.

3.2. Convergence of the algorithm

With Proposition 3.1 at hand, we can now prove convergence of the algorithm.

Theorem 3.1. Let h_0 be the minimizer defined by (33), let g ∈ tB_{l^∞(S_E)} and let T be the operator constructed in Section 3.1. Then

div T^n g → h_0 as n → +∞ in the metric of l²(S_V).   (40)

Proof. From Proposition 3.1, T : tB_{l^∞(S_E)} → tB_{l^∞(S_E)} is continuous and satisfies the conditions (a) and (b). Then T^n g ∈ tB_{l^∞(S_E)} for all g ∈ tB_{l^∞(S_E)} and n = 0, 1, 2, .... Since T satisfies conditions (a) and (b), the sequence of numbers (||f_ob − div T^n g||_{l²(S_V)})_n is monotonically decreasing and bounded below by inf_{g ∈ tB_{l^∞(S_E)}} ||f_ob − div g||_{l²(S_V)}; therefore it converges. Let us now consider the sequence (T^n g)_n in tB_{l^∞(S_E)}. Since tB_{l^∞(S_E)} is a compact set, (T^n g)_n contains a subsequence convergent in tB_{l^∞(S_E)}, say (T^{n_k} g)_k:

lim_{k→∞} T^{n_k} g = g* ∈ tB_{l^∞(S_E)}.   (41)

Since T, div and ||·||_{l²(S_V)} are continuous, we have

||f_ob − div Tg*||_{l²(S_V)} = ||f_ob − div T( lim_k T^{n_k} g )||_{l²(S_V)} = lim_k ||f_ob − div T^{n_k + 1} g||_{l²(S_V)}.

Since the whole sequence (||f_ob − div T^n g||_{l²(S_V)})_n converges, the subsequences indexed by n_k + 1 and by n_k have the same limit, so by continuity of div and ||·||_{l²(S_V)},

||f_ob − div Tg*||_{l²(S_V)} = lim_k ||f_ob − div T^{n_k} g||_{l²(S_V)} = ||f_ob − div g*||_{l²(S_V)}.

Therefore by Proposition 3.1(b) we conclude that div g* = h_0.
Now, since the subsequence (||f_ob − div T^{n_k} g||_{l²(S_V)})_k must converge to the same limit as (||f_ob − div T^n g||_{l²(S_V)})_n, we conclude that

lim_n ||f_ob − div T^n g||_{l²(S_V)} = ||f_ob − div g*||_{l²(S_V)} = ||f_ob − h_0||_{l²(S_V)}.
Therefore, as h_0 is the unique nearest element to f_ob in tB_{BV*(S_V)}, we conclude that div T^n g → div g* = h_0 as n → +∞ in the norm of l²(S_V).

4. Tikhonov functional

In this section we consider the Tikhonov functional, in order to use it for comparison purposes in the next section. Let us consider the Tikhonov functional for the couple of Sobolev spaces (L²(Ω), Ẇ^{1,2}(Ω)) defined on the square domain Ω = [0,1]² in R². It is given by

L_{2,2}(t, f_ob; L²(Ω), Ẇ^{1,2}(Ω)) = inf_{g ∈ Ẇ^{1,2}(Ω)} { (1/2) ||f_ob − g||²_{L²(Ω)} + t ||g||²_{Ẇ^{1,2}(Ω)} },   (43)

where

||f||_{Ẇ^{1,2}(Ω)} = || |∇f| ||_{L²(Ω)} = ( ||∂f/∂x_1||²_{L²} + ||∂f/∂x_2||²_{L²} )^{1/2}   (44)

and

||f||_{L²(Ω)} = ( ∫_0^1 ∫_0^1 |f(x_1, x_2)|² dx_1 dx_2 )^{1/2}.   (45)

We need to find the element f_opt,t ∈ Ẇ^{1,2}(Ω) which is the exact minimizer of (43), that is, f_opt,t ∈ Ẇ^{1,2}(Ω) such that

L_{2,2}(t, f_ob; L²(Ω), Ẇ^{1,2}(Ω)) = (1/2) ||f_ob − f_opt,t||²_{L²(Ω)} + t ||f_opt,t||²_{Ẇ^{1,2}(Ω)}.   (46)

The exact minimizer f_opt,t ∈ Ẇ^{1,2} of (43) can be found directly by solving the corresponding optimality condition. However, functions from S do not have the derivative (44), so we need to look for an approximating smooth function for which (44) exists. We consider a finite dimensional subspace of Ẇ^{1,2}(Ω) which consists of functions that can be represented as trigonometric polynomials of degree N. To this end, we carry out the processing in the Fourier transform domain, where the image is represented as a weighted sum of complex exponentials; afterwards, we apply the inverse Fourier transform to return to the spatial domain. We consider

f_ob(n_1, n_2) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} f̂_ob(k, l) e^{i2π(kn_1 + ln_2)/N}, n_1, n_2 = 0, 1, ..., N−1,   (47)

and

g(n_1, n_2) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} ĝ(k, l) e^{i2π(kn_1 + ln_2)/N}, n_1, n_2 = 0, 1, ..., N−1,   (48)

where

f̂_ob(k, l) = (1/N²) Σ_{n_1=0}^{N−1} Σ_{n_2=0}^{N−1} f_ob(n_1, n_2) e^{−i2π(kn_1 + ln_2)/N}, k, l = 0, 1, ..., N−1,   (49)

and

ĝ(k, l) = (1/N²) Σ_{n_1=0}^{N−1} Σ_{n_2=0}^{N−1} g(n_1, n_2) e^{−i2π(kn_1 + ln_2)/N}, k, l = 0, 1, ..., N−1.   (50)
Equations (49)-(50) correspond to the discrete Fourier transforms of the images defined over the discrete finite 2D grid of size N × N, and equations (47)-(48) correspond to their inverses. We use N samples in both directions, in both the spatial and frequency domains. Note that the image is periodized along both dimensions with period N. In this formulation, the functional (43) becomes

inf_{ĝ(k,l)} { (1/2) Σ_{k,l=0}^{N−1} |f̂_ob(k, l) − ĝ(k, l)|² + t Σ_{k,l=0}^{N−1} ( |iπk ĝ(k, l)|² + |iπl ĝ(k, l)|² ) }.

So we need to minimize each term inside the sum, i.e.,

(1/2) |f̂_ob(k, l) − ĝ(k, l)|² + t π² (k² + l²) |ĝ(k, l)|², k, l = 0, 1, ..., N−1.   (51)

We will represent the complex numbers f̂_ob(k, l) and ĝ(k, l) in their polar form, i.e.,

f̂_ob(k, l) = r_{k,l} e^{iφ_{k,l}} and ĝ(k, l) = r'_{k,l} e^{iφ'_{k,l}}.   (52)

Note that r_{k,l} and φ_{k,l}, k, l = 0, 1, ..., N−1, are known because f_ob is given; we only need to solve for the r'_{k,l} (the optimal phase is φ'_{k,l} = φ_{k,l}). Putting (52) into (51), we need to solve

inf_{r'_{k,l}} { (1/2) (r_{k,l} − r'_{k,l})² + π² t (k² + l²) r'²_{k,l} }, k, l = 0, 1, ..., N−1.

Taking the derivative with respect to r'_{k,l}, we see that the infimum is attained at r'_{k,l} which satisfies

r'_{k,l} − r_{k,l} + 2π² t (k² + l²) r'_{k,l} = 0, k, l = 0, 1, ..., N−1,

so we have

r'_{k,l} = r_{k,l} / (1 + 2π² t (k² + l²)), k, l = 0, 1, ..., N−1.

Therefore the exact minimizer f_opt,t is given by the formula

f_opt,t(n_1, n_2) = Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} f̂_opt,t(k, l) e^{i2π(kn_1 + ln_2)/N}, n_1, n_2 = 0, 1, ..., N−1,   (53)

where

f̂_opt,t(k, l) = ( r_{k,l} / (1 + 2π² t (k² + l²)) ) e^{iφ_{k,l}}.

The Matlab routines fft and ifft are employed for fast computation of (53).

5. Experimental results

In this section we will illustrate on some test images the performance of the ROF model using our algorithm, and we will compare with the result obtained by using the Tikhonov functional described in the previous section. All the observed test images have width 512 and height 512 pixels, and the corresponding original image is provided for comparison purposes.
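The Fourier-domain procedure above can be sketched in a few lines of Python (our own translation of the Matlab routine, not the authors' code; the symmetric frequency ordering via fftfreq, the synthetic test image and the value of t are our assumptions):

```python
import numpy as np

def tikhonov_fft(f_ob, t):
    """Sketch (our own code) of the Tikhonov minimizer (53): every DFT
    coefficient is shrunk by the factor 1/(1 + 2*pi^2*t*(k^2 + l^2))
    derived above; frequencies are taken symmetrically around zero."""
    n = f_ob.shape[0]
    F = np.fft.fft2(f_ob) / n**2                 # coefficients as in (49)
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer frequencies
    K2 = k[:, None] ** 2 + k[None, :] ** 2       # k^2 + l^2 on the grid
    F_opt = F / (1.0 + 2.0 * np.pi**2 * t * K2)
    return np.real(np.fft.ifft2(F_opt * n**2))   # back to space, as in (53)

rng = np.random.default_rng(1)
f = np.zeros((64, 64)); f[16:48, 16:48] = 1.0    # synthetic square image
f_ob = f + 0.2 * rng.normal(size=f.shape)        # Gaussian noise, std 0.2
f_rec = tikhonov_fft(f_ob, t=5e-5)               # smoothed reconstruction
```

The zero-frequency coefficient is untouched, so the mean of the image is preserved; high frequencies (noise and, unavoidably, sharp edges) are attenuated the most, which is exactly the behaviour compared against ROF in the figures below.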
In view of our analogue of the ROF model on the graph (see (7)), note that the regularization parameter we use for the ROF model is s = tN, where N = 512 and t is the regularization parameter appearing in the original model (see (1)). The original image used in Figures 3-4 is an 8 bit gray scale image with intensity values ranging from 0 (black) to 255 (white). The original image used in Figures 5-6 is an artificial image with 4 parts of intensity values 18, 1, 9 and 5. The original image used in Figures 7-8 is an artificial binary image of intensity values 15 and 1. In all the cases the noisy image is the corrupted version of the original image, where Gaussian additive noise of standard deviation ε, for different values of ε, is added to the original image. That is, if we denote the original image by f, then the observed noisy image f_ob is defined by

f_ob = f + ε · randn(512, 512),

where randn(N, N) is a function returning an N by N matrix containing pseudorandom values drawn from the standard normal distribution, as built into Matlab. In Figures 4, 6 and 8, we have highlighted in white the parts of the reconstructed image f_opt,t where |f_{i,j} − (f_opt,t)_{i,j}| ≥ α, i, j = 1, 2, ..., N, for different values of α in both reconstructions.

Figure 3: Lenna; the original is an image with intensity values ranging from 0 to 255. Top left: Original image f. Top right: Noisy image f_ob = f + ε·randn(512,512), i.e. image with Gaussian additive noise of standard deviation ε = 3.6. Bottom left: Tikhonov reconstruction using t = .45. Bottom right: ROF reconstruction using s =
Figure 4: Top: Residual image, i.e., the difference f_ob − f_opt,t between the observed image and the reconstructed image. Top left: Residual from Tikhonov reconstruction. Top right: Residual from ROF reconstruction. Bottom: The white shows, matrix componentwise, the parts of the reconstructed image f_opt,t where |f_{i,j} − (f_opt,t)_{i,j}| ≥ 35, i, j = 1, 2, ..., N. Left: in Tikhonov reconstruction. Right: in ROF reconstruction.
Figure 5: Geometric features with stairs; the original is an image with 4 regions of intensity values 18, 1, 9 and 5. Top left: Original image f. Top right: Noisy image f_ob = f + ε·randn(512,512), i.e. image with Gaussian additive noise of standard deviation ε = 3.5. Bottom left: Tikhonov reconstruction using t = 1.59. Bottom right: ROF reconstruction using s =
Figure 6: Geometric features with stairs. Top: Residual image, i.e., the difference f_ob − f_opt,t between the observed image and the reconstructed image. Top left: Residual from Tikhonov reconstruction. Top right: Residual from ROF reconstruction. Bottom: The white shows, matrix componentwise, the parts of the reconstructed image f_opt,t where |f_{i,j} − (f_opt,t)_{i,j}| ≥ α, i, j = 1, 2, ..., N. Left: in Tikhonov reconstruction. Right: in ROF reconstruction.
Figure 7: Geometric features with a cusp; the original is an image with two regions of intensity values 15 and 1. Top left: Original image f. Top right: Noisy image f_ob = f + ε·randn(512,512), i.e. image with Gaussian additive noise of standard deviation ε = . Bottom left: Tikhonov reconstruction using t = . Bottom right: ROF reconstruction using s = 36.5.
Figure 8: Geometric features with a cusp. Top: Residual image, i.e., the difference f_ob − f_opt,t between the observed image and the reconstructed image. Top left: Residual from Tikhonov reconstruction. Top right: Residual from ROF reconstruction. Bottom: The white shows, matrix componentwise, the parts of the reconstructed image f_opt,t where |f_{i,j} − (f_opt,t)_{i,j}| ≥ α, i, j = 1, 2, ..., N. Left: in Tikhonov reconstruction. Right: in ROF reconstruction.
6. Final remarks and discussion

Since its appearance in 1992, the ROF model has enjoyed great popularity for its efficiency in regularizing images without smoothing the boundaries, and it has since been applied to a multitude of other imaging problems (see for example the book [4]). Some of the earlier works on minimizing the total variation based on a dual formulation include [5], where the method presented is based on removal of some of the singularity caused by the non-differentiability of the quantity |grad f| appearing in the regularization term. In 2004, Chambolle (see [6]) provided an algorithm related to [5] for minimizing the total variation of an image and proved its convergence.

In the last few years, models decomposing an image into a piecewise smooth component and an oscillating component, usually referred to as cartoon and texture (or texture + noise) respectively, have received interest in the image processing community. For example, f_opt,t ∈ BV satisfying (2) gives the decomposition

f_ob = (f_ob − f_opt,t) + f_opt,t

of f_ob into the piecewise smooth component f_opt,t ∈ BV and the component f_ob − f_opt,t ∈ L², which contains textures and noise. The original theoretical model for such an image decomposition was introduced by Yves Meyer in [7], using the total variation to model the piecewise smooth component and an appropriate dual space named G to model the oscillating component. G is the Banach space composed of the distributions f which can be written as

f = ∂_1 g_1 + ∂_2 g_2 = div g, with g_1 and g_2 in L^∞(Ω),

with the following norm:

||f||_G = inf { ||g||_{L^∞(Ω; R²)} : f = div g }, ||g||_{L^∞(Ω; R²)} = ess sup_{x ∈ Ω} ( |g_1(x)|² + |g_2(x)|² )^{1/2}.
Some of the best-known works proposed in the literature for numerically solving Meyer's model or its variants include, for instance, the work of [8], who proposed a model that splits the image into three components: a geometrical component modeled by the total variation, a texture component modeled by a negative Sobolev norm, and a noise component modeled by a negative Besov norm; and the works [6, 5, 9, 10], which proposed efficient projection algorithms to minimize the total variation. In 2006, [11] and [12] adapted Chambolle's algorithm to cases where an operator, such as a convolution kernel, is present. Recently, [13] showed that Bregman iteration is a very efficient and fast way to solve TV problems, among other L¹-regularized optimization problems, and proposed a split Bregman method which they applied to the Rudin-Osher-Fatemi functional for image denoising; [14] designed an algorithm using split Bregman iterations and the duality used by Chambolle to find the minimizer of a functional based on Meyer's G norm. Other works based on Meyer's G norm include, for example, [15] and [16].

Acknowledgement. The authors are very grateful to Professor Natan Kruglyak for his invaluable suggestions and discussions that made a considerable improvement to this work.

References

[1] L. I. Rudin, S. Osher, E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, North-Holland, 1992.
[2] N. Kruglyak, J. Niyobuhungiro, Characterization of Optimal Decompositions in Real Interpolation, submitted.
[3] N. Kruglyak, E. Setterqvist, Weighted Divergence on the Graph.
[4] T. F. Chan, J. Shen, Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods, SIAM, 2005.
[5] T. F. Chan, G. H. Golub, P. Mulet, A nonlinear primal-dual method for total variation based image restoration, SIAM J. Sci. Comput.
[6] A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, 2004.
[7] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, University Lecture Series, Vol. 22, AMS, Providence, 2001.
[8] J. F. Aujol, A. Chambolle, Dual norms and image decomposition, International Journal of Computer Vision.
[9] J. F. Aujol, G. Aubert, L. Blanc-Féraud, A. Chambolle, Image decomposition into a bounded variation component and an oscillating component, Journal of Mathematical Imaging and Vision.
[10] A. Chambolle, J. Darbon, Algorithms for total variation minimization, Mathematical model for multichannel image processing, Beijing.
[11] J. Gilles, Noisy image decomposition: a new structure, texture and noise model based on local adaptivity.
[12] J. F. Aujol, G. Gilboa, T. Chan, S. Osher, Structure-texture image decomposition: modeling, algorithms, and parameter selection, International Journal of Computer Vision, 2006.
[13] T. Goldstein, S. Osher, The split Bregman method for L¹ regularized problems, SIAM J. Imaging Sciences, 2009.
[14] J. Gilles, S. Osher, Bregman implementation of Meyer's G-norm for cartoon + textures decomposition.
[15] D. M. Strong, J. F. Aujol, T. F. Chan, Scale recognition, regularization parameter selection and Meyer's G norm in total variation regularization, Multiscale Model. Simul.
[16] S. Osher, O. Scherzer, G-norm properties of bounded variation regularization, Comm. Math. Sci.