A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration


SUBMITTED FOR PUBLICATION

José M. Bioucas-Dias, Member, IEEE, and Mário A. T. Figueiredo, Senior Member, IEEE

Abstract: Iterative shrinkage/thresholding (IST) algorithms have been recently proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a nonquadratic regularizer (e.g., total variation, or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of nonquadratic convex regularizers (lp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For noninvertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.

Key Words: Inverse problems, deconvolution, convex optimization, wavelets, total variation, regularization, optimization.

I. INTRODUCTION

A. Problem Formulation

Inverse problems abound in many application areas of signal/image processing: remote sensing, radar imaging, tomographic imaging, microscopic imaging, astronomic imaging, digital photography [1], [5], [34]. Image restoration is one of the earliest and most classical linear inverse problems in imaging, dating back to the 1960s [1].
In an inverse problem, the goal is to estimate an unknown original signal/image x from a (possibly noisy) observation y, produced by an operator K applied to x. When K is linear, we have a linear inverse problem (LIP). Although we only report image restoration experiments, all the results herein presented are of general applicability in LIPs. Many approaches to LIPs define a solution (e.g., a restored image/signal) as a minimizer of a convex objective function f : X -> R_bar = [-inf, +inf], given by

f(x) = (1/2) ||y - Kx||^2 + lambda Phi(x),   (1)

where K : X -> Y is the (linear) direct operator, X and Y are real Hilbert spaces (both with norm denoted as ||.||), Phi : X -> R_bar is a function (whose meaning and role will be discussed in the next paragraphs), and lambda in [0, +inf[ is a parameter.

In a regularization framework, minimizing f is seen as a way of overcoming the ill-conditioned, or singular, nature of K, which precludes inverting it. In this context, Phi is called the regularizer and lambda the regularization parameter [5]. In a (finite-dimensional) Bayesian setting, the reasoning behind (1) is as follows. Assume that y = Kx + n, where n is a sample of a white zero-mean Gaussian random vector/field of variance sigma^2; let p(x) be the adopted prior; thus, the logarithm of the a posteriori density is log p(x|y) = -f(x) (up to a constant), with lambda = sigma^2 and Phi(x) = -log p(x); maximum a posteriori estimates are thus minimizers of f (see [2] and references therein). Despite this possible interpretation of (1), we will refer to Phi simply as the regularizer.

(This work was partially supported by Fundação para a Ciência e Tecnologia (FCT), Portuguese Ministry of Science and Higher Education, under project POSC/EEA-CPS/61271/2004. Both authors are with the Instituto de Telecomunicações and the Department of Electrical and Computer Engineering, Instituto Superior Técnico, Lisboa, Portugal. A preliminary, much shorter version of this paper was submitted to the IEEE International Conference on Image Processing (ICIP).)
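As a concrete illustration (not part of the paper), the objective (1) can be evaluated for a small finite-dimensional LIP; the circulant blur matrix, noise level, and l1 regularizer below are illustrative assumptions:

```python
import numpy as np

def objective(x, y, K, lam, phi):
    """Evaluate f(x) = 0.5*||y - K x||^2 + lam*Phi(x), as in equation (1)."""
    residual = y - K @ x
    return 0.5 * residual @ residual + lam * phi(x)

# Toy 1-D deconvolution: K is a small circulant blur, Phi the l1 norm.
rng = np.random.default_rng(0)
m = 8
kernel = np.array([0.25, 0.5, 0.25])
K = np.zeros((m, m))
for i in range(m):
    for j, c in zip([(i - 1) % m, i, (i + 1) % m], kernel):
        K[i, j] += c                      # circulant rows of the blur
x_true = np.zeros(m); x_true[3] = 1.0     # a single spike
y = K @ x_true + 0.01 * rng.standard_normal(m)
f_val = objective(x_true, y, K, lam=0.1, phi=lambda x: np.abs(x).sum())
```

At the true signal the data term is just half the squared noise norm, so f_val is dominated by the regularization term lam*||x_true||_1 = 0.1.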
Regardless of the adopted formal framework, the intuitive meaning of f is simple: minimizing it corresponds to looking for a compromise between the lack of fitness of a candidate estimate x to the observed data, measured by ||y - Kx||^2, and its degree of undesirability, given by Phi(x). The regularization parameter lambda controls the relative weight of the two terms. A more detailed discussion of Phi will be postponed to Section II; suffice it to say here that the current state-of-the-art regularizers for image restoration are nondifferentiable. Examples of such choices are total-variation (TV) regularization [10], [14], [41] and wavelet-based regularization [12], [21], [22], [38]. The nondifferentiable nature of f, together with the huge dimension of its argument (for a typical image, X = R^m with m of the order of 10^5 or more), places its minimization beyond the reach of standard off-the-shelf optimization methods. Of course, not all approaches to LIPs lead to convex optimization problems such as (1). For example, some wavelet-based deconvolution approaches do not lead to an optimization problem [30], [39]. However, this paper is strictly concerned with algorithms for minimizing (1), and will not discuss its relative merits with respect to other criteria, nor the relative merits of different choices of Phi.

B. Previous Algorithms

In recent years, iterative shrinkage/thresholding (IST) algorithms (described in Section IV), tailored for objective
functions with the form (1), were independently proposed by several authors in different frameworks. IST algorithms for wavelet-based deconvolution were first derived in [40] (see also [27]) under the expectation-maximization (EM) framework and, later [28], using a majorization-minimization (MM, see [32]) approach. In [20], IST algorithms were placed on solid mathematical grounds, with a rigorous convergence proof in an infinite-dimensional setting. A proof for the finite-dimensional case was independently presented in [4]. IST algorithms have also been independently proposed in [23], [24], [44], [45]. Recently, paper [18] brought important contributions to the understanding of a class of objective functions which contains f, as well as of a class of algorithms (termed forward-backward splitting) which includes IST. A different (not IST) algorithm, proposed in [6], [7] under a generalized EM framework [48], was recently shown to also be an MM scheme [26]. That algorithm, which we will refer to as IRS (iterative re-weighted shrinkage), was shown to be much faster than IST when K is strongly ill-conditioned. Conversely, for mildly ill-conditioned K and medium to strong noise, IST is faster than IRS [26].

C. Contributions

This paper introduces a new class of iterative schemes, bringing together the best of IRS and IST. Algorithms in this class have a two-step IST (TwIST) structure, i.e., each iterate depends on the two previous iterates, rather than only on the previous one. For ill-conditioned (but invertible) linear observation operators, we prove (linear) convergence of TwIST to minima of the objective function f, for a certain range of the algorithm parameters, and derive bounds for the convergence factor. As a by-product of this analysis, we provide a bound for the convergence factor of IST in the case of invertible operators which, to the best of our knowledge, was not available in the literature.
Experimental results (in wavelet-based and TV-based deconvolution) confirm that TwIST algorithms can be tuned to converge much faster than the original IST versions, especially in severely ill-conditioned problems. Similarly to the IRS algorithm [7], [26], the speed gains can reach up to two orders of magnitude in a typical benchmark problem (e.g., 9 x 9 uniform blur). Conversely, in well-conditioned LIPs, TwIST is still faster than IST (although not as much as in severely ill-conditioned LIPs), thus faster than IRS [26].

The convergence proof mentioned in the previous paragraph applies only to invertible linear operators. For the noninvertible case, we introduce a monotonic variant of TwIST, termed MTwIST. Although we do not have a proof of convergence, we give experimental evidence that, with a noninvertible operator, MTwIST also exhibits a large speed advantage over IST.

D. Summary of the Paper

In Section II, we review several choices of Phi in the context of denoising problems, the solution of which plays a central role in IST and TwIST. Section III studies the existence and uniqueness of minimizers of f. The IST and IRS algorithms are reviewed in Section IV, together with previous results on the convergence of IST. The TwIST algorithm is introduced in Section V, which also contains the central theorem of the paper. Finally, experimental results are reported in Section VI. Appendices contain brief reviews of basic results from convex analysis and other mathematical tools, as well as the proofs of the new results presented.

II. REGULARIZERS AND DENOISING

A. Denoising with Convex Regularizers

Denoising problems are LIPs in which K is the identity, Kx = x. In this case, the objective function (1) simplifies to

f_den = (1/2) d_y^2 + lambda Phi,   where d_y : X -> R,  d_y(x) = ||x - y||.   (2)

We adopt the following standard assumptions about the regularizer Phi : X -> R_bar: it is convex, lower semi-continuous (lsc), and proper (see Appendix A for definitions and implications of these properties).
The fact that Phi is lsc and proper and d_y^2 is a continuous, real-valued, coercive function (lim_{||x||->inf} d_y^2(x) = +inf) guarantees that f_den is lsc, proper, and coercive. Consequently, the set of minimizers of f_den is not empty (Theorem 5, Appendix A). Finally, the strict convexity of d_y^2 implies strict convexity of f_den (Theorem 7, Appendix A), thus its minimizer is unique; this allows defining the denoising function (also known as the Moreau proximal mapping [18], [36], [43]) Psi_lambda : X -> X as

Psi_lambda(y) = argmin_x { d_y^2(x)/2 + lambda Phi(x) }.   (3)

In the following subsections, we describe in detail the classes of regularizers considered in this work, as well as the corresponding denoising functions.

B. Denoising with 1-Homogeneous Regularizers

A function Phi that satisfies Phi(zeta x) = zeta Phi(x), for all zeta >= 0 and x in X, is called positively homogeneous of degree 1 (phd-1). Let Upsilon(X) denote the set of functions Phi : X -> R_bar that are convex, lsc, proper, and phd-1. An important recent result states that denoising with regularizers from Upsilon(X) corresponds to the residual of the projection onto a convex set, as formalized in the following theorem (see [10], [18], [35] for proofs):

Theorem 1: If Phi in Upsilon(X), then the denoising function Psi_lambda defined in (3) is given by

Psi_lambda(y) = y - P_{lambda C}(y),   (4)

where C, a subset of X, is a closed convex set depending on the regularizer Phi, and P_A : X -> X denotes the orthogonal projection operator onto the convex set A.
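To make Theorem 1 concrete: for Phi equal to the l1 norm, the set C is the unit l-infinity ball (a standard fact, stated here as an illustration), so the projection residual in (4) reproduces the familiar soft threshold, which in turn can be checked by brute force against the minimization in (3):

```python
import numpy as np

# For Phi = l1 norm, C in Theorem 1 is the unit l-infinity ball, so
# P_{lam C}(y) = clip(y, -lam, lam) and Psi_lam(y) = y - P_{lam C}(y).
lam = 0.7
y = np.array([-2.0, -0.3, 0.1, 0.9, 2.5])
psi_proj = y - np.clip(y, -lam, lam)                       # residual of projection
psi_soft = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # soft threshold

# Brute-force check of (3) on the first component: psi_proj[0] should
# minimize 0.5*(x - y[0])^2 + lam*|x| over a fine grid.
grid = np.linspace(-4.0, 4.0, 80001)
cost = 0.5 * (grid - y[0]) ** 2 + lam * np.abs(grid)
x_grid = grid[np.argmin(cost)]
```

Both routes give the same answer, which is the point of Theorem 1: a phd-1 regularizer turns denoising into a projection.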
C. Total Variation

In the original formulation of total variation (TV) [10], [14], [41], X is an infinite-dimensional Hilbert space L^2(I), where I is a bounded open domain of R^2, e.g., I = ]0,1[^2. With digital images, X is simply a finite-dimensional space of pixel values on a 2-D lattice, say X = R^m, equipped with the Euclidean norm; thus discrete TV regularizers have to be used [10], [11]. Standard choices are the isotropic and non-isotropic discrete TV regularizers, given, respectively, by

Phi_iTV(x) = sum_i sqrt( (Delta_i^h x)^2 + (Delta_i^v x)^2 )   (5)
Phi_niTV(x) = sum_i ( |Delta_i^h x| + |Delta_i^v x| ),   (6)

where Delta_i^h and Delta_i^v denote horizontal and vertical (on the 2-D lattice) first-order local difference operators (omitting boundary corrections). It is clear from (5) and (6) that Phi_iTV, Phi_niTV in Upsilon(R^m). Although there is no closed form for the projection onto C, i.e., to solve the TV denoising problem, fast iterative methods have been recently introduced [10], [11], [19], [29].

D. Weighted lp Norm

Weighted lp norms, for p >= 1, are defined as

Phi_lp(x) = ||x||_{w,p} = ( sum_i w_i |x_i|^p )^{1/p},   (7)

where w = [w_1, w_2, ..., w_i, ...], with w_i >= 0. The underlying Hilbert space is simply X = R^m, in the finite-dimensional case (with the sum in (7) extending from i = 1 to i = m), or X = l^2(N), in the infinite-dimensional case (where the sum in (7) is over i in N). Being a norm, Phi_lp clearly belongs to Upsilon. The denoising function Psi_lambda under a Phi_lp regularizer cannot be obtained in closed form, except in some particular cases, the most notable of which is p = 1; in this case, Psi_lambda is the well-known soft-thresholding function [22], that is, Psi_lambda(z) = x_hat = [x_hat_1, ..., x_hat_i, ...], with

x_hat_i = soft(z_i, lambda w_i) = sign(z_i) max{0, |z_i| - lambda w_i}.   (8)

Next, we discuss some approaches involving Phi_lp regularizers.

1) Orthogonal Representations: A classical approach consists in modeling images as elements of some Besov space B_p^a(L^p(I)), where I = [0,1]^2 is the unit square.
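As an aside, the discrete TV regularizers (5) and (6) can be sketched in NumPy on a tiny image; the zero-padded boundary handling below is an illustrative assumption (the paper omits boundary corrections):

```python
import numpy as np

def tv_iso(x):
    """Isotropic discrete TV (5): per-pixel length of the
    (horizontal, vertical) first-order difference vector."""
    dh = np.diff(x, axis=1)            # horizontal differences
    dv = np.diff(x, axis=0)            # vertical differences
    dh = np.pad(dh, ((0, 0), (0, 1)))  # zero-pad so shapes match
    dv = np.pad(dv, ((0, 1), (0, 0)))
    return np.sqrt(dh ** 2 + dv ** 2).sum()

def tv_aniso(x):
    """Non-isotropic discrete TV (6): sum of absolute differences."""
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()

x = np.zeros((4, 4)); x[1:3, 1:3] = 1.0   # a 2x2 bright block
iso, aniso = tv_iso(x), tv_aniso(x)
```

Both variants vanish on constant images, which is exactly the non-coercivity issue discussed in Section III.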
The adopted regularizer is then the corresponding Besov norm ||.||_{B_p^a(L^p(I))}, which has an equivalent weighted lp sequence norm of wavelet coefficients on an orthogonal wavelet basis (see [12] for details). To exploit this equivalence, the problem is formulated w.r.t. the coefficients, rather than the image itself. Letting W : X -> B_p^a(L^p(I)) denote the linear operator that produces an image from its wavelet coefficients, the objective function becomes

f(x) = (1/2) d_y^2(HWx) + lambda Phi_lp(x),   (9)

where the weights w_i depend on the scale of each coefficient and on the parameters (p and a) of the Besov norm (see [12]), and H is the observation operator. In practice, for digital images, X is finite-dimensional, say X = R^m, Phi_lp is a weighted lp norm on R^m, and W is a unitary m x m matrix. Notice that (9) has the same form as (1), with K = HW and Phi = Phi_lp.

2) Frames and Redundant Representations: Another formulation (in a finite-dimensional setting) leading to an objective function with the same form as (9) is the following. Let the columns of W contain a redundant dictionary (e.g., a frame) with respect to which we seek a representation of the unknown image. If the image is directly observed, H is the identity; in this case, minimizing (9) corresponds to finding a regularized representation of the observed image on the dictionary W [23], [24]. For p = 1, this is the well-known basis-pursuit denoising criterion [16]. If the original image is not directly observed (H is not the identity), minimizing (9) corresponds to reconstructing/restoring the original image by looking for a (regularized) representation on an overcomplete dictionary. This formulation has been used for shift-invariant wavelet-based deconvolution [7], [27], [28].

E. The pth Power of a Weighted lp Norm

This class of regularizers, defined as

Phi_lp^p(x) = ||x||_{w,p}^p = sum_i w_i |x_i|^p,   (10)

appears in many wavelet-based approaches [7], [20], [27], [28], [29], [42]. This regularizer can also be motivated as being equivalent to the pth power of a Besov norm, ||.||_{B_p^a(L^p(I))}^p [20], [35].

For p = 1, Phi_l1^1 = Phi_l1, thus the denoising operator (3) is given by (8). For p > 1, Phi_lp^p is not phd-1, and the denoising operator does not have the form (4). In this case, however, we can write Psi_lambda(z) = x_hat = [x_hat_1, ..., x_hat_i, ...], with

x_hat_i = S_{lambda w_i, p}(z_i),   (11)

where S_{tau,p} = F_{tau,p}^{-1} is the inverse function of

F_{tau,p}(x) = x + tau p sign(x) |x|^{p-1}.   (12)

Notice that, for p > 1, F_{tau,p} : R -> R is one-to-one, thus S_{tau,p} = F_{tau,p}^{-1} is well defined. The function S_{tau,p}, called the shrinkage function, has simple closed forms when p = 4/3, p = 3/2, or p = 2 [15]. For example, the function S_{tau,2} is a simple linear shrinkage, S_{tau,2}(z) = z/(1 + 2 tau). Important features of S_{tau,p} (for p > 1) are: it is strictly monotonic, continuously differentiable(1), and its derivative is upper bounded by 1 (since the derivative of its inverse F_{tau,p} is uniformly lower bounded by 1) [20].

(1) Continuous differentiability is not claimed in [20], only differentiability. However, the continuity (for p > 1) of the derivative of S_{tau,p}, denoted S'_{tau,p}, is easily shown. Firstly, it is trivial to check that lim_{x->0} S'_{tau,p}(x) = 0, where S'_{tau,p}(x) = 1/F'_{tau,p}(S_{tau,p}(x)). Secondly, it is also easy to show, via the definition of derivative, that S'_{tau,p}(0) = 0.
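The shrinkage S_{tau,p} of (11)-(12) can be computed numerically by inverting F_{tau,p}; bisection is used below as an illustrative choice (F_{tau,p} is strictly increasing for p > 1, so the inverse is well defined), and the closed form S_{tau,2}(z) = z/(1 + 2 tau) serves as a check:

```python
import numpy as np

def F(x, tau, p):
    """F_{tau,p}(x) = x + tau*p*sign(x)*|x|^(p-1), equation (12)."""
    return x + tau * p * np.sign(x) * np.abs(x) ** (p - 1)

def shrink(z, tau, p, iters=100):
    """S_{tau,p} = inverse of F_{tau,p}, found by bisection on [-|z|, |z|]
    (the solution always lies there, since |F(x)| >= |x|)."""
    lo, hi = -np.abs(z), np.abs(z)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        below = F(mid, tau, p) < z
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

z = np.array([-2.0, 0.0, 1.5])
s2 = shrink(z, tau=0.25, p=2)   # closed form predicts z / (1 + 2*tau)
```

For p = 4/3 or p = 3/2 the same numerical inverse applies, while [15] gives the corresponding closed forms.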
III. EXISTENCE AND UNIQUENESS OF SOLUTIONS

The existence and uniqueness of minimizers of (1) are addressed in the following proposition, the proof of which can be found in [18, Propositions 3.1 and 5.3].

Proposition 1: Let f : X -> R_bar be defined as in (1), where the operator K : X -> Y is linear and bounded, and Phi : X -> R_bar is a proper, lsc, convex function. Let G denote the set of minimizers of f. Then,
(i) if Phi is coercive, then G is nonempty;
(ii) if Phi is strictly convex or K is injective, then G contains at most one element;
(iii) if K is bounded below, that is, if there exists kappa in ]0, +inf[ such that, for any x in X, ||Kx|| >= kappa ||x||, then G contains exactly one element.

We will now comment on the application of Proposition 1 to the several regularization functions considered above. If all the weights are strictly positive (w_i > 0, for all i), both the weighted lp norm and its pth power (for p >= 1) are coercive (see [10, Proposition 5.15 and Problem 5.18]), thus Proposition 1 (i) ensures existence of minimizers of f. Under these regularizers, if K is injective, the minimizer is unique; otherwise, the minimizer is unique with Phi_lp^p, for p > 1 (which is strictly convex). In the finite-dimensional case (X = R^m), injectivity of K is sufficient to guarantee existence and uniqueness of the solution (under any convex regularizer, strictly convex or not, coercive or not). This results from Proposition 1 (iii), because any finite-dimensional injective operator is bounded below. When Phi is a TV regularizer (e.g., Phi_iTV or Phi_niTV) and K is not bounded below, Proposition 1 cannot be used to guarantee existence of minimizers of (1). The reason is that TV regularizers are not coercive, since they equal zero when the argument is a constant image. However, under the additional condition that constant images do not belong to the null space of K, it can still be shown that G is not empty [13].

IV. PREVIOUS ALGORITHMS

This section reviews algorithms previously proposed for finding minimizers of f.
From this point on, we focus on the finite-dimensional case, X = R^m, Y = R^n, and denote the standard Euclidean vector norm as ||.||_2.

A. Iterative Shrinkage/Thresholding (IST)

IST algorithms have the form

x_{t+1} = (1 - beta) x_t + beta Psi_lambda( x_t + K^T (y - K x_t) ),   (13)

where beta > 0. The original IST algorithm has the form (13), with beta = 1 [20], [27], [28]. Schemes with beta != 1 can be seen as under- (beta < 1) or over- (beta > 1) relaxed versions of the original IST algorithm. Each iteration of the IST algorithm only involves sums, matrix-vector products by K and K^T, and the application of the denoising operation Psi_lambda. In wavelet-based methods, Psi_lambda is a coefficient-wise nonlinearity, thus very computationally efficient. When K represents the convolution with some kernel k, the corresponding products can be computed efficiently using the fast Fourier transform (FFT).

Convergence of IST, with beta = 1, was first shown in [20]. Later, convergence of a more general version of the algorithm (including beta != 1) was shown in [18]. The following theorem is a simplified version of Theorems 3.4 and 5.5 from [18]; the simplifications result from considering finite-dimensional spaces (no difference between strong and weak convergence) and from (13) being a particular case of the somewhat more general version studied in [18].

Theorem 2: Let f be given by (1), where Phi : X -> R is convex(2) and ||K||_2^2 < 2. Let G, the set of minimizers of f, be nonempty. Fix some x_1 and let the sequence {x_t, t in N} be produced by (13), with beta in ]0, 1]. Then, {x_t, t in N} converges to a point in G.

B. Iterative Re-weighted Shrinkage (IRS)

The IRS algorithm was specifically designed for wavelet-based problems of the form (9), where W contains an orthogonal or redundant wavelet basis and the regularizer is not necessarily a weighted lp norm [7]. The iterations of the IRS algorithm are given by

x_{t+1} = solution of {A_t x = b},   (14)

with b = K^T y and A_t = lambda D_t + K^T K, where D_t is a diagonal matrix (of nonnegative elements) that depends on x_t and Phi.
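A compact sketch of iteration (13) for Phi equal to the l1 norm, so that Psi_lambda is the soft threshold of (8); the diagonal operator, noise level, and sparse signal below are illustrative assumptions chosen so that ||K||_2^2 < 2 holds, as Theorem 2 requires:

```python
import numpy as np

def soft(z, thr):
    """Soft threshold, equation (8)."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def ist(y, K, lam, beta=1.0, iters=500):
    """IST iteration (13) with Phi = l1 norm."""
    x = np.zeros(K.shape[1])
    for _ in range(iters):
        x = (1 - beta) * x + beta * soft(x + K.T @ (y - K @ x), lam)
    return x

# Well-conditioned toy problem: K = 0.9*I, so ||K||^2 = 0.81 < 2.
rng = np.random.default_rng(1)
K = np.eye(6) * 0.9
x_true = np.array([0.0, 2.0, 0.0, -1.5, 0.0, 0.0])
y = K @ x_true + 0.01 * rng.standard_normal(6)
x_hat = ist(y, K, lam=0.05)
```

Because K is diagonal here, the fixed point decouples per coordinate and the iteration recovers the support of the sparse signal.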
Observe that matrix D_t shrinks the components of x_{t+1}, hence the term iterative re-weighted shrinkage. Each iteration of IRS resembles a weighted ridge regression problem, with design matrix K; algorithms with a similar structure have been used for sparse regression [25], [31]. The huge size of A_t forces the use of iterative methods to implement (14). In [7], this is done with a two-step (or second-order) stationary iterative method [3], which we will next briefly review.

C. Two-Step Methods for Linear Systems

Consider the linear system Ax = b, with A positive definite; define a so-called splitting of A as A = C - R, such that C is positive definite and easy to invert (e.g., a diagonal matrix). A stationary two-step iterative method (TSIM) for solving Ax = b is defined as

x_1 = x_0 + beta_0 C^{-1} (b - A x_0)
x_{t+1} = (1 - alpha) x_{t-1} + alpha x_t + beta C^{-1} (b - A x_t),   (15)

for t >= 1, where x_0 is the initial vector, and alpha, beta, beta_0 are the parameters of the algorithm (more on this below). The designation two-step stems from the fact that x_{t+1} depends on both x_t and x_{t-1}, rather than only on x_t. The main result concerning TSIMs is given in the following theorem [3, Theorem 5.9]:

Theorem 3: Let {x_t, t in N} be the sequence produced by (15), with arbitrary x_0. Let lambda_1 and lambda_m denote the smallest and largest eigenvalues of matrix C^{-1}A, and kappa = lambda_1/lambda_m be its inverse condition number. Then, {x_t, t in N} converges to the solution of Ax = b if and only if 0 < alpha < 2 and 0 < beta < 2 alpha / lambda_m. The optimal asymptotic convergence factor(3) is rho_bar = (1 - sqrt(kappa))/(1 + sqrt(kappa)), obtained for alpha = rho_bar^2 + 1 and beta = 2 alpha / (lambda_1 + lambda_m). With alpha = 1, the two-step method (15) becomes a one-step method, for which the best asymptotic convergence factor is rho_tilde = (1 - kappa)/(1 + kappa).

(2) In a finite-dimensional space, every real convex function is continuous, so we can drop the lsc condition.

D. Comparing IST with IRS

It was shown in [7] that, for ill-conditioned systems, IRS is much faster than IST. This fact can be traced to the use of the TSIM in each step of IRS. On the other hand, when noise is the main factor, and the observation operator is not too ill-conditioned, IST outperforms IRS because it uses a closed-form (usually nonlinear) denoising step in each iteration [26]. In fact, in a pure denoising problem (K = I or K orthogonal), IST (with beta = 1 and initialized with a zero image) converges in one step, while IRS does not.

V. TWO-STEP IST (TWIST)

A. Motivation and Definition

The TwIST method proposed in this paper aims at keeping the good denoising performance of the IST scheme, while still being able to handle ill-posed problems as efficiently as the IRS algorithm. Taking C = I + lambda D_t and R = I - K^T K in the splitting A = C - R of matrix A = lambda D_t + K^T K, the two-step iteration (15) for the linear system Ax = K^T y becomes

x_{t+1} = (1 - alpha) x_{t-1} + (alpha - beta) x_t + beta C^{-1}( x_t + K^T (y - K x_t) ).   (16)

Observe the relationship between (13) and (16): the former can be obtained from the latter by setting alpha = 1 and replacing the multiplication by matrix C^{-1} with the denoising operator Psi_lambda. This similarity suggests a two-step version of IST (TwIST), defined as

x_1 = Gamma_lambda(x_0)   (17)
x_{t+1} = (1 - alpha) x_{t-1} + (alpha - beta) x_t + beta Gamma_lambda(x_t),   (18)

for t >= 1, where Gamma_lambda : R^m -> R^m is defined as

Gamma_lambda(x) = Psi_lambda( x + K^T (y - K x) ).   (19)

A key observation is that TwIST, relaxed IST, and the original IST with beta = 1 all have the same fixed points.
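A sketch of iterations (17)-(18) for Phi equal to the l1 norm; the small diagonal operator is an illustrative assumption, and the parameter values follow the rule of Theorem 3 (restated as (22)-(23) in Theorem 4 below) with eigenvalue bounds 0.01 and 1:

```python
import numpy as np

def soft(z, thr):
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def twist(y, K, lam, alpha, beta, iters=200):
    """TwIST iterations (17)-(18) with Phi = l1, so that
    Gamma_lam(x) = soft(x + K^T (y - K x), lam)."""
    gamma = lambda x: soft(x + K.T @ (y - K @ x), lam)
    x_prev = np.zeros(K.shape[1])   # x_0
    x = gamma(x_prev)               # x_1 = Gamma_lam(x_0), equation (17)
    for _ in range(iters):
        x, x_prev = ((1 - alpha) * x_prev + (alpha - beta) * x
                     + beta * gamma(x)), x
    return x

# Ill-conditioned toy operator: eigenvalues of K^T K are 1 and 0.01.
K = np.diag([1.0, 0.1])
y = np.array([1.0, 2.0])
xi1, xim = 0.01, 1.0
rho = (1 - np.sqrt(xi1)) / (1 + np.sqrt(xi1))       # kappa = xi1 / max(1, xim)
alpha, beta = rho ** 2 + 1, 2 * (rho ** 2 + 1) / (xim + xi1)
x_hat = twist(y, K, lam=0.05, alpha=alpha, beta=beta)
```

Because K is diagonal, the fixed point of Gamma_lambda can be worked out per coordinate (0.95 and 15 here), which makes the convergence of the two-step recursion easy to verify.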
In fact, elementary manipulation allows showing that the three following equations are equivalent:

x = (1 - alpha) x + (alpha - beta) x + beta Gamma_lambda(x)
x = (1 - beta) x + beta Gamma_lambda(x)
x = Gamma_lambda(x).

(3) See Appendix B for a brief review of convergence factors.

B. Convergence of TwIST

Fundamental questions concerning TwIST are: for what values of alpha and beta does it converge? How does the convergence rate depend on alpha and beta? The main theorem of this paper partially characterizes the convergence of the TwIST algorithm when f has a unique minimizer.

Theorem 4: Let f be given by (1), where Phi is a convex regularizer. Let xi_1 and xi_m be two real numbers such that 0 < xi_1 <= lambda_i(K^T K) <= xi_m, where lambda_i(.) is the ith eigenvalue of its argument; let kappa = xi_1 / xi_m_bar, where xi_m_bar = max(1, xi_m), and

rho_bar = (1 - sqrt(kappa)) / (1 + sqrt(kappa)) < 1.   (20)

Let x_star be the unique (because K is injective) minimizer of f, define the error vector as e_t = x_t - x_star and the stacked error vector as

w_t = [ e_{t+1} ; e_t ].   (21)

(i) There exists a matrix set Q such that w_{t+1} can be written as w_{t+1} = Q_t w_t, where Q_t in Q, for t in N (Q_t may depend on w_t); moreover, if 0 < alpha < 2 and 0 < beta < 2 alpha / xi_m_bar, then rho(Q_t) < 1, for any Q_t in Q, where rho(Q_t) is the spectral radius of Q_t (see Appendix B).

(ii) Setting

alpha = alpha_hat = rho_bar^2 + 1   (22)
beta = beta_hat = 2 alpha_hat / (xi_m + xi_1)   (23)

guarantees that rho(Q_t) = rho_bar.

(iii) Setting alpha = 1 (i.e., the IST algorithm) and

beta = beta_tilde = 2 / (xi_m + xi_1)   (24)

guarantees that

rho(Q_t) <= rho_tilde = (1 - kappa) / (1 + kappa) < 1.   (25)

(iv) If xi_m < 1, 0 < alpha <= 1, and 0 < beta < 2 alpha, then lim_{t->inf} w_t = 0.

Theorem 4 extends the results about the convergence of the linear TSIM (see Section IV-C and [3]) to the nonlinear/nondifferentiable case. While the proof in [3] uses linear algebra tools, the possibly nonlinear/nondifferentiable nature of Psi_lambda demands nonsmooth analysis techniques [17], [43]. The proof of Theorem 4 can be found in Appendix C.

If matrix Q_t were not time-dependent, i.e., Q_t = Q, the condition rho(Q) < 1 would be sufficient for convergence to zero of w_t.
However, in TwIST, Q_t is in general not constant, thus rho(Q_t) < 1, for all t, is not a sufficient condition for convergence to zero of w_t. Convergence of a nonstationary linear iteration w_{t+1} = Q_t w_t, where Q_t belongs to some set of matrices Q, depends on the so-called joint spectral radius (JSR) of Q [47, Proposition 3.2]. Computing (or bounding) the JSR of (even very small) matrix sets is a hard problem, currently under active research (see [47] and the many references therein).
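The parameter rule of Theorem 4 (ii)-(iii) is easy to package as a small helper; the value kappa = 1e-4, used below, is the one the text cites as common in image restoration, and the iterations-per-decade estimate 1/|log10 rho| quantifies the expected speed gap:

```python
import numpy as np

def twist_params(xi_1, xi_m):
    """Theorem 4 (ii)-(iii): (alpha_hat, beta_hat, rho_bar) for TwIST and
    (beta_tilde, rho_tilde) for optimally relaxed one-step IST."""
    kappa = xi_1 / max(1.0, xi_m)
    rho_bar = (1 - np.sqrt(kappa)) / (1 + np.sqrt(kappa))   # eq. (20)
    alpha_hat = rho_bar ** 2 + 1                            # eq. (22)
    beta_hat = 2 * alpha_hat / (xi_m + xi_1)                # eq. (23)
    beta_tilde = 2 / (xi_m + xi_1)                          # eq. (24)
    rho_tilde = (1 - kappa) / (1 + kappa)                   # eq. (25)
    return alpha_hat, beta_hat, rho_bar, beta_tilde, rho_tilde

alpha_hat, beta_hat, rho_bar, beta_tilde, rho_tilde = twist_params(1e-4, 1.0)
iters_twist = -1 / np.log10(rho_bar)    # iterations per decade of error decay
iters_ist = -1 / np.log10(rho_tilde)
```

With kappa = 1e-4 this gives roughly 10^2 iterations per decade for TwIST versus roughly 10^4 for IST, the two-orders-of-magnitude gap reported in the experiments.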
6 SUBMITTED FOR PUBLICATION; The convergence stated in Theorem 4 (iv) results from the folloing fact: for α 1, there exists a matrix norm, say a, for hich Q t a ε < 1, for any Q t Q, hich is a sufficient condition for convergence to zero of the iteration t+1 = Q t t. Although, hen α > 1, Theorem 4 does not guarantee convergence, e have observed, in a large number of image deconvolution exeriments, that the algorithm alays converges for a ide range of choices of arameters α and β. In Section VI, e ill discuss ractical rules for choosing these arameters. As in linear stationary algorithms, e have exerimentally verified that ρ and ρ, resectively, are good indicators of the relative seed of TIST and IST. Seeing the algorithms as linear stationary, quantities 1/ log 10 ρ and 1/ log 10 ρ are aroximately the numbers of iterations needed to reduce the error norm by a factor of 10 (see Aendix B). For examle, ith κ 10 4 (common in image restoration), 1/ log 10 ρ 10 2 and 1/ log 10 ρ 10 4 ; i.e., in this case, TIST is exected to be roughly to orders of magnitude faster than IST, as confirmed in our exeriments. To the best of our knoledge, the bound on the convergence factor of IST given by (25) has not aeared reviously in the literature. C. Monotonic Version: MTIST Monotonicity underlies the derivation of many algorithms and is instrumental in several convergence roofs; e.g., the roof of convergence of IST (hich is monotonic for β = 1) in [20]. Monotonicity is not used in our convergence roof of TIST (hich is not necessarily monotonic), but the roof requires the condition that the observation oerator is invertible. To handle noninvertible oerators, e introduce a monotonic version of TIST (MTIST); the rationale is that, even though e can t guarantee convergence of the estimate, monotonicity combined ith the fact that the objective function is bounded belo guarantees convergence of the objective function values f(x t ). 
Although this is a weaker result, we have observed in many experiments that MTwIST always converges, and still does so much faster than IST. The structure of MTwIST is very similar to that of TwIST, with a single difference. Formally, for t > 1, let z be given by (18); then

x_{t+1} = z,                  if f(z) <= f(x_t)
x_{t+1} = Gamma_lambda(x_t),  if f(z) > f(x_t).

Notice that setting x_{t+1} = Gamma_lambda(x_t) corresponds to taking a TwIST step with alpha = beta = 1, that is, a (monotonic) IST step.

VI. EXPERIMENTAL RESULTS

In this section, we present image restoration results illustrating the convergence speed of TwIST in comparison with IST. Our goal is not to assess the performance of image restoration criteria of the form (1); this has been carried out in several other publications, in comparison with other state-of-the-art criteria (see [7], [24], [27], [30], [33]). It is clear that the performance of such criteria (e.g., in terms of SNR improvement) does not depend on the optimization algorithm used to implement them, but only on the choice of the type of regularizer Phi. On the other hand, the relative convergence speed of the algorithms is essentially dependent on their structure. We consider two classes of regularizers: i) Phi(x) = ||x||_1, where x denotes the wavelet coefficients of the image to be inferred, and ii) Phi(x) = Phi_iTV(x), i.e., isotropic TV. See Sections II-C and II-D for further details. In case i) we use the simplest possible choice of wavelet: the Haar discrete wavelet transform (DWT). We are well aware that this does not lead to state-of-the-art performance in terms of SNR improvement; however, the conclusions obtained concerning the relative speed of the algorithms are valid for other choices of wavelets and penalty functions. To implement the Psi_lambda corresponding to the regularizer Phi_iTV(x), we use the algorithm introduced in [10].

TABLE I. EXPERIMENTAL SETTING (W IS THE INVERSE DISCRETE WAVELET TRANSFORM).
Ex  Image      Linear operator K                     Phi(x)      BSNR
1   Cameraman  H1 (9 x 9 uniform blur)               Phi_iTV     40 dB
2   Cameraman  H1 W                                  ||x||_1     40 dB
3   Lena       H2 = [1,4,6,4,1]^T [1,4,6,4,1]/256    Phi_iTV     17 dB
4   Lena       H2 W                                  ||x||_1     17 dB
5   Cameraman  40% missing samples                   Phi_iTV     40 dB

Table I shows the setting used in each of the five experiments conducted. Experiments 1 and 2 correspond to a strong blur with low noise, whereas experiments 3 and 4 correspond to mild blur with medium noise. Our aim in choosing these two scenarios is to illustrate that TwIST converges much faster than IST in severely ill-conditioned LIPs and is still faster than IST in mildly ill-conditioned LIPs. In all the experiments, the operator K is normalized so that xi_m = 1, thus kappa = xi_1, which we simply denote as xi. Finally, experiment 5 considers a problem in which matrix K models the loss of 40% of the image pixels (at random locations); matrix K is thus 0.6m x m, thus noninvertible (40% of its singular values are zero). This experiment illustrates the behavior of MTwIST on a severely ill-posed (xi_1 = 0) problem.

Inspired by Theorem 4 (ii), the TwIST parameters are initially set to

alpha = rho_bar^2 + 1   (26)
beta = 2 alpha / (1 + xi),   (27)

where rho_bar is given by (20) and xi is set according to a qualitative classification: xi = 10^-1 or xi = 10^-3 for, respectively, mildly or severely ill-conditioned LIPs. This choice may, of course, be far from optimal; we have observed, however, that it leads to speeds of convergence very close to the best ones obtained by hand-tuning (alpha, beta). The reason for this is that, as illustrated below, TwIST is very robust with respect to the parameters (alpha, beta), namely for severely ill-conditioned LIPs. Another alternative is to run a few TwIST iterations, say t_0, for each xi = 10^-i, with i = 1, 2, ..., and choose the value that leads to the lowest value of f(x_{t_0}). In the experiments reported below, we use this procedure with t_0 = 5.

The parameter beta of IST is set according to Theorem 4 (iii), i.e., beta = 2/(1 + xi). This setting yields the optimal spectral radius rho(Q_t) associated with the one-step iterative method. We will refer to this particular optimal version of IST as IST_opt, and to the original IST (i.e., beta = 1) simply as IST. Notice that since, in ill-conditioned problems, xi_1 << 1, the optimal beta is very close to the maximum value that guarantees convergence; for example, for xi_1 = 10^-3, we have beta = 2/(1 + 10^-3), approximately 1.998. In all the examples, the algorithms are initialized with x_0 given by a Wiener filter, and the parameter lambda is hand-tuned for the best SNR improvement.

Experiments 1 and 2: Fig. 1 shows the evolution of the objective function along the iterations(4), confirming that TwIST converges much faster than IST_opt and IST, which take, respectively, 2400 and 5800 iterations to reach the value of f obtained with TwIST after just 100 iterations. Notice also that IST_opt converges approximately twice as fast as IST.

(4) Arguably, the horizontal axes should represent CPU time instead of number of iterations; however, we have verified that the CPU time per iteration differs by less than 1% between TwIST and IST, so this change would only imply a change of scale of the horizontal axes.

Fig. 1. TV-based deconvolution in a severely ill-conditioned problem (experiment 1). Evolution of the objective function f(x_t) produced by TwIST, IST_opt, and IST.

Fig. 2. TV-based deconvolution in a severely ill-conditioned problem (experiment 1). Evolution of the SNR improvement (ISNR) produced by TwIST, IST_opt, and IST.

Fig. 3. TV-based deconvolution in a severely ill-conditioned problem (experiment 1). Evolution of the objective function f(x_t) produced by TwIST, for different parameters (alpha(xi), beta(xi)), with xi = 10^-4, 10^-3, 10^-2, and by IST. Notice the low sensitivity of TwIST with respect to (alpha(xi), beta(xi)).
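As a side note on the experimental setting of Table I, the blur of experiments 3 and 4 is separable, so its 5 x 5 point spread function can be formed as an outer product; the normalization by 256 makes the kernel sum to one:

```python
import numpy as np

# Point spread function H2 of experiments 3-4 (Table I): the outer
# product of [1, 4, 6, 4, 1] with itself, normalized by 256.
h = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
H2 = np.outer(h, h) / 256.0
```

Note that H2 here is the convolution kernel, not the full m x m convolution matrix; for the experiments the corresponding matrix-vector products would be applied via the FFT, as mentioned in Section IV-A.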
This pattern of behavior was systematically observed in severely ill-conditioned LIPs. Fig. 2 shows the evolution of the SNR improvement (ISNR) produced by TwIST, IST_opt, and IST. As expected^5, ISNR(t) also converges much faster with TwIST than with IST and IST_opt. Fig. 3 shows the evolution of the objective function f(x_t) produced by TwIST, for different parameters (α(ξ), β(ξ)), and by IST. Notice the low sensitivity of TwIST with respect to (α(ξ), β(ξ)). This is a relevant characteristic of TwIST, because the optimal setting for (α, β) is rarely known.

In order to assess the impact of the initialization on the relative performance of the algorithms, we considered two other initialization methods: an all-zeros image and the observed image. Table II shows the average (over 10 runs) number of iterations required by IST and IST_opt to reach the value of the objective obtained by 100 iterations of TwIST. Initialization with zeros or the observed image decreases the advantage of TwIST by roughly 50%; however, the Wiener initialization leads (after 100 iterations) to a final value of f and an ISNR which are a little better than the other two methods.

TABLE II
AVERAGE NUMBER OF ITERATIONS REQUIRED BY IST AND IST_opt TO REACH THE SAME VALUE OF f OBTAINED BY 100 ITERATIONS OF TWIST.

Initialization    IST    IST_opt
Wiener filter
Zeros
Observed image

Figs. 4 and 5 plot wavelet-based deconvolution results obtained with the setting of experiment 2. The comments on these figures are similar to those made for Figs. 1 and 3: TwIST converges much faster than IST and IST_opt; TwIST has low sensitivity with respect to ξ.

Fig. 4. Wavelet-based deconvolution in a severely ill-conditioned problem (experiment 2). Evolution of the objective function f(x_t) produced by TwIST, IST_opt, and IST.

Fig. 5. Wavelet-based deconvolution in a severely ill-conditioned problem (experiment 2). Evolution of the objective function f(x_t) produced by TwIST, for different parameters (α(ξ), β(ξ)), and by IST. Notice the low sensitivity of TwIST with respect to (α(ξ), β(ξ)).

Experiments 3 and 4: Figs. 6 and 7 plot results obtained in a mildly ill-conditioned LIP. The first aspect to note is that (as expected) all the algorithms converge much faster than in the severely ill-conditioned case. The limit situation is a denoising LIP (i.e., K = I or unitary), in which the solution is obtained in just one step (with α = β = 1 and x_0 = 0). The other aspect to note is that although the behavior of all the algorithms is almost identical, TwIST is still slightly faster than IST.

Fig. 6. TV-based deconvolution in a mildly ill-conditioned problem (experiment 3). Evolution of the objective function f(x_t) produced by TwIST, IST_opt, and IST.

Fig. 7. Wavelet-based deconvolution in a mildly ill-conditioned problem (experiment 4). Evolution of the objective function f(x_t) produced by TwIST, IST_opt, and IST.

Experiment 5: In this example, the goal is not to present a state-of-the-art method for restoration from missing samples, but simply to illustrate the behavior of the algorithms with a noninvertible observation model.

Fig. 8. TV-based image restoration from 40% missing samples (experiment 5). Evolution of the objective function f(x_t) produced by MTwIST (for ξ_1 = 10^-3, 10^-4, 10^-5) and by IST with β = 1.

^5 ISNR(t) = 10 log_10(||y − x||^2 / ||x_t − x||^2), where x is the original image.
The evolution of the objective function in Figure 8 shows that MTwIST converges considerably faster than IST with β = 1. In line with the results reported in Figs. 3 and 5, MTwIST is again rather insensitive to the choice of ξ_1 (which in this case can no longer be related to the minimum singular value of K, which is zero). Figure 9 shows the observed image (the missing samples are set to the mid-level gray value) and the restored image produced by MTwIST.
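The ISNR curves reported in the experiments follow the definition in footnote 5; a direct implementation (array names are illustrative):

```python
import numpy as np

def isnr(y, x_t, x):
    """ISNR(t) = 10 log10(||y - x||^2 / ||x_t - x||^2), with x the original
    image, y the observed image, and x_t the current estimate."""
    return 10.0 * np.log10(np.sum((y - x) ** 2) / np.sum((x_t - x) ** 2))

# Toy check (illustrative arrays): halving the error energy relative to the
# observation gives ISNR = 10 log10(2), i.e. about 3 dB.
x = np.zeros(16)
y = x + 1.0                 # ||y - x||^2 = 16
x_t = x + np.sqrt(0.5)      # ||x_t - x||^2 = 8
```

A positive ISNR means the estimate x_t is closer (in energy) to the original image than the observation y is; the plots track how quickly each algorithm drives this quantity up.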
Fig. 9. TV-based image restoration from 40% missing samples (experiment 5); top: observed image; bottom: restored image.

VII. CONCLUDING REMARKS

In this paper we have introduced a new class of iterative methods, called TwIST, which have the form of two-step iterative shrinkage/thresholding algorithms. The update equation depends on the two previous estimates (thus the term two-step), rather than only on the previous one. This class contains and extends the iterative shrinkage/thresholding (IST) methods recently introduced. We proved convergence of TwIST to minima of the objective function (for a certain range of the algorithm parameters) and derived bounds for the convergence factor as a function of the parameters defining the algorithm. Experimental results (in wavelet-based and TV-based deconvolution) have shown that TwIST can in fact be tuned to converge much faster than the original IST, especially in severely ill-conditioned problems, where the speed-up can reach two orders of magnitude in a typical deblurring problem. We have also introduced MTwIST, a monotonic variant of TwIST, conceived for noninvertible observation operators; the performance of MTwIST was illustrated on a problem of image restoration from missing samples.

APPENDIX A: CONVEX ANALYSIS

We very briefly review some basic convex analysis results used in this paper. For more details see [43], [49]. Consider a function f : X → [−∞, +∞] = R̄, where R̄ is the extended real line and X is a real Hilbert space. The function f is convex if

f(αu + (1 − α)v) ≤ αf(u) + (1 − α)f(v),

for any u, v ∈ X and any α ∈ [0, 1]. Convexity is said to be strict if the inequality holds strictly (<) for any u ≠ v ∈ X and any α ∈ ]0, 1[. The function f is proper if f(x) < +∞ for at least one x ∈ X, and f(x) > −∞ for all x ∈ X. The function f is lower semicontinuous (lsc) at v if

lim_{δ↓0} inf_{x ∈ B(v,δ)} f(x) ≥ f(v),

where B(v, δ) = {x : ||x − v|| ≤ δ} is the δ-ball around v and ||·|| is the norm in the Hilbert space X. A function f is called coercive if it verifies lim_{||x||→∞} f(x) = +∞. Proper, lsc, coercive functions play a key role in optimization because of the following theorem (see [43]):

Theorem 5: If f is a proper, lsc, coercive, convex function, then inf_{x∈X} f(x) is finite and the set arg min_{x∈X} f(x) is nonempty.

The next theorems concern strictly convex functions.

Theorem 6: If f is a strictly convex function, the set arg min_{x∈X} f(x) possesses at most one element.

Theorem 7: If f_1 is a convex function, f_2 is a strictly convex function, and 0 < λ < ∞, then λf_2 and f_1 + λf_2 are strictly convex.

APPENDIX B: MATRIX NORMS, SPECTRAL RADIUS, CONVERGENCE FACTORS AND CONVERGENCE RATES

Given a vector norm ||·||, ||A|| = max_{||x||=1} ||Ax|| is the matrix norm of A induced by this vector norm. A vector norm and the corresponding induced matrix norm are consistent, i.e., they satisfy ||Av|| ≤ ||A|| ||v||. When the vector norm is the Euclidean norm (denoted ||·||_2), the induced matrix norm (also denoted ||·||_2) is called the spectral norm. If A is Hermitian, ||A||_2 = max_i |λ_i(A)| = ρ(A), called the spectral radius [3]. Key results involving ρ(·) are

lim_{k→∞} A^k = 0  ⇔  ρ(A) < 1,   (28)
lim_{k→∞} ||A^k||^{1/k} = ρ(A),   (29)
∀A, ∀ε > 0, ∃ ||·||_a : ||A||_a ≤ ρ(A) + ε.   (30)

Consider the linear system Bx = b, with solution x̄, and an iterative scheme yielding a sequence of iterates {x_t, t ∈ N}. For a linear stationary iterative algorithm, the error e_t = x_t − x̄ evolves according to e_t = Ae_{t−1}, thus e_t = A^t e_0. From (28), the error goes to zero if and only if ρ(A) < 1. Because of (29), ρ(A) is also called the asymptotic convergence factor. The asymptotic convergence rate, given by r = −log_10 ρ(A), is roughly the number of new correct decimal places obtained per iteration, while its inverse approximates the number of iterations required to reduce the error by a factor of 10.

APPENDIX C: PROOF OF THEOREM 4

Before proving Theorem 4, we introduce several results on which the proof is built, one of them being Clarke's mean value theorem for nondifferentiable functions [17].
Other preliminary results are presented and proved in Subsection C.2. Finally, Subsections C.3, C.4, C.5, and C.6 contain the proofs of parts (i), (ii), (iii), and (iv) of Theorem 4, respectively.
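The convergence factor and convergence rate defined in Appendix B can be checked numerically; the matrix below is a toy example, not one of the paper's operators:

```python
import numpy as np

# Toy error recursion e_t = A e_{t-1}; rho(A) is the asymptotic convergence
# factor and r = -log10 rho(A) the asymptotic convergence rate.
A = np.diag([0.9, 0.5])
rho = max(abs(np.linalg.eigvals(A)))   # spectral radius = 0.9
r = -np.log10(rho)                     # decimal places gained per iteration
iters_per_decade = 1.0 / r             # ~22 iterations to shrink the error 10x

e = np.array([1.0, 1.0])               # e_0
for _ in range(100):
    e = A @ e                          # e_t = A^t e_0
# The slowest mode (eigenvalue 0.9) dominates: ||e_100|| behaves like 0.9^100.
```

This is exactly why the spectral radii derived in Appendix C matter: a factor of 0.94 (TwIST, severely ill-conditioned case) gains a decade of accuracy in roughly 37 iterations, whereas a factor of 0.998 needs over a thousand.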
C.1. The Non-Smooth Mean Value Theorem

Definition 1: Let F : R^m → R^m be such that each of its components is Lipschitz, and let Ω_F be the set of points at which F is nondifferentiable. Let JF(x) denote the m × m Jacobian matrix of F at x, when x ∉ Ω_F. The (Clarke's [17]) generalized Jacobian of F at x is given by

∂F(x) = co { lim_{x_i → x, x_i ∉ Ω_F} JF(x_i) },   (31)

where co(A) denotes the convex hull of A. If F is continuously differentiable at x, then ∂F(x) = {JF(x)} [17].

Theorem 8 (Mean value theorem [17]): Let F be as in Definition 1 and u, v ∈ R^m be any two points. Then,

F(u) − F(v) ∈ co ∂F([u, v])(u − v),   (32)

where co ∂F([u, v]) denotes the convex hull of the set {A : A ∈ ∂F(r), r ∈ [u, v]}, with [u, v] denoting the line segment between u and v. Expression (32) means that there exists a matrix B ∈ co ∂F([u, v]) such that F(u) − F(v) = B(u − v).

C.2. Preliminary Results

The two following propositions characterize the elements of the generalized Jacobian of the denoising functions Ψ_λ and of co ∂Ψ_λ([u, v]).

Proposition 2: For any x ∈ R^m, any D ∈ ∂Ψ_λ(x) is symmetric, positive semidefinite (psd), and ||D||_2 ≤ 1.

Proof: The proof distinguishes two classes of regularizers. Consider first that Ψ_λ results from a regularizer in the class Υ(R^m) (see Section II-B); e.g., iTV, niTV, or Φ_{l1}. From Theorem 1 in Section II-B, Ψ_λ(x) = x − P_{λC}(x). Thus, ∂Ψ_λ(x) = I − ∂P_{λC}(x); that is, any element, say D, of ∂Ψ_λ(x) can be written as D = I − A, where A ∈ ∂P_{λC}(x). Theorem 2.3 in [46] guarantees that A is symmetric, psd, and ||A||_2 ≤ 1. Thus, D = I − A is also symmetric, psd, and ||D||_2 ≤ 1.

Consider now that Ψ_λ results from a Φ_{lp} regularizer, with p > 1 (see Section II-E). Due to the componentwise structure of Ψ_λ, shown in (11), and since S_{τ,p} is continuously differentiable (see footnote 1), ∂Ψ_λ(x) contains a single diagonal (thus symmetric) matrix, say D. As shown in [20], S_{τ,p} (for p > 1) is strictly monotonic and its derivative is upper bounded by 1, which implies that each entry of D belongs to ]0, 1]. This implies that D is psd and ||D||_2 ≤ 1.
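For the l1 regularizer, the denoising function Ψ_λ is componentwise soft thresholding; the snippet below illustrates Proposition 2 on this case: where the map is differentiable, its Jacobian is diagonal with entries 0 or 1, hence symmetric, psd, and of spectral norm at most 1. The function name is illustrative.

```python
import numpy as np

def soft(x, tau):
    """Componentwise soft threshold: the denoising function associated with
    the l1 regularizer (a member of the class covered by Proposition 2)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x = np.array([-2.0, -0.3, 0.1, 1.5])
tau = 0.5
# Away from the kinks |x_i| = tau, the Jacobian of soft(., tau) is diagonal
# with entries 1 (|x_i| > tau) or 0 (|x_i| < tau): symmetric, psd, norm <= 1.
J = np.diag((np.abs(x) > tau).astype(float))
```

At the kink points the map is not differentiable, which is precisely why Clarke's generalized Jacobian (Definition 1) is needed: there, ∂Ψ_λ contains all diagonal matrices with the corresponding entry in [0, 1].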
Proposition 3: For any pair of points u, v ∈ R^m, any B ∈ co ∂Ψ_λ([u, v]) is symmetric, psd, and ||B||_2 ≤ 1.

Proof: From Proposition 2, for any r ∈ R^m, any A ∈ ∂Ψ_λ(r) is symmetric, psd, and has ||A||_2 ≤ 1. Thus co ∂Ψ_λ([u, v]) is the convex hull of a set of matrices which are all symmetric, psd, and have norm no larger than 1. Therefore, any matrix B ∈ co ∂Ψ_λ([u, v]) is also symmetric, psd, and has ||B||_2 ≤ 1.

C.3. Proof of Theorem 4 (i)

Recalling that e_t = x_t − x̄ and using (18), we write

e_{t+1} = (1 − α)e_{t−1} + (α − β)e_t + β [Γ_λ(x_t) − Γ_λ(x̄)].   (33)

Using the definition of Γ_λ given in (19) and the mean value theorem (Theorem 8), we may write

Γ_λ(x_t) − Γ_λ(x̄) = Ψ_λ(z_t) − Ψ_λ(ẑ) = B_t [x_t − x̄ − K^T K(x_t − x̄)] = B_t [I − K^T K] e_t,   (34)

where z_t = x_t + K^T(y − Kx_t), ẑ = x̄ + K^T(y − Kx̄), and B_t ∈ co ∂Ψ_λ([z_t, ẑ]). Recall that Proposition 3 states that B_t is symmetric, psd, and has ||B_t||_2 ≤ 1. Inserting (34) into (33),

e_{t+1} = (1 − α)e_{t−1} + αe_t − β [I − B_t[I − K^T K]] e_t
        = (1 − α)e_{t−1} + [αI − βM_t] e_t,   (35)

where

M_t = I − B_t [I − K^T K].   (36)

Recalling that the stacked error vector w_t ∈ R^{2m} is

w_t = [e_{t+1}; e_t],

we can use (35) to write w_t = Q_t w_{t−1}, where

Q_t = [ αI − βM_t   (1 − α)I
        I           0        ].   (37)

Thus, Q is the set of matrices with the form (37), where M_t is given by (36) and B_t is symmetric, psd, and has ||B_t||_2 ≤ 1. To prove the second statement in Theorem 4 (i), we need to study how the choice of α and β affects ρ(Q_t) = max_i |λ_i(Q_t)|, for any possible M_t. We begin by considering the following facts: (a) I − K^T K is symmetric and 1 − ξ_m ≤ λ_i(I − K^T K) ≤ 1 − ξ_1 (because ξ_1 ≤ λ_i(K^T K) ≤ ξ_m); (b) according to Proposition 3, B_t is symmetric, psd, and ||B_t||_2 ≤ 1, thus 0 ≤ λ_i(B_t) ≤ 1. Consequently, using results on bounds for the eigenvalues of products of symmetric matrices, one of which is psd [37, Theorem 2.2], we get

min(0, 1 − ξ_m) ≤ λ_i(B_t [I − K^T K]) ≤ 1 − ξ_1;   (38)

finally, since M_t = I − B_t [I − K^T K],

0 < ξ_1 ≤ λ_i(M_t) ≤ max(1, ξ_m) = ξ_m.   (39)

Following [3], let (µ, z) denote any eigenpair of Q_t, i.e., Q_t z = µz; writing z = [z_a^T, z_b^T]^T, we have

[ αI − βM_t   (1 − α)I ] [z_a]     [z_a]
[ I           0        ] [z_b] = µ [z_b].   (40)

The bottom m rows of (40) give z_a = µz_b; inserting this equality into the top half of (40), we obtain

[µ(αI − βM_t) + (1 − α)I] z_b = µ^2 z_b.   (41)
Since the matrix on the l.h.s. of (41) can be written as (µα + 1 − α)I − µβM_t, its eigenvectors coincide with those of M_t. Thus, with λ denoting some eigenvalue of M_t, µ has to be a solution of the second-degree equation

(µα + 1 − α) − µβλ = µ^2.   (42)

Let

ρ(α, β, λ) = max{|µ_1|, |µ_2|},   (43)

where µ_1 and µ_2 are the two solutions of (42). We thus need to study how ρ(α, β, λ) behaves for λ ∈ [λ_min(M_t), λ_max(M_t)] ≡ [τ_1, τ_m], for each choice of α and β. Notice that (39) does not provide τ_1 and τ_m (all it guarantees is that [τ_1, τ_m] ⊆ [ξ_1, ξ_m]). It is shown in [3, Lemma 5.8] that ρ(α, β, λ) < 1, for any λ ∈ [τ_1, τ_m], if 0 < α < 2 and 0 < β < 2α/τ_m. Since τ_m ≤ ξ_m, any β satisfying β < 2α/ξ_m also satisfies β < 2α/τ_m. Finally, notice that ρ(Q_t) = max_i {ρ(α, β, λ_i(M_t))}; thus ρ(α, β, λ) < 1 implies that ρ(Q_t) < 1, concluding the proof of Theorem 4 (i).

C.4. Proof of Theorem 4 (ii)

We begin by rewriting (42), for α = α̂ and β = β̂, as

µ^2 + (λβ̂ − α̂)µ + (α̂ − 1) = 0,   (44)

and proving that the solutions of (44) are complex conjugate for any λ ∈ [τ_1, τ_m]. From the classical formula for the solutions of a second-degree equation, it is clear that the two roots of (44) are complex conjugate if and only if (λβ̂ − α̂)^2 ≤ 4(α̂ − 1), for any λ ∈ [τ_1, τ_m]; this inequality is equivalent to

β̂^2 λ^2 − 2α̂β̂λ + α̂^2 − 4(α̂ − 1) ≤ 0.   (45)

It is easy to show that the two roots of the l.h.s. of (45) are ξ_1 and ξ_m; thus, since β̂^2 > 0, inequality (45) is satisfied when λ is between these two roots. Therefore, when λ ∈ [τ_1, τ_m] ⊆ [ξ_1, ξ_m], the roots of (44) are indeed complex conjugate. Recall that the product of the two roots of a second-order polynomial equals its independent term; applying this fact to (44) yields µ_1 µ_2 = (α̂ − 1). For λ ∈ [τ_1, τ_m], we have µ_1 = µ̄_2, thus µ_1 µ_2 = |µ_1|^2 = |µ_2|^2 = (α̂ − 1); thus

ρ(α̂, β̂, λ) = max{|µ_1|, |µ_2|} = √(α̂ − 1) = ρ̄,

for any λ ∈ [τ_1, τ_m], as stated in Theorem 4 (ii).

C.5. Proof of Theorem 4 (iii)

Inserting α = 1 and β = β̂ in (42) leads to the equation

µ(1 − β̂λ) = µ^2,   (46)

which has solutions µ_1 = 0 and µ_2 = (1 − β̂λ). Consequently, ρ(1, β̂, λ) = max{|µ_1|, |µ_2|} = |1 − β̂λ|. To show part (iii) of the theorem, we need to show that

max_{λ ∈ [τ_1, τ_m]} ρ(1, β̂, λ) ≤ (1 − κ)/(1 + κ).

Because ρ(1, β̂, λ) and (1 − κ)/(1 + κ) are positive, both sides of the previous inequality can be squared. Simple manipulation allows showing that

ρ^2(1, β̂, ξ_1) = ρ^2(1, β̂, ξ_m) = ((1 − κ)/(1 + κ))^2.

Finally, since ρ^2(1, β̂, λ) = (1 − β̂λ)^2 is a convex function of λ, and [τ_1, τ_m] ⊆ [ξ_1, ξ_m],

max_{λ ∈ [τ_1, τ_m]} ρ^2(1, β̂, λ) ≤ max_{λ ∈ [ξ_1, ξ_m]} ρ^2(1, β̂, λ) = ((1 − κ)/(1 + κ))^2,

concluding the proof of Theorem 4 (iii).

C.6. Proof of Theorem 4 (iv)

A sufficient condition for convergence to zero of the switched linear system z_{t+1} = T_t z_t, where T_t ∈ T and T is a bounded set of matrices, is the existence of a matrix norm ||·|| such that ||T_t|| ≤ ε < 1, for any T_t ∈ T. Our proof uses the matrix norm ||·||_A, defined as

||B||_A = ||ABA^{-1}||_2,   (47)

where A is a symmetric positive definite matrix, which is induced by the vector norm ||v||_A = ||Av||_2 [9]. We split the proof into two cases.

(a) With α = 1, the error e_t evolves according to the one-step iteration

e_{t+1} = (I − β̂M_t) e_t;   (48)

matrix M_t (see (36)) can be written as M_t = I − B_t U, where U = I − K^T K is a symmetric positive definite matrix, and thus so is U^{1/2}. Computing the ||·||_{U^{1/2}} norm of (I − β̂M_t),

||I − β̂M_t||_{U^{1/2}} = ||U^{1/2}(I − β̂M_t)U^{-1/2}||_2
 = ||I − β̂(I − U^{1/2} B_t U^{1/2})||_2
 = ρ(I − β̂M_t),   (49)

where we have used the following facts: for a real symmetric matrix A, ||A||_2 = ρ(A), and, for any pair of square matrices A and B, ρ(AB) = ρ(BA). Finally, notice that, as shown in Section C.5,

ρ(I − β̂M_t) = max_{λ ∈ [τ_1, τ_m]} ρ(1, β̂, λ) ≤ (1 − κ)/(1 + κ) < 1,

concluding the convergence proof for α = 1.

(b) With α < 1, let us define the matrix

V = [ U   0
      0   (1 − α)U ].   (50)

With Q_t given by (37), it is simple to conclude that

V^{1/2} Q_t V^{-1/2} = [ (α − β)I + βU^{1/2} B_t U^{1/2}   √(1 − α) I
                         √(1 − α) I                        0         ],   (51)

which is a real symmetric matrix.
This allows writing

||Q_t||_{V^{1/2}} = ||V^{1/2} Q_t V^{-1/2}||_2 = ρ(V^{1/2} Q_t V^{-1/2}) = ρ(Q_t) = max_i {ρ(α, β, λ_i(M_t))}   (52)
 ≤ max_{λ ∈ [τ_1, τ_m]} {ρ(α, β, λ)}   (53)
 < 1,   (54)

where the equality in (52) and the inequalities (53) and (54) were shown in Section C.3.
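The bounds proved above can be checked numerically by building the 2 × 2 block of the iteration matrix (37) for a scalar eigenvalue λ of M_t: the two-step parameters of Theorem 4 (ii) yield ρ(Q_t) = ρ̄ over the whole interval, versus (1 − κ)/(1 + κ) for the optimal one-step iteration of Theorem 4 (iii). The form ρ̄ = (1 − √κ)/(1 + √κ) and the normalization ξ_m = 1 are assumptions consistent with Appendix C.4; the code is an illustration, not the paper's implementation.

```python
import numpy as np

def rho_Q(alpha, beta, lam):
    """Spectral radius of the 2x2 block of the iteration matrix (37) for a
    scalar eigenvalue lam of M_t (the 2m x 2m matrix decouples this way)."""
    Q = np.array([[alpha - beta * lam, 1.0 - alpha],
                  [1.0, 0.0]])
    return max(abs(np.linalg.eigvals(Q)))

xi1 = 1e-3                              # kappa, with xi_m normalized to 1
lams = np.linspace(xi1, 1.0, 1001)      # lambda ranges over [xi_1, xi_m]

# Two-step parameters (Theorem 4 (ii), assumed rho_bar form):
rho_bar = (1.0 - np.sqrt(xi1)) / (1.0 + np.sqrt(xi1))
alpha_h = rho_bar ** 2 + 1.0
beta_h = 2.0 * alpha_h / (1.0 + xi1)
rho_twist = max(rho_Q(alpha_h, beta_h, l) for l in lams)

# Optimal one-step iteration (Theorem 4 (iii)): alpha = 1, beta = 2/(1 + xi1),
# for which rho(1, beta, lam) = |1 - beta * lam|:
rho_ist = max(abs(1.0 - 2.0 * l / (1.0 + xi1)) for l in lams)
```

For κ = 10^-3 this gives roughly 0.94 for the two-step iteration against roughly 0.998 for the optimal one-step one, consistent with the large speed-ups observed in the severely ill-conditioned experiments.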
REFERENCES

[1] H. Andrews and B. Hunt, Digital Image Restoration, Prentice Hall, Englewood Cliffs, NJ.
[2] G. Archer and D. Titterington, "On Bayesian/regularization methods for image restoration," IEEE Trans. on Image Processing, vol. 4.
[3] O. Axelsson, Iterative Solution Methods, Cambridge University Press, New York.
[4] J. Bect, L. Blanc-Féraud, G. Aubert, and A. Chambolle, "A l1-unified variational framework for image restoration," European Conference on Computer Vision - ECCV 2004, pp. 1-13, Springer-Verlag.
[5] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, IOP Publishing, Bristol, UK.
[6] J. Bioucas-Dias, "Fast GEM wavelet-based image deconvolution algorithm," IEEE International Conference on Image Processing - ICIP 2003, vol. 2.
[7] J. Bioucas-Dias, "Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors," IEEE Trans. on Image Processing, vol. 15.
[8] J. Bioucas-Dias, M. Figueiredo, and J. Oliveira, "Total variation image deconvolution: A majorization-minimization approach," IEEE Intern. Conf. on Acoustics, Speech, and Signal Processing - ICASSP 2006, Toulouse.
[9] Z.-H. Cao, "On the convergence of nonstationary iterative methods for symmetric positive (semi)definite systems," Applied Numerical Mathematics, vol. 37.
[10] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20.
[11] A. Chambolle, "Total variation minimization and a class of binary MRF models," Intern. Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition - EMMCVPR 2005, Springer-Verlag.
[12] A. Chambolle, R. De Vore, N. Lee, and B. Lucier, "Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage," IEEE Trans. on Image Processing, vol. 7.
[13] A. Chambolle and P.-L. Lions, "Image recovery via total variation minimization and related problems," Numerische Mathematik, vol. 76.
[14] T. Chan, S. Esedoglu, F. Park, and A. Yip, "Recent developments in total variation image restoration," in Mathematical Models of Computer Vision, N. Paragios, Y. Chen, and O. Faugeras (Eds.), Springer-Verlag.
[15] C. Chaux, P. Combettes, J.-C. Pesquet, and V. Wajs, "Iterative image deconvolution using overcomplete representations," European Signal Processing Conference - EUSIPCO 2006, Florence, Italy.
[16] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Journal of Scientific Computation, vol. 20.
[17] F. Clarke, Optimization and Nonsmooth Analysis, Wiley-Interscience.
[18] P. Combettes and V. Wajs, "Signal recovery by proximal forward-backward splitting," SIAM Journal on Multiscale Modeling & Simulation, vol. 4.
[19] J. Darbon and M. Sigelle, "A fast and exact algorithm for total variation minimization," Proceedings of the 2nd Iberian Conference on Pattern Recognition and Image Analysis - IbPRIA 2005, LNCS vol. 3522, Springer-Verlag.
[20] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Communications on Pure and Applied Mathematics, vol. LVII.
[21] D. Donoho and I. Johnstone, "Adapting to unknown smoothness via wavelet shrinkage," Journal of the American Statistical Association, vol. 90, no. 432.
[22] D. Donoho, "De-noising by soft thresholding," IEEE Trans. on Information Theory, vol. 41.
[23] M. Elad, "Why simple shrinkage is still relevant for redundant representations?," IEEE Transactions on Information Theory, vol. 52.
[24] M. Elad, B. Matalon, and M. Zibulevsky, "Image denoising with shrinkage and redundant representations," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition - CVPR 2006, New York.
[25] M. Figueiredo, "Adaptive sparseness for supervised learning," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25.
[26] M. Figueiredo, J. Bioucas-Dias, and R. Nowak, "Majorization-minimization algorithms for wavelet-based image restoration," submitted.
[27] M. Figueiredo and R. Nowak, "An EM algorithm for wavelet-based image restoration," IEEE Trans. on Image Processing, vol. 12, no. 8.
[28] M. Figueiredo and R. Nowak, "A bound optimization approach to wavelet-based image deconvolution," IEEE Intern. Conf. on Image Processing - ICIP 2005, Genoa, Italy.
[29] M. Figueiredo, J. Bioucas-Dias, J. Oliveira, and R. Nowak, "On total-variation denoising: A new majorization-minimization algorithm and an experimental comparison with wavelet denoising," IEEE Intern. Conf. on Image Processing - ICIP 2006.
[30] J. Guerrero-Colon and J. Portilla, "Deblurring-by-denoising using spatially adaptive Gaussian scale mixtures in overcomplete pyramids," Proceedings of the IEEE International Conference on Image Processing - ICIP 2006, Atlanta, GA, USA.
[31] Y. Grandvalet and S. Canu, "Outcomes of the equivalence of adaptive ridge with least absolute shrinkage," Advances in Neural Information Processing Systems - NIPS 11, MIT Press.
[32] D. Hunter and K. Lange, "A tutorial on MM algorithms," The American Statistician, vol. 58.
[33] A. Jalobeanu, N. Kingsbury, and J. Zerubia, "Image deconvolution using hidden Markov tree modeling of complex wavelet packets," IEEE Intern. Conf. on Image Processing - ICIP 2001, Thessaloniki, Greece.
[34] A. Katsaggelos, Digital Image Restoration, Springer Series in Information Sciences, vol. 23, Springer-Verlag.
[35] D. Lorenz, Wavelet Shrinkage in Signal and Image Processing: An Investigation of Relations and Equivalences, PhD Thesis, Zentrum für Technomathematik, Universität Bremen. Available at www.math.uni-bremen.de/~dlorenz.
[36] D. Lorenz, "Variational denoising in Besov spaces and interpolation of hard and soft wavelet shrinkage," Technical Report, Zentrum für Technomathematik, Universität Bremen. Available at www.math.uni-bremen.de/~dlorenz.
[37] L. Lu and C. Pearce, "Some new bounds for singular values and eigenvalues of matrix products," Annals of Operations Research, vol. 98.
[38] P. Moulin and J. Liu, "Analysis of multiresolution image denoising schemes using generalized-Gaussian and complexity priors," IEEE Trans. Inform. Theory, vol. 45.
[39] R. Neelamani, H. Choi, and R. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems," IEEE Trans. Signal Processing, vol. 52.
[40] R. Nowak and M. Figueiredo, "Fast wavelet-based image deconvolution using the EM algorithm," Proc. 35th Asilomar Conf. on Signals, Systems, and Computers, vol. 1.
[41] S. Osher, L. Rudin, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60.
[42] J. Portilla, V. Strela, M. Wainwright, and E. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Trans. Image Processing, vol. 12.
[43] R. T. Rockafellar and R. Wets, Variational Analysis, Springer-Verlag, Berlin.
[44] J.-L. Starck, E. Candès, and D. Donoho, "Astronomical image representation by the curvelet transform," Astronomy and Astrophysics, vol. 398.
[45] J.-L. Starck, M. Nguyen, and F. Murtagh, "Wavelets and curvelets for image deconvolution: a combined approach," Signal Processing, vol. 83.
[46] D. Sun and L. Qi, "Solving variational inequality problems via smoothing-nonsmooth reformulations," Journal of Computational and Applied Mathematics, vol. 129.
[47] J. Theys, Joint Spectral Radius: Theory and Approximations, PhD Thesis, Université Catholique de Louvain. Available at www.inma.ucl.ac.be/~blondel/05thesetheys.pdf.
[48] C. Wu, "On the convergence properties of the EM algorithm," The Annals of Statistics, vol. 11.
[49] C. Zalinescu, Convex Analysis in General Vector Spaces, World Scientific Publishing, Singapore, 2002.
4 th International Conference on Industrial Engineering and Industrial Management IV Congreso de Ingeniería de Organización Donostia an ebastián, etember 8 th  th Imlementation of tatistic Process Control
More informationX How to Schedule a Cascade in an Arbitrary Graph
X How to Schedule a Cascade in an Arbitrary Grah Flavio Chierichetti, Cornell University Jon Kleinberg, Cornell University Alessandro Panconesi, Saienza University When individuals in a social network
More informationWeb Application Scalability: A ModelBased Approach
Coyright 24, Software Engineering Research and Performance Engineering Services. All rights reserved. Web Alication Scalability: A ModelBased Aroach Lloyd G. Williams, Ph.D. Software Engineering Research
More informationConcurrent Program Synthesis Based on Supervisory Control
010 American Control Conference Marriott Waterfront, Baltimore, MD, USA June 30July 0, 010 ThB07.5 Concurrent Program Synthesis Based on Suervisory Control Marian V. Iordache and Panos J. Antsaklis Abstract
More informationManaging specific risk in property portfolios
Managing secific risk in roerty ortfolios Andrew Baum, PhD University of Reading, UK Peter Struemell OPC, London, UK Contact author: Andrew Baum Deartment of Real Estate and Planning University of Reading
More informationDiscrete Stochastic Approximation with Application to Resource Allocation
Discrete Stochastic Aroximation with Alication to Resource Allocation Stacy D. Hill An otimization roblem involves fi nding the best value of an obective function or fi gure of merit the value that otimizes
More informationA Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm
International Journal of Comuter Theory and Engineering, Vol. 7, No. 4, August 2015 A Virtual Machine Dynamic Migration Scheduling Model Based on MBFD Algorithm Xin Lu and Zhuanzhuan Zhang Abstract This
More informationMind the Duality Gap: Logarithmic regret algorithms for online optimization
Mind the Duality Gap: Logarithmic regret algorithms for online optimization Sham M. Kakade Toyota Technological Institute at Chicago sham@ttic.org Shai ShalevShartz Toyota Technological Institute at
More informationLocal Connectivity Tests to Identify Wormholes in Wireless Networks
Local Connectivity Tests to Identify Wormholes in Wireless Networks Xiaomeng Ban Comuter Science Stony Brook University xban@cs.sunysb.edu Rik Sarkar Comuter Science Freie Universität Berlin sarkar@inf.fuberlin.de
More informationAlpha Channel Estimation in High Resolution Images and Image Sequences
In IEEE Comuter Society Conference on Comuter Vision and Pattern Recognition (CVPR 2001), Volume I, ages 1063 68, auai Hawaii, 11th 13th Dec 2001 Alha Channel Estimation in High Resolution Images and Image
More informationBranchandPrice for Service Network Design with Asset Management Constraints
BranchandPrice for Servicee Network Design with Asset Management Constraints Jardar Andersen Roar Grønhaug Mariellee Christiansen Teodor Gabriel Crainic December 2007 CIRRELT200755 BranchandPrice
More informationStatic and Dynamic Properties of Smallworld Connection Topologies Based on Transitstub Networks
Static and Dynamic Proerties of Smallworld Connection Toologies Based on Transitstub Networks Carlos Aguirre Fernando Corbacho Ramón Huerta Comuter Engineering Deartment, Universidad Autónoma de Madrid,
More informationStability Improvements of Robot Control by Periodic Variation of the Gain Parameters
Proceedings of the th World Congress in Mechanism and Machine Science ril ~4, 4, ianin, China China Machinery Press, edited by ian Huang. 868 Stability Imrovements of Robot Control by Periodic Variation
More informationMachine Learning with Operational Costs
Journal of Machine Learning Research 14 (2013) 19892028 Submitted 12/11; Revised 8/12; Published 7/13 Machine Learning with Oerational Costs Theja Tulabandhula Deartment of Electrical Engineering and
More informationTworesource stochastic capacity planning employing a Bayesian methodology
Journal of the Oerational Research Society (23) 54, 1198 128 r 23 Oerational Research Society Ltd. All rights reserved. 165682/3 $25. www.algravejournals.com/jors Tworesource stochastic caacity lanning
More informationChE 120B Lumped Parameter Models for Heat Transfer and the Blot Number
ChE 0B Lumed Parameter Models for Heat Transfer and the Blot Number Imagine a slab that has one dimension, of thickness d, that is much smaller than the other two dimensions; we also assume that the slab
More informationFailure Behavior Analysis for Reliable Distributed Embedded Systems
Failure Behavior Analysis for Reliable Distributed Embedded Systems Mario Tra, Bernd Schürmann, Torsten Tetteroo {tra schuerma tetteroo}@informatik.unikl.de Deartment of Comuter Science, University of
More informationThe Economics of the Cloud: Price Competition and Congestion
Submitted to Oerations Research manuscrit The Economics of the Cloud: Price Cometition and Congestion Jonatha Anselmi Basque Center for Alied Mathematics, jonatha.anselmi@gmail.com Danilo Ardagna Di. di
More informationSupplemental material for: Dynamic jump intensities and risk premiums: evidence from S&P500 returns and options
Sulemental material for: Dynamic jum intensities and risk remiums: evidence from S&P5 returns and otions Peter Christo ersen University of Toronto, CBS and CREATES Kris Jacobs University of Houston and
More informationEffect Sizes Based on Means
CHAPTER 4 Effect Sizes Based on Means Introduction Raw (unstardized) mean difference D Stardized mean difference, d g Resonse ratios INTRODUCTION When the studies reort means stard deviations, the referred
More informationResponseTime Control of a ProcessorSharing System Using Virtualized Server Environments Kjaer, Martin Ansbjerg; Kihl, Maria; Robertsson, Anders
Resonseime Control of a Sharing System Using Virtualized Server Environments Kjaer, Martin Ansbjerg; Kihl, Maria; Robertsson, Anders Published in: Proc. of the 7th IFAC World Congress Published: 8
More informationA Note on Integer Factorization Using Lattices
A Note on Integer Factorization Using Lattices Antonio Vera To cite this version: Antonio Vera A Note on Integer Factorization Using Lattices [Research Reort] 2010, 12 HAL Id: inria00467590
More information2D Modeling of the consolidation of soft soils. Introduction
D Modeling of the consolidation of soft soils Matthias Haase, WISMUT GmbH, Chemnitz, Germany Mario Exner, WISMUT GmbH, Chemnitz, Germany Uwe Reichel, Technical University Chemnitz, Chemnitz, Germany Abstract:
More informationSynopsys RURAL ELECTRICATION PLANNING SOFTWARE (LAPER) Rainer Fronius Marc Gratton Electricité de France Research and Development FRANCE
RURAL ELECTRICATION PLANNING SOFTWARE (LAPER) Rainer Fronius Marc Gratton Electricité de France Research and Develoment FRANCE Synosys There is no doubt left about the benefit of electrication and subsequently
More informationThe impact of metadata implementation on webpage visibility in search engine results (Part II) q
Information Processing and Management 41 (2005) 691 715 www.elsevier.com/locate/inforoman The imact of metadata imlementation on webage visibility in search engine results (Part II) q Jin Zhang *, Alexandra
More informationMatrix Norms. Tom Lyche. September 28, Centre of Mathematics for Applications, Department of Informatics, University of Oslo
Matrix Norms Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo September 28, 2009 Matrix Norms We consider matrix norms on (C m,n, C). All results holds for
More informationwhere a, b, c, and d are constants with a 0, and x is measured in radians. (π radians =
Introduction to Modeling 3.61 3.6 Sine and Cosine Functions The general form of a sine or cosine function is given by: f (x) = asin (bx + c) + d and f(x) = acos(bx + c) + d where a, b, c, and d are constants
More informationPOISSON PROCESSES. Chapter 2. 2.1 Introduction. 2.1.1 Arrival processes
Chater 2 POISSON PROCESSES 2.1 Introduction A Poisson rocess is a simle and widely used stochastic rocess for modeling the times at which arrivals enter a system. It is in many ways the continuoustime
More informationThe Economics of the Cloud: Price Competition and Congestion
Submitted to Oerations Research manuscrit (Please, rovide the manuscrit number!) Authors are encouraged to submit new aers to INFORMS journals by means of a style file temlate, which includes the journal
More informationOn Multicast Capacity and Delay in Cognitive Radio Mobile Adhoc Networks
On Multicast Caacity and Delay in Cognitive Radio Mobile Adhoc Networks Jinbei Zhang, Yixuan Li, Zhuotao Liu, Fan Wu, Feng Yang, Xinbing Wang Det of Electronic Engineering Det of Comuter Science and Engineering
More informationNOISE ANALYSIS OF NIKON D40 DIGITAL STILL CAMERA
NOISE ANALYSIS OF NIKON D40 DIGITAL STILL CAMERA F. Mojžíš, J. Švihlík Detartment of Comuting and Control Engineering, ICT Prague Abstract This aer is devoted to statistical analysis of Nikon D40 digital
More informationMinimizing the Communication Cost for Continuous Skyline Maintenance
Minimizing the Communication Cost for Continuous Skyline Maintenance Zhenjie Zhang, Reynold Cheng, Dimitris Paadias, Anthony K.H. Tung School of Comuting National University of Singaore {zhenjie,atung}@com.nus.edu.sg
More informationPRIME NUMBERS AND THE RIEMANN HYPOTHESIS
PRIME NUMBERS AND THE RIEMANN HYPOTHESIS CARL ERICKSON This minicourse has two main goals. The first is to carefully define the Riemann zeta function and exlain how it is connected with the rime numbers.
More informationThe Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling
The Fundamental Incomatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsamling Michael Betancourt Deartment of Statistics, University of Warwick, Coventry, UK CV4 7A BETANAPHA@GMAI.COM Abstract
More informationOn Software Piracy when Piracy is Costly
Deartment of Economics Working aer No. 0309 htt://nt.fas.nus.edu.sg/ecs/ub/w/w0309.df n Software iracy when iracy is Costly Sougata oddar August 003 Abstract: The ervasiveness of the illegal coying of
More informationCABRS CELLULAR AUTOMATON BASED MRI BRAIN SEGMENTATION
XI Conference "Medical Informatics & Technologies"  2006 Rafał Henryk KARTASZYŃSKI *, Paweł MIKOŁAJCZAK ** MRI brain segmentation, CT tissue segmentation, Cellular Automaton, image rocessing, medical
More informationComputing the Most Probable String with a Probabilistic Finite State Machine
Comuting the Most Probable String with a Probabilistic Finite State Machine Colin de la Higuera Université de Nantes, CNRS, LINA, UMR6241, F44000, France cdlh@univnantesfr Jose Oncina De de Lenguajes
More informationIntroduction to Inverse Problems (2 lectures)
Introduction to Inverse Problems (2 lectures) Summary Direct and inverse problems Examples of direct (forward) problems Deterministic and statistical points of view Illposed and illconditioned problems
More informationSOME PROPERTIES OF EXTENSIONS OF SMALL DEGREE OVER Q. 1. Quadratic Extensions
SOME PROPERTIES OF EXTENSIONS OF SMALL DEGREE OVER Q TREVOR ARNOLD Abstract This aer demonstrates a few characteristics of finite extensions of small degree over the rational numbers Q It comrises attemts
More informationarxiv:0711.4143v1 [hepth] 26 Nov 2007
Exonentially localized solutions of the KleinGordon equation arxiv:711.4143v1 [heth] 26 Nov 27 M. V. Perel and I. V. Fialkovsky Deartment of Theoretical Physics, State University of SaintPetersburg,
More information12. Inner Product Spaces
1. Inner roduct Spaces 1.1. Vector spaces A real vector space is a set of objects that you can do to things ith: you can add to of them together to get another such object, and you can multiply one of
More informationNumerical Methods For Image Restoration
Numerical Methods For Image Restoration CIRAM Alessandro Lanza University of Bologna, Italy Faculty of Engineering CIRAM Outline 1. Image Restoration as an inverse problem 2. Image degradation models:
More informationTimeCost TradeOffs in ResourceConstraint Project Scheduling Problems with Overlapping Modes
TimeCost TradeOffs in ResourceConstraint Proect Scheduling Problems with Overlaing Modes François Berthaut Robert Pellerin Nathalie Perrier Adnène Hai February 2011 CIRRELT201110 Bureaux de Montréal
More informationThe risk of using the Q heterogeneity estimator for software engineering experiments
Dieste, O., Fernández, E., GarcíaMartínez, R., Juristo, N. 11. The risk of using the Q heterogeneity estimator for software engineering exeriments. The risk of using the Q heterogeneity estimator for
More informationSECTION 6: FIBER BUNDLES
SECTION 6: FIBER BUNDLES In this section we will introduce the interesting class o ibrations given by iber bundles. Fiber bundles lay an imortant role in many geometric contexts. For examle, the Grassmaniann
More informationSoftmax Model as Generalization upon Logistic Discrimination Suffers from Overfitting
Journal of Data Science 12(2014),563574 Softmax Model as Generalization uon Logistic Discrimination Suffers from Overfitting F. Mohammadi Basatini 1 and Rahim Chiniardaz 2 1 Deartment of Statistics, Shoushtar
More informationSTATISTICAL CHARACTERIZATION OF THE RAILROAD SATELLITE CHANNEL AT KUBAND
STATISTICAL CHARACTERIZATION OF THE RAILROAD SATELLITE CHANNEL AT KUBAND Giorgio Sciascia *, Sandro Scalise *, Harald Ernst * and Rodolfo Mura + * DLR (German Aerosace Centre) Institute for Communications
More informationFREQUENCIES OF SUCCESSIVE PAIRS OF PRIME RESIDUES
FREQUENCIES OF SUCCESSIVE PAIRS OF PRIME RESIDUES AVNER ASH, LAURA BELTIS, ROBERT GROSS, AND WARREN SINNOTT Abstract. We consider statistical roerties of the sequence of ordered airs obtained by taking
More information4 Perceptron Learning Rule
Percetron Learning Rule Objectives Objectives  Theory and Examles  Learning Rules  Percetron Architecture 3 SingleNeuron Percetron 5 MultileNeuron Percetron 8 Percetron Learning Rule 8 Test Problem
More informationMoving Objects Tracking in Video by Graph Cuts and Parameter Motion Model
International Journal of Comuter Alications (0975 8887) Moving Objects Tracking in Video by Grah Cuts and Parameter Motion Model Khalid Housni, Driss Mammass IRF SIC laboratory, Faculty of sciences Agadir
More informationTitle: Stochastic models of resource allocation for services
Title: Stochastic models of resource allocation for services Author: Ralh Badinelli,Professor, Virginia Tech, Deartment of BIT (235), Virginia Tech, Blacksburg VA 2461, USA, ralhb@vt.edu Phone : (54) 2317688,
More informationFrom Simulation to Experiment: A Case Study on Multiprocessor Task Scheduling
From to Exeriment: A Case Study on Multirocessor Task Scheduling Sascha Hunold CNRS / LIG Laboratory Grenoble, France sascha.hunold@imag.fr Henri Casanova Det. of Information and Comuter Sciences University
More informationComputational Optical Imaging  Optique Numerique.  Deconvolution 
Computational Optical Imaging  Optique Numerique  Deconvolution  Winter 2014 Ivo Ihrke Deconvolution Ivo Ihrke Outline Deconvolution Theory example 1D deconvolution Fourier method Algebraic method
More informationPredicate Encryption Supporting Disjunctions, Polynomial Equations, and Inner Products
Predicate Encrytion Suorting Disjunctions, Polynomial Equations, and Inner Products Jonathan Katz Amit Sahai Brent Waters Abstract Predicate encrytion is a new aradigm for ublickey encrytion that generalizes
More informationCRITICAL AVIATION INFRASTRUCTURES VULNERABILITY ASSESSMENT TO TERRORIST THREATS
Review of the Air Force Academy No (23) 203 CRITICAL AVIATION INFRASTRUCTURES VULNERABILITY ASSESSMENT TO TERRORIST THREATS Cătălin CIOACĂ Henri Coandă Air Force Academy, Braşov, Romania Abstract: The
More informationAdaptive Online Gradient Descent
Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650
More informationComplex Conjugation and Polynomial Factorization
Comlex Conjugation and Polynomial Factorization Dave L. Renfro Summer 2004 Central Michigan University I. The Remainder Theorem Let P (x) be a olynomial with comlex coe cients 1 and r be a comlex number.
More informationReDispatch Approach for Congestion Relief in Deregulated Power Systems
ReDisatch Aroach for Congestion Relief in Deregulated ower Systems Ch. Naga Raja Kumari #1, M. Anitha 2 #1, 2 Assistant rofessor, Det. of Electrical Engineering RVR & JC College of Engineering, Guntur522019,
More informationEfficient Training of Kalman Algorithm for MIMO Channel Tracking
Efficient Training of Kalman Algorithm for MIMO Channel Tracking Emna Eitel and Joachim Seidel Institute of Telecommunications, University of Stuttgart Stuttgart, Germany Abstract In this aer, a Kalman
More informationFactoring Variations in Natural Images with Deep Gaussian Mixture Models
Factoring Variations in Natural Images with Dee Gaussian Mixture Models Aäron van den Oord, Benjamin Schrauwen Electronics and Information Systems deartment (ELIS), Ghent University {aaron.vandenoord,
More informationJoint Production and Financing Decisions: Modeling and Analysis
Joint Production and Financing Decisions: Modeling and Analysis Xiaodong Xu John R. Birge Deartment of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois 60208,
More informationStorage Basics Architecting the Storage Supplemental Handout
Storage Basics Architecting the Storage Sulemental Handout INTRODUCTION With digital data growing at an exonential rate it has become a requirement for the modern business to store data and analyze it
More informationAn optimal batch size for a JIT manufacturing system
Comuters & Industrial Engineering 4 (00) 17±136 www.elsevier.com/locate/dsw n otimal batch size for a JIT manufacturing system Lutfar R. Khan a, *, Ruhul. Sarker b a School of Communications and Informatics,
More informationBeyond the F Test: Effect Size Confidence Intervals and Tests of Close Fit in the Analysis of Variance and Contrast Analysis
Psychological Methods 004, Vol. 9, No., 164 18 Coyright 004 by the American Psychological Association 108989X/04/$1.00 DOI: 10.1037/108989X.9..164 Beyond the F Test: Effect Size Confidence Intervals
More informationA Complete Operational Amplifier Noise Model: Analysis and Measurement of Correlation Coefficient
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I: FUNDAMENTAL THEORY AND APPLICATION, VOL. 47, NO. 3, MARCH 000 40 for ractical alication, oening the ath for widesread adotion of the clockgating technique
More information