An existence result for a nonconvex variational problem via regularity


Irene Fonseca, Nicola Fusco, Paolo Marcellini

Abstract. Local Lipschitz continuity of minimizers of certain integrals of the Calculus of Variations is obtained when the integrands are convex with respect to the gradient variable, but are not necessarily uniformly convex. In turn, these regularity results entail existence of minimizers of variational problems with nonhomogeneous integrands nonconvex with respect to the gradient variable. The x-dependence, explicitly appearing in the integrands, adds significant technical difficulties to the proof.

Keywords: nonconvex variational problems, uniform convexity, regularity, implicit differential equations

Mathematics Subject Classification: Primary 49J45, 49K20. Secondary 35F30, 35R70.

1 Introduction

In this paper we establish existence and regularity of minimizers of energy integrals of the type

    ∫_Ω f(x, Du(x)) dx,    (1.1)

subject to Dirichlet boundary conditions. The main feature of our problem is the fact that the integrand f = f(x, ξ) is not convex with respect to the gradient variable ξ.

In recent years the study of nonconvex variational problems has undergone remarkable developments, motivated in part by advances in the study of material stability and instability. Contemporary issues such as phase transitions in certain alloys (see [3], [4]), nucleation [20], the onset of microstructure, and optimal design problems for thin films [18] require a good understanding of the existence of (classical or generalized) equilibrium solutions for nonconvex energies. In addition, qualitative information on quasistatic solutions (e.g. regularity, hysteresis, oscillatory behavior) is needed in order to develop the evolutionary
framework and, in particular, to search for the dynamical evolution of phase boundaries. These issues have challenged traditional theories. Within this setting, a relevant example has been considered by Ball–James [3], [4], who studied the two-potential-wells problem: minimize

    ∫_Ω f(Du(x)) dx,

where u : Ω ⊂ ℝ^n → ℝ^n is a vector-valued function and f : ℝ^{n×n} → [0, +∞) is identically zero on two distinct potential wells SO(n)ξ, SO(n)η and f > 0 elsewhere. Here ξ, η ∈ ℝ^{n×n} and SO(n) stands for the special orthogonal group. The existence of minimizers for the two-potential-wells problem has been obtained in two dimensions (i.e., n = 2) by Dacorogna–Marcellini [10] and by Müller–Šverák [26] (for the case n = 3 see also Dolzmann–Kirchheim–Müller–Šverák [15]). Nothing is known in higher dimensions or for general integrands f as in (1.1).

In this paper we restrict ourselves to the scalar-valued case, as a starting point to approach the vectorial setting. Also, the scalar-valued case is still far from being completely understood, unless the integrand f depends only on the gradient variable ξ and some special assumptions are made on the boundary data (see the references quoted below). Here we consider general boundary data u_0 ∈ W^{1,p}(Ω), p > 1, and we allow the nonconvex integrands f to depend explicitly on x as in (1.1). In the proofs of the attainment results presented below the x-dependence introduces substantial technical difficulties.

The proof of the existence results for nonconvex variational problems considered in this paper hinges on the local Lipschitz continuity of minimizers of the relaxed problem associated to the bipolar f** of f. These regularity results are presented in Section 2, and they apply to minimizers of some integrals of the Calculus of Variations with integrands f(x, ξ) convex with respect to ξ ∈ ℝ^n, but not everywhere uniformly convex; hence, we believe that the regularity results presented in Section 2 are of interest in themselves.
In Section 3 we consider the variational problem

    inf { ∫_Ω f(x, Du(x)) dx : u ∈ u_0 + W^{1,p}_0(Ω) },    (1.2)

where u_0 ∈ W^{1,p}(Ω) is a given boundary datum and f = f(x, ξ) is a continuous function satisfying growth conditions similar to those considered in the previous section, so as to ensure Lipschitz continuity of minimizers of the relaxed problem. The most relevant fact here is that f may be nonconvex with respect to the variable ξ ∈ ℝ^n. It is known that the variational problem (1.2) may lack a minimizer (see Marcellini [22]; see also [6], [12], [21]). In the examples of nonexistence the following condition, expressed in terms of the bipolar f** of f, is violated: for every x the function f**(x, ·) is affine on the set

    A(x) = {ξ ∈ ℝ^n : f(x, ξ) > f**(x, ξ)},
i.e., there exist a continuous function q and a vector field m of class C^1, defined in the open set A := {x ∈ Ω : A(x) ≠ ∅}, such that

    f**(x, ξ) = q(x) + ⟨m(x), ξ⟩,  x ∈ A, ξ ∈ A(x).    (1.3)

We also assume that the boundary (more precisely, the part of the boundary contained in Ω) of the set

    {x ∈ A : div m(x) = 0}    (1.4)

has zero (n-dimensional) measure. In this paper we prove that (1.3), (1.4) (see also the more general assumptions made in Section 3.2) are sufficient conditions for the existence of minimizers of the variational problem (1.2). We emphasize that we do not require any condition on the vector field m other than (1.4); in particular, we do not assume that the vector field m has null divergence. We notice that, while condition (1.3) is necessary for guaranteeing the existence of minimizers (see [22], [6], [21]), we do not know whether condition (1.4) may be removed.

Existence theorems without convexity assumptions have been widely investigated in the one-dimensional case n = 1 (see [1] for an extensive list of references). Theorem 3.1 in Section 3 is specific to the case n ≥ 2, and it is an extension of some analogous results, obtained under more restrictive assumptions, by Marcellini [22], Mascolo–Schianchi [23], Cellina [7] and Friesecke [21]. In particular, Theorem 3.1 is an extension of related results recently proved by Sychev [28] and Zagatti [29] for integrands independent of x and under a strong growth assumption on f which ensures the almost everywhere differentiability of minimizers, i.e., p > n; by Celada–Perrotta [5] for p > 1; and by Dacorogna–Marcellini in [13], [12]. Finally, we recall that Marcellini [22] pointed out the necessity of the affinity condition (1.3) for the function f** on the set where f ≠ f** to guarantee existence of minimizers. Cellina [6], [7] and Friesecke [21] proved the necessity and sufficiency of the affinity condition for linear boundary data u_0.
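The role of the detachment set A(x) and of the affinity condition (1.3) can be illustrated numerically. The following sketch is ours, not from the paper, and all names in it are illustrative: it computes the convex envelope (bipolar) of a one-dimensional double well on a grid via a lower convex hull; the envelope turns out to be affine, indeed identically zero, exactly on the interval where f > f**, which is the one-dimensional analogue of (1.3) with q ≡ 0 and m ≡ 0.

```python
# Numerical sketch (not from the paper): convex envelope f** of the
# double well f(xi) = (xi^2 - 1)^2 on a 1-D grid.  On the detachment
# set A = {xi : f(xi) > f**(xi)} = (-1, 1) the envelope is affine
# (here identically 0), illustrating condition (1.3) with q = 0, m = 0.

def f(xi):
    return (xi * xi - 1.0) ** 2

def lower_hull(points):
    """Lower convex hull of (x, y) points sorted by x (monotone chain)."""
    hull = []
    for p in points:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it does not lie strictly below the
            # segment from hull[-2] to p (non-left turn)
            if (x2 - x1) * (p[1] - y1) - (p[0] - x1) * (y2 - y1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

def bipolar(xi, hull):
    """Piecewise-linear interpolation of the lower hull = f**(xi)."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= xi <= x2:
            t = (xi - x1) / (x2 - x1)
            return (1 - t) * y1 + t * y2
    raise ValueError("xi outside grid")

grid = [-2.0 + 4.0 * k / 400 for k in range(401)]
hull = lower_hull([(x, f(x)) for x in grid])

# f** vanishes on [-1, 1] (the affine piece) and f** = f outside it.
print(bipolar(0.0, hull))   # 0.0
print(bipolar(1.5, hull))   # f(1.5) = 1.5625
```

The same monotone-chain construction works for any continuous coercive profile; only the grid and the well positions are specific to this toy example.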
The explicit dependence of the integrand on the variable x was first considered by Mascolo–Schianchi in [24], assuming that the divergence of the vector field m in (1.3) is identically equal to zero in Ω, in addition to other strong assumptions on the boundary data u_0. Also, in [27] Raymond studied a case where the divergence of the vector field m in (1.3) is everywhere different from zero in Ω, and some type of explicit dependence on u is allowed.

2 Local Lipschitz Continuity

2.1 Preliminary results

Let f : ℝ^n → [0, +∞) be a continuous function such that

    0 ≤ f(ξ) ≤ L(1 + |ξ|^p),    (2.1)
where L > 0, p > 1. We say that f is uniformly convex at infinity if there exist R, ν > 0 such that, if the segment with endpoints ξ_1, ξ_2 (which we denote by [ξ_1, ξ_2]) is contained in the complement of the closed ball B_R, then

    f((ξ_1 + ξ_2)/2) ≤ (1/2) f(ξ_1) + (1/2) f(ξ_2) − ν (1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|².    (2.2)

Note that (2.2) is equivalent to

    f((ξ_1 + ξ_2)/2) ≤ (1/2) f(ξ_1) + (1/2) f(ξ_2) − ν′ (|ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|²

for some ν′ > 0, since |ξ_1|, |ξ_2| > R > 0. If the inequality (2.2) is satisfied for all ξ_1, ξ_2 ∈ ℝ^n, then we say that f is uniformly convex in ℝ^n. A form of uniform convexity at infinity was also considered by Mascolo and Schianchi in [25]. The lemma below is proved in [19].

Lemma 2.1 If γ > −1/2 then there exist positive constants c_1 = c_1(γ), c_2 = c_2(γ) such that, for all ξ, η ∈ ℝ^n,

    c_1 (1 + |ξ|² + |η|²)^γ ≤ ∫_0^1 t (1 + |tξ + (1−t)η|²)^γ dt ≤ c_2 (1 + |ξ|² + |η|²)^γ.

The following result provides two conditions which are equivalent to uniform convexity in ℝ^n.

Proposition 2.2 Let f : ℝ^n → [0, +∞) be a continuous function satisfying (2.1). The following conditions are equivalent:

(i) f is uniformly convex in ℝ^n;

(ii) f(ξ) = c_1 ν (1 + |ξ|²)^{p/2} + g(ξ), for some c_1 = c_1(p) > 0, where g is a convex function such that 0 ≤ g(ξ) ≤ L(1 + |ξ|^p) for all ξ;

(iii) ∫_Q [f(ξ + Dφ(x)) − f(ξ)] dx ≥ c_2 ν ∫_Q (1 + |ξ|² + |Dφ|²)^{(p−2)/2} |Dφ(x)|² dx, for all ξ ∈ ℝ^n and φ ∈ C^1_0(Q), where Q = (0, 1)^n and c_2 = c_2(p) is a suitable constant.

Remark 2.3 Condition (iii) in Proposition 2.2 is related to the notion of uniform quasiconvexity, introduced by Evans [16] and later studied by Evans–Gariepy [17].

Proof of Proposition 2.2. (i) ⟹ (ii). We define g(ξ) := f(ξ) − c_1 ν (1 + |ξ|²)^{p/2}, where c_1 will be chosen later, and we show that g is convex. Given ξ_1, ξ_2 ∈ ℝ^n we set ξ := (ξ_1 + ξ_2)/2. From (i) we easily get

    (1/2) g(ξ_1) + (1/2) g(ξ_2) ≥ g(ξ) + ν (1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|² + c_1 ν [(1 + |ξ|²)^{p/2} − (1/2)(1 + |ξ_1|²)^{p/2} − (1/2)(1 + |ξ_2|²)^{p/2}].
Thus the assertion follows immediately from the fact that there exists a constant c = c(p) such that

    (1 + |ξ_1|²)^{p/2} + (1 + |ξ_2|²)^{p/2} − 2(1 + |ξ|²)^{p/2} ≤ c (1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|²,

and by choosing c_1 := 2/c. To establish this inequality we write, for i = 1, 2,

    h(ξ_i) = h(ξ) + ⟨Dh(ξ), ξ_i − ξ⟩ + ∫_0^1 (1 − t) ⟨D²h(ξ + t(ξ_i − ξ))(ξ_i − ξ), ξ_i − ξ⟩ dt,

where h(ξ) := (1 + |ξ|²)^{p/2}, yielding

    (1 + |ξ_i|²)^{p/2} ≤ (1 + |ξ|²)^{p/2} + p(1 + |ξ|²)^{(p−2)/2} ⟨ξ_i − ξ, ξ⟩ + c(p) |ξ_i − ξ|² ∫_0^1 (1 − t)(1 + |ξ + t(ξ_i − ξ)|²)^{(p−2)/2} dt.

It now suffices to sum the above inequalities for i = 1, 2 and apply Lemma 2.1.

(ii) ⟹ (iii). From Lemma 2.1 we easily get that (iii) holds for the function ξ ↦ (1 + |ξ|²)^{p/2}. Hence the general case follows from Jensen's inequality applied to g.

(iii) ⟹ (i). See the proof of Proposition 2.5 with θ = 1/2 in [19].

Lemma 2.4 Let f : ℝ^n → [0, +∞) be a C² function. Then f satisfies (2.2) if and only if there exists a constant c_0 such that for all ξ ∈ ℝ^n \ B_R

    ⟨D²f(ξ)λ, λ⟩ ≥ c_0 ν (1 + |ξ|²)^{(p−2)/2} |λ|²  for all λ ∈ ℝ^n.    (2.3)

The proof of Lemma 2.4 is straightforward and is left to the reader.

Lemma 2.5 Let f : ℝ^n → [0, +∞) be a continuous function satisfying (2.1) and (2.2). Then there exist R_0, ν_0, C_0 > 0, depending only on R, ν and L, such that for all ξ ∈ ℝ^n \ B_{R_0} there exists q_ξ ∈ ℝ^n with |q_ξ| ≤ C_0(1 + |ξ|^{p−1}) and

    f(η) ≥ f(ξ) + ⟨q_ξ, η − ξ⟩ + ν_0 (1 + |ξ|² + |η|²)^{(p−2)/2} |ξ − η|²  for all η ∈ ℝ^n.    (2.4)

Moreover, if |ξ| > R_0, then f**(ξ) = f(ξ).

Proof. For 0 < ε < 1 set f_ε := ρ_ε ∗ f, where ρ_ε(η) := ε^{−n} ρ(η/ε) and ρ(η) = ρ̂(|η|) is a positive radially symmetric mollifier with support equal to the closed unit ball, with ρ(η) > 0 if |η| < 1 and ∫ ρ(η) dη = 1. From (2.2) it follows easily that if [ξ_1, ξ_2] ⊂ ℝ^n \ B_{R+1} and ξ := (ξ_1 + ξ_2)/2, then

    (1/2)[f_ε(ξ_1) + f_ε(ξ_2)] ≥ f_ε(ξ) + ν |ξ_1 − ξ_2|² ∫_{B_1} ρ(η)(1 + |ξ_1 + εη|² + |ξ_2 + εη|²)^{(p−2)/2} dη.
The integral above can be estimated from below by

    ∫_{{⟨ξ,η⟩ ≥ 0}} ρ(η)(1 + |ξ_1|² + |ξ_2|² + 2ε²|η|² + 4ε⟨ξ, η⟩)^{(p−2)/2} dη ≥ (1/2)(1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2}

if p ≥ 2, and by

    ∫_{B_1} ρ(η)(1 + |ξ_1|² + |ξ_2|² + 2ε²|η|² + 4ε|ξ||η|)^{(p−2)/2} dη ≥ 5^{(p−2)/2} (1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2}

when 1 < p < 2. In both cases

    (1/2)[f_ε(ξ_1) + f_ε(ξ_2)] ≥ f_ε(ξ) + cν(1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|²;

hence, by Lemma 2.4, if |ξ| > R + 1, then

    ⟨D²f_ε(ξ)λ, λ⟩ ≥ cν (1 + |ξ|²)^{(p−2)/2} |λ|²,  λ ∈ ℝ^n.    (2.5)

Moreover, it can be easily checked that 0 ≤ f_ε(ξ) ≤ C(L)(1 + |ξ|^p) for all ξ, and a simple argument based on the convexity of f_ε in ℝ^n \ B_{R+1} shows that there exists a constant C_1(L, R) such that

    |Df_ε(ξ)| ≤ C_1(1 + |ξ|^{p−1}),  ξ ∈ ℝ^n \ B_{R+2}.    (2.6)

We claim that there exists R_0 >> 1 such that, if |ξ| > R_0, then

    f_ε(η) ≥ f_ε(ξ) + ⟨Df_ε(ξ), η − ξ⟩ + c(1 + |ξ|² + |η|²)^{(p−2)/2} |ξ − η|²    (2.7)

for all η ∈ ℝ^n. Assume that (2.7) holds. Notice that, if |ξ| > R_0 > R + 2, then by (2.6) there exists a sequence (ε_h) converging to 0 such that Df_{ε_h}(ξ) → q_ξ for some q_ξ ∈ ℝ^n with |q_ξ| ≤ C_1(1 + |ξ|^{p−1}). Hence (2.4) follows from (2.7), letting ε go to 0⁺. The equality f**(ξ) = f(ξ) for |ξ| > R_0 then follows at once from (2.4).

The remainder of the proof concerns the claim (2.7). Fix ξ such that |ξ| > R_0, with R_0 > 2(R + 3) to be chosen later, and denote by C_ξ the open cone with vertex at ξ, tangent to the ball B_{R+3}.

Case 1: If η ∈ ℝ^n \ C_ξ, or η ∈ C_ξ \ B_{R+3} and ⟨η, ξ⟩ ≥ 0, then f_ε is convex along the line t ↦ ξ + t(η − ξ) provided R_0 is sufficiently large, and (2.7) follows from (2.5).

Case 2: If η ∈ B_{R+3} we consider ξ̄ := ξ(R + 3)/|ξ| and a constant M := C(L)(1 + (R + 3)^p) such that 0 ≤ f_ε(η) ≤ M for all η ∈ B_{R+3} and all ε ∈ (0, 1); by Case 1 we may apply (2.7) to ξ̄ (notice that ξ̄ ∈ ∂B_{R+3} and ⟨ξ̄, ξ⟩ > 0), thus getting

    f_ε(η) = f_ε(ξ̄) + f_ε(η) − f_ε(ξ̄)
    ≥ f_ε(ξ) + ⟨Df_ε(ξ), ξ̄ − ξ⟩ − M + c(1 + (R + 3)² + |ξ|²)^{(p−2)/2} (|ξ| − R − 3)²
    ≥ f_ε(ξ) + ⟨Df_ε(ξ), η − ξ⟩ − |Df_ε(ξ)| |η − ξ̄| − M + c(1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|².

The estimate (2.7) for η and ξ now follows from the previous inequalities, together with the estimate

    |Df_ε(ξ)| |η − ξ̄| + M ≤ (1/2) c(1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|²,

and the latter holds by virtue of (2.6), provided |ξ| > R_0 and R_0 > 2(R + 3) is sufficiently large.

Case 3: Finally, let us assume that η ∈ C_ξ \ B_{R+3} with ⟨η, ξ⟩ < 0. In this case we have |ξ − η| > (|ξ| + |η|)/2 and, denoting by η̄ the projection of η on the boundary of the cone, and by α_ξ the half-angle at the vertex of C_ξ,

    |η − η̄| ≤ |ξ − η| sin α_ξ = ((R + 3)/|ξ|) |ξ − η| ≤ (1/2) |ξ − η|.    (2.8)

Notice that, if R_0 is sufficiently large, then [η̄, η] ⊂ ℝ^n \ B_{R+2}; therefore we may use (2.6) to estimate f_ε(η) − f_ε(η̄). This, together with (2.7) applied to η̄ ∈ ∂C_ξ, yields

    f_ε(η) = f_ε(η̄) + f_ε(η) − f_ε(η̄)
    ≥ f_ε(ξ) + ⟨Df_ε(ξ), η̄ − ξ⟩ − C_1(1 + |η|^{p−1} + |η̄|^{p−1}) |η − η̄| + c(1 + |η̄|² + |ξ|²)^{(p−2)/2} |ξ − η̄|².

Since, by (2.8), (1/2)|ξ − η| ≤ |ξ − η̄| ≤ (3/2)|ξ − η|, for any p > 1 we easily have

    c(1 + |η̄|² + |ξ|²)^{(p−2)/2} |ξ − η̄|² ≥ c(p)(1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|²,

and, using (2.6) once more, we obtain

    f_ε(η) ≥ f_ε(ξ) + ⟨Df_ε(ξ), η − ξ⟩    (2.9)
    − C_1 |η − η̄| [(1 + |ξ|^{p−1}) + (1 + |η|^{p−1} + |η̄|^{p−1})] + c(p)(1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|².

By virtue of (2.8), and recalling that |ξ| + |η| < 2|ξ − η|, we have

    C_1 |η − η̄| [(1 + |ξ|^{p−1}) + (1 + |η|^{p−1} + |η̄|^{p−1})]
    ≤ c ((R + 3)/|ξ|) |ξ − η| (1 + |η|^{p−1} + |ξ|^{p−1})
    ≤ c ((R + 3)/|ξ|) (1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|²
    ≤ (c(p)/2)(1 + |η|² + |ξ|²)^{(p−2)/2} |ξ − η|²

if |ξ| > R_0, with R_0 large enough. This, together with (2.9), concludes the proof of (2.7).

Remark 2.6 Let f satisfy (2.1) and (2.2), and fix a point ξ_0 such that R_0 < |ξ_0| < 2R_0. Applying (2.4) with ξ = ξ_0, and recalling that |q_{ξ_0}| ≤ C_0(1 + |ξ_0|^{p−1}), for all η such that |η| > 2R_0 we obtain f(η) ≥ c_1(R_0, ν_0, C_0) |η|^p − c_2(R_0, ν_0, C_0). Hence f(ξ) ≥ c_1 |ξ|^p − c_2 for all ξ ∈ ℝ^n, with c_1, c_2 depending only on R, ν, L.
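To see a concrete integrand covered by these hypotheses, one can verify the Hessian criterion of Lemma 2.4 on the model integrand; this computation is ours, added for illustration, and is not part of the paper.

```latex
% Illustration: h(\xi) := (1+|\xi|^2)^{p/2}, p > 1, satisfies the
% Hessian bound (2.3) on all of R^n.  One computes
\[
D^2 h(\xi) \;=\; p\,(1+|\xi|^2)^{\frac{p}{2}-1}\,\mathrm{Id}
   \;+\; p(p-2)\,(1+|\xi|^2)^{\frac{p}{2}-2}\,\xi\otimes\xi ,
\]
% hence, for every \lambda \in \mathbb{R}^n,
\[
\langle D^2 h(\xi)\lambda,\lambda\rangle
 \;=\; p\,(1+|\xi|^2)^{\frac{p}{2}-2}
   \Bigl[(1+|\xi|^2)\,|\lambda|^2 + (p-2)\,\langle\xi,\lambda\rangle^2\Bigr]
 \;\ge\; p\,\min\{1,\,p-1\}\,(1+|\xi|^2)^{\frac{p-2}{2}}\,|\lambda|^2 ,
\]
% using \langle\xi,\lambda\rangle^2 \le |\xi|^2|\lambda|^2 when
% 1 < p < 2, and discarding the nonnegative second term when p \ge 2.
% This is (2.3) with c_0\,\nu = p\,\min\{1,p-1\}, which is why the
% decomposition (ii) of Proposition 2.2 singles out this integrand.
```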
2.2 A regularity result

In this section we assume that f : Ω × ℝ^n → [0, +∞) is a continuous function satisfying the growth condition

    0 ≤ f(x, ξ) ≤ L(1 + |ξ|^p),  (x, ξ) ∈ Ω × ℝ^n,    (2.10)

for some L > 0. Let us denote by f** = f**(x, ξ) the bipolar of f, that is, the convex envelope of f(x, ·). We assume that f** is continuous and that f is uniformly convex at infinity with respect to ξ (see (2.2)), i.e., there exist R, ν > 0 such that, if the segment [ξ_1, ξ_2] is contained in the complement of the closed ball B_R, then for all x ∈ Ω

    f(x, (ξ_1 + ξ_2)/2) ≤ (1/2) f(x, ξ_1) + (1/2) f(x, ξ_2) − ν(1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|².    (2.11)

Finally, we assume that, if |ξ| > R, then the map x ↦ f_ξ(x, ξ) is weakly differentiable and

    |D_x f_ξ(x, ξ)| ≤ L(1 + |ξ|^{p−1}),  (x, ξ) ∈ Ω × (ℝ^n \ B_R).    (2.12)

If u ∈ W^{1,p}_loc(Ω) and A ⊂ Ω is open, then we set

    F(u, A) := ∫_A f(x, Du(x)) dx.

The main result of this section is Theorem 2.7 below. We recall that u is said to be a local minimizer of F in Ω if F(u, B_R(x_0)) ≤ F(v, B_R(x_0)) whenever B_R(x_0) ⊂⊂ Ω and v ∈ u + W^{1,p}_0(B_R(x_0)).

Theorem 2.7 Let f : Ω × ℝ^n → [0, +∞) be a continuous function satisfying (2.10), (2.11) and (2.12). If u ∈ W^{1,p}_loc(Ω) is a local minimizer of the functional F, then u is locally Lipschitz continuous in Ω. Moreover, there exists a constant C_0, depending on L, p, ν, R, such that, if B_r(x_0) ⊂⊂ Ω, then

    sup_{B_{r/2}(x_0)} |Du|^p ≤ C_0 (1 + ⨍_{B_r(x_0)} |Du|^p dx).    (2.13)

We first show in Lemma 2.8 that, provided we already know that u is locally Lipschitz, (2.13) holds with a constant C_0 depending only on L, p, ν, R. Once the a priori estimate (2.13) is established, the regularity result is obtained via an approximation argument.

Lemma 2.8 Let f satisfy the assumptions of Theorem 2.7. Assume, in addition, that f is C² and that, for all x ∈ Ω and ξ, λ ∈ ℝ^n,

    D_{ξ_i ξ_j} f(x, ξ) λ_i λ_j ≥ ε_0 (1 + |ξ|²)^{(p−2)/2} |λ|²,    (2.14)
and that u ∈ W^{1,p}_loc(Ω) is a locally Lipschitz local minimizer of F in Ω. Then (2.13) holds with a constant depending only on L, p, ν, R and, in particular, independent of ε_0.

Proof. Step 1: From Lemma 2.4 we have that for every x ∈ Ω and ξ, λ ∈ ℝ^n with |ξ| > R,

    D_{ξ_i ξ_j} f(x, ξ) λ_i λ_j ≥ cν(1 + |ξ|²)^{(p−2)/2} |λ|².    (2.15)

Since ũ(y) := u(x_0 + ry)/r is a local minimizer in (Ω − x_0)/r of the functional F̃(v) := ∫_{(Ω − x_0)/r} f(x_0 + ry, Dv(y)) dy, which still satisfies the assumptions of Theorem 2.7, it is clear that in order to prove (2.13) we may always assume, with no loss of generality, that B_r(x_0) = B_1(0).

Since u satisfies the Euler equation for F,

    ∫_Ω D_{ξ_i} f(x, Du) D_i φ dx = 0,  φ ∈ C^1_0(Ω),

using (2.14) and the fact that the functions D_{ξ_i ξ_j} f(x, Du(x)) are locally bounded in Ω (which follows from the C² regularity of f, together with the fact that Du is locally bounded), we deduce that u ∈ W^{2,2}_loc(Ω) by a standard difference quotient argument. We fix s ∈ {1, ..., n}, η ∈ C^1_0(Ω) with 0 ≤ η ≤ 1, and ψ ∈ C²(Ω), and in the above Euler equation we take φ = η² D_s ψ to obtain

    ∫_Ω D_{ξ_i} f(x, Du) D_s(D_i ψ) η² dx = −2 ∫_Ω η D_{ξ_i} f(x, Du) D_s ψ D_i η dx.

Integrating by parts in the first integral, we have

    ∫_Ω D_{ξ_i ξ_j} f(x, Du) D_j(D_s u) D_i ψ η² dx = 2 ∫_Ω η D_{ξ_i} f(x, Du) D_s ψ D_i η dx − ∫_Ω D_{x_s ξ_i} f(x, Du) D_i ψ η² dx − 2 ∫_Ω η D_{ξ_i} f(x, Du) D_i ψ D_s η dx    (2.16)

for all functions ψ ∈ W^{1,2}(Ω). Set

    V_+(x) := 1 + R² + Σ_{h=1}^n [(D_h u(x) − R)^+]²,  V_−(x) := 1 + R² + Σ_{h=1}^n [(D_h u(x) + R)^−]²,

and notice that there exist constants c_1, c_2, depending only on n, such that

    c_1 (V_+(x) + V_−(x)) ≤ 1 + R² + |Du(x)|² ≤ c_2 (V_+(x) + V_−(x)).    (2.17)

Let ψ := V_+^β (D_s u − R)^+, where β ≥ 0. By (2.10) and the convexity of f(x, ·), |D_ξ f(x, ξ)| ≤ c(1 + |ξ|²)^{(p−1)/2}, and (2.16) yields

    ∫_Ω D_{ξ_i ξ_j} f(x, Du) D_j(D_s u − R)^+ D_i(D_s u − R)^+ V_+^β η² dx
    + β ∫_Ω D_{ξ_i ξ_j} f(x, Du) D_j(D_s u − R)^+ (D_s u − R)^+ D_i(Σ_{h=1}^n [(D_h u − R)^+]²) V_+^{β−1} η² dx
    ≤ c ∫_Ω η(η + |Dη|)(1 + |Du|²)^{(p−1)/2} |D(D_s u − R)^+| V_+^β dx
    + cβ ∫_Ω η(η + |Dη|)(1 + |Du|²)^{(p−1)/2} |D(Σ_{h=1}^n [(D_h u − R)^+]²)| (D_s u − R)^+ V_+^{β−1} dx.

Since all the integrals are evaluated on the set where |Du| > R, summing over s, using (2.15), the fact that (D_s u − R)^+ ≤ V_+^{1/2}, and Young's inequality, it follows easily that

    ∫_Ω (1 + |Du|²)^{(p−2)/2} |D(Σ_{h=1}^n [(D_h u − R)^+]²)|² V_+^{β−1} η² dx ≤ (c/ν) ∫_Ω (1 + |Du|²)^{p/2} V_+^β (η² + |Dη|²) dx,

where the constant c depends only on n, p, L. Since the integral on the left-hand side is evaluated on the set where |Du| > R, this last inequality is in turn equivalent to the following one:

    ∫_Ω (1 + R² + |Du|²)^{(p−2)/2} |D(Σ_{h=1}^n [(D_h u − R)^+]²)|² V_+^{β−1} η² dx ≤ (c/ν) ∫_Ω (1 + R² + |Du|²)^{p/2} V_+^β (η² + |Dη|²) dx.

Inserting ψ = V_−^β (D_s u + R)^− in (2.16), and using a similar argument, we also get

    ∫_Ω (1 + R² + |Du|²)^{(p−2)/2} |D(Σ_{h=1}^n [(D_h u + R)^−]²)|² V_−^{β−1} η² dx ≤ (c/ν) ∫_Ω (1 + R² + |Du|²)^{p/2} V_−^β (η² + |Dη|²) dx.

Therefore, adding the last two inequalities and using (2.17), we arrive at

    ∫_Ω V^{(p−2)/2} [|DV_+|² V_+^{β−1} + |DV_−|² V_−^{β−1}] η² dx ≤ (c/ν) ∫_Ω V^{p/2+β} (η² + |Dη|²) dx,

where V := max{V_+, V_−}.

Step 2: From the inequality above we deduce that

    ∫_Ω |D(V^{p/4+β/2})|² η² dx ≤ c(p + β)² ∫_Ω V^{p/2+β} (η² + |Dη|²) dx.

In turn, this implies that

    ∫_Ω |D(V^{p/4+β/2} η)|² dx ≤ c(p + β)² ∫_Ω V^{p/2+β} (η² + |Dη|²) dx,
where the constant c depends only on L, p, n, R, ν. Setting γ := p/4 + β/2 ≥ p/4, using the Sobolev–Poincaré inequality and the arbitrariness of β ≥ 0, we get that, for any γ ≥ p/4,

    ‖V^γ η‖_{L^{2χ}(Ω)} ≤ cγ ‖V^γ (η + |Dη|)‖_{L^2(Ω)},

where χ := n/(n − 2) if n ≥ 3, or any number > 1 if n = 2. Considering the sequence of radii r_i := 1/2 + 1/2^i for i = 1, 2, ..., we apply the inequality above with γ = γ_i := (p/4) χ^{i−1}, and choose η ∈ C^1_0(B_{r_i}) such that η = 1 on B_{r_{i+1}}, 0 ≤ η ≤ 1, |Dη| ≤ c 2^i. We obtain

    ‖V‖_{L^{2γ_{i+1}}(B_{r_{i+1}})} ≤ (c^i γ_i)^{1/γ_i} ‖V‖_{L^{2γ_i}(B_{r_i})}.

Iterating the above formula yields, for every i,

    ‖V‖_{L^{2γ_{i+1}}(B_{1/2})} ≤ C ‖V‖_{L^{p/2}(B_1)},

where C = Π_{i=1}^∞ (c^i γ_i)^{1/γ_i} < +∞. Therefore, letting i go to +∞ and using (2.17), we obtain (2.13).

Remark 2.9 It follows immediately from the proof that the estimate (2.13) may be generalized to read

    sup_{B_ρ(x_0)} |Du|^p ≤ C(ρ) (1 + ⨍_{B_r(x_0)} |Du|^p dx),

for all 0 < ρ < r, where C(ρ) depends only on L, p, ν, R and ρ.

We are now in a position to prove Theorem 2.7, by means of the following approximation lemma.

Lemma 2.10 Let g : ℝ^n → [0, +∞) be a C² convex function such that for all ξ ∈ ℝ^n

    0 ≤ g(ξ) ≤ L(1 + |ξ|^p),

where p > 1, L > 0, and assume that there exist R, ν > 0 such that, if |ξ| > R and λ ∈ ℝ^n,

    D_{ij} g(ξ) λ_i λ_j ≥ ν(1 + |ξ|²)^{(p−2)/2} |λ|².

Then there exist a constant c = c(n, p) and a sequence g_h of C²(ℝ^n) convex functions such that

(a) 0 ≤ g_h(ξ) ≤ cL(1 + |ξ|^p) for all ξ ∈ ℝ^n;

(b) for any h there exists ε_h > 0 such that, for all ξ, λ ∈ ℝ^n,

    ε_h (1 + |ξ|²)^{(p−2)/2} |λ|² ≤ D_{ij} g_h(ξ) λ_i λ_j ≤ ε_h^{−1} (1 + |ξ|²)^{(p−2)/2} |λ|²;

(c) D_{ij} g_h(ξ) λ_i λ_j ≥ cν(1 + |ξ|²)^{(p−2)/2} |λ|², for λ ∈ ℝ^n, |ξ| > R + 1;

(d) g_h → g uniformly on compact subsets of ℝ^n as h → ∞.
Proof. The proof of this lemma can be obtained by arguing as in Steps 2 and 3 of the proof of Lemma 3.4 in [19], with the obvious simple modifications needed in the present case. We therefore omit the details.

Proof of Theorem 2.7. Notice that if u is a local minimizer of F, then u is also a local minimizer of the relaxed functional v ↦ ∫_Ω f**(x, Dv) dx, where, for all x, f**(x, ξ) is the bipolar of ξ ↦ f(x, ξ). Indeed, if v ∈ u + W^{1,p}_0(B_r(x_0)), then

    ∫_{B_r(x_0)} f**(x, Dv) dx = inf { lim inf_h ∫_{B_r(x_0)} f(x, Dv_h) dx : v_h − v ⇀ 0 weakly in W^{1,p}_0(B_r(x_0)) }.

Also, by virtue of Lemma 2.5, the function f** satisfies the assumptions of Theorem 2.7. Therefore, with no loss of generality, we may assume that f is convex in ξ.

Step 1: Let us assume that

    f(x, ξ) = Σ_{i=1}^N a_i(x) g_i(ξ),

where, for i = 1, ..., N, the function g_i ∈ C²(ℝ^n) satisfies the assumptions of Lemma 2.10 for some L, R, ν > 0, and

    ⟨D²g_i(ξ)λ, λ⟩ ≥ ε_0 (1 + |ξ|²)^{(p−2)/2} |λ|²

for all ξ, λ ∈ ℝ^n and for some ε_0 > 0. Moreover, let us assume that, for all i, the function a_i is a nonnegative C^1 function such that ‖Da_i‖_∞ ≤ M, and that γ^{−1} < Σ_{i=1}^N a_i(x) < γ for all x ∈ Ω and for some positive constant γ. For every i let g_{i,h} be a sequence of C²(ℝ^n) functions such that g_{i,h} → g_i uniformly on compact subsets of ℝ^n, satisfying conditions (a), (b) and (c) of Lemma 2.10, and set, for all (x, ξ) ∈ Ω × ℝ^n,

    f_h(x, ξ) := Σ_{i=1}^N a_i(x) g_{i,h}(ξ).

From Remark 2.6 it follows easily that there exist constants c_1, c_2, depending only on L, R, ν, γ, such that, for all (x, ξ) and any h,

    f(x, ξ), f_h(x, ξ) ≥ c_1 |ξ|^p − c_2.    (2.18)

Given B_r(x_0) ⊂⊂ Ω, we denote by u_h the solution of the problem

    min { ∫_{B_r(x_0)} f_h(x, Dv) dx : v ∈ u + W^{1,p}_0(B_r(x_0)) }.    (2.19)

Since the functions g_{i,h} satisfy condition (b) of Lemma 2.10, standard elliptic regularity theory implies that u_h ∈ C^{1,α} ∩ W^{2,2}_loc for any h. From the assumptions on f, from the approximation provided by Lemma 2.10, and from
(2.18), it follows that the sequence u_h is bounded in W^{1,p}(B_r(x_0)). Moreover, by Lemma 2.8 (see Remark 2.9), for all ρ < r we obtain

    sup_{B_ρ(x_0)} |Du_h|^p ≤ C (1 + ⨍_{B_r(x_0)} |Du_h|^p dx),    (2.20)

where the constant C ultimately depends only on L, p, R, ν, γ, M and ρ, but not on h. Hence we may assume, up to a subsequence, that u_h ⇀ u* weakly* in W^{1,∞}(B_ρ(x_0)) for any ρ < r. Since f_h → f uniformly on compact subsets of Ω × ℝ^n and the integrand f is convex, for any ρ < r we have

    ∫_{B_ρ(x_0)} f(x, Du*) dx ≤ lim inf_h ∫_{B_ρ(x_0)} f(x, Du_h) dx = lim inf_h ∫_{B_ρ(x_0)} f_h(x, Du_h) dx
    ≤ lim inf_h ∫_{B_r(x_0)} f_h(x, Du) dx = ∫_{B_r(x_0)} f(x, Du) dx,

where we used the fact that u_h is a solution of (2.19). Letting ρ → r and recalling that u is a local minimizer of the functional F, we deduce that u* is also a minimizer of F in B_r(x_0). Since the functional F is strictly convex, we have u* = u. Passing to the limit as h → +∞ in (2.20), we conclude that u is also locally Lipschitz. Indeed, using the minimality of u_h and (2.20), we get

    sup_{B_{r/2}(x_0)} |Du|^p ≤ lim inf_h (sup_{B_{r/2}(x_0)} |Du_h|^p)
    ≤ c lim inf_h (1 + ⨍_{B_r(x_0)} |Du_h|^p dx) ≤ c lim inf_h (1 + ⨍_{B_r(x_0)} f_h(x, Du_h) dx)
    ≤ c lim inf_h (1 + ⨍_{B_r(x_0)} f_h(x, Du) dx) ≤ c (1 + ⨍_{B_r(x_0)} |Du|^p dx).

Step 2: Let us now assume that f ∈ C²(Ω̄ × ℝ^n) and that there exists ε_0 > 0 such that

    D_{ξ_i ξ_j} f(x, ξ) λ_i λ_j ≥ ε_0 (1 + |ξ|²)^{(p−2)/2} |λ|²

for all (x, ξ) ∈ Ω × ℝ^n and any λ ∈ ℝ^n. Fix an open set A ⊂⊂ Ω and let us prove that (2.13) holds for any ball B_r(x_0) ⊂⊂ A (with a constant C_0 not depending on A). To this aim let ψ ∈ C^∞_0(ℝ^n) be a cut-off function such that 0 ≤ ψ(x) ≤ 1 for all x, supp ψ ⊂ (−3/2, 3/2)^n, and ψ(x) ≡ 1 if x ∈ [−1, 1]^n. For any h ∈ ℕ we denote by Q_{i,h}(x_{i,h}) the standard covering of ℝ^n with closed cubes, centered at x_{i,h}, with sides parallel to the coordinate axes, side length equal to 2/h, and pairwise disjoint interiors. Then, for any i, h, we set

    ψ_{i,h}(x) := ψ(h(x − x_{i,h})),  σ_h(x) := Σ_i ψ_{i,h}(x),  φ_{i,h}(x) := ψ_{i,h}(x)/σ_h(x).
Finally, for all h large enough that 2√n/h < dist(A; ∂Ω), and for every x ∈ A, ξ ∈ ℝ^n, we set

    f_h(x, ξ) := Σ_i φ_{i,h}(x) f(x_{i,h}, ξ).

Notice that the above sum is actually finite (indeed it consists of at most 3^n terms), and that each function f_h is of the type considered in Step 1. Moreover, we claim that the functions f_h satisfy the assumptions of Lemma 2.8 with suitable constants L, ε_0, R, ν not depending on h. The verification of the claim in the case of assumptions (2.10), (2.11) (or, equivalently, (2.15)) and (2.14) follows easily from the corresponding assumptions on f. We limit ourselves to showing that for any h

    |D_{xξ} f_h(x, ξ)| ≤ cL(1 + |ξ|^{p−1}),  (x, ξ) ∈ A × (ℝ^n \ B_R),    (2.21)

where L is the constant appearing in (2.12) (relative to f) and c is a constant depending only on n and ψ, but not on h. Let us fix x_0 ∈ A and ξ ∈ ℝ^n \ B_R. By construction there exist at most 3^n cubes, Q_{j_1,h}(x_{j_1,h}), ..., Q_{j_{3^n},h}(x_{j_{3^n},h}), such that for any x in a suitable neighborhood U of x_0

    f_h(x, ξ) = Σ_{l=1}^{3^n} φ_{j_l,h}(x) f(x_{j_l,h}, ξ),  Σ_{l=1}^{3^n} φ_{j_l,h}(x) = 1.

Therefore, for all x ∈ U we have

    D_{xξ} f_h(x, ξ) = Σ_{l=1}^{3^n} D_x φ_{j_l,h}(x) ⊗ D_ξ f(x_{j_l,h}, ξ)
    = Σ_{l=2}^{3^n} D_x φ_{j_l,h}(x) ⊗ [D_ξ f(x_{j_l,h}, ξ) − D_ξ f(x_{j_1,h}, ξ)].    (2.22)

In view of assumption (2.12), we have that for all l,

    |D_ξ f(x_{j_l,h}, ξ) − D_ξ f(x_{j_1,h}, ξ)| ≤ c(n) L h^{−1} (1 + |ξ|^{p−1}).

On the other hand, for any j there exists a set of indices I_j containing j, with #(I_j) = 3^n, such that, for all x ∈ ℝ^n,

    D_x φ_{j,h}(x) = D_x ψ_{j,h}(x)/σ_h(x) − (ψ_{j,h}(x)/σ_h(x)²) Σ_{k ∈ I_j} D_x ψ_{k,h}(x).

Therefore, since by construction σ_h(x) ≥ 1 for all x, we have, for all x ∈ ℝ^n and any j,

    |D_x φ_{j,h}(x)| ≤ (3^n + 1) h ‖D_x ψ‖_∞.

In view of the above estimates and of (2.22), we may conclude that for all (x, ξ) ∈ A × (ℝ^n \ B_R) and any h

    |D_{xξ} f_h(x, ξ)| ≤ c(n) L ‖D_x ψ‖_∞ (1 + |ξ|^{p−1}),
and thus (2.21) follows. Finally, notice that f_h(x, ξ) → f(x, ξ) uniformly on A × K for every compact subset K of ℝ^n. Hence the rest of the proof goes as in Step 1, since in this case too the function ξ ↦ f(x, ξ) is strictly convex for all x ∈ A.

Step 3: Let f satisfy the assumptions of Theorem 2.7. Fix an open set A ⊂⊂ Ω, an infinitesimal sequence ε_h of positive numbers, and a positive symmetric mollifier ρ. For h large enough we set, for all (x, ξ) ∈ A × ℝ^n,

    f_h(x, ξ) := ∫_{B_1 × B_1} ρ(y) ρ(η) f(x + ε_h y, ξ + ε_h η) dy dη + ε_h (1 + |ξ|²)^{p/2},

where B_1 is the open unit ball in ℝ^n. Notice that each function f_h is of the type considered in Step 2 and that f_h(x, ξ) → f(x, ξ) uniformly on any set of the type A × K, with K ⊂ ℝ^n compact. Moreover, the functions f_h satisfy the assumptions of Theorem 2.7 with the corresponding constants L, R, ν bounded from above and away from zero. As in Step 1, given a ball B_r(x_0) ⊂⊂ A we denote by u_h the solution of the problem

    min { ∫_{B_r(x_0)} f_h(x, Dv) dx : v ∈ u + W^{1,p}_0(B_r(x_0)) }.

From the assumptions on f and the construction of the functions f_h it follows easily that the sequence u_h is bounded in W^{1,p}(B_r(x_0)). Moreover, by Step 2, for all ρ < r we have

    sup_{B_ρ(x_0)} |Du_h|^p ≤ C (1 + ⨍_{B_r(x_0)} |Du_h|^p dx),

where the constant C depends only on L, p, R, ν and ρ, but not on h. Hence we may assume that, up to a subsequence, u_h ⇀ u* weakly* in W^{1,∞}(B_ρ(x_0)) for any ρ < r. As in Step 1 we have again that u* is also a minimizer of F in B_r(x_0). However, in the present case the functional F is not necessarily strictly convex, hence we may not conclude as before that u* = u in B_r(x_0).

Set E_0 := {x ∈ B_r(x_0) : |Du*(x)| + |Du(x)| > 2R_0}, where R_0 is the constant provided by Lemma 2.5. If E_0 has positive measure, then from the convexity of f(x, ·) and Remark 2.6 we have, setting ũ := (u + u*)/2,

    ∫_{B_ρ(x_0) \ E_0} f(x, Dũ) dx ≤ (1/2) ∫_{B_ρ(x_0) \ E_0} f(x, Du) dx + (1/2) ∫_{B_ρ(x_0) \ E_0} f(x, Du*) dx.
Also, applying (2.4) of Lemma 2.5 twice, first with ξ := Dũ and η := Du, and then with ξ := Dũ and η := Du*, and adding up these two inequalities, yields

    ∫_{B_ρ(x_0) ∩ E_0} f(x, Dũ) dx < (1/2) ∫_{B_ρ(x_0) ∩ E_0} f(x, Du) dx + (1/2) ∫_{B_ρ(x_0) ∩ E_0} f(x, Du*) dx.

Adding these two inequalities we get a contradiction with the minimality of u and u*. Therefore E_0 has zero measure. Applying Step 2 to the functions u_h,
(2.4) to the functions f_h, and using the minimality of u_h, we deduce that

    sup_{B_{r/2}(x_0)} |Du*|^p ≤ lim inf_h (sup_{B_{r/2}(x_0)} |Du_h|^p)
    ≤ C lim inf_h (1 + ⨍_{B_r(x_0)} |Du_h|^p dx) ≤ c lim inf_h (1 + ⨍_{B_r(x_0)} f_h(x, Du_h) dx)
    ≤ c lim inf_h (1 + ⨍_{B_r(x_0)} f_h(x, Du*) dx) ≤ c (1 + ⨍_{B_r(x_0)} |Du*|^p dx).

The result then follows, since |Du(x)| + |Du*(x)| ≤ 2R_0 for a.e. x.

3 Attainment of minima for nonconvex problems

Here we give an existence result for the variational problem

    inf { ∫_Ω f(x, Du(x)) dx : u ∈ u_0 + W^{1,p}_0(Ω) },    (3.1)

where Ω is a bounded open set of ℝ^n and u_0 ∈ W^{1,p}(Ω), p > 1. Throughout this section we assume that f : Ω × ℝ^n → ℝ is a continuous function satisfying the growth condition

    c_1 |ξ|^p − c_2 ≤ f(x, ξ) ≤ L(1 + |ξ|^p),  (x, ξ) ∈ Ω × ℝ^n,    (3.2)

for some constants c_1, c_2, L > 0, and that f is uniformly convex at infinity with respect to ξ, i.e., there exist R, ν > 0 such that, if the segment [ξ_1, ξ_2] is contained in ℝ^n \ B_R, then

    f(x, (ξ_1 + ξ_2)/2) ≤ (1/2) f(x, ξ_1) + (1/2) f(x, ξ_2) − ν(1 + |ξ_1|² + |ξ_2|²)^{(p−2)/2} |ξ_1 − ξ_2|²    (3.3)

for every x ∈ Ω (see (2.2)). Notice that, if 0 ≤ f(x, ξ) ≤ L(1 + |ξ|^p), then condition (3.3) implies the coercivity inequality on the left-hand side of (3.2) (see Remark 2.6). In addition, we assume that there exists the distributional derivative D_{xξ} f(x, ξ) and that

    |D_{xξ} f(x, ξ)| ≤ L(1 + |ξ|^{p−1}),  x ∈ Ω, |ξ| > R,    (3.4)

provided L in (3.2) is chosen to be sufficiently large (see (2.12)). Let us denote by f** = f**(x, ξ) the bipolar of f, that is, the convex envelope of f(x, ·) (i.e., the largest function convex in ξ which is less than or equal to f(x, ·) on ℝ^n). We assume that f** is continuous; hence, for any x ∈ Ω, the set

    A(x) := {ξ ∈ ℝ^n : f(x, ξ) > f**(x, ξ)}    (3.5)
is open. We shall prove the existence of a minimizer for problem (3.1) under the main assumption that f**(x, ·) is affine on each connected component of A(x). However, in order to present the argument of the proof in a simplified setting, we shall first treat the case where f**(x, ·) is affine (with the same slope) on the whole set A(x). We will refer to this situation as the model case. The proof of this case contains all the ideas and technical tools which are needed to treat the general situation in which f**(x, ·) is affine (with possibly different slopes) on each connected component of A(x).

3.1 The model case

As before, f : Ω × ℝ^n → ℝ is a continuous function satisfying (3.2)–(3.4). We assume that f** : Ω × ℝ^n → ℝ is continuous, and we denote by A(x) the set defined in (3.5) and by A := {x ∈ Ω : A(x) ≠ ∅}. Since f and f** are continuous functions, A is open. Here we consider the case where f**(x, ·) is affine in A(x); more precisely, we assume that there exist a function q ∈ C^0(A) and a vector field m ∈ C^1(A; ℝ^n) such that

    f**(x, ξ) = q(x) + ⟨m(x), ξ⟩,  x ∈ A, ξ ∈ A(x).    (3.6)

We also assume that the boundary of the set where the divergence of m vanishes is negligible, i.e.,

    meas(∂{x ∈ A : div m(x) = 0}) = 0.    (3.7)

Finally, for every x ∈ A, we set

    E(x) := {ξ ∈ ℝ^n : f**(x, ξ) = q(x) + ⟨m(x), ξ⟩};

note that, by the growth conditions in (3.2) and by the assumption that f is uniformly convex at infinity with respect to ξ, the set E(x) is bounded uniformly for x ∈ A (see also Lemma 2.5). We assume there exists an increasing function ω : [0, +∞) → [0, +∞), with ω(t) = 0 if and only if t = 0, such that, if x ∈ A, ξ ∈ E(x), η ∈ ℝ^n \ E(x), then

    f**(x, (ξ + η)/2) ≤ (1/2) f**(x, ξ) + (1/2) f**(x, η) − ω(|ξ − η|).    (3.8)

The main result of this section is the following existence theorem.

Theorem 3.1 Let f, f** : Ω × ℝ^n → ℝ be continuous functions (f not necessarily convex with respect to ξ ∈ ℝ^n). Under the above assumptions on f and f** ((3.2)–
(3.4) and (3.6)–(3.8)), for any given boundary datum u_0 ∈ W^{1,p}(Ω) the variational problem (3.1) attains its minimum. Moreover, any minimizer is of class W^{1,∞}_loc(Ω).
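Before the proof, it may help to keep in mind a simple example fitting the model-case assumptions; this illustration is ours, not taken from the paper.

```latex
% A double-well integrand fitting the model case (no x-dependence;
% (3.2) holds with p = 4):
\[
f(x,\xi) \;=\; \bigl(|\xi|^{2}-1\bigr)^{2},
\qquad
f^{**}(x,\xi) \;=\; \Bigl[\bigl(|\xi|^{2}-1\bigr)^{+}\Bigr]^{2}.
\]
% Indeed the radial profile t \mapsto (t^{2}-1)^{2} has monotone convex
% envelope equal to 0 on [0,1], so A(x) = \{\xi : |\xi| < 1\} for every
% x, and (3.6) holds with q \equiv 0 and m \equiv 0.  Then
% \mathrm{div}\, m \equiv 0, the set in (3.7) is all of A = \Omega, and
% \partial A \cap \Omega = \emptyset, so (3.7) holds trivially; the set
% E(x) is the closed unit ball.
```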
The proof of Theorem 3.1 is obtained using the same method as in the work of Dacorogna–Marcellini [13], [12], who considered integrands independent of x. Our proof, however, relies on some new lemmas. The first one concerns the relaxed variational problem

    inf { ∫_Ω f**(x, Du(x)) dx : u ∈ u_0 + W^{1,p}_0(Ω) }.    (3.9)

Lemma 3.2 The minimum of the relaxed variational problem (3.9) is attained. Moreover, there exist a minimizer v ∈ W^{1,∞}_loc(Ω) ∩ (u_0 + W^{1,p}_0(Ω)) of (3.9) and an open set Ω_0 ⊂ Ω (possibly empty) such that

    Dv(x) ∈ A(x) for a.e. x ∈ Ω_0,  Dv(x) ∉ A(x) for a.e. x ∈ Ω \ Ω_0,    (3.10)

and div m = 0 in Ω_0.

Remark 3.3 Formally, if Dv(x) ∈ A(x), then by (3.6) f**(x, Dv(x)) = q(x) + ⟨m(x), Dv(x)⟩. Therefore, the Euler equation for v gives

    Σ_{s=1}^n (∂/∂x_s) f**_{ξ_s}(x, Dv) = div m(x) = 0,  a.e. x such that Dv(x) ∈ A(x).

Thus (3.10) would follow if we could define Ω_0 := {x : Dv(x) ∈ A(x)}. A striking feature of Lemma 3.2 is that the set Ω_0 may be chosen to be open, and so Lemma 3.2 may be considered a strong form of Euler's first variation for the minimizers. Property (3.10) is a regularity result, and in fact it follows from the regularity results obtained in Section 2. The proof of Lemma 3.2 follows the argument by Dacorogna–Marcellini in [12]. Previous arguments related to Lemma 3.2 are due to De Blasi–Pianigiani [14], Sychev [28], and Zagatti [29].

Proof of Lemma 3.2. As before, we denote by A the open subset of Ω consisting of those points x such that A(x) ≠ ∅. We split A into three sets (possibly empty):

    Ω_A^+ := {x ∈ A : div m(x) > 0},  Ω_A^− := {x ∈ A : div m(x) < 0},    (3.11)
    Ω_A^0 := {x ∈ A : div m(x) = 0}.    (3.12)

Since div m is a continuous function, Ω_A^+ ∪ Ω_A^− ∪ int Ω_A^0 is an open set and, by (3.7),

    meas(A \ (Ω_A^+ ∪ Ω_A^− ∪ int Ω_A^0)) ≤ meas(∂Ω_A^0) = 0.
By (3.2), (3.3), (3.4), Lemma 2.5, and Theorem 2.7 the relaxed variational problem (3.9) has a minimizer u* in the Sobolev class W^{1,∞}_loc(Ω) ∩ ( u_0 + W^{1,p}_0(Ω) ). Thus, by the Rademacher theorem (see, for example, Theorem 2.2.1 of [30], or Theorem 2.14 of [2]) u* is classically differentiable at almost every x ∈ Ω. Let x_0 be a point of Ω_A \ ∂Ω_A^0 where u* is differentiable. Then

    u*(x) = u*(x_0) + ⟨Du*(x_0), x − x_0⟩ + o(|x − x_0|),  x ∈ Ω.   (3.13)

Also, assume that Du*(x_0) ∈ A(x_0) = {ξ ∈ R^n : f(x_0, ξ) > f**(x_0, ξ)}. Since Ω_A and A(x) are open sets, there exists γ ∈ (0, 1) (depending on u* and x_0) such that

    x ∈ Ω_A,  ξ ∈ A(x),   (3.14)

for all (x, ξ) ∈ Ω × R^n such that

    |x − x_0| ≤ γ,  |ξ − Du*(x_0)| ≤ γ.   (3.15)

Recall that x_0 ∈ Ω_A \ ∂Ω_A^0; thus we can also assume that γ is sufficiently small so that

    x_0 ∈ Ω_A^±, |x − x_0| ≤ γ  ⟹  x ∈ Ω_A^±,
    x_0 ∈ int Ω_A^0, |x − x_0| ≤ γ  ⟹  x ∈ int Ω_A^0.   (3.16)

Choose δ ∈ (0, γ] (depending on x_0) sufficiently small such that

    |o(|x − x_0|)| ≤ (γ/2) |x − x_0|,  x ∈ B_δ(x_0), x ≠ x_0,   (3.17)

and

    |u*(x) − u*(x_0)| ≤ γ ( |Du*(x_0)| + 4γ ),  x ∈ B_δ(x_0).   (3.18)

By (3.16) and by the definition of Ω_A^+, Ω_A^−, Ω_A^0 in (3.11), (3.12), we have

    x_0 ∈ Ω_A^±  ⟹  div m(x) > 0 (resp. div m(x) < 0) for every x ∈ B_δ(x_0),
    x_0 ∈ int Ω_A^0  ⟹  div m(x) = 0 for every x ∈ B_δ(x_0).   (3.19)

For every r ∈ (0, δ], let us define in Ω the function v^r_{x_0} by

    v^r_{x_0}(x) := u*(x_0) + ⟨Du*(x_0), x − x_0⟩ ± γ (r − |x − x_0|),  x ∈ Ω,

the sign + being chosen if x_0 ∈ Ω_A^+, while the sign − is selected if x_0 ∈ Ω_A^−. If x_0 ∈ int Ω_A^0 then any sign in the definition of v^r_{x_0}(x) is a good choice; in order to fix the ideas, we choose the sign + if x_0 ∈ int Ω_A^0. Since |D|x|| = 1 for every x ∈ R^n \ {0}, we obtain

    |Dv^r_{x_0}(x) − Du*(x_0)| = γ |D|x − x_0|| = γ  a.e. x ∈ Ω,
and thus by (3.14), (3.15),

    Dv^r_{x_0}(x) ∈ A(x),  a.e. x ∈ B_δ(x_0), for every r ∈ (0, δ].   (3.20)

If x_0 ∈ Ω_A^+ ∪ int Ω_A^0 we set

    G(x_0, r) := { x ∈ B̄_δ(x_0) : v^r_{x_0}(x) ≥ u*(x) },   (3.21)

and if x_0 ∈ Ω_A^− we define

    G(x_0, r) := { x ∈ B̄_δ(x_0) : v^r_{x_0}(x) ≤ u*(x) }.   (3.22)

We claim that G(x_0, r) is a closed set satisfying

    B_{r/3}(x_0) ⊂ G(x_0, r) ⊂ B_{2r}(x_0).   (3.23)

Let us verify (3.23) when x_0 ∈ Ω_A^+ ∪ int Ω_A^0. If x ∈ B_δ(x_0) but x ∉ B_{2r}(x_0) (that is, 2r < |x − x_0| ≤ δ) then by (3.13) and (3.17) we get

    v^r_{x_0}(x) − u*(x) = γ (r − |x − x_0|) − o(|x − x_0|)
        < γ ( |x − x_0|/2 − |x − x_0| ) − o(|x − x_0|)
        = −|x − x_0| ( γ/2 + o(|x − x_0|)/|x − x_0| ) ≤ 0.

Thus v^r_{x_0}(x) − u*(x) < 0 and x ∉ G(x_0, r). On the other hand if x ∈ B_{r/3}(x_0), then |x − x_0| ≤ r/3, so that r − |x − x_0| ≥ 2r/3 ≥ 2|x − x_0|, and again by (3.13), (3.17), we obtain

    v^r_{x_0}(x) − u*(x) = γ (r − |x − x_0|) − o(|x − x_0|)
        ≥ 2γ |x − x_0| − o(|x − x_0|)
        = |x − x_0| ( 2γ − o(|x − x_0|)/|x − x_0| ) ≥ 0

and x ∈ G(x_0, r). Thus (3.23) is proved.

By (3.21) and the continuity of u* and v^r_{x_0} we have

    ∂G(x_0, r) ⊂ { x ∈ B̄_δ(x_0) : v^r_{x_0}(x) = u*(x) },

thus ∂G(x_0, r) and ∂G(x_0, r') are disjoint for r ≠ r', and we conclude that only countably many of these boundary sets can have positive measure. Therefore we can always choose a sequence r_h of real numbers such that

    r_h → 0 as h → +∞,  0 < r_h ≤ δ for every h ∈ N,  meas( ∂G(x_0, r_h) ) = 0 for every h ∈ N.   (3.24)

Let us consider the measurable subset of Ω

    M := { x_0 ∈ Ω_A^+ ∪ Ω_A^− ∪ int Ω_A^0 : u* differentiable at x_0, Du*(x_0) ∈ A(x_0) }

and consider the family of open sets

    G := { int G(x, r_h) : x ∈ M, r_h as in (3.24) }.
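The cone construction above can be sanity-checked numerically. In the sketch below all concrete values (γ, c, r, the vector Du*(x_0), the quadratic remainder) are illustrative choices, not from the paper: it verifies that the cone function satisfies |Dv^r_{x_0}(x) − Du*(x_0)| = γ away from the vertex, and that for a model minimizer u*(x) = ⟨Du*(x_0), x⟩ + c|x|², whose remainder o(t) = ct² satisfies the smallness condition (3.17) on B_δ when δ = γ/(2c), the inclusions B_{r/3}(x_0) ⊂ G(x_0, r) ⊂ B_{2r}(x_0) hold.

```python
import numpy as np

# illustrative data (not from the paper): n = 2, x0 = origin
gamma, c = 0.4, 0.2
delta = gamma / (2 * c)          # then |o(t)| = c t**2 <= (gamma/2) t for t <= delta
r = 0.5 * delta                  # any r in (0, delta]
Du_x0 = np.array([1.0, -2.0])

def v(x):
    # cone function v^r_{x0}(x) = <Du*(x0), x> + gamma * (r - |x|), sign +
    return Du_x0 @ x + gamma * (r - np.linalg.norm(x))

# 1) |Dv^r_{x0}(x) - Du*(x0)| = gamma a.e., checked by central differences
h, rng = 1e-6, np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=2)
    grad = np.array([(v(x + h * e) - v(x - h * e)) / (2 * h) for e in np.eye(2)])
    assert abs(np.linalg.norm(grad - Du_x0) - gamma) < 1e-5

# 2) with u*(x) = <Du*(x0), x> + c |x|**2 the linear parts cancel, so
# v - u* has the radial profile phi(t) = gamma * (r - t) - c * t**2
phi = lambda t: gamma * (r - t) - c * t**2
ts = np.linspace(0.0, delta, 10001)
in_G = phi(ts) >= 0.0            # radii belonging to G(x0, r)
```

Since phi is strictly decreasing, G(x_0, r) is a ball whose radius lies between r/3 and 2r, as the claim predicts.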