Measurement incompatibility: a resource for quantum steering


University of Trento
Department of Physics
Master's degree in Physics

Final Thesis

Measurement incompatibility: a resource for quantum steering

Author: Fabiano Lever
Supervisors: Otfried Gühne, Iacopo Carusotto, Roope Uola

March 2016

Physics is like sex. Sure, it may have some practical results, but that's not why we do it.

Richard P. Feynman

Acknowledgements

The writing of this thesis would not have been possible without the help that many people gave me in the last few months. I wish to thank my supervisor prof. Otfried Gühne and Roope Uola for welcoming me to Siegen, for helping me choose the subject, and for listening to my doubts and guiding me through the reasoning at the base of this work. I also thank prof. Iacopo Carusotto for granting me the opportunity to work in this field and for giving me useful advice from a different perspective. Thanks go to my family and my fiancée for supporting and helping me in these months. Finally, I want to thank all the members of the TQO group in Siegen, and particularly my office-mates Tristan and Jannik, for the fun times spent together.


Contents

Acknowledgements
Abstract

1 Introduction
  1.1 Basic notions
      Quantum mechanics as a probabilistic theory
      States as density operators
      Multipartite systems
      Qubits
  1.2 Measurements in Quantum Mechanics
      Limitations of PVMs
      Positive Operator Valued Measures (POVMs)
      Quantum Channels
      Measurement models
      Instruments

2 Compatibility of Measurements
  The usual notion of observable compatibility
  Compatibility for generalized measurements
      POVMs
      Instruments

3 Steering
  State steering
      State assemblages
      One-way steering
  Channel steering
      Channel assemblages
  Steering quantification
      Steerable weight
      Steering robustness

4 Joint Measurability and Steering
  The general connection
      Interpretation of SE observables
  Incompatibility quantifiers
      Incompatibility weight
      Incompatibility robustness
  Steering inequalities from incompatibility
  Channel steering and joint measurability
      Incompatibility of Alice's measurements
      Bob's SE observables for channel steering

5 Resource theory of incompatibility
  Theoretical framework
  Incompatibility monotones
  Incompatibility breaking channels

Conclusions and Outlooks

Bibliography

Dedicated to my Dad.


Abstract

One of the most prominent features of quantum mechanics is the incompatibility of measurements. This feature, not present in classical physics, captures the fact that when performing observations on quantum systems we find that, in general, the results of subsequent measurements of different (and classically uncorrelated) properties depend on the order in which the measurements are carried out. In this work, measurement incompatibility is regarded as a useful property of experimental setups: for example, recent results [1] have shown that incompatibility of measurements is needed for the task of quantum steering. After an introduction to selected notions of quantum theory needed in the rest of the discussion (chapter 1), chapter 2 formalizes the notion of measurement incompatibility for various objects used to describe measurement procedures. Chapter 3 offers an introduction to quantum steering and presents steering quantifiers often used in the literature. Chapter 4 explores the connection between the steerability of a setup and the compatibility of observables, and presents original results on the steerability of broadcast quantum channels. Finally, in chapter 5 a formal resource theory for incompatibility is developed, joining partial results present in the literature.


Chapter 1

Introduction

The aim of this chapter is to provide a brief introduction to the mathematical notions and formalism used in the rest of this work, as well as to give physical motivation for such a formalism.

1.1 Basic notions

Quantum mechanics as a probabilistic theory

Physical theories are formal devices used to predict the results of measurement procedures on physical systems. Such a procedure usually consists of a preparation stage, in which the system undergoes a controlled evolution, and a subsequent interaction with a measurement device, which outputs one of the possible outcomes of the experiment. This setup is depicted in the figure below.

[Figure: a preparation stage producing the state $\rho$, followed by a measurement $E$ yielding one of the outcomes $x_1, \ldots, x_n$.]

A probabilistic theory makes predictions in the form of probabilities; i.e., it assigns a probability $p(x_i \mid \rho, E)$ to each experimental outcome $x_i$, given the preparation procedure $\rho$ and the measurement $E$. Quantum mechanics is a probabilistic theory built on Hilbert spaces: operators on Hilbert spaces are used to construct the state space of the theory.

States as density operators

Preparation procedures (that is, states of the system) are formalized by unit-trace, positive¹ operators acting on a Hilbert space $\mathcal H$. These operators are called density operators; we denote the space of density operators on a Hilbert space $\mathcal H$ by $S(\mathcal H)$. This definition of states differs from the textbook notion in which states are vectors in $\mathcal H$; the main reason to introduce such operators is to unify the formal treatment of pure states and ensembles of states, as will become clear in the next section. With this definition, the notion of a pure state is recovered as that of an extremal state in the convex² space $S(\mathcal H)$: the density operator of a pure state $|\psi\rangle$, given by $\rho = |\psi\rangle\langle\psi|$, cannot be written as a convex combination of two other density operators. Note that, since $|\psi\rangle\langle\psi|$ is a projector, we have $\rho^2 = \rho$. For statistical mixtures of states, the density operator is defined as

¹ An operator $V$ is said to be positive if $\langle\psi|V|\psi\rangle \ge 0$ for all $|\psi\rangle \in \mathcal H$.
² The convexity of $S(\mathcal H)$ is trivial: any convex combination of positive, unit-trace operators is again positive and self-adjoint, and the trace condition is also trivially satisfied.

$\rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|$, where $p_i$ is the probability for the system to be in the state $|\psi_i\rangle$. Since $\sum_i p_i = 1$, the condition $\mathrm{tr}[\rho] = 1$ is satisfied. However, it is easy to check that the relation $\rho^2 = \rho$ holds only if $\rho$ describes a pure state.

Multipartite systems

In the following discussion, we will often need to formalize the notion of a multipartite state, that is, the state of a large system composed of multiple subsystems. We denote the Hilbert space of each subsystem by $\mathcal H_1, \mathcal H_2, \ldots, \mathcal H_N$, and describe the larger system by the tensor product $\mathcal H = \mathcal H_1 \otimes \mathcal H_2 \otimes \cdots \otimes \mathcal H_N$. A basis for $\mathcal H$ can be obtained as $\{|\varphi_{1,i}\rangle \otimes |\varphi_{2,j}\rangle \otimes \cdots \otimes |\varphi_{N,l}\rangle\}$, where $\{|\varphi_{1,i}\rangle\}$ is a basis for $\mathcal H_1$, and so on. Restricting ourselves to two subsystems in order to ease the notation, a general vector can be decomposed on this basis as

\[ |\psi\rangle = \sum_{i,j} \alpha_{i,j}\, |\varphi_{1,i}\rangle \otimes |\varphi_{2,j}\rangle \tag{1.1} \]

Therefore, density operators on $\mathcal H$ take the form

\[ \rho = \sum_{i,j,k,l} \alpha_{i,j}\, \alpha_{k,l}^*\, |\varphi_{1,i}\rangle\langle\varphi_{1,k}| \otimes |\varphi_{2,j}\rangle\langle\varphi_{2,l}| \tag{1.2} \]

Theorem 1.1 (Schmidt decomposition). Let $\mathcal H_1$ and $\mathcal H_2$ be Hilbert spaces of dimension $n$ and $m$ respectively; w.l.o.g. we can assume $m \le n$. For any vector $|\psi\rangle \in \mathcal H_1 \otimes \mathcal H_2$ there exist orthonormal bases $\{|\varphi_{1,i}\rangle\}$, $\{|\varphi_{2,i}\rangle\}$

for $\mathcal H_1$ and $\mathcal H_2$ and non-negative reals $\alpha_i$ such that

\[ |\psi\rangle = \sum_{i=1}^{m} \alpha_i\, |\varphi_{1,i}\rangle \otimes |\varphi_{2,i}\rangle \tag{1.3} \]

The number of non-zero coefficients $\alpha_i$ is known as the Schmidt rank of $|\psi\rangle$.

Proof. The proof of this fact can be found in [2].

Definition 1.2. Given a multipartite state $|\psi\rangle \in \mathcal H_1 \otimes \mathcal H_2$, we say that the state is entangled if its Schmidt rank is larger than 1. Otherwise we say that the state is separable.

Using the density operator notation for states, if the total system has density matrix $\rho_{12}$, the marginal density matrix of one subsystem is obtained by taking the partial trace over the other subsystem. That is,

\[ \rho_1 = \mathrm{tr}_2[\rho_{12}] = \sum_i \langle\varphi_{2,i}|\, \rho_{12}\, |\varphi_{2,i}\rangle \tag{1.4} \]

Proposition 1.3. If a bipartite state $\rho_{AB}$ has a pure marginal state $\rho_A$, then the state $\rho_{AB}$ is separable.

Proof. If the bipartite state is pure, the density matrix $\rho_{AB}$ can always be written as

\[ \rho_{AB} = |\psi\rangle\langle\psi| = \sum_{i,j} \alpha_i \alpha_j\, |\varphi_{A,i}\rangle\langle\varphi_{A,j}| \otimes |\varphi_{B,i}\rangle\langle\varphi_{B,j}| \tag{1.5} \]

where the $\alpha_i$ are the Schmidt coefficients of $|\psi\rangle$ as in Theorem 1.1. We can now write

\[ \begin{aligned} \rho_A = \mathrm{tr}_B[\rho_{AB}] &= \sum_l \langle\varphi_{B,l}|\psi\rangle\langle\psi|\varphi_{B,l}\rangle \\ &= \sum_{l,i,j} \alpha_i \alpha_j\, |\varphi_{A,i}\rangle\langle\varphi_{A,j}|\, \langle\varphi_{B,l}|\varphi_{B,i}\rangle \langle\varphi_{B,j}|\varphi_{B,l}\rangle \\ &= \sum_{l,i,j} \alpha_i \alpha_j\, |\varphi_{A,i}\rangle\langle\varphi_{A,j}|\, \delta_{li}\delta_{lj} = \sum_l \alpha_l^2\, |\varphi_{A,l}\rangle\langle\varphi_{A,l}| \end{aligned} \tag{1.6} \]

If $\rho_A$ is pure, at most one of the $\alpha_l$ can be non-zero; therefore we have that the Schmidt rank of $|\psi\rangle$ is one, and that the state is separable.

In the case of mixed states, it is easy to see that a mixed state can have a pure marginal only if it is separable; that is, only if the total state is a tensor product of a pure state with a mixed one.

Qubits

Throughout this work, we will often use the notion of qubits in proofs and examples. A qubit is any physical quantum system that can be described by a 2-dimensional Hilbert space; the simplest example of a qubit is a spin-$\tfrac12$ particle. The states are represented as two-component vectors, written in coordinates with respect to the standard basis $|0\rangle = \binom{1}{0}$ and $|1\rangle = \binom{0}{1}$. It is customary to extend the notion of Pauli matrices, usually used for spin

systems, to any kind of two-level system, by defining the operators

\[ \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \tag{1.7} \]

The vectors $|0\rangle$ and $|1\rangle$ are eigenvectors of $\sigma_z$, while the eigenvectors of $\sigma_x$ and $\sigma_y$ are, respectively, $|\pm\rangle = \tfrac{1}{\sqrt 2}(|0\rangle \pm |1\rangle)$ and $|{\pm i}\rangle = \tfrac{1}{\sqrt 2}(|0\rangle \pm i|1\rangle)$.

Bloch sphere representation

An arbitrary density operator $\rho$ representing the state of a qubit is any positive operator such that $\mathrm{tr}[\rho] = 1$. We can parametrize this class of operators by the following relation [2]:

\[ \rho = \tfrac12 \left(\mathbb 1 + \lambda\, \hat n \cdot \vec\sigma\right) \tag{1.8} \]

where $\hat n$ is a unit vector in $\mathbb R^3$, $\lambda \in [0,1]$ and $\vec\sigma = (\sigma_x, \sigma_y, \sigma_z)$. With this in mind, we can represent all qubit states as the vectors $\vec v = \lambda \hat n$ contained in a unit sphere; this representation is known as the Bloch sphere representation. Note that the points on the surface of the sphere (i.e. points for which $\lambda = 1$) are projectors, and therefore correspond to pure states; points inside the sphere are mapped to statistical mixtures.

1.2 Measurements in Quantum Mechanics

In the standard presentation of quantum mechanics, measurable quantities correspond to self-adjoint operators $\hat O$ acting on a Hilbert space $\mathcal H$. The possible outcomes of the measurement process are the eigenvalues

$\lambda_i$ of the operator, while the probability of obtaining the outcome $\lambda_i$ is given by $|\langle\psi|\varphi_i\rangle|^2$, with $\hat O |\varphi_i\rangle = \lambda_i |\varphi_i\rangle$, given that the state of the system is $|\psi\rangle\langle\psi|$. After the measurement, the system is left in the state defined by the projection of $|\psi\rangle\langle\psi|$ onto the eigenspace corresponding to the measurement outcome. The expectation value of a measurement is then obtained as

\[ \langle \hat O \rangle = \sum_{i=0}^{d} \lambda_i\, p_i = \sum_{i=0}^{d} \lambda_i\, |\langle\psi|\varphi_i\rangle|^2 = \sum_{i=0}^{d} \langle\psi|\, \lambda_i\, |\varphi_i\rangle\langle\varphi_i| \,\psi\rangle = \langle\psi|\hat O|\psi\rangle = \mathrm{tr}[\hat O \rho_\psi] \tag{1.9} \]

where $\rho_\psi = |\psi\rangle\langle\psi|$ is the density operator corresponding to the pure state $|\psi\rangle$. Relation (1.9) extends to density matrices that correspond to mixed states: taking $\rho = p\, |\psi\rangle\langle\psi| + (1-p)\, |\psi'\rangle\langle\psi'|$ we find

\[ \langle \hat O \rangle = \mathrm{tr}[\hat O \rho] = p\, \mathrm{tr}[\hat O\, |\psi\rangle\langle\psi|] + (1-p)\, \mathrm{tr}[\hat O\, |\psi'\rangle\langle\psi'|] \tag{1.10} \]

which is indeed the correct expression for the expectation value of $\hat O$ on the mixed state. Measurements of this kind are commonly referred to as Projection-Valued Measures (PVMs), as they can be defined by the set of orthonormal projectors $\{|\varphi_i\rangle\langle\varphi_i|\}$ that make up the corresponding observable. Each element of the set matches one of the possible outcomes of the measurement process.
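The chain of equalities in (1.9) is easy to verify numerically. The following minimal sketch (assuming numpy; the observable and state are arbitrary illustrative choices, not taken from the text) compares the spectral form $\sum_i \lambda_i |\langle\psi|\varphi_i\rangle|^2$ with the trace form $\mathrm{tr}[\hat O \rho_\psi]$:

```python
import numpy as np

# Observable: sigma_z, with eigenvalues +1 and -1.
O = np.diag([1.0, -1.0])

# Pure state |psi> = cos(t)|0> + sin(t)|1>  (t chosen arbitrarily).
t = 0.3
psi = np.array([np.cos(t), np.sin(t)])
rho = np.outer(psi, psi)

# <O> from the spectral decomposition: sum_i lambda_i |<psi|phi_i>|^2 ...
w, V = np.linalg.eigh(O)
expval_spectral = sum(w[i] * abs(V[:, i] @ psi) ** 2 for i in range(2))

# ... agrees with the trace form tr[O rho] of eq. (1.9).
expval_trace = np.trace(O @ rho)
print(np.isclose(expval_spectral, expval_trace))  # True
```

For this state the value is $\cos^2 t - \sin^2 t = \cos 2t$, as expected for a $\sigma_z$ measurement.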

Limitations of PVMs

PVMs, as we will see, are not the most general form of quantum measurement one can devise. Although they are usually presented as the standard formalization of measurement, there are many procedures that extract information about a quantum system that cannot be formalized by PVMs.

Example 1.1. A measurement device on a qubit acts as follows: a classical coin flip is used to decide the fate of the qubit. In one case, a projective measurement along the $z$ axis takes place; in the other case, the qubit is measured along the $x$ axis. The outcome of the process is just the outcome of the chosen measurement, but no information about which PVM has been performed is available.

[Figure 1.1: the input state $\rho$ is sent, with probability $p = 0.5$ each, to a $\sigma_z$ or a $\sigma_x$ measurement.]

Defining³ $E_+ = \tfrac12(P_0 + P_+)$ and $E_- = \tfrac12(P_1 + P_-)$, and calling $\rho$ the state of the input qubit, one can easily check that the outcome probabilities for this setup are given by

\[ p_\pm = \mathrm{tr}[E_\pm \rho] \tag{1.11} \]

Unfortunately $E_\pm^2 \neq E_\pm$, and therefore the set $\{E_\pm\}$ is not a PVM. A closer look at the definitions of $E_\pm$ reveals that the above procedure can be thought of as a statistical mixture of the two PVMs $\{P_0, P_1\}$ and $\{P_+, P_-\}$.

³ $P_i = |i\rangle\langle i|$; $i = 0, 1, +, -$.
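The two claims of example 1.1 (the effects sum to the identity, yet are not projectors) can be checked directly. A minimal sketch, assuming numpy:

```python
import numpy as np

# Projectors onto |0>, |1>, |+>, |->.
P0 = np.array([[1, 0], [0, 0]], dtype=float)
P1 = np.array([[0, 0], [0, 1]], dtype=float)
Pp = np.array([[1, 1], [1, 1]]) / 2
Pm = np.array([[1, -1], [-1, 1]]) / 2

# The coin-flip effects E+- of example 1.1.
Ep = 0.5 * (P0 + Pp)
Em = 0.5 * (P1 + Pm)

print(np.allclose(Ep + Em, np.eye(2)))  # True: effects sum to the identity
print(np.allclose(Ep @ Ep, Ep))         # False: E+ is not a projector
```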

Example 1.2. Consider the following measurement procedure: the system being measured is entangled with an auxiliary quantum system prepared in a known state, and a projective measurement is then performed on the auxiliary system. As we will see in detail in the next section, this kind of measurement can give information about the system that cannot be obtained by performing a standard PVM on the initial system alone (e.g. a 3-outcome measurement on a 2-dimensional system).

Positive Operator Valued Measures (POVMs)

The most general description of a measurement is given by the probabilities of obtaining each outcome when measuring a (pure or mixed) state $\rho$. With this in mind, we can associate to each outcome $i$ a function $\epsilon_i : S(\mathcal H) \to [0,1]$ such that $\epsilon_i(\rho)$ is the probability of obtaining the outcome $i$ when measuring the state $\rho$. We will call such functions effects [2]. Effects can be represented as positive operators $E$ acting on $\mathcal H$ as follows: $\epsilon(\rho) = \mathrm{tr}[E\rho]$, with $\hat 0 \le E \le \mathbb 1$.⁴ The name effect will be used to refer both to the function $\epsilon$ and to the corresponding operator $E$.

Definition 1.4. A Positive Operator Valued Measure (POVM) is a set of effects $\{E_i\}_{i=1}^n$ such that $\sum_{i=1}^n E_i = \mathbb 1$.

Comparing this with a standard PVM (i.e., a set of projectors summing up to the identity), we see that the only difference is the additional requirement for the elements of a PVM to be mutually orthogonal projectors.⁵ Therefore, all PVMs are also POVMs.

⁴ The notation $\hat 0 \le E \le \mathbb 1$ means that $0 \le \epsilon(\rho) \le 1$ for all $\rho \in S(\mathcal H)$.
⁵ Formally, for a PVM $\{P_i\}$ we require $P_i P_j = \delta_{ij} P_i$.

With this new definition of measurement, we can overcome the limitations presented in the previous section.

Example 1.3. Referring to the setup of example 1.1, we can check that the set of operators $\{E_\pm\}$ introduced there constitutes a POVM: the operators are positive (they are defined as sums of positive operators) and

\[ \sum_{i=\pm} E_i = \tfrac12 (P_0 + P_1 + P_+ + P_-) = \tfrac12(\mathbb 1 + \mathbb 1) = \mathbb 1 \tag{1.12} \]

Example 1.4. The process described in example 1.2 can be formalized as follows: the input state $\rho_s$ is coupled to the ancillary system, giving a total state $\rho_{sa} = \rho_s \otimes \rho_a$. A general interaction between the systems can be described by a unitary transformation $U$:

\[ \rho_{sa} \mapsto U \rho_{sa} U^\dagger \tag{1.13} \]

Performing a standard projective measurement $\{P_i\}$ on the ancilla gives the following probabilities for the outcomes:

\[ p_i = \mathrm{tr}\big[(\mathbb 1 \otimes P_i)\; U (\rho_s \otimes \rho_a) U^\dagger\big] \tag{1.14} \]

Using the cyclic property of the trace and rearranging, we can rewrite

\[ p_i = \mathrm{tr}\big[U^\dagger (\mathbb 1 \otimes P_i) U (\rho_s \otimes \rho_a)\big] = \mathrm{tr}_S\Big[\rho_s\; \mathrm{tr}_A\big[U^\dagger(\mathbb 1 \otimes P_i)U(\mathbb 1 \otimes \rho_a)\big]\Big] \tag{1.15} \]

Defining now $E_i = \mathrm{tr}_A[U^\dagger(\mathbb 1 \otimes P_i)U(\mathbb 1 \otimes \rho_a)]$, we find that the outcome probabilities can be obtained from an expression involving only states and operators acting on the input system:

\[ p_i = \mathrm{tr}[\rho_s E_i] \tag{1.16} \]
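The construction of example 1.4 can be tested numerically for an arbitrarily chosen coupling: the operators $E_i$ always come out as a valid POVM. A sketch assuming numpy (the random unitary and the computational-basis pointer PVM are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random coupling unitary on system (x) ancilla, via QR decomposition.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

rho_a = np.diag([1.0, 0.0]).astype(complex)      # ancilla prepared in |0><0|
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # pointer PVM on the ancilla

def ptrace_A(X):
    """Trace out the ancilla (the second tensor factor of two qubits)."""
    return np.trace(X.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# E_i = tr_A[ U^dag (1 (x) P_i) U (1 (x) rho_a) ], as in eq. (1.15).
E = [ptrace_A(U.conj().T @ np.kron(np.eye(2), Pi) @ U @ np.kron(np.eye(2), rho_a))
     for Pi in P]

print(np.allclose(sum(E), np.eye(2)))                 # True: effects sum to 1
print(all(np.allclose(Ei, Ei.conj().T) for Ei in E))  # True: each E_i Hermitian
print(all(np.linalg.eigvalsh(Ei).min() > -1e-9 for Ei in E))  # True: positive
```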

We now show that the set $\{E_i\}$ defined above is indeed a POVM. The positivity condition is guaranteed by the $p_i$ being probabilities (together with the fact that we made no assumption on the form of $\rho_s$), while

\[ \sum_i E_i = \mathrm{tr}_A\Big[U^\dagger\Big(\mathbb 1 \otimes \sum_i P_i\Big)U(\mathbb 1 \otimes \rho_a)\Big] = \mathrm{tr}_A[U^\dagger U (\mathbb 1 \otimes \rho_a)] = \mathbb 1\; \mathrm{tr}_A[\rho_a] = \mathbb 1 \tag{1.17} \]

These two examples show how POVMs can be used to describe measurement procedures which cannot be formalized using only PVMs.

Theorem 1.5. A POVM $\{E_i\}_{i=1}^N$ on a $d$-dimensional system $\mathcal H$ can be realized as a PVM in an $N$-dimensional extension of $\mathcal H$, $\mathcal H_{ext} = \mathcal H \oplus \mathcal H_A$.

Proof. Without loss of generality we can restrict ourselves to the case where all effects are rank-1 operators, as any POVM can be decomposed into a set of pure effects $\{E_i = |\psi_i\rangle\langle\psi_i|\}$, where the $|\psi_i\rangle$ are subnormalized states. We will also assume $N > d$, as the case $N \le d$ is trivial. The conclusion is equivalent to finding a set of $N$ orthogonal projectors $M = \{P_i\}$ on $\mathcal H_{ext}$ (i.e. a PVM) such that the effects composing the POVM are obtained by projecting the set $M$ onto the subspace $\mathcal H$ of $\mathcal H_{ext}$; formally

\[ \mathrm{tr}[\rho E_i] = \mathrm{tr}[\rho\, \Pi P_i \Pi] = \mathrm{tr}[\Pi \rho \Pi\, P_i] = \mathrm{tr}[\rho_{ext} P_i] \tag{1.18} \]

where $\Pi$ is the projector from $\mathcal H_{ext}$ onto $\mathcal H$. We can choose the projectors $P_i$ as the projectors onto the states

\[ |\phi_i\rangle = |\psi_i\rangle + \sum_{s=d+1}^{N} c_{is}\, |\varphi_s\rangle \tag{1.19} \]

where the $|\varphi_s\rangle$ are $N - d$ vectors orthogonal to each other and to the subspace $\mathcal H$ (hence to all the $|\psi_i\rangle$), and the $c_{is}$ are coefficients to be determined. It now suffices to prove that the coefficients $c_{is}$ can always be chosen in such a way that the $|\phi_i\rangle$ form an orthonormal basis. To do this, choose a basis for $\mathcal H$ and denote by $a_{ik}$, $k = 1 \ldots d$, the coefficients of the expansion of the $|\psi_i\rangle$ on this basis. Consider now the matrix

\[ \begin{pmatrix} a_{1,1} & \ldots & a_{1,d} & c_{1,d+1} & \ldots & c_{1,N} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{N,1} & \ldots & a_{N,d} & c_{N,d+1} & \ldots & c_{N,N} \end{pmatrix} \tag{1.20} \]

The rows of this matrix are just the representations of the $|\phi_i\rangle$ on some basis. Since $\sum_i |\psi_i\rangle\langle\psi_i| = \mathbb 1$, we have $\delta_{kl} = \sum_{i} a_{ik}\, a_{il}^*$; hence the first $d$ columns of the matrix are orthonormal. There are then infinitely many ways to choose the $c_{ij}$ so that all the columns (and therefore the rows) are orthonormal.

In general, this way of realizing a POVM will not have a direct physical interpretation, since not every system's Hilbert space can be extended as required in the theorem. However:

Proposition 1.6. Acting with a POVM $\{E_i\}_{i=1}^N$ on a system $S$ is equivalent to acting with a PVM on a larger system of which $S$ is a subsystem.

A constructive proof of this can be found in Ref. [3]. A more general proof of this fact will be given later in the discussion.
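The matrix-completion step in the proof of Theorem 1.5 can be carried out explicitly for a concrete POVM. The sketch below (assuming numpy) embeds the three-outcome "trine" qubit POVM, a standard example not taken from the text, into a PVM on a 3-dimensional extended space and checks eq. (1.18):

```python
import numpy as np

# Trine POVM on a qubit: three subnormalized rank-1 effects summing to 1.
thetas = [0, 2 * np.pi / 3, 4 * np.pi / 3]
A = np.stack([np.sqrt(2 / 3) * np.array([np.cos(t), np.sin(t)])
              for t in thetas])            # rows: the subnormalized |psi_i>
E = [np.outer(a, a) for a in A]
print(np.allclose(sum(E), np.eye(2)))      # True: a valid 3-outcome POVM

# Complete matrix (1.20): the first two columns (the a_ik) are already
# orthonormal; add a third column from the left null space of A.
c = np.linalg.svd(A)[0][:, 2]
Phi = np.hstack([A, c[:, None]])           # rows: the |phi_i> of the PVM on C^3
print(np.allclose(Phi @ Phi.T, np.eye(3))) # True: orthonormal rows

# The PVM on the extended space reproduces the POVM statistics, eq. (1.18).
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
rho_ext = np.zeros((3, 3)); rho_ext[:2, :2] = rho  # Pi rho Pi in H_ext
p_pvm = [Phi[i] @ rho_ext @ Phi[i] for i in range(3)]
p_povm = [np.trace(rho @ Ei) for Ei in E]
print(np.allclose(p_pvm, p_povm))          # True
```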

Quantum Channels

Let us pause our discussion of measurements and focus on a step that usually precedes the measurement in most experimental schemes: before being measured, the system undergoes some transformation that alters its state. In textbook quantum theory, these kinds of state transformations are represented by a unitary operator $U$ acting on the state: $\rho' = U\rho U^\dagger$. The operator $U$ is usually implemented as a time evolution of the system, driven by a carefully chosen Hamiltonian, so that $U = e^{iHt}$. This formalization is, however, not sufficient to describe the full set of transformations a system can undergo.

Example 1.5. Consider a photon travelling in an optical fiber: due to the interaction with the fiber, the quantum state of the photon will couple to the fiber in which it is travelling, causing decoherence in the photon state. While the action on the total system (fiber + photon) is unitary, the state transformation that the photon undergoes cannot be described by a unitary operation: we need a more general formal device for such scenarios.

In order to preserve probabilities, the most general map on $S(\mathcal H)$ describing a state transformation has to be linear, completely positive and trace preserving. The only non-obvious requirement is complete positivity. Consider a map $V_A$ acting on a subsystem $A$. We can extend its action to a larger space, for instance

\[ (V_A \otimes \mathbb 1_B)(\rho_A \otimes \rho_B) = V_A(\rho_A) \otimes \rho_B \tag{1.21} \]

The positivity of $V_A$ is not enough to ensure the positivity of $V_A \otimes \mathbb 1_B$, which has to be required in order to avoid negative probabilities arising in

the extended space $\mathcal H_A \otimes \mathcal H_B$. Therefore, we require $V_A \otimes \mathbb 1_B$ to be positive for any extension $B$; this requirement is called complete positivity.

Example 1.6. An example of a positive but not completely positive map is the partial transposition. Consider a Hilbert space $\mathcal H$ and a basis $\{|\varphi_i\rangle\}_{i=1}^N$ for it. The transpose map can be defined on this basis as $\tau(|\varphi_i\rangle\langle\varphi_j|) = |\varphi_j\rangle\langle\varphi_i|$. Since transposing an operator does not change its eigenvalues, $\tau$ is a positive map. Consider now the extended map $\tau_A \otimes \mathbb 1_B$ acting on the density operator $|\psi\rangle\langle\psi| = \frac1N \sum_{i,j} |\varphi_i\rangle\langle\varphi_j| \otimes |\varphi_i\rangle\langle\varphi_j|$. The result is

\[ (\tau_A \otimes \mathbb 1_B)\, |\psi\rangle\langle\psi| = \frac1N \sum_{i,j} |\varphi_j\rangle\langle\varphi_i| \otimes |\varphi_i\rangle\langle\varphi_j| \tag{1.22} \]

This operator is not positive: acting on $|\psi^-\rangle = |\varphi_1\rangle \otimes |\varphi_2\rangle - |\varphi_2\rangle \otimes |\varphi_1\rangle$ it gives

\[ \sum_{i,j} |\varphi_j\rangle\langle\varphi_i| \otimes |\varphi_i\rangle\langle\varphi_j|\; |\psi^-\rangle = -|\psi^-\rangle \tag{1.23} \]

Therefore, we must conclude that transposition is a positive but not completely positive map.

Proposition 1.7. Let $S \in L(\mathcal H_A)$ be an operator. The map $N_S(T) = S T S^\dagger$ is linear and completely positive.

Proof. Linearity is manifest in the definition. Consider now a positive operator $O$ on an extended space $\mathcal H_A \otimes \mathcal H_B$. For every vector $|\psi\rangle \in \mathcal H_A \otimes \mathcal H_B$

we have

\[ \langle\psi|\, (N_S \otimes \mathbb 1_B)(O)\, |\psi\rangle = \langle\psi|\, (S \otimes \mathbb 1_B)\, O\, (S \otimes \mathbb 1_B)^\dagger\, |\psi\rangle = \langle\psi_S|\, O\, |\psi_S\rangle \ge 0 \tag{1.24} \]

with $|\psi_S\rangle = (S \otimes \mathbb 1_B)^\dagger |\psi\rangle$, since $O$ is positive. Therefore $N_S \otimes \mathbb 1_B$ is positive and $N_S$ is completely positive.

With the above considerations in mind, we are now ready to define:

Definition 1.8. A quantum channel is a map $V : S(\mathcal H) \to S(\mathcal H')$ that:

- is linear;
- is completely positive;
- is trace preserving.⁶

We now present a few important results regarding quantum channels that will become useful later in the discussion.

Theorem 1.9 (Stinespring's theorem). If $\tau : S(\mathcal H) \to S(\mathcal H)$ is a channel, then there exist a Hilbert space $\mathcal H_E$, a pure state $\eta$ on $\mathcal H_E$ and a unitary $U$ on $\mathcal H \otimes \mathcal H_E$ such that

\[ \tau(\rho) = \mathrm{tr}_E[U (\rho \otimes \eta)\, U^\dagger] \qquad \forall \rho \in S(\mathcal H) \tag{1.25} \]

Proof. The proof can be found in [4].

The meaning of this result is that the action of any channel on a quantum system can be physically realized by coupling it to an auxiliary system and

⁶ $\mathrm{tr}[V(\rho)] = \mathrm{tr}[\rho]$ for all $\rho \in S(\mathcal H)$.

performing a carefully chosen time evolution on the composite system. Moreover, since equation (1.25) can be used to define a channel (given $\eta$ and $U$), time evolutions of open quantum systems can be described using quantum channels.

Theorem 1.10 (Kraus decomposition). A linear map $\tau : T(\mathcal H) \to T(\mathcal H)$ is a channel if and only if there exists a sequence of bounded operators $A_1, A_2, A_3, \ldots$ such that

\[ \tau(T) = \sum_i A_i\, T\, A_i^\dagger \qquad \sum_i A_i^\dagger A_i = \mathbb 1 \tag{1.26} \]

Proof. Proposition 1.7 ensures that each map $N_{A_k}(T) = A_k T A_k^\dagger$ is linear and completely positive (and so is $\tau$). We also have

\[ \mathrm{tr}[\tau(T)] = \sum_i \mathrm{tr}[A_i T A_i^\dagger] = \sum_i \mathrm{tr}[A_i^\dagger A_i T] = \mathrm{tr}\Big[\Big(\sum_i A_i^\dagger A_i\Big) T\Big] = \mathrm{tr}[T] \tag{1.27} \]

therefore $\tau$ is a channel. We now prove the other direction. By theorem 1.9, $\tau$ admits a dilation $(\mathcal H_E, U, \eta)$. We can always write $\eta = |\varphi_1\rangle\langle\varphi_1|$ with $|\varphi_1\rangle \in \mathcal H_E$; we can also choose a basis $\{|\varphi_i\rangle\}_{i=1}^d$ for $\mathcal H_E$ and define the action of the operators $A_k$ by

\[ \langle\psi|\, A_k\, |\psi'\rangle = \langle\psi \otimes \varphi_k|\, U\, |\psi' \otimes \varphi_1\rangle \qquad \forall\, |\psi\rangle, |\psi'\rangle \in \mathcal H \tag{1.28} \]

Since $|\langle\psi| A_k |\psi'\rangle| = |\langle\psi \otimes \varphi_k| U |\psi' \otimes \varphi_1\rangle| \le \|\psi\|\, \|U\|\, \|\psi'\|$, we conclude that the $A_k$ are bounded operators.

We also have, for any $|\eta'\rangle \in \mathcal H$,

\[ \begin{aligned} \langle\psi|\, \tau(|\eta'\rangle\langle\eta'|)\, |\psi\rangle &= \langle\psi|\, \mathrm{tr}_E\big[U\, (|\eta'\rangle\langle\eta'| \otimes |\varphi_1\rangle\langle\varphi_1|)\, U^\dagger\big]\, |\psi\rangle \\ &= \sum_k \langle\psi \otimes \varphi_k|\, U\, (|\eta'\rangle\langle\eta'| \otimes |\varphi_1\rangle\langle\varphi_1|)\, U^\dagger\, |\psi \otimes \varphi_k\rangle \\ &= \sum_k \langle\psi \otimes \varphi_k|\, U\, |\eta' \otimes \varphi_1\rangle \langle\eta' \otimes \varphi_1|\, U^\dagger\, |\psi \otimes \varphi_k\rangle \\ &= \sum_k \langle\psi|\, A_k\, |\eta'\rangle\langle\eta'|\, A_k^\dagger\, |\psi\rangle \end{aligned} \tag{1.29} \]

and therefore $\tau(|\eta'\rangle\langle\eta'|) = \sum_k A_k\, |\eta'\rangle\langle\eta'|\, A_k^\dagger$; by linearity this extends to all states. We also have

\[ \mathrm{tr}[T] = \mathrm{tr}[\tau(T)] = \mathrm{tr}\Big[\sum_k A_k^\dagger A_k\, T\Big] \qquad \forall\, T \in S(\mathcal H) \tag{1.30} \]

thus $\sum_k A_k^\dagger A_k = \mathbb 1$.

Proposition 1.11. Two finite sets $\{A_i\}_{i=1}^n$ and $\{B_i\}_{i=1}^m$ of bounded operators define the same channel if and only if

\[ A_j = \sum_{k=1}^{m} u_{jk} B_k \tag{1.31} \]

where the matrix $u_{jk}$ is unitary.

Proof. The proof of this fact can be found in [2].

Theorem 1.12 (Choi's theorem). Let $\tau$ be a positive linear map on $L(\mathcal H)$, with $\dim(\mathcal H) = d$. The following statements are equivalent:

i) $\tau$ is completely positive

ii) $\mathbb 1_d \otimes \tau$ is positive

iii) The matrix ($\{|\varphi_i\rangle\}_{i=1}^d$ being any basis for $\mathcal H$)

\[ \Phi_\tau = \frac1d \begin{pmatrix} \tau(|\varphi_1\rangle\langle\varphi_1|) & \ldots & \tau(|\varphi_1\rangle\langle\varphi_d|) \\ \vdots & \ddots & \vdots \\ \tau(|\varphi_d\rangle\langle\varphi_1|) & \ldots & \tau(|\varphi_d\rangle\langle\varphi_d|) \end{pmatrix} \tag{1.32} \]

is positive.

Proof. From the definition of complete positivity we have (i) $\Rightarrow$ (ii). Consider now the positive matrix

\[ M = \frac1d \sum_{jk} |jj\rangle\langle kk| = |\psi\rangle\langle\psi| \in L(\mathcal H) \otimes L(\mathcal H), \qquad |\psi\rangle = \frac{1}{\sqrt d} \sum_j |jj\rangle \tag{1.33} \]

Noting that

\[ (\mathbb 1_d \otimes \tau)\, M = \frac1d \sum_{jk} |j\rangle\langle k| \otimes \tau(|j\rangle\langle k|) = \Phi_\tau \tag{1.34} \]

and using the positivity of $\mathbb 1_d \otimes \tau$ and of $M$, we have the positivity of $\Phi_\tau$, and therefore (ii) $\Rightarrow$ (iii). We will now prove (iii) $\Rightarrow$ (i). Denote by $|\phi_l\rangle \in \mathbb C^d \otimes \mathbb C^d$, $l = 1 \ldots n$, the unnormalized eigenvectors of the positive operator $\Phi_\tau$, so that $\Phi_\tau = \sum_l^n |\phi_l\rangle\langle\phi_l|$. They are orthogonal and linearly independent. Looking at $\mathbb C^d \otimes \mathbb C^d$ as the direct sum $\mathbb C^d \oplus \cdots \oplus \mathbb C^d$, we can define $P_j$ as the projector onto the $j$th copy of $\mathbb C^d$. We now have (the factor $d$ compensates the $1/d$ in (1.32)):

\[ \tau[|\varphi_j\rangle\langle\varphi_k|] = d\, P_j\, \Phi_\tau\, P_k^\dagger = d \sum_l P_j |\phi_l\rangle\langle\phi_l| P_k^\dagger \tag{1.35} \]

Define now $n$ operators by the relation $V_l |\varphi_j\rangle = \sqrt d\, P_j |\phi_l\rangle$ (the factor $\sqrt d$ compensates the $1/d$ in the definition of $\Phi_\tau$). Thus

\[ \tau[|\varphi_j\rangle\langle\varphi_k|] = \sum_l V_l\, |\varphi_j\rangle\langle\varphi_k|\, V_l^\dagger \tag{1.36} \]

We now have $\tau(T) = \sum_l V_l T V_l^\dagger$ for all $T \in L(\mathcal H)$; using Proposition 1.7 we conclude the proof.

Definition 1.13. Given a completely positive linear map $\tau : S(\mathcal H) \to S(\mathcal H)$, we define the Heisenberg picture of $\tau$ as the map $\tau_H$ such that

\[ \mathrm{tr}[\tau_H(T)\rho] = \mathrm{tr}[T \tau(\rho)] \qquad \forall \rho \in S(\mathcal H),\ T \in T(\mathcal H)^7 \tag{1.37} \]

Proposition 1.14. Given a channel $\tau : S(\mathcal H) \to S(\mathcal H)$, its Heisenberg picture representation $\tau_H$ can be obtained as

\[ \tau_H(T) = V^\dagger (T \otimes \mathbb 1)\, V \tag{1.38} \]

where $V = U(\mathbb 1 \otimes |\eta\rangle)$ is an isometry from $\mathcal H$ into $\mathcal H \otimes \mathcal H_E$, with $U$, $\eta$ and $\mathcal H_E$ as in theorem 1.9.

Proof. From the definition of $\tau_H$ we have

\[ \mathrm{tr}[\tau_H(T)\rho] = \mathrm{tr}[T \tau(\rho)] \qquad \forall \rho \in S(\mathcal H),\ T \in [0, \mathbb 1] \tag{1.39} \]

⁷ $T(\mathcal H) = \{T \text{ linear with } \mathrm{tr}[\sqrt{T^\dagger T}] < +\infty\}$

From theorem 1.9 we have

\[ \begin{aligned} \mathrm{tr}[\tau_H(T)\rho] &= \mathrm{tr}\big[T\; \mathrm{tr}_E[U(\rho \otimes \eta)U^\dagger]\big] = \mathrm{tr}\big[(T \otimes \mathbb 1)\; U(\rho \otimes \eta)U^\dagger\big] \\ &= \mathrm{tr}\big[U^\dagger (T \otimes \mathbb 1) U\, (\rho \otimes \eta)\big] = \mathrm{tr}\big[\mathrm{tr}_E[U^\dagger (T \otimes \mathbb 1) U (\mathbb 1 \otimes \eta)]\; \rho\big] \\ &= \mathrm{tr}\big[V^\dagger (T \otimes \mathbb 1) V\, \rho\big] \qquad \forall \rho \in S(\mathcal H),\ T \in [0, \mathbb 1] \end{aligned} \tag{1.40} \]

concluding the proof.

Measurement models

When performing a measurement on a system, there are usually many ways to devise a physical procedure that implements the measurement and yields the correct outcome probabilities. A typical way of realizing a measurement was described in Examples 1.2 and 1.4: the system under study is coupled to an auxiliary system, called the probe system, and a POVM, called the pointer observable, is then measured on the probe. A measurement model is a formal description of such a procedure.

Definition 1.15. A measurement model is a quadruple $(\mathcal K, \chi, V, \{F_i\})$ such that:

- $\mathcal K$ is the Hilbert space of the probe system;
- $\chi$ is the initial state of the probe;
- $V$ is a channel from $\mathcal H \otimes \mathcal K$ to itself, describing the coupling between the system and the probe;

- $\{F_i\}$ is a POVM on the probe (the pointer observable).

Definition 1.16. Given a POVM $\{E_i\}$ on a system, we say that $\mathcal M = (\mathcal K, \chi, V, \{F_i\})$ is a measurement model for $\{E_i\}$ if

\[ \mathrm{tr}[\rho E_i] = \mathrm{tr}\big[V(\rho \otimes \chi)\, (\mathbb 1 \otimes F_i)\big] \qquad \forall \rho \in S(\mathcal H) \tag{1.41} \]

Remark 1.17. Note that the behaviour of the channel $V$ need not be specified in full: since the probe is always prepared in the state $\chi$, it is only necessary to specify the action of the channel on the subspace $T(\mathcal H) \otimes \mathrm{range}(|\chi\rangle\langle\chi|)$.

Instruments

Until now we have focused our discussion on the outcome probabilities of a measurement procedure. We now shift our attention to the state of the measured system after the measurement has been performed. In order to derive the post-measurement state, suppose that, after measuring $\{E_i\}$ on the state $\rho$ by means of some measurement model $\mathcal M = (\mathcal K, \chi, V, \{F_i\})$, we measure the observable $\{B_j\}$ on the system. In this scenario, the probability of getting the outcome $i$ for the first measurement and $j$ for the second can be expressed as

\[ p(i, j) = \mathrm{tr}\big[V(\rho \otimes \chi)\, (B_j \otimes F_i)\big] = \mathrm{tr}\Big[B_j\; \mathrm{tr}_{\mathcal K}\big[V(\rho \otimes \chi)\, (\mathbb 1 \otimes F_i)\big]\Big] \tag{1.42} \]

On the other hand, if we denote by $\rho_i^{\mathcal M}$ the post-measurement state obtained when measuring $\mathcal M$ yields outcome $i$, the same probability can be written as

\[ p(i, j) = p(j \mid i)\, p(i) = \mathrm{tr}[B_j\, \rho_i^{\mathcal M}]\, p(i) \tag{1.43} \]

Defining the unnormalized state $\tilde\rho_i^{\mathcal M} = \rho_i^{\mathcal M}\, p(i)$ and comparing (1.42) and (1.43), we obtain

\[ \tilde\rho_i^{\mathcal M} = \mathrm{tr}_{\mathcal K}\big[V(\rho \otimes \chi)\, (\mathbb 1 \otimes F_i)\big] \tag{1.44} \]

For each outcome $i$, equation (1.44) defines a linear map $\mathcal I_i^{\mathcal M}(\rho) = \tilde\rho_i^{\mathcal M}$ that sends pre-measurement states to (unnormalized) post-measurement states. Note that $\mathrm{tr}[\mathcal I_i^{\mathcal M}(\rho)] = \mathrm{tr}[\tilde\rho_i^{\mathcal M}] = p(i)$.

Definition 1.18. An instrument is a set of maps $\{\mathcal I_i\}$ such that:

- each $\mathcal I_i$ is linear and completely positive;
- $\mathrm{tr}[\sum_i \mathcal I_i(\rho)] = 1$ for all $\rho \in S(\mathcal H)$;
- $\mathrm{tr}[\mathcal I_Y(\rho)] = \sum_{i \in Y} \mathrm{tr}[\mathcal I_i(\rho)]$ for any set $Y$ of possible outcomes.

Remark 1.19. Note that, given an instrument $\{\mathcal I_i\}$, the sum of its elements $\sum_i \mathcal I_i$ defines a channel.

Definition 1.20. Given a POVM $\{E_i\}$ on a system and an instrument $\{\mathcal I_i\}$, we say that the instrument is E-compatible if

\[ \mathrm{tr}[\mathcal I_i(\rho)] = \mathrm{tr}[E_i \rho] \tag{1.45} \]

that is, if the instrument reproduces the outcome probabilities of the measurement described by the POVM.

Proposition 1.21. Given the POVM $\{E_i\}$, the set $\{\mathcal I_i^E\}$, where

\[ \mathcal I_i^E(\rho) = \sqrt{E_i}\; \rho\; \sqrt{E_i} \tag{1.46} \]

is an E-compatible instrument.

Proof. For each $i$, the map defined in eq. (1.46) is linear and completely positive (Proposition 1.7). Furthermore, using the cyclic property of the trace and the fact that $\sum_i E_i = \mathbb 1$, we easily get $\mathrm{tr}[\sum_i \mathcal I_i(\rho)] = 1$. We also have, for disjoint outcome sets $i$ and $j$, that $E_{i \cup j} = E_i + E_j$; therefore, using again the cyclic property of the trace, we prove the last condition of definition 1.18, so that $\{\mathcal I_i^E\}$ is indeed an instrument. Its E-compatibility follows again from the properties of the trace.

Theorem 1.22 (Ozawa's theorem). For every instrument $\{\mathcal I_i\}$ there is a measurement model $\mathcal M = (\mathcal K, \chi, V, \{F_i\})$ such that

\[ \mathrm{tr}_{\mathcal K}\big[V(\rho \otimes \chi)\, (\mathbb 1 \otimes F_i)\big] = \mathcal I_i^{\mathcal M}(\rho) = \mathcal I_i(\rho) \qquad \forall \rho \in S(\mathcal H) \tag{1.47} \]

where $V$ is a unitary coupling and $\{F_i\}$ is a PVM.

Proof. Each element of the instrument admits a Kraus decomposition $\mathcal I_i(\rho) = \sum_j A_j \rho A_j^\dagger$. The sum of all elements of the instrument gives a decomposition of the channel $\mathcal I_\Omega = \sum_i \mathcal I_i$; this decomposition contains a finite number, say $N$, of operators. Using theorem 1.9 we find a dilation $(\mathcal H_E, \eta, U)$ such that $\mathcal I_\Omega(\rho) = \mathrm{tr}_E[U(\rho \otimes |\eta\rangle\langle\eta|)U^\dagger]$. Without loss of generality, the dimension of $\mathcal H_E$ can be chosen equal to $N$, since we can always increase it by a tensor product with another Hilbert space. Choosing an orthonormal basis $\{|\varphi_i\rangle\}$ for $\mathcal H_E$, we can define $N$ operators on $\mathcal H$ by the relation

\[ \langle\psi|\, B_j\, |\psi'\rangle = \langle\psi \otimes \varphi_j|\, U\, |\psi' \otimes \eta\rangle \qquad \forall\, |\psi\rangle, |\psi'\rangle \in \mathcal H \tag{1.48} \]

With this definition of the $B_j$, we have $\mathcal I_\Omega(\rho) = \sum_j B_j \rho B_j^\dagger$. The two sets $\{B_j\}$ and $\{A_i\}$ are therefore decompositions of the same channel, and by proposition 1.11 they are connected by a unitary $u_{ik}$. Defining another basis for $\mathcal H_E$ as $|\tilde\varphi_i\rangle = \sum_k u_{ik} |\varphi_k\rangle$, we determine a PVM $\{\tilde F_i = |\tilde\varphi_i\rangle\langle\tilde\varphi_i|\}$. We now have

\[ A_j\, \rho\, A_j^\dagger = \mathrm{tr}_E\big[U(\rho \otimes |\eta\rangle\langle\eta|)U^\dagger\, (\mathbb 1 \otimes \tilde F_j)\big] \qquad \forall \rho \in S(\mathcal H) \tag{1.49} \]

The measurement model $\mathcal M$ is then composed of the Hilbert space $\mathcal K = \mathcal H_E$, the state $\chi = \eta$, the channel defined by $U$ through proposition 1.7, and the PVM $\{F_i\}$ obtained by suitably grouping the outcomes of $\{\tilde F_i\}$.

We are now ready to give a proof of proposition 1.6:

Proof. For any POVM $\{E_i\}$, using proposition 1.21 we can find an E-compatible instrument $\{\mathcal I_i^E\}$. Then, using theorem 1.22, we can find a measurement model that realizes the instrument (and therefore the POVM) by means of a PVM on an auxiliary system.

Remark 1.23. In our discussion, we have introduced three different ways to describe a measurement on a quantum system: POVMs, measurement models and instruments. These three descriptions can be organized in a hierarchy. POVMs characterize only the outcome probabilities of a measurement procedure, but they give no information on the post-measurement state or on the physical procedure used to perform the observation. For each POVM, we can find many different instruments that produce its outcome probability distribution with distinct post-measurement states.

Similarly, a general instrument can be physically realized by means of different measurement models; for example, given a particular measurement model for an instrument, one could construct another by increasing the dimension of the probe system and extending the interaction channel to the extra dimensions with the identity channel.
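As a concrete illustration of the objects introduced in this section, the sketch below (assuming numpy) builds the E-compatible instrument of Proposition 1.21 for the coin-flip POVM of example 1.1 and checks Definition 1.20 together with the normalization condition of Definition 1.18; the input state is an arbitrary illustrative choice:

```python
import numpy as np

def matsqrt(E):
    """Square root of a positive matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(E)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

# The coin-flip POVM of example 1.1 and its instrument
# I_i(rho) = sqrt(E_i) rho sqrt(E_i)  (eq. 1.46).
Ep = np.array([[0.75, 0.25], [0.25, 0.25]])
Em = np.eye(2) - Ep
rho = np.array([[0.6, 0.2], [0.2, 0.4]])

post = {i: matsqrt(E) @ rho @ matsqrt(E) for i, E in {'+': Ep, '-': Em}.items()}

# E-compatibility (1.45): tr[I_i(rho)] = tr[E_i rho].
print(np.isclose(np.trace(post['+']), np.trace(Ep @ rho)))          # True
# The outcome probabilities sum to one, as required by Definition 1.18.
print(np.isclose(sum(np.trace(p) for p in post.values()), 1.0))     # True
```

The unnormalized outputs `post[i]` are exactly the $\tilde\rho_i$ of eq. (1.44); dividing each by its trace gives the normalized post-measurement states.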


Chapter 2

Compatibility of Measurements

In this chapter we explore the compatibility of quantum measurements, extending the textbook definition (i.e. commutativity) to the generalized measurements introduced in Chapter 1.

2.1 The usual notion of observable compatibility

In the standard formulation of quantum theory, two observables (PVMs) are said to be compatible if they are jointly measurable, i.e. if there is a procedure that can measure both observables on a system without having the outcome of one spoil the measurement of the other. In general this does not happen: after a measurement the system is left in an eigenstate of the measured observable, and therefore a generic subsequent measurement

would be affected by the outcome of the first.¹ However, if it is possible to find a basis of common eigenstates for the two observables, we can determine the value of both on the same state. This is because the post-measurement state will lie entirely in one of the eigenspaces of each observable.

Theorem 2.1. Two observables $A$ and $B$ admit a decomposition on a common basis of eigenstates $\{|\varphi_i\rangle\}$ if and only if $[A, B] = 0$.

Proof. Denoting by $\mu_i$ and $\lambda_i$ the eigenvalues of $A$ and $B$ respectively, it is easy to see that commutativity follows from the existence of $\{|\varphi_i\rangle\}$:

$$AB|\psi\rangle = A \sum_i \lambda_i c_i |\varphi_i\rangle = \sum_i \lambda_i \mu_i c_i |\varphi_i\rangle = B \sum_i \mu_i c_i |\varphi_i\rangle = BA|\psi\rangle \tag{2.1}$$

To prove the other direction we note that, if $A|v\rangle = \mu_i |v\rangle$, then

$$AB|v\rangle = BA|v\rangle = B\mu_i|v\rangle = \mu_i B|v\rangle \tag{2.2}$$

and therefore $B$ preserves the eigenspaces of $A$. Now, restricting $B$ to the eigenspace of each eigenvalue $\mu_i$ and diagonalizing $B$ in each restriction, we find a basis of eigenstates of $B$ which are also eigenstates of $A$.

With the above result in mind, we conclude that two PVMs are compatible (or jointly measurable) if their corresponding observables commute. A set of PVMs is said to be compatible if each pair is.

¹ Think about position and momentum: the measurement of one destroys all information about the probability distribution of the other.
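The second half of the proof is constructive, and can be turned directly into code. The sketch below (with arbitrarily chosen commuting matrices, not from the text) diagonalizes $A$, then diagonalizes the restriction of $B$ to each eigenspace of $A$, producing a common eigenbasis even when $A$ is degenerate.

```python
import numpy as np

def common_eigenbasis(A, B, tol=1e-9):
    """Constructive version of Theorem 2.1: diagonalize A, then diagonalize
    the restriction of B to each eigenspace of A. Assumes [A, B] = 0."""
    wA, vA = np.linalg.eigh(A)   # eigenvalues returned in ascending order
    cols, i = [], 0
    while i < len(wA):
        j = i                    # collect the (possibly degenerate) eigenspace
        while j < len(wA) and abs(wA[j] - wA[i]) < tol:
            j += 1
        block = vA[:, i:j]                   # orthonormal basis of the eigenspace
        Bres = block.conj().T @ B @ block    # restriction of B, still Hermitian
        _, u = np.linalg.eigh(Bres)
        cols.append(block @ u)               # rotate inside the eigenspace
        i = j
    return np.hstack(cols)                   # columns = common eigenvectors

# Two commuting 3x3 observables: both diagonal in a hidden rotated basis
rng = np.random.default_rng(0)
H = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthogonal basis change
A = H @ np.diag([1.0, 1.0, 2.0]) @ H.T         # degenerate spectrum on purpose
B = H @ np.diag([3.0, 4.0, 5.0]) @ H.T

V = common_eigenbasis(A, B)
DA = V.conj().T @ A @ V   # both should come out diagonal
DB = V.conj().T @ B @ V
```

Degeneracy is the whole subtlety here: the eigenvectors `eigh` returns for a degenerate $A$ need not diagonalize $B$, which is why the inner rotation is required.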

2.2 Compatibility for generalized measurements

2.2.1 POVMs

We now wish to find a notion of compatibility applicable to POVMs. We will again use the idea of joint measurability.

Definition 2.2. A set of POVMs $\{E_{i|\lambda}\}$² is said to be jointly measurable, or compatible, if there is a POVM $\{G_k\}$ from which the effects $E_{i|\lambda}$ can be obtained as

$$E_{i|\lambda} = \sum_k p_k(i|\lambda)\, G_k \tag{2.3}$$

where the $p_k(i|\lambda)$ are positive constants with $\sum_i p_k(i|\lambda) = 1$.

In practice, this means that the set $\{E_{i|\lambda}\}$ is jointly measurable if we can obtain the outcome probabilities for each POVM in the set by post-processing the outcome distribution of $\{G_k\}$.

Proposition 2.3. A set of POVMs $\{E_{i_1|1}\}, \dots, \{E_{i_N|N}\}$ is jointly measurable if and only if there exists a marginal POVM $\{G_{i_1 \dots i_N}\}$ such that

$$E_{i_\lambda|\lambda} = \sum_{i_1 \dots i_{\lambda-1},\, i_{\lambda+1} \dots i_N} G_{i_1 \dots i_N} \tag{2.4}$$

Proof. The "if" implication is trivial, as equation (2.4) is a particular type of post-processing, as required in the definition of joint measurability. We prove the other implication for a set of two POVMs; the generalization to any finite number is then straightforward.

² $\lambda$ indexes the POVMs in the set, $i$ the effects of each POVM. That is, $\sum_i E_{i|\lambda} = \mathbb{1}$ for all $\lambda$.

Consider the jointly measurable generalized measurements $\{E_{i|1}\}$ and $\{E_{j|2}\}$. From the definition of joint measurability, there exists a POVM $\{\tilde G_k\}$ such that

$$E_{i|1} = \sum_k \tilde p_k(i|1)\, \tilde G_k \qquad E_{j|2} = \sum_k \tilde p_k(j|2)\, \tilde G_k \tag{2.5}$$

Define now $p_k(i,j) = \tilde p_k(i|1)\,\tilde p_k(j|2)$. It is easy to see that $0 \le p_k(i,j) \le 1$ and $\sum_{i,j} p_k(i,j) = 1$. Choosing now

$$G_{i,j} = \sum_k p_k(i,j)\, \tilde G_k \tag{2.6}$$

completes the proof.

Example 2.1. Consider the generalized measurements $S_x$ and $S_z$, defined by the effects $S^\pm_x = \frac{1}{2}\big(\mathbb{1} \pm \frac{1}{\sqrt 2}\sigma_x\big)$ and $S^\pm_z = \frac{1}{2}\big(\mathbb{1} \pm \frac{1}{\sqrt 2}\sigma_z\big)$. They represent smeared spin measurements (they can be regarded as a mixture of an actual measurement and a coin flip) from which we still obtain some information about the spin state of the system, although not all the available information is extracted. If we consider the generalized measurement

$$G_{i,j} = \frac{1}{4}\left(\mathbb{1} + \frac{i}{\sqrt 2}\sigma_x + \frac{j}{\sqrt 2}\sigma_z\right) \qquad i, j \in \{-1, +1\} \tag{2.7}$$

it is easy to see that we can recover the original effects as $S^\pm_x = \sum_j G_{\pm,j}$ and $S^\pm_z = \sum_i G_{i,\pm}$.

Remark 2.4. Notice that the effects $S^\pm_x$ and $S^\pm_z$ introduced in Example 2.1 do not commute with each other, but the corresponding POVMs satisfy the condition to be jointly measurable. We therefore conclude that
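Example 2.1 can be verified directly: the sketch below builds the four effects of eq. (2.7), checks that they form a valid POVM, and checks that its marginals reproduce the smeared spin effects.

```python
import numpy as np

Id = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Smeared spin effects of Example 2.1
Sx = {s: 0.5 * (Id + s * sx / np.sqrt(2)) for s in (+1, -1)}
Sz = {s: 0.5 * (Id + s * sz / np.sqrt(2)) for s in (+1, -1)}

# Candidate joint observable of eq. (2.7)
G = {(i, j): 0.25 * (Id + i * sx / np.sqrt(2) + j * sz / np.sqrt(2))
     for i in (+1, -1) for j in (+1, -1)}

# Marginals: sum over the unused index recovers the original effects
marg_x = {i: sum(G[(i, j)] for j in (+1, -1)) for i in (+1, -1)}
marg_z = {j: sum(G[(i, j)] for i in (+1, -1)) for j in (+1, -1)}

# G is a valid POVM: every effect is positive and they sum to the identity
min_eig = min(np.linalg.eigvalsh(g).min() for g in G.values())
total = sum(G.values())
```

Each $G_{i,j}$ has Bloch vector of length exactly 1, so its smallest eigenvalue is 0: the joint observable sits right on the boundary of positivity, which is why the smearing factor $1/\sqrt 2$ cannot be improved.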

the commutativity criterion for joint measurability introduced for PVMs does not carry over to generalized measurements.

Example 2.2. Consider now two general spin measurements defined by $\{E_+ = E,\ E_- = \mathbb{1} - E\}$ and $\{F_+ = F,\ F_- = \mathbb{1} - F\}$, with

$$E = \frac{1}{2}\big(\mathbb{1} + \lambda\, \vec n \cdot \vec\sigma\big) \qquad F = \frac{1}{2}\big(\mathbb{1} + \lambda\, \vec m \cdot \vec\sigma\big) \tag{2.8}$$

The effects are positive, and therefore define valid POVMs, for $0 \le \lambda \le 1$. We want to find for which values of the parameter $\lambda$ the joint measurability of the two POVMs is spoiled. If we assume that the two measurements are compatible, by Proposition 2.3 there exists a four-outcome measurement $\{G_{++}, G_{+-}, G_{-+}, G_{--}\}$ such that $E = G_{++} + G_{+-}$ and $F = G_{++} + G_{-+}$. Denoting $G_{++}$ by $G$, the positivity of the effects gives the following relations:

$$0 \le G \qquad 0 \le G_{+-} = E - G \qquad 0 \le G_{-+} = F - G \tag{2.9}$$

And lastly:

$$E + F = G_{++} + G_{+-} + G_{-+} + G = \mathbb{1} - G_{--} + G \le \mathbb{1} + G \tag{2.10}$$

Without loss of generality we can parametrize $G$ as $G = \frac{1}{2}(\gamma \mathbb{1} + \vec g \cdot \vec\sigma)$. With this parametrization, the inequalities (2.9) and (2.10) become

$$|\vec g| \le \gamma \tag{2.11}$$
$$|\lambda \vec n - \vec g| \le 1 - \gamma \tag{2.12}$$
$$|\lambda \vec m - \vec g| \le 1 - \gamma \tag{2.13}$$
$$|\lambda(\vec n + \vec m) - \vec g| \le \gamma \tag{2.14}$$

Using the triangle inequality for the pairs (2.11), (2.14) and (2.13), (2.12) we get

$$\lambda\, |\vec n + \vec m| \le 2\gamma \qquad \lambda\, |\vec n - \vec m| \le 2 - 2\gamma \tag{2.15}$$

Adding them together gives a necessary condition for the two POVMs to be jointly measurable:

$$\lambda\, \big(|\vec n + \vec m| + |\vec n - \vec m|\big) \le 2 \tag{2.16}$$

By taking $\vec g = \frac{\lambda}{2}(\vec n + \vec m)$ and $\gamma = |\vec g|$ one sees that this condition is also sufficient. We conclude that the two generalized measurements are compatible if and only if

$$\lambda \le \frac{2}{|\vec n + \vec m| + |\vec n - \vec m|} = \frac{1}{\sqrt{1 + \sin\theta}} \tag{2.17}$$

where $0 \le \theta \le \pi$ is the angle between $\vec n$ and $\vec m$. For orthogonal spin measurements, the requirement becomes $\lambda \le \frac{1}{\sqrt 2}$.
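The threshold (2.17) and the sufficiency argument can be checked numerically. The sketch below evaluates $\lambda^* = 2/(|\vec n + \vec m| + |\vec n - \vec m|)$ for orthogonal directions and verifies that at $\lambda = \lambda^*$ the joint POVM built from $\vec g = \lambda(\vec n + \vec m)/2$, $\gamma = |\vec g|$ has four positive effects summing to the identity.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch(c, v):
    """Effect (c*1 + v.sigma)/2 in Bloch form."""
    return 0.5 * (c * np.eye(2) + sum(vi * si for vi, si in zip(v, sig)))

def jm_threshold(n, m):
    """lambda* = 2/(|n+m| + |n-m|), the compatibility bound of eq. (2.16)."""
    n, m = np.asarray(n, float), np.asarray(m, float)
    return 2.0 / (np.linalg.norm(n + m) + np.linalg.norm(n - m))

n, m = np.array([1.0, 0, 0]), np.array([0, 0, 1.0])  # orthogonal spin axes
lam = jm_threshold(n, m)                              # expect 1/sqrt(2)

# Joint POVM from the sufficiency argument, evaluated at the threshold
g = lam * (n + m) / 2
gamma = np.linalg.norm(g)
E, F = bloch(1, lam * n), bloch(1, lam * m)
Gpp = bloch(gamma, g)
joint = [Gpp, E - Gpp, F - Gpp, np.eye(2) - E - F + Gpp]
min_eig = min(np.linalg.eigvalsh(X).min() for X in joint)
```

At the threshold every effect of the joint POVM has a zero eigenvalue, confirming that any larger $\lambda$ breaks positivity.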

2.2.2 Instruments

Following the same path adopted for POVMs, we call a set of instruments compatible if it satisfies the following criterion:

Definition 2.5. A set of instruments $\{\mathcal{I}_{i|\lambda}\}$³ is said to be jointly measurable, or compatible, if there exists a general instrument $\{\mathcal{G}_k\}$ from which the maps $\mathcal{I}_{i|\lambda}$ can be obtained as

$$\mathcal{I}_{i|\lambda} = \sum_k p_k(i|\lambda)\, \mathcal{G}_k \tag{2.18}$$

where the $p_k(i|\lambda)$ are positive constants with $\sum_i p_k(i|\lambda) = 1$.

However, for instruments it is possible to give another, weaker, definition of compatibility:

Definition 2.6. A set of instruments $\{\mathcal{I}_{i|\lambda}\}$ is said to be weakly compatible if the elements of each instrument sum to the same channel, that is, if

$$\sum_i \mathcal{I}_{i|\lambda} = \Lambda \qquad \forall \lambda \tag{2.19}$$

for some channel $\Lambda$.

Proposition 2.7. Every set of compatible instruments $\{\mathcal{I}_{i|\lambda}\}$ is also weakly compatible.

Proof.

$$\sum_i \mathcal{I}_{i|\lambda} = \sum_{i,k} p_k(i|\lambda)\, \mathcal{G}_k = \sum_k \mathcal{G}_k = \Lambda \tag{2.20}$$

³ As before, $\lambda$ indexes the instruments in the set and $i$ the elements of each instrument. That is, $\operatorname{tr}[\sum_i \mathcal{I}_{i|\lambda}(\rho)] = 1$ for all $\lambda$ and $\rho$.

where we used the fact that $\sum_i p_k(i|\lambda) = 1$ when summing over $i$. The final result does not depend on $\lambda$, and therefore all instruments of a compatible set sum up to the same channel.

Definition 2.8. An instrument $\{\mathcal{I}_i\}$ is compatible with a channel $\Lambda$ if $\sum_i \mathcal{I}_i = \Lambda$.

Proposition 2.9. Let $\Lambda$ be a channel on $H$ with a minimal⁴ dilation (see Proposition 1.14):

$$\Lambda^H(T) = \operatorname{tr}_E\big[V^\dagger(T \otimes \mathbb{1})V\big] \tag{2.21}$$

An instrument $\{\mathcal{I}^H_i\}$⁵ is compatible with the channel $\Lambda^H$ if and only if there is a POVM $\{A_i\}$ such that

$$\mathcal{I}^H_i(T) = \operatorname{tr}_E\big[V^\dagger(T \otimes A_i)V\big] \tag{2.22}$$

If such a POVM exists, then it is unique.

Proof. A proof of this fact can be found in [5].

Corollary 2.10. Given an instrument $\{\mathcal{I}^H_i\}$ in the Heisenberg picture, as in eq. (2.22), the Schrödinger picture can be written as

$$\mathcal{I}_i(\rho) = \operatorname{tr}_E\big[(\mathbb{1} \otimes A_i)\, V(\rho \otimes \mathbb{1})V^\dagger\big] \qquad \rho \in S(H) \tag{2.23}$$

⁴ Minimal dilations are realized by taking the ancillary space $H_E$ to have minimal dimension.
⁵ We are using the Heisenberg representation of the instrument, as defined in Definition 1.13.

Proof.

$$\begin{aligned}
\operatorname{tr}[T\, \mathcal{I}_i(\rho)] &= \operatorname{tr}[\mathcal{I}^H_i(T)\, \rho] = \operatorname{tr}\big[\operatorname{tr}_E[V^\dagger(T \otimes A_i)V]\, \rho\big] \\
&= \operatorname{tr}\big[V^\dagger(T \otimes A_i)V\, (\rho \otimes \mathbb{1})\big] = \operatorname{tr}\big[(T \otimes A_i)\, V(\rho \otimes \mathbb{1})V^\dagger\big] \\
&= \operatorname{tr}\big[(T \otimes \mathbb{1})\,(\mathbb{1} \otimes A_i)\, V(\rho \otimes \mathbb{1})V^\dagger\big] \\
&= \operatorname{tr}\big[T\, \operatorname{tr}_E[(\mathbb{1} \otimes A_i)V(\rho \otimes \mathbb{1})V^\dagger]\big] \qquad \forall \rho \in S(H),\ T \in \mathcal{T}(H)
\end{aligned} \tag{2.24}$$


Chapter 3

Steering

3.1 State steering

The concept of state steering was introduced by Schrödinger as a response to the celebrated Einstein-Podolsky-Rosen (EPR) paper. In an EPR setup, two parties, usually denoted Alice and Bob, share a pair of entangled particles. Steering refers to the ability of one party, say Alice, to remotely affect the other party's (say Bob's) state by performing local measurements on her side and sending the measurement settings and outcomes to Bob. This reflects the fact that, for some shared states, the outcome probabilities of Bob's measurements cannot be explained by considering only classical correlations between the marginal states possessed by Bob and Alice (a practical illustration of a steerable scenario will be given in Example 3.1).

Definition 3.1. Labeling by $\{A_{i|\lambda}\}$ and $\{B_{j|\mu}\}$ the generalized measurements on Alice's and Bob's sides in an EPR-like setup where the state $\rho_{AB}$ is shared between the two parties, we say that the scenario is unsteerable if the probabilities of the measurement outcomes can be written as

$$P(i,j|\lambda,\mu) = \operatorname{tr}\big[\rho_{AB}\,(A_{i|\lambda} \otimes B_{j|\mu})\big] = \sum_k p(i|\lambda,k)\, p(k)\, \operatorname{tr}\big[\rho_k B_{j|\mu}\big] \tag{3.1}$$

where $p(i|\lambda,k)$ and $p(k)$ are probability distributions and $\rho_k \in S(H_B)$ for all $k$.

If equation (3.1) holds, Bob can simply assume that he holds the marginal states $\rho_k$ with probability $p(k)$. The overall probability distribution $P(i,j|\lambda,\mu)$ is then obtained from classical correlations between Alice's and Bob's sides, encoded in $p(i|\lambda,k)$ (i.e. the probability for Alice to obtain outcome $i$ given that she measures POVM $\lambda$ when Bob holds state $k$). Therefore, Alice's choice of measurement cannot remotely influence Bob's state.

Example 3.1 (Steerable scenario). Assume that the two parties share the maximally entangled state $|\psi\rangle = \frac{1}{\sqrt 2}(|00\rangle + |11\rangle)$. Depending on Alice's choice of measurement, say $\sigma_x$ or $\sigma_z$, Bob will find his state to be either $|x_\pm\rangle$ or $|z_\pm\rangle$. Bob has access to Alice's measurement choices and results, so he can verify that his marginal state always matches Alice's, and conclude that Alice's measurement choice can remotely prepare his state. We say that Alice can steer Bob. A more detailed discussion of this scenario is given in Example 4.3.

Proposition 3.2. If Alice and Bob share a separable state, the scenario is unsteerable for any steering attempt by Alice.
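Before the formal proof, Proposition 3.2 can be illustrated numerically on a concrete separable state. The sketch below (the ensemble, measurement settings and state are arbitrary choices for illustration, not taken from the text) checks that the joint probabilities factor into the local-hidden-state form of eq. (3.1) built from the ensemble itself.

```python
import numpy as np

def proj(v):
    """Rank-one projector onto a (normalized) state vector."""
    v = np.asarray(v, complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# A separable two-qubit state: classical mixture of product states
p = [0.3, 0.7]
rhoA = [proj([1, 0]), proj([1, 1])]
rhoB = [proj([1, 1j]), proj([0, 1])]
rho = sum(pk * np.kron(a, b) for pk, a, b in zip(p, rhoA, rhoB))

# Settings: Alice chooses sigma_z or sigma_x eigenprojectors, Bob measures sigma_z
alice = {0: [proj([1, 0]), proj([0, 1])],
         1: [proj([1, 1]), proj([1, -1])]}
bob = [proj([1, 0]), proj([0, 1])]

def P(i, j, lam):
    """Joint outcome probability tr[rho (A_{i|lam} ⊗ B_j)]."""
    return np.trace(rho @ np.kron(alice[lam][i], bob[j])).real

def P_lhs(i, j, lam):
    """Local-hidden-state decomposition of eq. (3.1), with hidden variable k."""
    return sum(pk * np.trace(a @ alice[lam][i]).real * np.trace(b @ bob[j]).real
               for pk, a, b in zip(p, rhoA, rhoB))
```

For a separable state the ensemble index plays the role of the hidden variable $k$, so the two expressions agree for every setting and outcome.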

Proof. Every separable state can be written as $\rho = \sum_k p_k\,(\rho^A_k \otimes \rho^B_k)$. Labeling Alice's and Bob's measurements as in Definition 3.1, we have

$$\begin{aligned}
P(i,j|\lambda,\mu) &= \operatorname{tr}\big[\rho\,(A_{i|\lambda} \otimes B_{j|\mu})\big] = \sum_k p_k \operatorname{tr}\big[(\rho^A_k \otimes \rho^B_k)(A_{i|\lambda} \otimes B_{j|\mu})\big] \\
&= \sum_k p_k \operatorname{tr}\big[\rho^A_k A_{i|\lambda}\big] \operatorname{tr}\big[\rho^B_k B_{j|\mu}\big] =: \sum_k p_k\, p(i|\lambda,k)\, \operatorname{tr}\big[\rho^B_k B_{j|\mu}\big]
\end{aligned} \tag{3.2}$$

which is the required expression for unsteerability.

Example 3.2 (Werner state). Suppose Alice and Bob share the state $W_\lambda = \lambda |\psi\rangle\langle\psi| + \frac{1-\lambda}{4}\mathbb{1}$, where $|\psi\rangle = \frac{1}{\sqrt 2}(|00\rangle + |11\rangle)$. It is well known [6] that this state is separable for $\lambda \le \frac{1}{3}$ and entangled otherwise. Somewhat surprisingly, for $\lambda \le \frac{1}{2}$ the state $W_\lambda$ gives rise to local statistics for arbitrary projective measurements. This means that for $\frac{1}{3} < \lambda \le \frac{1}{2}$ the state, albeit entangled, cannot be used for quantum steering.

3.1.1 State assemblages

A steering scenario is defined by both the bipartite state shared between the parties and the set of measurements Alice can choose from; the notion of a state assemblage is useful to describe steering scenarios, as it contains information about both.

Definition 3.3. A state assemblage for a steering scenario is a set of positive matrices $\{\rho_{a|\lambda}\}$ ($\lambda$ labels the measurement chosen by Alice and $a$ the

outcome) such that

$$\sum_a \rho_{a|\lambda} = \rho \qquad \forall \lambda \tag{3.3}$$

where $\rho$ is a unit-trace density matrix.

State assemblages describe the collection of unnormalized partial states possessed by Bob after Alice's measurement. The requirement (3.3) is necessary to avoid signaling, i.e. Bob must not be able to determine which measurement Alice performed. The elements of the assemblage are obtained as¹ ²

$$\rho_{a|\lambda} = \operatorname{tr}_A\Big[\big(\sqrt{A_{a|\lambda}} \otimes \mathbb{1}\big)\, \rho_{AB}\, \big(\sqrt{A_{a|\lambda}} \otimes \mathbb{1}\big)\Big] \tag{3.4}$$

Definition 3.4. A state assemblage is unsteerable if its elements $\rho_{a|\lambda}$ can be written as

$$\rho_{a|\lambda} = \sum_k p_k(a|\lambda)\, \rho_k \tag{3.5}$$

where $\operatorname{tr}[\sum_k \rho_k] = 1$ and the $p_k(a|\lambda)$ are positive constants such that $\sum_a p_k(a|\lambda) = 1$.

3.1.2 One-way steering

In the previous discussion we saw that a steering scenario is intrinsically asymmetric: one of the two parties is untrusted and tries to demonstrate steering capabilities to the trusted party. Given this fundamental asymmetry, a natural question arises: are there scenarios which are only

¹ $\rho_{AB}$ denotes the bipartite state shared by the parties, $\{A_{a|\lambda}\}$ the set of measurements Alice can choose from.
² The (unnormalized) state shared after Alice's measurement is calculated using the so-called Lüders instruments introduced in Proposition 1.21.

steerable in one direction? That is, are there scenarios in which Alice can steer Bob, yet Bob cannot steer Alice?

Example 3.3. Consider the bipartite state³ ⁴

$$\rho_{AB} = \alpha P_\psi + \frac{1-\alpha}{5}\,(\,\cdots\,) \tag{3.6}$$

Similarly to a Werner state, for $\alpha \le \frac{1}{2}$ an LHS model can be constructed for steering attempts by Bob with PVMs. Conversely, it is known that the scenario allows steering from Alice to Bob for such $\alpha$ with specifically chosen measurements. Here we omit giving the measurements and the LHS model explicitly, referring the reader to [7]. Hence the state in (3.6) is one-way steerable for $\alpha \le \frac{1}{2}$.

As shown in Example 3.3, the answer to the aforementioned question is, perhaps surprisingly, yes.

3.2 Channel steering

In order to introduce the notion of channel steering, we first need broadcast channels, i.e. channels with multiple outputs.

Definition 3.5. A broadcast channel $\Lambda^{C \to AB}$ is a channel $\Lambda : S(H_C) \to S(H_A \otimes H_B)$ whose output space is composed of multiple systems.

³ $P_\psi$ is the projector onto the singlet state $|\psi\rangle = \frac{1}{\sqrt 2}(|01\rangle - |10\rangle)$
⁴ $0 \le \alpha \le 1$
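A minimal numerical sketch of a broadcast channel may help before the formal development continues. Below, a toy channel $C \to AB$ is built from the copying isometry $|0\rangle \mapsto |00\rangle$, $|1\rangle \mapsto |11\rangle$ (an arbitrary choice for illustration), and its reduction onto Bob's output is computed by a partial trace over $A$.

```python
import numpy as np

# Toy broadcast channel C -> AB from the isometry |0> -> |00>, |1> -> |11>
V = np.zeros((4, 2), dtype=complex)
V[0, 0] = 1.0   # |00><0|
V[3, 1] = 1.0   # |11><1|

def broadcast(rho_C):
    """Lambda^{C->AB}(rho) = V rho V†, an output state on H_A ⊗ H_B."""
    return V @ rho_C @ V.conj().T

def partial_trace_A(rho_AB, dA=2, dB=2):
    """tr_A of a (dA*dB)x(dA*dB) matrix in the A ⊗ B index ordering."""
    r = rho_AB.reshape(dA, dB, dA, dB)
    return np.einsum('ijik->jk', r)

rho_C = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)
rho_AB = broadcast(rho_C)           # joint output, still unit trace
reduction = partial_trace_A(rho_AB)  # the reduction Gamma^{C->B}(rho_C)
```

For this copy-like channel the reduction fully decoheres the input in the computational basis, so `reduction` is the diagonal part of `rho_C`; the off-diagonal information is shared with Alice's output, which is exactly the resource a coherent extension can exploit.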

Definition 3.6. Given a broadcast channel $\Lambda^{C \to AB}$, we say that the channel $\Gamma^{C \to B}$ is a reduction of $\Lambda$ if

$$\Gamma^{C \to B} = \operatorname{tr}_A \circ \Lambda^{C \to AB} \tag{3.7}$$

Moreover, if (3.7) holds, we say that $\Lambda^{C \to AB}$ is an extension of $\Gamma^{C \to B}$.

With these definitions in mind, we wish to identify the steerability of the broadcast channel $\Lambda^{C \to AB}$ with the possibility for Alice to steer Bob by performing measurements on her output of the channel. In such a scenario, the information available to Alice may be more than just classical correlations between Alice's and Bob's outputs.

Definition 3.7. We say that $\Lambda^{C \to AB}$ is an incoherent extension of $\Lambda^{C \to B}$ if there is a $\Lambda^{C \to B}$-compatible instrument $\{\Lambda^{C \to B}_i\}$ and a set of states $\{\rho_i\}$ such that

$$\Lambda^{C \to AB}(\,\cdot\,) = \sum_i \Lambda^{C \to B}_i(\,\cdot\,) \otimes \rho_i \tag{3.8}$$

We say that the extension $\Lambda^{C \to AB}$ is coherent if it is not incoherent.

Note that for an incoherent extension, the information available to Alice is at most classical information about which subchannel $\Lambda^{C \to B}_i$ was applied to the input. In fact, any input $\rho_C$ gives a separable state $\rho_{AB}$ as output:

$$\rho_{AB} = \Lambda^{C \to AB}(\rho_C) = \sum_i \Lambda^{C \to B}_i(\rho_C) \otimes \rho_i = \sum_i p_i\, \rho^B_i \otimes \rho^A_i \tag{3.9}$$

Steering attempts by Alice will be made with the use of measurements;

each measurement $\{A_i\}$ on $A$ leads to a reduction of the original channel through the relation

$$\Lambda^{C \to B}_i(\,\cdot\,) = \operatorname{tr}_A\big[(A_i \otimes \mathbb{1})\, \Lambda^{C \to AB}(\,\cdot\,)\big] \tag{3.10}$$

where $\{\Lambda^{C \to B}_i\}$ is an instrument compatible with the channel $\Lambda^{C \to B}$. We are now ready to give the definition of channel steering; the reason for this definition will be made clear by Theorem 3.9.

Definition 3.8. We say that the channel extension $\Lambda^{C \to AB}$ is unsteerable if every set of POVMs $\{A_{i|\lambda}\}$ on $A$ leads to a compatible set of instruments $\{\Lambda^{C \to B}_{i|\lambda}\}$ via the relation (3.10).

Theorem 3.9. Every incoherent channel extension $\Lambda^{C \to AB}$ is unsteerable. Conversely, every compatible set of subchannels (instruments) $\{\Lambda^{C \to B}_{i|\lambda}\}$ for $\Lambda^{C \to B}$ can be obtained through (3.10) from an incoherent channel extension.

Proof. Following [8], for any measurement $\{A_{i|\lambda}\}$ we can write:

$$\Lambda^{C \to B}_{i|\lambda}(\,\cdot\,) = \operatorname{tr}_A\big[(A_{i|\lambda} \otimes \mathbb{1})\, \Lambda^{C \to AB}(\,\cdot\,)\big] = \operatorname{tr}_A\Big[(A_{i|\lambda} \otimes \mathbb{1}) \sum_k \Lambda^{C \to B}_k(\,\cdot\,) \otimes \rho_k\Big] = \sum_k \operatorname{tr}\big[A_{i|\lambda}\, \rho_k\big]\, \Lambda^{C \to B}_k(\,\cdot\,) = \sum_k p_k(i|\lambda)\, \Lambda^{C \to B}_k(\,\cdot\,) \tag{3.11}$$

which proves that $\Lambda^{C \to AB}$ is unsteerable. On the other hand, any compatible set $\{\Lambda^{C \to B}_{i|\lambda} = \sum_k p_k(i|\lambda)\, \Lambda^{C \to B}_k\}$ of instruments can be obtained by applying the set of measurements $\{A_{i|\lambda} = \sum_k p_k(i|\lambda)\, |k\rangle\langle k|\}$ to the $A$ output of $\Lambda^{C \to AB} = \sum_k \Lambda^{C \to B}_k \otimes |k\rangle\langle k|$. By

direct calculation we have:

$$\operatorname{tr}_A\big[(A_{i|\lambda} \otimes \mathbb{1})\, \Lambda^{C \to AB}(\,\cdot\,)\big] = \operatorname{tr}_A\Big[\Big(\sum_{k'} p_{k'}(i|\lambda)\, |k'\rangle\langle k'| \otimes \mathbb{1}\Big)\Big(\sum_k \Lambda^{C \to B}_k(\,\cdot\,) \otimes |k\rangle\langle k|\Big)\Big] = \sum_{k,k'} \delta_{k,k'}\, p_{k'}(i|\lambda)\, \Lambda^{C \to B}_k(\,\cdot\,) = \sum_k p_k(i|\lambda)\, \Lambda^{C \to B}_k(\,\cdot\,) = \Lambda^{C \to B}_{i|\lambda}(\,\cdot\,)$$

Thus, if Bob can be convinced that Alice can steer his channel, then Alice must have access to a coherent extension of that channel.

3.2.1 Channel assemblages

In analogy with state assemblages, we define channel assemblages as follows:

Definition 3.10. A channel assemblage $\{\Lambda^{C \to B}_{i|\lambda}\}$ is a set of weakly compatible instruments, that is, a set of instruments summing up to the same channel. The elements that compose each instrument are also called subchannels.

Requiring the instruments to be weakly compatible ensures that the non-signaling condition (3.3) is met for every input state.

Definition 3.11. A channel assemblage $\{\Lambda^{C \to B}_{i|\lambda}\}$ is said to be unsteerable if its instrument elements are compatible⁵, that is, if there is a joint instrument $\{\mathcal{G}_k\}$ such that $\Lambda^{C \to B}_{i|\lambda} = \sum_k p_k(i|\lambda)\, \mathcal{G}_k$. A channel assemblage is steerable if it is not unsteerable.

⁵ See Definition 2.5.
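The first half of Theorem 3.9 can be checked numerically on a small example. The sketch below (subchannels, flag states and post-processing weights are arbitrary choices for illustration, with the classical flag register placed on Alice's side as the first tensor factor) builds an incoherent extension with orthogonal flag states $|k\rangle\langle k|$ and verifies that every induced instrument element is the predicted post-processing $\sum_k p_k(i|\lambda)\,\Lambda_k$.

```python
import numpy as np

# Two subchannels on a qubit: Kraus elements of a measure-in-z instrument
K = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]
sub = [lambda r, A=A: A @ r @ A.conj().T for A in K]

basisA = [np.diag([1.0, 0.0]).astype(complex),
          np.diag([0.0, 1.0]).astype(complex)]      # flag projectors |k><k| on A

def extension(rho):
    """Incoherent extension: Lambda(rho) = sum_k |k><k|_A ⊗ Lambda_k(rho)_B."""
    return sum(np.kron(basisA[k], sub[k](rho)) for k in range(2))

def ptrace_A(rAB):
    """Partial trace over the first qubit (A ⊗ B index ordering)."""
    return np.einsum('ijik->jk', rAB.reshape(2, 2, 2, 2))

# Any POVM on A is a post-processing of the flag: A_{i|lam} = sum_k p_k(i|lam)|k><k|
p = {(0, 0): [1.0, 0.2], (1, 0): [0.0, 0.8]}   # arbitrary weights p_k(i|lam=0)

def induced(i, lam, rho):
    """Instrument element obtained from the extension via eq. (3.10)."""
    Ai = sum(p[(i, lam)][k] * basisA[k] for k in range(2))
    return ptrace_A(np.kron(Ai, np.eye(2, dtype=complex)) @ extension(rho))

rho = np.array([[0.6, 0.2], [0.2, 0.4]], dtype=complex)
lhs = induced(0, 0, rho)
rhs = sum(p[(0, 0)][k] * sub[k](rho) for k in range(2))  # sum_k p_k(i|lam) Λ_k(rho)
```

Because the flag states are orthogonal, measuring Alice's output can only reveal which subchannel acted, which is exactly why no choice of POVM on $A$ can produce an incompatible set of instruments.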


Lecture 18 - Clifford Algebras and Spin groups Lecture 18 - Clifford Algebras and Spin groups April 5, 2013 Reference: Lawson and Michelsohn, Spin Geometry. 1 Universal Property If V is a vector space over R or C, let q be any quadratic form, meaning

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES

MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES MASSACHUSETTS INSTITUTE OF TECHNOLOGY 6.436J/15.085J Fall 2008 Lecture 5 9/17/2008 RANDOM VARIABLES Contents 1. Random variables and measurable functions 2. Cumulative distribution functions 3. Discrete

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

Factorization Theorems

Factorization Theorems Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

Quantum Computing. Robert Sizemore

Quantum Computing. Robert Sizemore Quantum Computing Robert Sizemore Outline Introduction: What is quantum computing? What use is quantum computing? Overview of Quantum Systems Dirac notation & wave functions Two level systems Classical

More information

Till now, almost all attention has been focussed on discussing the state of a quantum system.

Till now, almost all attention has been focussed on discussing the state of a quantum system. Chapter 13 Observables and Measurements in Quantum Mechanics Till now, almost all attention has been focussed on discussing the state of a quantum system. As we have seen, this is most succinctly done

More information

Particle Physics. Michaelmas Term 2011 Prof Mark Thomson. Handout 7 : Symmetries and the Quark Model. Introduction/Aims

Particle Physics. Michaelmas Term 2011 Prof Mark Thomson. Handout 7 : Symmetries and the Quark Model. Introduction/Aims Particle Physics Michaelmas Term 2011 Prof Mark Thomson Handout 7 : Symmetries and the Quark Model Prof. M.A. Thomson Michaelmas 2011 206 Introduction/Aims Symmetries play a central role in particle physics;

More information

A Primer on Index Notation

A Primer on Index Notation A Primer on John Crimaldi August 28, 2006 1. Index versus Index notation (a.k.a. Cartesian notation) is a powerful tool for manipulating multidimensional equations. However, there are times when the more

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

Linear Algebra I. Ronald van Luijk, 2012

Linear Algebra I. Ronald van Luijk, 2012 Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

0.1 Phase Estimation Technique

0.1 Phase Estimation Technique Phase Estimation In this lecture we will describe Kitaev s phase estimation algorithm, and use it to obtain an alternate derivation of a quantum factoring algorithm We will also use this technique to design

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

5.3 The Cross Product in R 3

5.3 The Cross Product in R 3 53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

More information

Chapter 9 Unitary Groups and SU(N)

Chapter 9 Unitary Groups and SU(N) Chapter 9 Unitary Groups and SU(N) The irreducible representations of SO(3) are appropriate for describing the degeneracies of states of quantum mechanical systems which have rotational symmetry in three

More information

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,

More information

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION

4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION 4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:

More information

CBE 6333, R. Levicky 1. Tensor Notation.

CBE 6333, R. Levicky 1. Tensor Notation. CBE 6333, R. Levicky 1 Tensor Notation. Engineers and scientists find it useful to have a general terminology to indicate how many directions are associated with a physical quantity such as temperature

More information

Unified Lecture # 4 Vectors

Unified Lecture # 4 Vectors Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

More information

Row Echelon Form and Reduced Row Echelon Form

Row Echelon Form and Reduced Row Echelon Form These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

MA106 Linear Algebra lecture notes

MA106 Linear Algebra lecture notes MA106 Linear Algebra lecture notes Lecturers: Martin Bright and Daan Krammer Warwick, January 2011 Contents 1 Number systems and fields 3 1.1 Axioms for number systems......................... 3 2 Vector

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

by the matrix A results in a vector which is a reflection of the given

by the matrix A results in a vector which is a reflection of the given Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that

More information

4: SINGLE-PERIOD MARKET MODELS

4: SINGLE-PERIOD MARKET MODELS 4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market

More information

3. Mathematical Induction

3. Mathematical Induction 3. MATHEMATICAL INDUCTION 83 3. Mathematical Induction 3.1. First Principle of Mathematical Induction. Let P (n) be a predicate with domain of discourse (over) the natural numbers N = {0, 1,,...}. If (1)

More information

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.

Math 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i. Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(

More information

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression

The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every

More information

T ( a i x i ) = a i T (x i ).

T ( a i x i ) = a i T (x i ). Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

Mathematical Induction

Mathematical Induction Mathematical Induction (Handout March 8, 01) The Principle of Mathematical Induction provides a means to prove infinitely many statements all at once The principle is logical rather than strictly mathematical,

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

Nonlinear Iterative Partial Least Squares Method

Nonlinear Iterative Partial Least Squares Method Numerical Methods for Determining Principal Component Analysis Abstract Factors Béchu, S., Richard-Plouet, M., Fernandez, V., Walton, J., and Fairley, N. (2016) Developments in numerical treatments for

More information

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices. Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R

More information

Asymptotic Behaviour and Cyclic Properties of Tree-shift Operators

Asymptotic Behaviour and Cyclic Properties of Tree-shift Operators Asymptotic Behaviour and Cyclic Properties of Tree-shift Operators University of Szeged, Bolyai Institute 29th July 2013, Gothenburg Banach Algebras and Applications 1 Introduction, Motivation Directed

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

Linear Algebra: Determinants, Inverses, Rank

Linear Algebra: Determinants, Inverses, Rank D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of

More information

Metric Spaces. Chapter 7. 7.1. Metrics

Metric Spaces. Chapter 7. 7.1. Metrics Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

More information

Mathematical finance and linear programming (optimization)

Mathematical finance and linear programming (optimization) Mathematical finance and linear programming (optimization) Geir Dahl September 15, 2009 1 Introduction The purpose of this short note is to explain how linear programming (LP) (=linear optimization) may

More information