Matrix Polynomials in the Theory of Linear Control Systems
Ion Zaballa, Departamento de Matemática Aplicada y EIO
Matrix Polynomials in Linear Control

Matrix polynomials arise in linear control theory in several ways:

- As coefficient matrices of high-order systems:
$$T\left(\tfrac{d}{dt}\right)x(t) = U\left(\tfrac{d}{dt}\right)u(t), \qquad y(t) = V\left(\tfrac{d}{dt}\right)x(t) + W\left(\tfrac{d}{dt}\right)u(t)$$
- Representing first-order systems: $\dot x(t) = Ax(t) + Bu(t)$
- In behaviours (Polderman & Willems): $P\left(\tfrac{d}{dt}\right)y(t) = Q\left(\tfrac{d}{dt}\right)u(t)$
Polynomial System Matrices

$$T\left(\tfrac{d}{dt}\right)x(t) = U\left(\tfrac{d}{dt}\right)u(t), \qquad y(t) = V\left(\tfrac{d}{dt}\right)x(t) + W\left(\tfrac{d}{dt}\right)u(t),$$

where $x(t)$ is the state vector, $u(t)$ the input or control vector, and $y(t)$ the output or measurement vector.

If $R(\lambda) = R_p\lambda^p + R_{p-1}\lambda^{p-1} + \cdots + R_1\lambda + R_0$ is a matrix polynomial:
$$R\left(\tfrac{d}{dt}\right)x(t) = R_p\frac{d^p x(t)}{dt^p} + R_{p-1}\frac{d^{p-1}x(t)}{dt^{p-1}} + \cdots + R_1\frac{dx(t)}{dt} + R_0 x(t).$$

In the Laplace domain (zero initial conditions), with $\det T(s) \not\equiv 0$:
$$\begin{bmatrix} T(s) & U(s) \\ -V(s) & W(s) \end{bmatrix}\begin{bmatrix} \bar x(s) \\ -\bar u(s) \end{bmatrix} = \begin{bmatrix} 0 \\ -\bar y(s) \end{bmatrix},$$
and
$$P(s) = \begin{bmatrix} T(s) & U(s) \\ -V(s) & W(s) \end{bmatrix}$$
is the Polynomial System Matrix of the system.
Transfer Function Matrix

From $T(s)\bar x(s) = U(s)\bar u(s)$,
$$\bar y(s) = V(s)\bar x(s) + W(s)\bar u(s) = \left(V(s)T(s)^{-1}U(s) + W(s)\right)\bar u(s),$$
so $G(s) = V(s)T(s)^{-1}U(s) + W(s)$ is the Transfer Function Matrix of the system.

When do two polynomial system matrices yield the same Transfer Function Matrix?
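To make this concrete, here is a minimal SymPy sketch with made-up data (the particular $T, U, V, W$ below are not from the talk) evaluating $G(s) = V(s)T(s)^{-1}U(s) + W(s)$ for a small polynomial system matrix:

```python
# Minimal sketch, made-up data: the transfer function matrix of a
# polynomial system matrix is G(s) = V(s) T(s)^{-1} U(s) + W(s).
import sympy as sp

s = sp.symbols('s')
T = sp.Matrix([[s**2 + 1, 0], [1, s]])  # det T = s*(s**2 + 1), not identically 0
U = sp.Matrix([[1], [0]])
V = sp.Matrix([[0, 1]])
W = sp.Matrix([[0]])

G = sp.simplify(V * T.inv() * U + W)
print(G)  # Matrix([[-1/(s**3 + s)]])
```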
Strict System Equivalence

Two polynomial system matrices are strictly system equivalent (s.s.e.) if
$$\begin{bmatrix} M(s) & 0 \\ X(s) & I_p \end{bmatrix}\begin{bmatrix} T_1(s) & U_1(s) \\ -V_1(s) & W_1(s) \end{bmatrix}\begin{bmatrix} N(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} T_2(s) & U_2(s) \\ -V_2(s) & W_2(s) \end{bmatrix},$$
where $M(s)$ and $N(s)$ are unimodular ($\det = c \neq 0$, a nonzero constant), the $T_i(s)$ are $n \times n$, the $U_i(s)$ are $n \times m$, and the $V_i(s)$ are $p \times n$.

Strict system equivalence preserves the transfer function matrix:
$$V_1(s)T_1(s)^{-1}U_1(s) + W_1(s) = V_2(s)T_2(s)^{-1}U_2(s) + W_2(s).$$
Does the converse implication hold?
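The invariance can be checked symbolically. A SymPy sketch with made-up data, using the Rosenbrock sign convention written above:

```python
# Made-up data: apply a strict system equivalence to [T, U; -V, W] with
# unimodular M(s), N(s), and check that V T^{-1} U + W is unchanged.
import sympy as sp

s = sp.symbols('s')
n, m, p = 2, 1, 1
T1 = sp.Matrix([[s**2 + 1, 0], [1, s]]); U1 = sp.Matrix([[1], [0]])
V1 = sp.Matrix([[0, 1]]);                W1 = sp.Matrix([[0]])
P1 = sp.BlockMatrix([[T1, U1], [-V1, W1]]).as_explicit()

M = sp.Matrix([[1, s], [0, 1]])          # unimodular: det = 1
N = sp.Matrix([[1, 0], [s, 1]])          # unimodular: det = 1
X = sp.Matrix([[s, 1]]); Y = sp.Matrix([[1], [s]])

L = sp.BlockMatrix([[M, sp.zeros(n, p)], [X, sp.eye(p)]]).as_explicit()
R = sp.BlockMatrix([[N, Y], [sp.zeros(m, n), sp.eye(m)]]).as_explicit()
P2 = sp.expand(L * P1 * R)

T2, U2 = P2[:n, :n], P2[:n, n:]
V2, W2 = -P2[n:, :n], P2[n:, n:]
G1 = sp.simplify(V1 * T1.inv() * U1 + W1)
G2 = sp.simplify(V2 * T2.inv() * U2 + W2)
print(sp.simplify(G1 - G2))  # Matrix([[0]])
```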
Coprime Matrix Polynomials

If $A(s) = \tilde A(s)C(s)$ and $B(s) = \tilde B(s)C(s)$, then $C(s)$ is a right common factor of $A(s)$ and $B(s)$. $A(s)$ and $B(s)$ are right coprime when every right common factor $C(s)$ is unimodular:
$$A(s), B(s) \text{ right coprime} \iff \begin{bmatrix} A(s) \\ B(s) \end{bmatrix} \overset{e}{\sim} \begin{bmatrix} I_n \\ 0 \end{bmatrix}, \qquad A(s), B(s) \text{ left coprime} \iff \begin{bmatrix} A(s) & B(s) \end{bmatrix} \overset{e}{\sim} \begin{bmatrix} I_n & 0 \end{bmatrix},$$
where $\overset{e}{\sim}$ denotes (unimodular) equivalence.
Coprimeness and Strict System Equivalence

If $G(s) \in \mathbb F(s)^{p \times m}$, then $(T(s), U(s), V(s), W(s))$ is a realization of $G(s)$ if $G(s) = W(s) + V(s)T(s)^{-1}U(s)$; its order is $\deg(\det T(s))$.

$(T(s), U(s), V(s), W(s))$ is a realization of least order $\iff$ $(T(s), U(s))$ left coprime and $(T(s), V(s))$ right coprime.

If $P_1(s)$ and $P_2(s)$ are polynomial system matrices of least order, then
$$P_1(s) \text{ s.s.e. } P_2(s) \iff V_1(s)T_1(s)^{-1}U_1(s) + W_1(s) = V_2(s)T_2(s)^{-1}U_2(s) + W_2(s).$$
Systems in State-Space Form

$$(\Sigma)\quad \begin{cases} \dot x(t) = Ax(t) + Bu(t) \\ y(t) = Cx(t) \end{cases} \qquad P(s) = \begin{bmatrix} sI_n - A & B \\ -C & 0 \end{bmatrix}$$

Controllability in $[t_0, t_1]$: for all $x_0, x_1 \in \mathbb R^n$ there is a control $u$ defined in $[t_0, t_1]$ such that the solution of the I.V.P. $\dot x(t) = Ax(t) + Bu(t)$, $x(t_0) = x_0$, satisfies $x(t_1) = x_1$. (The slide illustrates this with a trajectory steered from $x_0$ at $t_0$ to $x_1$ at $t_1$.)

$(\Sigma)$ controllable $\iff$ $(A, B)$ controllable.
Controllability, Observability and Coprimeness

$(A, B)$ controllable is equivalent to:
$$\operatorname{rank}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = n,$$
or to $sI_n - A$ and $B$ being left coprime:
$$\begin{bmatrix} sI_n - A & B \end{bmatrix} \overset{e}{\sim} \begin{bmatrix} I_n & 0 \end{bmatrix}.$$

Observability in $[t_0, t_1]$: the value of $y$ in $[t_0, t_1]$ determines the state at $t_0$, $x(t_0)$, and so the vector function $x(t)$ in $[t_0, t_1]$.

$(\Sigma)$ observable $\iff$ $(A, C)$ observable $\iff$ $(A^T, C^T)$ controllable.

$$P(s) = \begin{bmatrix} sI_n - A & B \\ -C & 0 \end{bmatrix} \text{ is of least order} \iff (A, B) \text{ controllable and } (A, C) \text{ observable}.$$
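A minimal numerical sketch of the rank test (NumPy, made-up data):

```python
# Testing controllability of (A, B) via rank[B AB ... A^(n-1)B] = n.
import numpy as np

def controllability_matrix(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Stack B, AB, ..., A^(n-1)B column-wise."""
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB
    return np.hstack(blocks)

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K = controllability_matrix(A, B)
print(np.linalg.matrix_rank(K) == A.shape[0])  # True: (A, B) controllable
```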
Transfer Function Matrix

From now on:
$$(\Sigma)\quad \begin{cases} \dot x(t) = Ax(t) + Bu(t) \\ y(t) = I_n x(t) \end{cases} \qquad P(s) = \begin{bmatrix} sI_n - A & B \\ -I_n & 0 \end{bmatrix}$$

Transfer Function Matrix: $G(s) = (sI_n - A)^{-1}B$.

Let $\tilde d(s) = \operatorname{lcm}\{\text{denominators of } G(s)\}$, so that $G(s) = \tilde N(s)\left(\tilde d(s)I_m\right)^{-1}$ with $\tilde N(s)$ polynomial. Removing the right common factors of $\tilde N(s)$ and $\tilde d(s)I_m$ yields a right coprime factorization
$$G(s) = \tilde N(s)\left(\tilde d(s)I_m\right)^{-1} = N(s)D(s)^{-1}.$$

If $(A, B)$ is controllable, then
$$\begin{bmatrix} sI_n - A & B \\ -I_n & 0 \end{bmatrix} \;\overset{\text{s.s.e.}}{\sim}\; \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s) & I_m \\ 0 & -N(s) & 0 \end{bmatrix}.$$
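The first step of this construction is easy to carry out symbolically. A small SymPy sketch with made-up data, forming $G(s) = (sI - A)^{-1}B$ and clearing the least common denominator $\tilde d(s)$:

```python
# Made-up data: compute G(s) = (sI - A)^{-1} B and the factorization
# G(s) = N~(s) (d~(s) I_m)^{-1} before removing right common factors.
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
G = sp.simplify((s * sp.eye(2) - A).inv() * B)
d = sp.lcm_list([sp.denom(sp.cancel(g)) for g in G])  # lcm of entry denominators
N_tilde = sp.simplify(G * d)                          # polynomial numerator matrix
print(d)        # s**2 + 3*s + 2
print(N_tilde)  # Matrix([[1], [s]])
```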
Polynomial Matrix Representations

Written out, the strict system equivalence above reads
$$\begin{bmatrix} U(s) & 0 \\ X(s) & I_n \end{bmatrix}\begin{bmatrix} sI_n - A & B \\ -I_n & 0 \end{bmatrix}\begin{bmatrix} V(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s) & I_m \\ 0 & -N(s) & 0 \end{bmatrix}.$$

Its first block row is
$$U(s)\begin{bmatrix} sI_n - A & B \end{bmatrix}\begin{bmatrix} V(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s) & I_m \end{bmatrix}, \qquad (\ast)$$
and the remaining blocks are then determined by
$$X(s) = \begin{bmatrix} V_1(s) & Y(s) \end{bmatrix}U(s), \qquad N(s) = V_2(s) - Y(s)D(s),$$
where $V(s) = \begin{bmatrix} V_1(s) & V_2(s) \end{bmatrix}$ and $V_2(s)$ is $n \times m$.

A Polynomial Matrix Representation (PMR) of a controllable system $(A, B)$ is any non-singular matrix polynomial $D(s)$ such that $(\ast)$ is satisfied for some unimodular matrices $U(s)$, $V(s)$ and some polynomial matrix $Y(s)$.
Some Consequences of the Definition. I

If $D(s)$ is a PMR of $(A, B)$, then $D(s)W(s)$ is also a PMR of $(A, B)$ for any unimodular $W(s)$: from
$$U(s)\begin{bmatrix} sI_n - A & B \end{bmatrix}\begin{bmatrix} V(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s) & I_m \end{bmatrix},$$
replacing $V(s)$ by $V(s)\widetilde W(s)$ with $\widetilde W(s) = \operatorname{Diag}(I_{n-m}, W(s))$ gives
$$U(s)\begin{bmatrix} sI_n - A & B \end{bmatrix}\begin{bmatrix} V(s)\widetilde W(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s)W(s) & I_m \end{bmatrix}.$$

Moreover, if $D_1(s)$ and $D_2(s)$ are PMRs of $(A_1, B_1)$ and $(A_2, B_2)$, respectively, then
$$D_2(s) = D_1(s)W(s) \iff (A_2, B_2) = (P^{-1}A_1P, P^{-1}B_1),$$
with $W(s)$ unimodular and $P$ invertible.
Some Consequences of the Definition. II

If $D(s) = D_ls^l + D_{l-1}s^{l-1} + \cdots + D_1s + D_0$, then $(sI_n - A)^{-1}B = N(s)D(s)^{-1}$ means that $(sI_n - A)^{-1}BD(s) = N(s)$ is polynomial; expanding $(sI_n - A)^{-1} = \sum_{k \geq 0} A^ks^{-k-1}$, the coefficient of $s^{-1}$ must vanish, so
$$A^lBD_l + A^{l-1}BD_{l-1} + \cdots + ABD_1 + BD_0 = 0. \qquad (\ast\ast)$$

$(A, B)$ controllable and $(\ast\ast)$ $\iff$ $D(s)$ is a PMR of $(A, B)$.

Also, $sI_n - A$ is a linearization of $D(s)$: the first $n$ columns of $(\ast)$ give
$$U(s)(sI_n - A)V(s) = \begin{bmatrix} I_{n-m} & 0 \\ 0 & D(s) \end{bmatrix}$$
with $U(s)$ and $V(s)$ unimodular.

Question: given a non-singular $D(s)$, is there, for any linearization $sI_n - A$ of $D(s)$, a control matrix $B$ such that $(A, B)$ is controllable and $D(s)$ is a PMR of $(A, B)$?
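Identity $(\ast\ast)$ can be verified numerically for the standard block companion linearization. A NumPy sketch with random made-up data, assuming for simplicity a monic $D(s)$ (i.e. $D_l = I_m$):

```python
# For monic D(s) = I s^l + D_{l-1} s^{l-1} + ... + D_0, the block companion
# linearization A with B = [0; ...; 0; I_m] satisfies
# A^l B D_l + ... + A B D_1 + B D_0 = 0.
import numpy as np

rng = np.random.default_rng(0)
m, l = 2, 3
D = [rng.standard_normal((m, m)) for _ in range(l)] + [np.eye(m)]  # D_0..D_l

# Block companion matrix of D(s) and the corresponding input matrix B.
A = np.zeros((l * m, l * m))
A[: (l - 1) * m, m:] = np.eye((l - 1) * m)          # block superdiagonal I
for i in range(l):
    A[(l - 1) * m :, i * m : (i + 1) * m] = -D[i]   # bottom block row -D_i
B = np.zeros((l * m, m))
B[(l - 1) * m :, :] = np.eye(m)

S = sum(np.linalg.matrix_power(A, i) @ B @ D[i] for i in range(l + 1))
print(np.allclose(S, 0))  # True
```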
Controllability Indices

Assume $(A, B)$ controllable: $\operatorname{rank}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = n$.

Scan the columns of $\begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix}$ from left to right, keeping each column that is linearly independent of the columns already kept. (The slides build this up with a diagram.) In the example with $n = 8$ and $m = 5$, the kept columns are $b_2, b_3, b_4$; $Ab_2, Ab_4$; $A^2b_4$; $A^3b_4$; $A^4b_4$. Counting, for each column $b_i$ of $B$, how many kept columns have the form $A^jb_i$:
$$l_1 = 0, \quad l_2 = 2, \quad l_3 = 1, \quad l_4 = 5, \quad l_5 = 0,$$
and reordering non-increasingly:
$$k_1 = 5, \quad k_2 = 2, \quad k_3 = 1, \quad k_4 = 0, \quad k_5 = 0,$$
the Controllability Indices of $(A, B)$.

The controllability indices of $(A, B)$ are the minimal indices of the pencil $s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A & B \end{bmatrix}$. (The original slide links to a Matlab demo here; a sketch of the procedure follows.)
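A minimal NumPy sketch of the column-selection procedure (made-up data; the helper name and the tolerance are my own choices):

```python
# Scan B, AB, A^2 B, ... left to right, keep columns independent of those
# already kept, and count how many kept columns stem from each column of B.
import numpy as np

def controllability_indices(A: np.ndarray, B: np.ndarray, tol: float = 1e-10):
    n, m = B.shape
    kept = np.zeros((n, 0))
    counts = [0] * m          # l_i: kept columns of the form A^j b_i
    AkB = B.copy()
    for _ in range(n):        # degrees j = 0, ..., n-1 suffice
        for i in range(m):
            cand = np.hstack([kept, AkB[:, [i]]])
            if np.linalg.matrix_rank(cand, tol=tol) > kept.shape[1]:
                kept, counts[i] = cand, counts[i] + 1
        AkB = A @ AkB
    return sorted(counts, reverse=True)   # k_1 >= k_2 >= ... >= k_m

# Single-input chain: one controllability index equal to n.
A = np.diag([1.0, 1.0, 1.0], k=-1)  # 4x4 lower shift matrix
B = np.array([[1.0], [0.0], [0.0], [0.0]])
print(controllability_indices(A, B))  # [4]
```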
PMRs and Controllability Indices

A PMR of $(A, B)$ can be taken of the form
$$D(s) = D_{hc}\begin{bmatrix} s^{l_1} & 0 & \cdots & 0 \\ 0 & s^{l_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & s^{l_m} \end{bmatrix} + D_{lc}(s), \qquad \det D_{hc} \neq 0,$$
where $l_1, \ldots, l_m$ are the unordered controllability indices and the degree of the $i$-th column of $D_{lc}(s)$ is less than $l_i$.

Matrix polynomials with this property are called column proper or column reduced.

Let $A(s) \in \mathbb F[s]^{m \times m}$ be a non-singular matrix polynomial. Then there is a unimodular $U(s) \in \mathbb F[s]^{m \times m}$ such that $D(s) = A(s)U(s)$ is column proper. In general, $D(s)$ is not unique, but all such $D(s)$ have the same column degrees up to reordering.
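Column properness is straightforward to test. A small SymPy sketch with made-up data: extract the column degrees and the highest-column-degree coefficient matrix $D_{hc}$, then check $\det D_{hc} \neq 0$:

```python
# Compute column degrees and D_hc, then test det(D_hc) != 0.
import sympy as sp

s = sp.symbols('s')

def is_column_proper(D: sp.Matrix) -> bool:
    col_degs = [max(sp.degree(D[i, j], s) for i in range(D.rows))
                for j in range(D.cols)]
    D_hc = sp.Matrix(D.rows, D.cols,
                     lambda i, j: D[i, j].coeff(s, col_degs[j]))
    return D_hc.det() != 0

D = sp.Matrix([[s**2 + 1, s], [s, s + 1]])
print(is_column_proper(D))  # True: D_hc = [[1, 1], [0, 1]]
```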
Column Proper Matrix Polynomials

Given $A(s)$, choose a linearization $sI_n - A$, and a unimodular $U(s)$ making
$$D(s) = A(s)U(s) = D_{hc}\operatorname{Diag}(s^{l_1}, s^{l_2}, \ldots, s^{l_m}) + D_{lc}(s)$$
column proper. Then
$$\widetilde D(s) = \operatorname{Diag}(s^{l_1}, s^{l_2}, \ldots, s^{l_m}) + D_{hc}^{-1}D_{lc}(s)$$
is column proper as well; it is a PMR of a pair $(A_c, B_c)$ in companion form (the original slide links to a MATLAB example), and $D(s)$ itself is a PMR of $(A_c, B_cD_{hc}^{-1})$.

Now $A$ and $A_c$ are both linearizations of $A(s)$, so $A = P^{-1}A_cP$ for some invertible $P$. With
$$B = P^{-1}B_cD_{hc}^{-1},$$
$A(s)$ is a Polynomial Matrix Representation of $(A, B)$, and $l_1, \ldots, l_m$ are its (unordered) controllability indices.
Wiener-Hopf Factorization and Indices

$A_1(s), A_2(s) \in \mathbb F[s]^{m \times n}$ are Wiener-Hopf equivalent (at $\infty$, on the left) if
$$A_2(s) = B(s)\,A_1(s)\,U(s),$$
with $B(s)$ biproper ($\lim_{s \to \infty} B(s)$ exists and is invertible) and $U(s)$ unimodular ($\lim_{s \to a} U(s)$ invertible for every $a \in \mathbb C$).

For a column proper $A(s)U(s)$:
$$A(s)U(s) = D_{hc}\operatorname{Diag}(s^{l_1}, \ldots, s^{l_m}) + D_{lc}(s) = \underbrace{\left[D_{hc} + D_{lc}(s)\operatorname{Diag}(s^{-l_1}, \ldots, s^{-l_m})\right]}_{B(s)\,\in\,\mathbb F_{pr}(s)^{m \times m}}\operatorname{Diag}(s^{l_1}, \ldots, s^{l_m}),$$
with $\lim_{s \to \infty} B(s) = D_{hc}$ invertible, so $B(s)$ is biproper. Hence
$$A(s) \overset{WH}{\sim} \operatorname{Diag}(s^{k_1}, s^{k_2}, \ldots, s^{k_m}), \qquad k_1 \geq k_2 \geq \cdots \geq k_m,$$
the Wiener-Hopf factorization indices of $A(s)$.
Brunovsky-Kronecker Canonical Form

$\operatorname{Diag}(s^{k_1}, s^{k_2}, \ldots, s^{k_m})$ is a PMR of $(A_c, B_c)$ with
$$A_c = \operatorname{Diag}\left(\begin{bmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{bmatrix} \in \mathbb F^{k_i \times k_i} : 1 \leq i \leq m\right), \qquad B_c = \operatorname{Diag}\left(\begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \in \mathbb F^{k_i \times 1} : 1 \leq i \leq m\right),$$
and
$$s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A_c & B_c \end{bmatrix} \;\overset{e}{\sim}\; \operatorname{Diag}\left(\begin{bmatrix} s & -1 & & \\ & s & \ddots & \\ & & \ddots & -1 \\ & & & s \end{bmatrix}_{\!k_i \times (k_i+1)} : 1 \leq i \leq m\right),$$
the Kronecker canonical form.

$(A_c, B_c)$ is called a system in Brunovsky form.
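A minimal NumPy sketch (the helper name is my own) building the Brunovsky pair from given controllability indices:

```python
# Build (A_c, B_c) in Brunovsky form from indices k_1 >= ... >= k_m.
# Zero indices contribute nothing to A_c and a zero column to B_c.
import numpy as np

def brunovsky_pair(k: list[int]):
    n, m = sum(k), len(k)
    A_c, B_c = np.zeros((n, n)), np.zeros((n, m))
    row = 0
    for i, ki in enumerate(k):
        if ki > 0:
            A_c[row : row + ki, row : row + ki] = np.diag(np.ones(ki - 1), k=1)
            B_c[row + ki - 1, i] = 1.0    # last row of the i-th chain
            row += ki
    return A_c, B_c

A_c, B_c = brunovsky_pair([2, 1])
print(A_c)  # [[0,1,0],[0,0,0],[0,0,0]]
print(B_c)  # [[0,0],[1,0],[0,1]]
```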
Feedback Equivalence

$(A, B)$ has controllability indices $k_1 \geq k_2 \geq \cdots \geq k_m$ if and only if
$$s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A & B \end{bmatrix} \;\overset{se}{\sim}\; s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A_c & B_c \end{bmatrix},$$
i.e.
$$P\left(s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A & B \end{bmatrix}\right)T = s\begin{bmatrix} I_n & 0 \end{bmatrix} - \begin{bmatrix} A_c & B_c \end{bmatrix}$$
for invertible constant matrices $P$, $T$. Matching the coefficient of $s$ forces $T = \begin{bmatrix} P^{-1} & 0 \\ F & Q \end{bmatrix}$, so
$$P\begin{bmatrix} A & B \end{bmatrix}\begin{bmatrix} P^{-1} & 0 \\ F & Q \end{bmatrix} = \begin{bmatrix} A_c & B_c \end{bmatrix}, \qquad (A_c, B_c) = (PAP^{-1} + PBF,\; PBQ):$$
Feedback equivalence. Any controllable system is feedback equivalent to a system in Brunovsky form.
Summarizing

$D(s)$ is a Polynomial Matrix Representation of a controllable $(A, B)$
$$\iff U(s)\begin{bmatrix} sI_n - A & B \end{bmatrix}\begin{bmatrix} V(s) & Y(s) \\ 0 & I_m \end{bmatrix} = \begin{bmatrix} I_{n-m} & 0 & 0 \\ 0 & D(s) & I_m \end{bmatrix}$$
$$\iff (sI_n - A)^{-1}B = N(s)D(s)^{-1} \text{ with } N(s), D(s) \text{ right coprime}$$
$$\iff A^lBD_l + A^{l-1}BD_{l-1} + \cdots + ABD_1 + BD_0 = 0 \quad \left(D(s) = D_ls^l + D_{l-1}s^{l-1} + \cdots + D_0\right).$$

The resulting correspondence:
$$(A, B) \longleftrightarrow D(s)$$
$$(P^{-1}AP, P^{-1}B) \longleftrightarrow D(s)U(s): \quad \text{Similarity} \leftrightarrow \text{Right Equivalence}$$
$$(P^{-1}AP + P^{-1}BF, P^{-1}BQ) \longleftrightarrow B(s)D(s)U(s): \quad \text{Feedback Equivalence} \leftrightarrow \text{Wiener-Hopf Equivalence}$$
$$\text{Controllability Indices} \longleftrightarrow \text{Left Wiener-Hopf Indices}$$
Matrix Polynomials with Non-singular Leading Coefficient

If $D(s) = D_ls^l + D_{l-1}s^{l-1} + \cdots + D_1s + D_0$ with $D_l$ non-singular, then
$$B(s) = D_l + D_{l-1}s^{-1} + \cdots + D_1s^{-l+1} + D_0s^{-l}$$
is biproper ($\lim_{s \to \infty} B(s) = D_l$) and
$$B(s)^{-1}D(s) = s^lI_m,$$
so $(\underbrace{l, l, \ldots, l}_{m})$ are the Wiener-Hopf indices of $D(s)$.

$D(s)$ is a Polynomial Matrix Representation of $(A, B)$
$$\iff \operatorname{rank}\begin{bmatrix} B & AB & \cdots & A^{lm-1}B \end{bmatrix} = lm \quad \text{and} \quad A^lBD_l + A^{l-1}BD_{l-1} + \cdots + ABD_1 + BD_0 = 0.$$
Since $(\underbrace{l, \ldots, l}_{m})$ are then the controllability indices of $(A, B)$,
$$\operatorname{rank}\begin{bmatrix} B & AB & \cdots & A^{lm-1}B \end{bmatrix} = \operatorname{rank}\begin{bmatrix} B & AB & \cdots & A^{l-1}B \end{bmatrix},$$
and $(A, B)$ is a Standard Pair of $D(s)$.
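A quick SymPy check with made-up data that $B(s)$ is biproper and that $B(s)^{-1}D(s) = s^lI_m$:

```python
# B(s) = D_l + D_{l-1}/s + ... + D_0/s^l satisfies B(s)^{-1} D(s) = s^l I_m,
# and B(s) -> D_l as s -> oo.
import sympy as sp

s = sp.symbols('s')
D0 = sp.Matrix([[1, 2], [0, 1]])
D1 = sp.Matrix([[0, 1], [1, 0]])
Dl = sp.Matrix([[2, 0], [1, 1]])             # non-singular leading coefficient
Dpoly = Dl * s**2 + D1 * s + D0              # D(s), l = 2, m = 2
Bprop = Dl + D1 / s + D0 / s**2              # biproper B(s)

print(sp.simplify(Bprop.inv() * Dpoly))      # Matrix([[s**2, 0], [0, s**2]])
print(sp.limit(Bprop[0, 0], s, sp.oo))       # 2, the (1,1) entry of D_l
```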
References

H. H. Rosenbrock, State-Space and Multivariable Theory, Thomas Nelson and Sons, London, 1970.
T. Kailath, Linear Systems, Prentice Hall, New Jersey, 1980.
P. A. Fuhrmann, A Polynomial Approach to Linear Algebra, Springer, New York, 1996.
J. W. Polderman, J. C. Willems, Introduction to Mathematical System Theory: A Behavioral Approach, Springer, New York, 1998.
P. Fuhrmann, J. C. Willems, Factorization indices at infinity for rational matrix functions, Integral Equations and Operator Theory, 2/3 (1979).
I. Gohberg, M. A. Kaashoek, F. van Schagen, Partially Specified Matrices and Operators: Classification, Completion, Applications, Birkhäuser, Basel, 1995.
K. Clancey, I. Gohberg, Factorization of Matrix Functions and Singular Integral Operators, Birkhäuser, Basel, 1981.
I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, New York, 1982; reprinted by SIAM, Philadelphia, 2009.
I. Zaballa, Controllability and Hermite indices of matrix pairs, Int. J. Control, 68 (1) (1997).