Matrix Polynomials in the Theory of Linear Control Systems


1 Matrix Polynomials in the Theory of Linear Control Systems
  Ion Zaballa, Departamento de Matemática Aplicada y EIO
2-5 Matrix Polynomials in Linear Control

Matrix polynomials enter linear control theory as:

Coefficient matrices of high-order systems:

      T(d/dt) x(t) = U(d/dt) u(t)
      y(t) = V(d/dt) x(t) + W(d/dt) u(t)

Matrix polynomials representing first-order (state-space) systems:

      ẋ(t) = A x(t) + B u(t)

Behaviours (Polderman & Willems):

      P(d/dt) y(t) = Q(d/dt) u(t)
6-12 Polynomial System Matrices

      T(d/dt) x(t) = U(d/dt) u(t)
      y(t) = V(d/dt) x(t) + W(d/dt) u(t)

x(t): state vector; u(t): input or control vector; y(t): output or measurement vector.

If R(λ) = R_p λ^p + R_{p-1} λ^{p-1} + ⋯ + R_1 λ + R_0 is a matrix polynomial:

      R(d/dt) x(t) = R_p d^p x(t)/dt^p + R_{p-1} d^{p-1} x(t)/dt^{p-1} + ⋯ + R_1 dx(t)/dt + R_0 x(t).

Taking Laplace transforms (zero initial conditions), the system becomes

      [ T(s)   U(s) ] [  x̄(s) ]   [   0   ]
      [ -V(s)  W(s) ] [ -ū(s) ] = [ -ȳ(s) ],      det T(s) ≢ 0.

The block matrix on the left is a Polynomial System Matrix.
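These definitions can be spot-checked numerically. The sketch below (not from the slides; the stiffness matrix K is made up for illustration) encodes the second-order system ẍ + Kx = u as a polynomial system matrix with T(s) = s²I + K, U = V = I, W = 0, and verifies at one sample point that the equivalent first-order system gives the same transfer function.

```python
import numpy as np

# Second-order system x'' + K x = u  ->  T(s) = s^2 I + K, U = V = I, W = 0,
# so G(s) = T(s)^{-1}. With z = [x; x'] the first-order form must give the
# same transfer function C (sI - A)^{-1} B.
K = np.array([[2., -1.], [-1., 2.]])          # illustrative stiffness matrix
I2 = np.eye(2)
A = np.block([[np.zeros((2, 2)), I2], [-K, np.zeros((2, 2))]])
B = np.vstack([np.zeros((2, 2)), I2])
C = np.hstack([I2, np.zeros((2, 2))])

s0 = 1.3                                      # any point where T(s0) is invertible
G_poly  = np.linalg.inv(s0**2 * I2 + K)
G_state = C @ np.linalg.solve(s0 * np.eye(4) - A, B)
print(np.allclose(G_poly, G_state))           # True
```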
13-17 Transfer Function Matrix

      T(s) x̄(s) = U(s) ū(s)
      ȳ(s) = V(s) x̄(s) + W(s) ū(s) = ( V(s) T(s)^{-1} U(s) + W(s) ) ū(s)

G(s) = V(s) T(s)^{-1} U(s) + W(s) is the Transfer Function Matrix.

When do two polynomial system matrices yield the same Transfer Function Matrix?
18-23 Strict System Equivalence

Unimodular: a polynomial matrix whose determinant is a nonzero constant c ≠ 0.

P_1(s) and P_2(s) are strictly system equivalent (s.s.e.) if

      [ M(s)  0   ] [ T_1(s)   U_1(s) ] [ N(s)  Y(s) ]   [ T_2(s)   U_2(s) ]
      [ X(s)  I_p ] [ -V_1(s)  W_1(s) ] [  0    I_m  ] = [ -V_2(s)  W_2(s) ]

with M(s), N(s) unimodular (n×n) and X(s) (p×n), Y(s) (n×m) polynomial.

s.s.e. implies equality of transfer functions:

      V_1(s) T_1(s)^{-1} U_1(s) + W_1(s) = V_2(s) T_2(s)^{-1} U_2(s) + W_2(s).

Does the converse hold?
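That s.s.e. preserves the transfer function can be checked numerically by evaluating all polynomial blocks at a single point. The matrices below are random, not from the slides; this is a sanity check of the identity, not a proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 2, 3

# All polynomial blocks evaluated at one fixed point s0.
T1 = rng.standard_normal((n, n)); U1 = rng.standard_normal((n, m))
V1 = rng.standard_normal((p, n)); W1 = rng.standard_normal((p, m))
M  = rng.standard_normal((n, n)); N  = rng.standard_normal((n, n))  # invertible a.s.
X  = rng.standard_normal((p, n)); Y  = rng.standard_normal((n, m))

# Blocks of [M 0; X I] [T1 U1; -V1 W1] [N Y; 0 I]
T2 = M @ T1 @ N
U2 = M @ (T1 @ Y + U1)
V2 = (V1 - X @ T1) @ N                      # bottom-left block of the product is -V2
W2 = (X @ T1 - V1) @ Y + X @ U1 + W1

G1 = V1 @ np.linalg.solve(T1, U1) + W1
G2 = V2 @ np.linalg.solve(T2, U2) + W2
print(np.allclose(G1, G2))                  # True: the transfer function is invariant
```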
24 Coprime Matrix Polynomials

C(s) is a right common factor of A(s) and B(s) if A(s) = Ã(s) C(s) and B(s) = B̃(s) C(s).

A(s), B(s) right coprime  ⟺  every right common factor C(s) is unimodular
                          ⟺  [ A(s) ]      [ I_n ]
                             [ B(s) ] ~_e  [  0  ]

A(s), B(s) left coprime   ⟺  [ A(s)  B(s) ] ~_e [ I_n  0 ]

(~_e denotes unimodular equivalence.)
25 Coprimeness and Strict System Equivalence

If G(s) ∈ F(s)^{p×m}, then (T(s), U(s), V(s), W(s)) is a realization of G(s) if
G(s) = W(s) + V(s) T(s)^{-1} U(s). Its order is deg(det T(s)).

(T(s), U(s), V(s), W(s)) is a realization of least order
⟺ (T(s), U(s)) left coprime and (T(s), V(s)) right coprime.

If P_1(s) = [ T_1(s)  U_1(s) ; -V_1(s)  W_1(s) ] and P_2(s) = [ T_2(s)  U_2(s) ; -V_2(s)  W_2(s) ]
are polynomial system matrices of least order, then

      P_1(s) s.s.e. P_2(s)  ⟺  V_1(s) T_1(s)^{-1} U_1(s) + W_1(s) = V_2(s) T_2(s)^{-1} U_2(s) + W_2(s).
26-28 Systems in State-Space Form

(Σ)   ẋ(t) = A x(t) + B u(t)              P(s) = [ sI_n - A   B ]
      y(t) = C x(t)                               [   -C       0 ]

Controllability in [t_0, t_1]: for all x_0, x_1 ∈ R^n there is a control u defined
on [t_0, t_1] such that the solution of the I.V.P. ẋ(t) = A x(t) + B u(t),
x(t_0) = x_0, satisfies x(t_1) = x_1.

(Figure: a state trajectory steering x_0 at t_0 to x_1 at t_1.)

(Σ) controllable  ⟺  (A, B) controllable
29-31 Controllability, Observability and Coprimeness

(A, B) controllable is equivalent to:

      rank [ B  AB  ⋯  A^{n-1}B ] = n,   or
      sI_n - A and B left coprime  ( [ sI_n - A   B ] ~_e [ I_n  0 ] ).

Observability in [t_0, t_1]: the value of y on [t_0, t_1] determines the state
x(t_0) at t_0, and hence the vector function x(t) on [t_0, t_1].

(Σ) observable  ⟺  (A, C) observable  ⟺  (A^T, C^T) controllable.

P(s) = [ sI_n - A   B ]  is of least order if and only if (A, B) is controllable
       [   -C       0 ]  and (A, C) is observable.
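The rank test is easy to run. A minimal NumPy sketch (the example pair is made up: a single nilpotent Jordan chain, which is controllable from its last basis vector):

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^{n-1}B] as an n x nm matrix."""
    n = A.shape[0]
    blocks, AkB = [], B
    for _ in range(n):
        blocks.append(AkB)
        AkB = A @ AkB
    return np.hstack(blocks)

# Single chain of length 4: A a nilpotent Jordan block, B the last basis vector.
A = np.diag(np.ones(3), k=1)
B = np.array([[0.], [0.], [0.], [1.]])
R = controllability_matrix(A, B)
print(np.linalg.matrix_rank(R))   # 4 = n, so (A, B) is controllable
```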
32-35 Transfer Function Matrix

From now on take C = I_n:

(Σ)   ẋ(t) = A x(t) + B u(t)              P(s) = [ sI_n - A   B ]
      y(t) = I_n x(t)                             [  -I_n      0 ]

Transfer Function Matrix: G(s) = (sI_n - A)^{-1} B.

With d(s) = lcm{denominators of G(s)}:

      G(s) = Ñ(s) ( d(s) I_m )^{-1}.

Removing the right common factors of Ñ(s) and d(s) I_m:

      G(s) = Ñ(s) ( d(s) I_m )^{-1} = N(s) D(s)^{-1},   N(s), D(s) right coprime.

If (A, B) is controllable:

      [ sI_n - A   B ]          [ I_{n-m}    0      0  ]
      [  -I_n      0 ]  s.s.e.  [    0     D(s)   I_m  ]
                                [    0    -N(s)    0   ]
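The coprime factorization can be verified pointwise. A NumPy sketch for the simplest nontrivial case (a 2×1 single-chain pair, chosen here for illustration), where G(s) = [1/s²; 1/s] = N(s)D(s)^{-1} with N(s) = [1; s] and D(s) = [s²]:

```python
import numpy as np

# n = 2, m = 1: A = [[0,1],[0,0]], B = e2 gives G(s) = (sI - A)^{-1} B = [1/s^2; 1/s].
# lcm of denominators d(s) = s^2, and after cancelling right common factors:
# G(s) = N(s) D(s)^{-1} with N(s) = [1; s], D(s) = [s^2].
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])

s0 = 1.7                                   # any point with det(s0 I - A) != 0
G  = np.linalg.solve(s0 * np.eye(2) - A, B)
N  = np.array([[1.], [s0]])
D  = np.array([[s0**2]])
print(np.allclose(G, N @ np.linalg.inv(D)))   # True
```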
36-38 Polynomial Matrix Representations

The strict system equivalence above reads

      [ U(s)  0   ] [ sI_n - A   B ] [ V(s)  Y(s) ]   [ I_{n-m}    0      0  ]
      [ X(s)  I_n ] [  -I_n      0 ] [  0    I_m  ] = [    0     D(s)   I_m  ]
                                                      [    0    -N(s)    0   ]

Its first block row gives

      U(s) [ sI_n - A   B ] [ V(s)  Y(s) ]   [ I_{n-m}    0      0  ]
                            [  0    I_m  ] = [    0     D(s)   I_m  ]          (*)

with X(s) = [ V_1(s)  Y(s) ] U(s) and N(s) = Y(s) D(s) + V_2(s),
where V(s) = [ V_1(s)  V_2(s) ], V_2(s) of size n×m.

A Polynomial Matrix Representation (PMR) of a controllable system (A, B) is any
nonsingular matrix polynomial D(s) such that (*) is satisfied for some unimodular
matrices U(s), V(s) and some polynomial matrix Y(s).
39-40 Some Consequences of the Definition, I

If D(s) is a PMR of (A, B), then D(s)W(s) is also a PMR of (A, B) for any
unimodular W(s): with W̃(s) = Diag(I_{n-m}, W(s)),

      U(s) [ sI_n - A   B ] [ V(s)W̃(s)  Y(s) ]   [ I_{n-m}      0        0  ]
                            [    0       I_m  ] = [    0     D(s)W(s)   I_m ].

If D_1(s), D_2(s) are PMRs of (A_1, B_1) and (A_2, B_2), then

      D_2(s) = D_1(s) W(s)  ⟺  (A_2, B_2) = (P^{-1} A_1 P, P^{-1} B_1),

with W(s) unimodular and P invertible.
41-43 Some Consequences of the Definition, II

Let D(s) = D_l s^l + D_{l-1} s^{l-1} + ⋯ + D_1 s + D_0. If (sI_n - A)^{-1} B = N(s) D(s)^{-1},
then (sI_n - A)^{-1} B D(s) = N(s) is polynomial, which forces

      A^l B D_l + A^{l-1} B D_{l-1} + ⋯ + A B D_1 + B D_0 = 0.                (**)

In fact: (A, B) controllable and (**)  ⟺  D(s) is a PMR of (A, B).

Moreover, sI_n - A is a linearization of D(s):

      U(s) (sI_n - A) V(s) = [ I_{n-m}    0   ]
                             [    0     D(s)  ]

Question: given a nonsingular D(s), is there, for any linearization sI_n - A of
D(s), a control matrix B such that (A, B) is controllable and D(s) is a PMR of
(A, B)?
44-52 Controllability Indices

Assume (A, B) controllable: rank [ B  AB  ⋯  A^{n-1}B ] = n.

Scan the columns of [ B  AB  A^2B  ⋯ ] from left to right, keeping each column
that is linearly independent of the columns already kept. Example with n = 8,
m = 5:

      B:     b_2   b_3   b_4        (b_1, b_5 dependent)
      AB:    Ab_2        Ab_4
      A^2B:              A^2 b_4
      A^3B:              A^3 b_4
      A^4B:              A^4 b_4

      l_1 = 0, l_2 = 2, l_3 = 1, l_4 = 5, l_5 = 0   (columns kept from each chain b_i)
      k_1 = 5, k_2 = 2, k_3 = 1, k_4 = 0, k_5 = 0   (the l_i in decreasing order)

k_1 ≥ ⋯ ≥ k_m are the Controllability Indices of (A, B).

Controllability indices of (A, B) = minimal indices of the pencil s[ I_n  0 ] - [ A  B ].
(MATLAB demo.)
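The slides run this search in MATLAB; the following NumPy sketch implements the same greedy column scan. The test pair is made up but has the same indices (5, 2, 1, 0, 0) as the slides' example, assigned here to the first three input columns.

```python
import numpy as np

def controllability_indices(A, B, tol=1e-9):
    """Greedy left-to-right scan of the columns of [B, AB, A^2 B, ...]:
    a column is kept iff it is independent of the columns kept so far;
    l[i] counts the kept columns in the chain of b_i, and sorting the l's
    decreasingly gives the controllability indices k_1 >= ... >= k_m."""
    n, m = B.shape
    kept = np.zeros((n, 0))
    l = [0] * m
    AkB = B.astype(float)
    for _ in range(n):                       # powers 0 .. n-1 suffice
        for i in range(m):
            cand = np.hstack([kept, AkB[:, [i]]])
            if np.linalg.matrix_rank(cand, tol) > kept.shape[1]:
                kept, l[i] = cand, l[i] + 1
        AkB = A @ AkB
    return sorted(l, reverse=True)

# n = 8, m = 5, chains of lengths 5, 2, 1 (two input columns are zero)
A = np.zeros((8, 8))
A[0:5, 0:5] = np.diag(np.ones(4), k=1)       # nilpotent Jordan blocks
A[5:7, 5:7] = np.diag(np.ones(1), k=1)
B = np.zeros((8, 5))
B[4, 0] = B[6, 1] = B[7, 2] = 1.0
print(controllability_indices(A, B))         # [5, 2, 1, 0, 0]
```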
53-55 PMRs and Controllability Indices

Write

      D(s) = D_hc Diag(s^{l_1}, s^{l_2}, …, s^{l_m}) + D_lc(s),
      det D_hc ≠ 0,   degree of the i-th column of D_lc(s) < l_i,

where l_1, …, l_m are the (unordered) controllability indices.

Matrix polynomials with this property are called column proper or column reduced.

Let A(s) ∈ F[s]^{m×m} be a nonsingular matrix polynomial. Then there is a
unimodular U(s) ∈ F[s]^{m×m} such that D(s) = A(s)U(s) is column proper. In
general D(s) is not unique, but all such D(s) have the same column degrees up to
reordering.
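Column properness is a finite check on the highest-column-degree coefficient matrix. A NumPy sketch (the helper name and the coefficient-list representation are mine, not the slides'):

```python
import numpy as np

def is_column_proper(coeffs):
    """coeffs[k] = the m x m coefficient of s^k. Column proper (column reduced)
    <=> the matrix D_hc of highest-column-degree coefficients is nonsingular."""
    coeffs = [np.asarray(c, dtype=float) for c in coeffs]
    m = coeffs[0].shape[1]
    Dhc = np.zeros((m, m))
    for j in range(m):
        col_deg = max(k for k, Ck in enumerate(coeffs) if np.any(Ck[:, j] != 0))
        Dhc[:, j] = coeffs[col_deg][:, j]
    return bool(abs(np.linalg.det(Dhc)) > 1e-12)

# D(s) = [[s^2, s], [0, 1]] is NOT column proper: D_hc = [[1, 1], [0, 0]] is singular.
D0 = [[0., 0.], [0., 1.]]
D1 = [[0., 1.], [0., 0.]]
D2 = [[1., 0.], [0., 0.]]
print(is_column_proper([D0, D1, D2]))        # False
# Diag(s^2, 1) IS column proper: D_hc = I.
print(is_column_proper([np.diag([0., 1.]), np.zeros((2, 2)), np.diag([1., 0.])]))  # True
```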
56-61 Column Proper Matrix Polynomials

Given a nonsingular A(s), choose a linearization sI_n - A and a column proper

      D(s) = A(s)U(s) = D_hc Diag(s^{l_1}, s^{l_2}, …, s^{l_m}) + D_lc(s).

Normalizing the leading coefficient,

      D̄(s) = D_hc^{-1} D(s) = Diag(s^{l_1}, s^{l_2}, …, s^{l_m}) + D_hc^{-1} D_lc(s)

is again column proper. From D̄(s) build the controller-form pair (A_c, B_c)
(MATLAB example); then D(s) is a PMR of (A_c, B_c D_hc^{-1}).

Since A and A_c are both linearizations of A(s), A = P^{-1} A_c P for some
invertible P. Put B = P^{-1} B_c D_hc^{-1}. Then A(s) is a Polynomial Matrix
Representation of (A, B), and l_1, …, l_m are its (unordered) controllability
indices.
62-64 Wiener–Hopf Factorization and Indices

A_1(s), A_2(s) ∈ F[s]^{m×n} are Wiener–Hopf equivalent (at ∞, on the left) if

      A_2(s) = B(s) A_1(s) U(s)

with B(s) biproper ( lim_{s→∞} B(s) exists and is invertible ) and U(s)
unimodular ( U(a) invertible for all a ∈ C ).

If A(s)U(s) = D_hc Diag(s^{l_1}, …, s^{l_m}) + D_lc(s) is column proper, then

      A(s)U(s) = [ D_hc + D_lc(s) Diag(s^{-l_1}, …, s^{-l_m}) ] Diag(s^{l_1}, …, s^{l_m}),

where B(s) = D_hc + D_lc(s) Diag(s^{-l_1}, …, s^{-l_m}) ∈ F_pr(s)^{m×m} is
biproper, with lim_{s→∞} B(s) = D_hc invertible. Hence

      A(s) ~_WH Diag(s^{k_1}, s^{k_2}, …, s^{k_m}),   k_1 ≥ k_2 ≥ ⋯ ≥ k_m,

and k_1, …, k_m are the Wiener–Hopf factorization indices of A(s).
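The biproperness of B(s) can be seen numerically: for a column-proper D(s), the product D(s) Diag(s^{-l_1}, …, s^{-l_m}) approaches D_hc as s grows. A small made-up example with column degrees (2, 1):

```python
import numpy as np

Dhc = np.array([[1., 2.], [0., 1.]])

def D(s):
    """Column-proper D(s) = Dhc Diag(s^2, s) + D_lc(s), D_lc(s) = [[s, 1], [3, 0]]
    (the i-th column of D_lc has degree < l_i, with (l_1, l_2) = (2, 1))."""
    return Dhc @ np.diag([s**2, s]) + np.array([[s, 1.], [3., 0.]])

# B(s) = D(s) Diag(s^{-2}, s^{-1}) is biproper: it tends to Dhc as s grows,
# so the Wiener-Hopf indices of D(s) are (2, 1).
for s in (1e2, 1e4, 1e6):
    Bs = D(s) @ np.diag([s**-2, s**-1])
    print(np.abs(Bs - Dhc).max())            # shrinks like 1/s
```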
65-66 Brunovsky–Kronecker Canonical Form

Diag(s^{k_1}, s^{k_2}, …, s^{k_m}) is a PMR of (A_c, B_c), where

      A_c = Diag( N_i : 1 ≤ i ≤ m ),   N_i ∈ F^{k_i × k_i} the nilpotent Jordan
      block (ones on the superdiagonal),
      B_c = Diag( e_i : 1 ≤ i ≤ m ),   e_i = [ 0  ⋯  0  1 ]^T ∈ F^{k_i × 1},

and

      s[ I_n  0 ] - [ A_c  B_c ]  ~_e  Diag( L_i : 1 ≤ i ≤ m ),

      L_i = [ s  -1           ]
            [     s  -1       ]  ∈ F[s]^{k_i × (k_i + 1)},
            [        ⋱   ⋱    ]
            [           s  -1 ]

the Kronecker canonical form of the pencil.

(A_c, B_c) is the system in Brunovsky form.
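For the Brunovsky pair the PMR claim can be verified pointwise, since (sI - A_c)^{-1} B_c has the explicit coprime factorization N(s) Diag(s^{k_i})^{-1} with N(s) built from the monomial chains. A NumPy check for the made-up index tuple k = (2, 1):

```python
import numpy as np

# Brunovsky pair for indices k = (2, 1); then Diag(s^2, s) should be a PMR:
# (sI - A_c)^{-1} B_c = N(s) Diag(s^2, s)^{-1} with N(s) = Diag([1; s], [1]).
A = np.zeros((3, 3)); A[0, 1] = 1.0              # nilpotent blocks of sizes 2, 1
B = np.zeros((3, 2)); B[1, 0] = B[2, 1] = 1.0

s0 = 2.0                                         # any s0 != 0
G = np.linalg.solve(s0 * np.eye(3) - A, B)
N = np.array([[1., 0.], [s0, 0.], [0., 1.]])
D = np.diag([s0**2, s0])
print(np.allclose(G, N @ np.linalg.inv(D)))      # True
```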
67-68 Feedback Equivalence

If (A, B) has controllability indices k_1 ≥ k_2 ≥ ⋯ ≥ k_m, then

      s[ I_n  0 ] - [ A  B ]  ~_e  s[ I_n  0 ] - [ A_c  B_c ].

Writing the (constant) equivalence as P ( s[ I_n  0 ] - [ A  B ] ) T = s[ I_n  0 ] - [ A_c  B_c ]
with T = [ P^{-1}  0 ; F  Q ]:

      P [ A  B ] [ P^{-1}  0 ] = [ A_c  B_c ],
                 [   F     Q ]

so (A_c, B_c) = ( P A P^{-1} + P B F,  P B Q ): Feedback Equivalence.

Any controllable system is feedback equivalent to a system in Brunovsky form.
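The controllability indices are determined by the ranks of [B, AB, …, A^k B], and those ranks are invariant under the feedback group. A quick NumPy spot-check with random (made-up) matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, m))
P = rng.standard_normal((n, n))                  # invertible with probability 1
F = rng.standard_normal((m, n)); Q = rng.standard_normal((m, m))

A2 = P @ A @ np.linalg.inv(P) + P @ B @ F        # feedback-equivalent pair
B2 = P @ B @ Q                                   # (A2, B2) = (PAP^{-1} + PBF, PBQ)

def rank_sequence(A, B):
    """Ranks of [B, AB, ..., A^k B], k = 0..n-1; they determine the indices."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return [int(np.linalg.matrix_rank(np.hstack(blocks[:k + 1])))
            for k in range(A.shape[0])]

print(rank_sequence(A, B) == rank_sequence(A2, B2))   # True: same indices
```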
69-70 Summarizing

D(s) is a Polynomial Matrix Representation of a controllable (A, B):

      U(s) [ sI_n - A   B ] [ V(s)  Y(s) ]   [ I_{n-m}    0      0  ]
                            [  0    I_m  ] = [    0     D(s)   I_m  ]

⟺ (sI_n - A)^{-1} B = N(s) D(s)^{-1} with N(s), D(s) right coprime

⟺ A^l B D_l + A^{l-1} B D_{l-1} + ⋯ + A B D_1 + B D_0 = 0
      ( D(s) = D_l s^l + D_{l-1} s^{l-1} + ⋯ + D_0 )

The correspondence between pairs and their PMRs:

      (A, B)                                 D(s)
      (P^{-1}AP, P^{-1}B)                    D(s)U(s)
      Similarity                             Right equivalence
      (P^{-1}AP + P^{-1}BF, P^{-1}BQ)        B(s)D(s)U(s)
      Feedback equivalence                   Wiener–Hopf equivalence
      Controllability indices                Left Wiener–Hopf indices
71-72 Matrix Polynomials with Nonsingular Leading Coefficient

Let D(s) = D_l s^l + D_{l-1} s^{l-1} + ⋯ + D_1 s + D_0 with D_l nonsingular. Then

      B(s) = D_l + D_{l-1} s^{-1} + ⋯ + D_1 s^{-l+1} + D_0 s^{-l}

is biproper ( lim_{s→∞} B(s) = D_l ) and B(s)^{-1} D(s) = s^l I_m, so

      (l, l, …, l)  (m times)  =  Wiener–Hopf indices of D(s).

D(s) is a Polynomial Matrix Representation of (A, B)
⟺ rank [ B  AB  ⋯  A^{lm-1}B ] = lm  and
   A^l B D_l + A^{l-1} B D_{l-1} + ⋯ + A B D_1 + B D_0 = 0.

Since (l, l, …, l) (m times) are the controllability indices of (A, B),

      rank [ B  AB  ⋯  A^{lm-1}B ] = rank [ B  AB  ⋯  A^{l-1}B ],

and (A, B) is a Standard Pair of D(s).
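Both conditions can be verified for the block-companion linearization of a monic D(s) (a standard construction; the random coefficients are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, l = 2, 3
D = [rng.standard_normal((m, m)) for _ in range(l)] + [np.eye(m)]  # D_0..D_2, D_3 = I

# Block companion linearization: superdiagonal identities, last block row -D_k.
n = l * m
A = np.zeros((n, n))
A[:n - m, m:] = np.eye(n - m)
for k in range(l):
    A[n - m:, k * m:(k + 1) * m] = -D[k]
B = np.zeros((n, m)); B[n - m:, :] = np.eye(m)

# (A, B) is controllable, with all controllability indices equal to l = 3 ...
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(l)])
print(np.linalg.matrix_rank(ctrb) == n)              # True

# ... and satisfies A^l B D_l + ... + A B D_1 + B D_0 = 0: a standard pair of D(s).
S = sum(np.linalg.matrix_power(A, k) @ B @ D[k] for k in range(l + 1))
print(np.allclose(S, 0))                             # True
```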
73 References

H. H. Rosenbrock, State-space and Multivariable Theory, Thomas Nelson and Sons, London, 1970.
T. Kailath, Linear Systems, Prentice Hall, New Jersey, 1980.
P. A. Fuhrmann, A Polynomial Approach to Linear Algebra, Springer, New York, 1996.
J. W. Polderman, J. C. Willems, Introduction to Mathematical Systems Theory: A Behavioral Approach, Springer, New York, 1998.
P. Fuhrmann, J. C. Willems, Factorization indices at infinity for rational matrix functions, Integral Equations and Operator Theory, 2/3 (1979).

74 References (continued)

I. Gohberg, M. A. Kaashoek, F. van Schagen, Partially Specified Matrices and Operators: Classification, Completion, Applications, Birkhäuser, Basel, 1995.
K. Clancey, I. Gohberg, Factorization of Matrix Functions and Singular Integral Operators, Birkhäuser, Basel, 1981.
I. Gohberg, P. Lancaster, L. Rodman, Matrix Polynomials, Academic Press, New York, 1982, and SIAM, Philadelphia, 2009.
I. Zaballa, Controllability and Hermite indices of matrix pairs, Int. J. Control, 68 (1) (1997).
More informationMathematics Course 111: Algebra I Part IV: Vector Spaces
Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 19967 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are
More informationMean value theorem, Taylors Theorem, Maxima and Minima.
MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and expressions. Permutations and Combinations.
More informationTimeDomain Solution of LTI State Equations. 2 StateVariable Response of Linear Systems
2.14 Analysis and Design of Feedback Control Systems TimeDomain Solution of LTI State Equations Derek Rowell October 22 1 Introduction This note examines the response of linear, timeinvariant models
More informationLow Dimensional Representations of the Loop Braid Group LB 3
Low Dimensional Representations of the Loop Braid Group LB 3 Liang Chang Texas A&M University UT Dallas, June 1, 2015 Supported by AMS MRC Program Joint work with Paul Bruillard, Cesar Galindo, SeungMoon
More informationIRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL. 1. Introduction
IRREDUCIBLE OPERATOR SEMIGROUPS SUCH THAT AB AND BA ARE PROPORTIONAL R. DRNOVŠEK, T. KOŠIR Dedicated to Prof. Heydar Radjavi on the occasion of his seventieth birthday. Abstract. Let S be an irreducible
More informationCOMMUTATIVITY DEGREES OF WREATH PRODUCTS OF FINITE ABELIAN GROUPS
COMMUTATIVITY DEGREES OF WREATH PRODUCTS OF FINITE ABELIAN GROUPS IGOR V. EROVENKO AND B. SURY ABSTRACT. We compute commutativity degrees of wreath products A B of finite abelian groups A and B. When B
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationChapter 14: Analyzing Relationships Between Variables
Chapter Outlines for: Frey, L., Botan, C., & Kreps, G. (1999). Investigating communication: An introduction to research methods. (2nd ed.) Boston: Allyn & Bacon. Chapter 14: Analyzing Relationships Between
More informationECE 516: System Control Engineering
ECE 516: System Control Engineering This course focuses on the analysis and design of systems control. This course will introduce timedomain systems dynamic control fundamentals and their design issues
More informationMathematics of Cryptography Modular Arithmetic, Congruence, and Matrices. A Biswas, IT, BESU SHIBPUR
Mathematics of Cryptography Modular Arithmetic, Congruence, and Matrices A Biswas, IT, BESU SHIBPUR McGrawHill The McGrawHill Companies, Inc., 2000 Set of Integers The set of integers, denoted by Z,
More informationNumerical Methods for Differential Equations
Numerical Methods for Differential Equations Course objectives and preliminaries Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the Numerical Analysis
More informationChapter 6. Orthogonality
6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be
More informationLinear Codes. In the V[n,q] setting, the terms word and vector are interchangeable.
Linear Codes Linear Codes In the V[n,q] setting, an important class of codes are the linear codes, these codes are the ones whose code words form a subvector space of V[n,q]. If the subspace of V[n,q]
More informationLecture 2 Matrix Operations
Lecture 2 Matrix Operations transpose, sum & difference, scalar multiplication matrix multiplication, matrixvector product matrix inverse 2 1 Matrix transpose transpose of m n matrix A, denoted A T or
More informationThe Geometry of Polynomial Division and Elimination
The Geometry of Polynomial Division and Elimination Kim Batselier, Philippe Dreesen Bart De Moor Katholieke Universiteit Leuven Department of Electrical Engineering ESAT/SCD/SISTA/SMC May 2012 1 / 26 Outline
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationThe Matrix Elements of a 3 3 Orthogonal Matrix Revisited
Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, ThreeDimensional Proper and Improper Rotation Matrices, I provided a derivation
More informationExplicit inverses of some tridiagonal matrices
Linear Algebra and its Applications 325 (2001) 7 21 wwwelseviercom/locate/laa Explicit inverses of some tridiagonal matrices CM da Fonseca, J Petronilho Depto de Matematica, Faculdade de Ciencias e Technologia,
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More informationGeneralized Inverse of Matrices and its Applications
Generalized Inverse of Matrices and its Applications C. RADHAKRISHNA RAO, Sc.D., F.N.A., F.R.S. Director, Research and Training School Indian Statistical Institute SUJIT KUMAR MITRA, Ph.D. Professor of
More informationA Second Course in Elementary Differential Equations: Problems and Solutions. Marcel B. Finan Arkansas Tech University c All Rights Reserved
A Second Course in Elementary Differential Equations: Problems and Solutions Marcel B. Finan Arkansas Tech University c All Rights Reserved Contents 8 Calculus of MatrixValued Functions of a Real Variable
More informationBOUNDED, ASYMPTOTICALLY STABLE, AND L 1 SOLUTIONS OF CAPUTO FRACTIONAL DIFFERENTIAL EQUATIONS. Muhammad N. Islam
Opuscula Math. 35, no. 2 (215), 181 19 http://dx.doi.org/1.7494/opmath.215.35.2.181 Opuscula Mathematica BOUNDED, ASYMPTOTICALLY STABLE, AND L 1 SOLUTIONS OF CAPUTO FRACTIONAL DIFFERENTIAL EQUATIONS Muhammad
More informationINTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL
SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics
More informationTypical Linear Equation Set and Corresponding Matrices
EWE: Engineering With Excel Larsen Page 1 4. Matrix Operations in Excel. Matrix Manipulations: Vectors, Matrices, and Arrays. How Excel Handles Matrix Math. Basic Matrix Operations. Solving Systems of
More informationDepartment of Chemical Engineering ChE101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI
Department of Chemical Engineering ChE101: Approaches to Chemical Engineering Problem Solving MATLAB Tutorial VI Solving a System of Linear Algebraic Equations (last updated 5/19/05 by GGB) Objectives:
More informationTechnische Universität Ilmenau Institut für Mathematik
Technische Universität Ilmenau Institut für Mathematik Preprint No. M 13/08 Zero dynamics and stabilization for linear DAEs Thomas Berger 2013 Impressum: Hrsg.: Leiter des Instituts für Mathematik Weimarer
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationThe Method of Least Squares
The Method of Least Squares Steven J. Miller Mathematics Department Brown University Providence, RI 0292 Abstract The Method of Least Squares is a procedure to determine the best fit line to data; the
More informationHEAVISIDE CABLE, THOMSON CABLE AND THE BEST CONSTANT OF A SOBOLEVTYPE INEQUALITY. Received June 1, 2007; revised September 30, 2007
Scientiae Mathematicae Japonicae Online, e007, 739 755 739 HEAVISIDE CABLE, THOMSON CABLE AND THE BEST CONSTANT OF A SOBOLEVTYPE INEQUALITY Yoshinori Kametaka, Kazuo Takemura, Hiroyuki Yamagishi, Atsushi
More informationDynamic Eigenvalues for Scalar Linear TimeVarying Systems
Dynamic Eigenvalues for Scalar Linear TimeVarying Systems P. van der Kloet and F.L. Neerhoff Department of Electrical Engineering Delft University of Technology Mekelweg 4 2628 CD Delft The Netherlands
More informationORDINARY DIFFERENTIAL EQUATIONS
ORDINARY DIFFERENTIAL EQUATIONS GABRIEL NAGY Mathematics Department, Michigan State University, East Lansing, MI, 48824. SEPTEMBER 4, 25 Summary. This is an introduction to ordinary differential equations.
More informationA characterization of trace zero symmetric nonnegative 5x5 matrices
A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the
More informationArithmetic and Algebra of Matrices
Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational
More informationCOMPOUND MATRICES AND THREE CELEBRATED THEOREMS
COMPOUND MATRICES AND THREE CELEBRATED THEOREMS K. K. NAMBIAR AND SHANTI SREEVALSAN ABSTRACT. Apart from the regular and adjugate compounds of a matrix an inverse compound of a matrix is defined. Theorems
More informationOPTIMAl PREMIUM CONTROl IN A NONliFE INSURANCE BUSINESS
ONDERZOEKSRAPPORT NR 8904 OPTIMAl PREMIUM CONTROl IN A NONliFE INSURANCE BUSINESS BY M. VANDEBROEK & J. DHAENE D/1989/2376/5 1 IN A OPTIMAl PREMIUM CONTROl NONliFE INSURANCE BUSINESS By Martina Vandebroek
More information7. LU factorization. factorsolve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
7. LU factorization EE103 (Fall 201112) factorsolve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse
More informationRoot Computations of Realcoefficient Polynomials using Spreadsheets*
Int. J. Engng Ed. Vol. 18, No. 1, pp. 89±97, 2002 0949149X/91 $3.00+0.00 Printed in Great Britain. # 2002 TEMPUS Publications. Root Computations of Realcoefficient Polynomials using Spreadsheets* KARIM
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at UrbanaChampaign
More informationThe integrating factor method (Sect. 2.1).
The integrating factor method (Sect. 2.1). Overview of differential equations. Linear Ordinary Differential Equations. The integrating factor method. Constant coefficients. The Initial Value Problem. Variable
More informationLINEAR ALGEBRA. September 23, 2010
LINEAR ALGEBRA September 3, 00 Contents 0. LUdecomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................
More informationLecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction
More informationNonlinear Least Squares Data Fitting
Appendix D Nonlinear Least Squares Data Fitting D1 Introduction A nonlinear least squares problem is an unconstrained minimization problem of the form minimize f(x) = f i (x) 2, x where the objective function
More information5.3 Determinants and Cramer s Rule
290 5.3 Determinants and Cramer s Rule Unique Solution of a 2 2 System The 2 2 system (1) ax + by = e, cx + dy = f, has a unique solution provided = ad bc is nonzero, in which case the solution is given
More informationThe Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression
The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonaldiagonalorthogonal type matrix decompositions Every
More informationOn the Geometry and Parametrization of Almost Invariant Subspaces and Observer Theory
On the Geometry and Parametrization of Almost Invariant Subspaces and Observer Theory Dissertation zur Erlangung des naturwissenschaftlichen Doktorgrades der Bayerischen JuliusMaximiliansUniversität
More informationSolutions to Linear Algebra Practice Problems
Solutions to Linear Algebra Practice Problems. Find all solutions to the following systems of linear equations. (a) x x + x 5 x x x + x + x 5 (b) x + x + x x + x + x x + x + 8x Answer: (a) We create the
More informationMAT188H1S Lec0101 Burbulla
Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u
More information