Symbolic manipulation techniques for model simplification in object-oriented modelling of large scale continuous systems


Mathematics and Computers in Simulation 48 (1998) 133–150

Emanuele Carpanzano 1, Claudio Maffezzoni *

Dip. di Elettronica e Informazione, Politecnico di Milano, P.za L. Da Vinci 32, Milan, Italy

Received 24 March 1998; revised 14 July 1998; accepted 14 July 1998

Abstract

In the present work, techniques for the symbolic manipulation of general nonlinear differential algebraic equation (DAE) systems are presented and used for model simplification purposes, to support efficient simulation of large scale continuous systems in an object-oriented modelling environment. The specific problems addressed are the efficient elimination of trivial equations by means of substitution, block lower triangular (BLT) partitioning of the system, and tearing, i.e. the hiding of algebraic variables. Moreover, the weakening heuristic criterion for the decoupling of large systems via dynamic approximation is studied. All these techniques have been successfully implemented and tested in MOSES (modular object-oriented software environment for simulation), in order to define a complete model simplification process. The results achieved by applying the discussed algorithms and criteria to serial multibody systems are illustrated. A brief overview of further known symbolic manipulation techniques is also given, comparing them with the proposed ones, throughout the paper. © 1998 IMACS/Elsevier Science B.V.

Keywords: Object-oriented modelling; Nonlinear DAE systems; Symbolic manipulation algorithms; Model simplification; Multibody systems

1. Introduction

The object-oriented approach is becoming very popular in the field of modelling, especially with reference to complex physical system modelling.
Following such an approach, a model is structured as closely as possible to the corresponding physical system; in particular, models are defined in acausal form, so that one software module is associated to one physical component, independently of the context in which it is used [12,15]. Complex models can be realised by aggregating sub-models and their connections within a composite larger model, which may in turn be connected to other

* Corresponding author. E-mail: maffezzo@elet.polimi.it
1 E-mail: carpanza@elet.polimi.it

models. However, for continuous time modelling, this form of model representation gives rise, by assembling the declarative equations of sub-models, to large scale nonlinear systems of differential algebraic equations (DAEs) of the form

F(t, y, ẏ, u, p) = 0,     (1)

where F is a generic nonlinear n-vector function, y is the unknown variable n-vector, u is the input variable m-vector, p is the parameter vector and t is time. Once such a model is defined, it is convenient to check the correctness 2 of the model before using it for analysis or design purposes, or before generating the simulation code. Moreover, especially for complex plants, the resulting DAE system is of very large dimension, so its numerical solution would require excessively long computation times. The simulation of a DAE system can be executed much more efficiently if a preliminary symbolic manipulation is performed in order to simplify the model [3,7,14]. The first problem, i.e. model verification, is not discussed in this work; the interested reader is referred to the existing literature [2,14]. It is therefore assumed here that the considered DAE system (1) is mathematically correct. Instead, the problem of model simplification is dealt with; in particular, techniques for the symbolic manipulation of nonlinear DAE systems are presented to this aim. A new efficient substitution algorithm is introduced (Section 2), the block lower triangular (BLT) partitioning algorithm is briefly discussed (Section 3) and a flexible tearing algorithm is presented, which makes it easy to implement both general and domain-specific heuristic rules, and which works in both the scalar and the vector cases (Section 4). In Section 5 the weakening heuristic criterion is outlined 3.
Then, the whole model simplification process implemented in the modular object-oriented software environment for simulation (MOSES) is defined (Section 6), and, finally, the results achieved by applying the proposed techniques to serial multibody systems are discussed (Section 7).

Remark 1. The minimum number of times that all or part of the equations of the DAE system (1) must be differentiated with respect to time in order to transform it into an explicit ODE form is called the index of the system [1]. All known numerical DAE solvers have trouble solving systems of index greater than 1. Unfortunately, high index problems arise naturally in object-oriented modelling [14], so symbolic manipulation techniques have been defined in order to reduce the index to 1 [3,15]. Since the object of the present paper is specifically the study of symbolic manipulation techniques for model simplification purposes, it is here assumed that the considered DAE system (1) has index 1.

2. Eliminating trivial equations: the substitution algorithm

The goal of the substitution algorithm is to reduce the dimension of the global DAE system, by identifying and eliminating all the trivial equations of the form

X = ±W,  X = ±W^T     (2)

2 In the general case a model is correct if it is syntactically and semantically correct, according to the rules of the modelling language, and if it represents a complete and consistent problem from a mathematical point of view [2].
3 While the previous algorithms are harmless for the numerical integration, the weakening criterion, being a dynamic approximation, has an influence upon the accuracy of the numerical integration, as pointed out in Remark 4.

Fig. 1. The proxyvar data structure.

where X is a matrix (scalar) variable, while the matrix (scalar) W can be either a variable or a constant. Such equations are eliminated by substituting one of the involved variables with the other (or with its constant value), using the proper sign and transposition operators, in all the equations of the system. The structural consistency 4 of the DAE system is preserved, since for every eliminated unknown variable an equation is eliminated as well. Here an algorithm is proposed that performs the considered operation in both the scalar and the matrix case. To implement the algorithm efficiently, the data structure shown in Fig. 1 is defined. To every variable (scalar or matrix) a structure called proxyvar is associated, which is composed of three elements: var contains a pointer to the variable that is equivalent to the considered variable, once the change of sign described in sign (+ or −) and the transposition described in transp (true if the variable has to be transposed, false otherwise), respectively, are performed. With this simple structure it is possible to store directly in the variables the effect of the substitution algorithm, providing also an easy way to determine the actual value of a variable even if it is not part of the system being numerically solved. This is essential in order to have complete inspectability of the system, i.e. to keep track of the substitutions. In the following, we call tail a variable whose field var of the corresponding proxyvar structure contains the variable itself, and whose fields sign and transp contain + and false, respectively. The simplified pseudo-code of the proposed substitution algorithm is shown below. The sign and transposition operators are omitted for the sake of simplicity. Consequently, the proxyvar data structure reduces to a simple pointer to the equivalent variable, as shown in Fig. 2.
The notation proxyvar X → Y means that the proxyvar pointer associated to X is set to point to Y. The method returntailvarof: X returns the first tail variable or constant found, starting from the variable X. For example, if we had X → Y → Z → W → W, the method would return the variable W. If the starting variable is encountered again along the path, e.g. X → V → N → X → V → ..., then there is a cycle of assignments, which is cut by pointing the proxyvar of X to the variable itself (X → X, i.e. X becomes a tail variable), and by returning the variable X itself.

Fig. 2. Simplified proxyvar data structure.

4 A DAE system is structurally consistent (or non-singular) if it is possible to form a set of ordered pairs of variables and equations, such that each variable y_i and each equation F_i are members of only one pair, and for each pair (y_i, F_i) the variable y_i appears in F_i. Such a set is called an output set [15].
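As an illustration, the simplified returntailvarof: method (with sign and transposition omitted, as in Fig. 2) can be sketched in Python, representing the proxyvar pointers as a plain dictionary; the function name and the dictionary encoding are our own choices, not part of MOSES:

```python
def return_tail_var_of(proxyvar, x):
    # Follow the proxyvar chain from x until a tail is reached, i.e. a
    # variable whose proxyvar entry points to itself.
    current = proxyvar[x]
    while proxyvar[current] != current:
        current = proxyvar[current]
        if current == x:
            # The starting variable was met again: a cycle of
            # assignments, cut by making x a tail and returning x.
            proxyvar[x] = x
            return x
    return current
```

For the chain X → Y → Z → W → W the function returns W; for the cycle X → V → N → X it cuts the cycle at X and returns X itself.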

The method substitut: X with: (returntailvarof: X) substitutes all the occurrences of X in the system with the tail variable or the constant equivalent to X, and then removes X from the set of unknown variables (if a cycle is found by the method returntailvarof: X then no substitution is performed). Whenever a variable is substituted, its proxyvar pointer is set to point to the tail variable or constant that substitutes it; this is necessary to keep inspectability. The proposed algorithm is composed of the following phases:
1. initialisation: the proxyvar pointer of every variable is initialised;
2. main cycle: this cycle is executed as long as trivial equations are found in the system, and it consists of two minor cycles:
   - proxyvar setting cycle: the equations of the system are analysed; when a trivial equation is found, i.e. an equation in one of the forms (2), the proxyvar pointers are properly updated;
   - variables substitution cycle: again, all the equations are considered, and a variable is replaced by its equivalent variable whenever necessary.
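To make the flow concrete, here is one possible Python sketch of the main cycle, restricted to scalar equations that are linear in the unknowns (an equation being a coefficient dictionary plus a constant). This representation and all names are ours; the actual algorithm works on general symbolic equations and also handles the transposition operator:

```python
def resolve(proxy, v):
    # Follow the proxy chain from v, composing signs and constant
    # offsets; returns (tail_or_None, sign, offset), meaning
    # v = sign * tail + offset (a None tail means v is a constant).
    # A cycle of assignments is cut at v, which becomes a tail again.
    w, s, k = v, 1, 0.0
    seen = {v}
    while w in proxy and proxy[w][0] is not None:
        nxt, sg, kk = proxy[w]
        k += s * kk
        s *= sg
        w = nxt
        if w in seen:
            del proxy[v]
            return v, 1, 0.0
        seen.add(w)
    if w in proxy:                      # chain ends in a constant
        return None, 0, k + s * proxy[w][2]
    return w, s, k

def eliminate_trivial(equations):
    # An equation is (coeffs, const), meaning sum(coeffs[v]*v) + const = 0.
    # Trivial equations (v = +/-w, or v = constant) are recorded in the
    # proxy table and substituted away; the main cycle repeats because a
    # substitution round may generate new trivial equations.
    proxy = {}
    while True:
        remaining, found = [], False
        for coeffs, const in equations:     # proxyvar setting cycle
            items = [(v, c) for v, c in coeffs.items() if c != 0]
            if len(items) == 1 and abs(items[0][1]) == 1:
                v, c = items[0]
                tv, sv, kv = resolve(proxy, v)
                if tv is not None:          # bind the tail of v's chain
                    proxy[tv] = (None, 0, (-const / c - kv) / sv)
                    found = True
                else:
                    remaining.append((coeffs, const))
            elif (len(items) == 2 and const == 0
                  and all(abs(c) == 1 for _, c in items)):
                (v, cv), (w, cw) = items
                tv, sv, kv = resolve(proxy, v)
                tw, sw, kw = resolve(proxy, w)
                if tv is None:              # v already constant: swap sides
                    (tv, sv, kv, cv), (tw, sw, kw, cw) = \
                        (tw, sw, kw, cw), (tv, sv, kv, cv)
                if tv is not None and tv != tw:
                    proxy[tv] = (tw, -(cw * sw) // (cv * sv),
                                 -(cv * kv + cw * kw) / (cv * sv))
                    found = True
                else:                       # redundant equation or cycle
                    remaining.append((coeffs, const))
            else:
                remaining.append((coeffs, const))
        if not found:
            return remaining, proxy
        equations = []                      # variables substitution cycle
        for coeffs, const in remaining:
            new_coeffs = {}
            for v, c in coeffs.items():
                w, s, k = resolve(proxy, v)
                const += c * k
                if w is not None:
                    new_coeffs[w] = new_coeffs.get(w, 0) + c * s
            equations.append((new_coeffs, const))
```

On the equations X = Y + Z + Q, Y = 1, Z = −1 the sketch eliminates all three equations, leaving the binding X ≡ Q, which reproduces the "generation of new trivial equations" case discussed below.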

It is possible to verify that the proposed algorithm resolves the following tricky cases.

Multiple assignments. If the algorithm is applied to the equations:
Y = H;  Q = K;  Y = Q
after the analysis of the first two equations the proxyvar pointers of the variables Y and Q point to H and K, respectively, i.e. Y → H and Q → K. Problems could arise when the third equation is considered. This is solved by the algorithm by setting the pointers as follows: Y → H → Q → K. The substitution algorithm partitions the set of variables into equivalence categories, i.e. groups of equivalent variables, each represented by one of its members in the system to be numerically solved (or by a constant). In the shown example all the variables belong to the same category, whose tail variable is K.

Generation of new trivial equations. Consider the set of equations:
X = Y + Z + Q;  Y = 1;  Z = −1
By removing the two trivial equations and replacing the variables Y and Z with the constants 1 and −1, respectively, the new trivial equation X = Q is generated. It is easy to verify that the proposed algorithm manages to eliminate such trivial equations generated during the substitution sequence.

Cycles. If the cycle of assignments:
X = Y;  Y = Z;  Z = X
is found, then by applying the substitution algorithm the proxyvar pointers are set as follows: X → Y; Y → Z; Z → X. In this case there is no tail variable in the equivalence category. This problem is solved by the returntailvarof: X method, which cuts the cycle by setting the proxyvar pointer of one variable of the cycle to point to the variable itself, as previously discussed.

The illustrated simplified version of the algorithm can easily be extended to deal properly with the sign and transposition operators too, by introducing the methods for their correct updating whenever required.
3. Partitioning the system: the BLT algorithm

When dealing with systems of very large dimension, an important symbolic manipulation step is the BLT-partitioning of the system [12], which makes it possible to decompose the overall system into subsystems that can be solved in sequence. The incidence matrix I of a given DAE system is defined as follows: I is a matrix whose rows and columns represent the equations and the variables of the system, respectively, and whose element (i, j) is non-zero if the jth variable (or its derivative) is present in equation i and is 0 otherwise. The BLT-partitioning algorithm can be performed directly on the incidence matrix, by permuting its columns (unknowns) and rows (equations) so as to make it block lower triangular, as depicted in Fig. 3. This operation is important since it allows dealing with smaller subsystems when performing the subsequent symbolic and numerical operations. In particular, the tearing algorithm can be applied to each single block, as shown in Section 4.
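As a small illustration, the incidence matrix can be built as follows (an encoding of ours, in which an equation is given simply as the set of the variable names it contains):

```python
def incidence_matrix(equations, variables):
    # Rows represent the equations, columns the unknowns; entry (i, j)
    # is 1 if the j-th variable (or its derivative) appears in equation
    # i, and 0 otherwise.
    column = {v: j for j, v in enumerate(variables)}
    matrix = [[0] * len(variables) for _ in equations]
    for i, eq_vars in enumerate(equations):
        for v in eq_vars:
            matrix[i][column[v]] = 1
    return matrix
```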

Fig. 3. BLT-partitioned system.

Techniques exist for permuting directly to BLT form, but there is no known advantage over the usual two-stage approach:
1. permute entries onto the diagonal (usually called finding a transversal [8]);
2. use symmetric permutations to find the BLT form itself.
Very economical procedures exist for constructing BLT-partitions with minimum-sized diagonal blocks, typically requiring O(n) + O(τ) operations for a matrix of order n with τ non-zero elements [8]. In MOSES, Tarjan's algorithm has been implemented [17].

Remark 2. The first step of the BLT-partitioning algorithm, i.e. permuting the rows of the incidence matrix to make all diagonal elements non-zero, can also be viewed as a procedure which assigns to each variable y_i a unique equation F_j such that y_i appears in F_j, i.e. as a procedure to identify an output set. Consequently, if it is impossible to pair variables and equations in this way, then the system is structurally singular (see footnote 4). Whenever a system turns out to be structurally singular, symbolic manipulation techniques can be used in order to give the user precise hints about what is wrong [2,14]. This point is not discussed in the paper, since it has been assumed that the considered DAE system (1) is mathematically correct, thus structurally non-singular.

4. Hiding of algebraic variables: the tearing algorithm

It has been shown in the previous section that efficient algorithms exist to transform a DAE system to block lower triangular form, where blocks of dimension greater than 1 may be present along the diagonal. The algorithm guarantees that the dimensions of the diagonal blocks are kept as small as possible. Non-trivial blocks on the diagonal correspond to systems of equations that have to be solved simultaneously.
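The two-stage BLT procedure described above can be sketched compactly in Python. The encoding and names are ours; the transversal search below is a simple augmenting-path method rather than one of the more economical procedures of [8], and only the second stage corresponds to the Tarjan algorithm used in MOSES:

```python
def blt_partition(matrix):
    # Stage 1: find a transversal (an output set) by augmenting paths.
    # Stage 2: run Tarjan's SCC algorithm on the equation dependency
    # graph.  Returns the diagonal blocks as lists of (equation,
    # variable) index pairs, ordered so that each block depends only on
    # variables solved in earlier blocks.
    n = len(matrix)
    var_of_eq, eq_of_var = [-1] * n, [-1] * n

    def augment(i, seen):
        for j in range(n):
            if matrix[i][j] and j not in seen:
                seen.add(j)
                if eq_of_var[j] == -1 or augment(eq_of_var[j], seen):
                    var_of_eq[i], eq_of_var[j] = j, i
                    return True
        return False

    for i in range(n):
        if not augment(i, set()):
            raise ValueError("structurally singular system")

    # Equation i depends on the equation assigned to every other
    # variable appearing in i.
    succ = [[eq_of_var[j] for j in range(n)
             if matrix[i][j] and eq_of_var[j] != i] for i in range(n)]

    # Iterative Tarjan SCC: components come out sinks-first, which is
    # exactly the solution order of the BLT blocks.
    index, low, stack, on_stack, blocks = {}, {}, [], set(), []
    next_index = [0]

    def strongconnect(root):
        work = [(root, 0)]
        while work:
            node, child = work.pop()
            if child == 0:
                index[node] = low[node] = next_index[0]
                next_index[0] += 1
                stack.append(node)
                on_stack.add(node)
            recursed = False
            for c in range(child, len(succ[node])):
                w = succ[node][c]
                if w not in index:
                    work.append((node, c + 1))
                    work.append((w, 0))
                    recursed = True
                    break
                if w in on_stack:
                    low[node] = min(low[node], index[w])
            if recursed:
                continue
            if low[node] == index[node]:    # node is the root of an SCC
                comp = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp.append((w, var_of_eq[w]))
                    if w == node:
                        break
                blocks.append(comp)
            if work:
                low[work[-1][0]] = min(low[work[-1][0]], low[node])

    for i in range(n):
        if i not in index:
            strongconnect(i)
    return blocks
```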
To simplify the numerical solution of the overall DAE system, tearing is executed in a symbolic way, and consists of solving each block of the BLT-partitioned system for as many algebraic variables as possible; this way the variables and equations of each block are split into two sets, so that the variables in the first set can be explicitly computed if the variables of the other set are known. In the following, the tearing problem will be studied with reference to a general DAE system of the form (1), which can represent either the whole DAE system or a subsystem corresponding to a block of the BLT-partitioned system. Consider the unknown variable set y of the system split into two subsets x and z, x being the state variables (i.e. those variables which are present in the system together with their derivatives) and z being the algebraic variables. It is possible to rewrite the considered

system in the form:

F(t, x, ẋ, z) = 0     (3)

omitting u and p for the sake of simplicity. The symbolic tearing of the DAE system is obtained by choosing the q-sub-vector z̃ of z of minimum dimension such that Eq. (3) can be written as:

z_1 = g_1(t, x, ẋ, z̃)
z_2 = g_2(t, x, ẋ, z̃, z_1)
...
z_k = g_k(t, x, ẋ, z̃, z_1, ..., z_{k−1})
G(t, x, ẋ, z̃, z_1, ..., z_k) = 0     (4)

where z_1, z_2, ..., z_k are the remaining elements of z once z̃ has been extracted, g_1, g_2, ..., g_k are suitable scalar functions and G is a vector function of dimension n − k. Moreover, each assignment z_j = g_j(t, x, ẋ, z̃, z_1, ..., z_{j−1}) has to be obtained directly by solving one equation of Eq. (3) for z_j, once j − 1 equations of Eq. (3) have been solved for z_1, z_2, ..., z_{j−1}. After substitution of the variables z_1, z_2, ..., z_k into G, the problem is turned into the solution of a reduced DAE system 5 G(t, x, ẋ, z̃) = 0. In particular, we are interested in dividing the equations of the given DAE system into two subsets so that the first one (assignments) is as large as possible, and the second one (implicit equations) is as small as possible. It has been proven in [5] that the stated tearing problem is NP-complete. This means that an optimal solution of the tearing problem cannot be guaranteed in polynomial time; consequently, approximation algorithms have to be considered. Numerous approximation tearing algorithms have been proposed in different contexts, but among them there is no clear winner [8,13]. In this paper an efficient and flexible algorithm is summarised, which allows the application of both general and domain-specific heuristic rules, and which works in the vector case as well. Since the use of bipartite graphs allows dealing with the problem in a simple and efficient way, we define the associated bipartite graph (see Fig. 4) as follows.
In the bipartite graph there are two sets of nodes: E-nodes (squares), one for each equation of (3), and V-nodes (circles), one for each algebraic variable of (3). There is an edge from an E-node e_i to a V-node v_j if and only if the equation associated to e_i contains the variable associated to v_j, and this edge is bold if and only if e_i can be solved for v_j. The bipartite graph can be used to work out a tearing algorithm, which turns out to be simple and efficient to implement directly on the incidence matrix of the DAE system 6.

5 Since DAE solvers produce a solution within the user-specified error bounds only for x and z̃ (and not for ẋ), the errors on the hidden variables are not directly controlled [15]. If the accuracy of z_j is a real concern, the first j assignments must at least have the form z_i = g_i(t, x, z̃, z_1, ..., z_{i−1}), for i = 1, 2, ..., j, i.e. they cannot contain derivatives.

6 A suitable data structure for the incidence matrix of a DAE system allows several symbolic manipulation algorithms to be performed efficiently; so it is reasonable to assume that this data structure is available in the symbolic manipulation environment [3], as discussed in Appendix A.
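For instance, the bipartite graph can be encoded as a dictionary of edges carrying the bold flag (an encoding of ours, not the paper's data structure); the E-nodes that already yield assignments are then found directly:

```python
from collections import defaultdict

def assignable_equations(edges):
    # edges maps (equation, variable) -> bold flag, True meaning the
    # equation can be solved for that variable.  An equation becomes an
    # explicit assignment when it has exactly one incident edge and
    # that edge is bold.
    incident = defaultdict(list)
    for (e, v), bold in edges.items():
        incident[e].append((v, bold))
    return {e: vs[0][0] for e, vs in incident.items()
            if len(vs) == 1 and vs[0][1]}
```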

Fig. 4. Reduced incidence matrix (a) and corresponding bipartite graph (b).

Since an assignment is easily obtained whenever there is an equation with only one unknown algebraic variable z_k and the equation is solvable with respect to z_k, an assignment is defined whenever, in the associated bipartite graph, an E-node e_k is found having only one incident edge, and this edge is bold. Thus, to obtain as many assignments as possible, the graph has to be manipulated so as to get as many E-nodes with only one incident and bold edge as possible. To this aim the following algorithm is proposed.

Every time the cycle is executed, one of the following two operations is performed:
1. V-node v_l, E-node e_i and all their incident edges are removed from the bipartite graph;
2. V-node v_r and all its incident edges are removed from the bipartite graph.
In case 1 an assignment is identified, while in case 2 a choice is made which gives rise to an implicit equation. In fact, in case 1 equation e_i can be solved for the unknown algebraic variable v_l, to obtain an assignment. In case 2, on the contrary, an implicit equation is automatically generated by removing V-node v_r from the graph, since there will be one more E-node remaining at the end of the algorithm, and hence one more implicit equation left. The number of assignments obtained by performing the illustrated algorithm depends on the way V-node v_r is chosen in the underlined statement. This choice can be made by using both general heuristic rules and domain-specific heuristic rules. A simple rule is, e.g., the following: among the existing V-nodes, select a node v_r, if any, that is connected by a non-bold edge to one of the E-nodes having the minimum number of incident edges. Efficient general heuristic rules, and domain-specific rules for multibody systems, are presented in [5]. Since the data structure for the incidence matrix of a DAE system does not depend on the type of the equations and variables, the illustrated tearing algorithm can be executed directly on the incidence matrix of a DAE system in vector form. Performing the tearing algorithm on vector equations can give better results than performing the same algorithm on the corresponding set of scalar equations, obtained by expanding every vector equation.
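A greedy version of the algorithm with that simple rule can be sketched as follows (the encoding, the tie-breaking by sorted order and the fallback when no non-bold edge exists are simplifications of ours):

```python
def tear(edges, equations, variables):
    # Greedy tearing on the bipartite graph of a square, structurally
    # consistent block.  edges maps (equation, variable) -> bold flag
    # (True if the equation can be solved for that variable).
    eqs, vars_left = set(equations), set(variables)
    assignments, torn = [], []

    def incident(e):
        return [v for v in vars_left if (e, v) in edges]

    while vars_left:
        for e in sorted(eqs):
            inc = incident(e)
            if len(inc) == 1 and edges[(e, inc[0])]:
                assignments.append((e, inc[0]))          # case 1
                eqs.discard(e)
                vars_left.discard(inc[0])
                break
        else:
            # Case 2: tear a variable chosen by the simple heuristic of
            # the text: prefer a V-node joined by a non-bold edge to an
            # E-node with the minimum number of incident edges.
            e_min = min((e for e in sorted(eqs) if incident(e)),
                        key=lambda e: len(incident(e)))
            non_bold = [v for v in incident(e_min)
                        if not edges[(e_min, v)]]
            v_r = sorted(non_bold or incident(e_min))[0]
            torn.append(v_r)
            vars_left.discard(v_r)
    return assignments, torn, sorted(eqs)   # leftover eqs are implicit
```

The assignments come out in the triangular order of Eq. (4), the torn variables play the role of z̃, and the leftover equations are the implicit system G = 0.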
In Section 7 it will be shown that, for serial multibody systems, the minimum number of implicit equations can be achieved by using only general heuristic rules, provided that the tearing algorithm is first performed on the equations written in vector form.

Remark 3. The presented tearing algorithm can be applied to general nonlinear DAE systems, corresponding to models of whatever physical domain, since no hypothesis is made regarding the linearity of the system, and since general heuristic rules can be used when no domain-specific rules are available. So, it does not require information in the model library to direct the manipulation process. Tearing algorithms based on domain-specific heuristic rules can be defined, as for example in [9]. Such algorithms can give better results for a specific application domain, but cannot be applied to multidisciplinary systems, unless the necessary domain-specific heuristic rules are defined and implemented in the model library whenever new physical domains are considered. When dealing with DAE systems which are linear in the unknowns, it is possible to perform the computation in a fully symbolic way. This can be done either by symbolic Gaussian elimination or by applying Cramer's rule. When using standard Gaussian elimination, the sequence of the elimination process must take into account the numerical values of the matrix elements, in order to avoid divisions by zero. Alternatively, this sequence can be defined symbolically by the modeller through a technique called relaxing. Relaxing has the advantage that the sequence of computation can be determined before a simulation run starts. In particular, the computation can be done in a fully symbolic way, which would not otherwise be possible. More about relaxing, and its application in Dymola, can be found in [16].
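As a minimal illustration of the fully symbolic alternative, Cramer's rule fixes the solution formulas of a linear block once, independently of the matrix values; the sketch below does this for a 2×2 block (a toy example of ours, not taken from any of the cited tools):

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    # Solves the linear block
    #   a11*z1 + a12*z2 = b1
    #   a21*z1 + a22*z2 = b2
    # with fixed determinant formulas; for larger blocks the
    # determinant expressions grow combinatorially, which is the
    # inefficiency noted in the text.
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ZeroDivisionError("singular block")
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```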
When solving systems which are linear in the unknowns through the relaxing technique, it must be guaranteed that the predefined elimination sequence does not lead to divisions by zero for all the possible values of the matrix elements. As is well known, systems which are linear in the unknowns can

always be solved symbolically through Cramer's rule, i.e. using determinants. However, this method may be extremely inefficient for medium to large system sizes. An efficient linear system solver using Cramer's rule has been realised and implemented in Omola [4]. The techniques briefly discussed in this remark require restrictions on the linearity and on the dimensions of the considered problem (symbolic solution through Cramer's rule), or practical experience of the modeller (use of domain-specific heuristic rules), or both (use of the relax operator in the model library). On the contrary, the algorithm discussed in this section is general, in the sense that it requires no restrictions on the considered DAE system and no specific skill of the modeller or of the user.

5. The weakening heuristic criterion

When dealing with physical systems, it may frequently happen that a certain variable has a weak dynamical influence in a certain equation or group of equations, typically for two reasons: (a) its absolute relevance is small; (b) the dynamics associated to that variable is relatively slow with respect to the dynamics associated to the variables dominating the said equation(s). As an example of the first type, the case of the momentum equation for a liquid flowing in a pipe can be considered: the fluid temperature has a negligible effect in the equation, which consists of causing ``small'' variations of the liquid density and viscosity. An example of the second type is the electrical part of a direct-current motor, in whose equations the back-electromotive force depends on the rotor angular speed: here the angular speed, being a mechanical variable, may change only slowly with respect to the dynamics of the motor's electrical part. It should be noticed that a weakness declaration can have various causes; e.g.
it can be due to different dynamics in the system, but it can also be caused by the specific structure and sizing of certain components in a plant. So, the weakening criterion can be defined as a problem-dependent heuristic criterion based on dynamic decoupling. A variable can be declared weak by the model developer 7 in the considered equation(s) if either one (or both) of the reasons (a) and (b) apply. When a variable y_j is declared weak in an equation F_i, the numerical solution of the DAE system will be performed by using in F_i, in any integration interval, the value of y_j evaluated at the initial time of the interval. This means that if a variable is declared weak in an equation, then at a given integration step its value can be considered as known; it follows that the 1-element (i, j) of the incidence matrix can be substituted by a 0-element when performing the previously discussed symbolic manipulation algorithms. In particular, the weakness attribute may strongly improve the results of the BLT-partition (by making it possible to decouple the global DAE system into more subsystems), and/or of the tearing algorithm (by allowing a more efficient splitting into assignments and implicit equations for single BLT-blocks). The use of the weakening criterion in object-oriented modelling, and its applications in MOSES, are also discussed in [6,11].

Remark 4. The manipulation techniques discussed in the previous sections have no influence upon the numerical integration process; on the contrary, the weakening heuristic criterion introduces extra errors

7 A variable may be declared weak in an equation both while building the model library and while assembling a plant model.

into the numerical integration. In particular, such errors are due to handling the weakened variables as constants while the other variables of the system are being computed. As a consequence, the choice of weak variables has to be performed by an expert modeller, in order to guarantee that weakening does not significantly affect the accuracy of the numerical integration process.

Remark 5. The symbolic manipulation environment has to check that, by substituting the 1-elements of the incidence matrix corresponding to weak relations with 0-elements, the structural consistency of the DAE system is preserved; otherwise a suitable message has to be given to the modeller, pointing out the improper weak declarations [3].

Remark 6. In this paper only purely symbolic approaches have been considered, but, for the sake of completeness, it has to be pointed out that recently a new method for solving DAE systems efficiently, called inline integration, using a mixed symbolic and numeric approach, has been proposed. Following this method, discretisation formulae representing the numerical integration algorithm are symbolically inserted into the DAE model, and the symbolic manipulation algorithms treat these additional equations in the same way as the physical equations of the model itself. It has been shown that this uniform treatment of physical equations and discretisation formulae often leads to a significant model simplification. More about inline integration, and its implementation in Dymola, can be found in [10].

6. The model simplification process

Now that the symbolic manipulation algorithms used have been illustrated, the model simplification process, whose Petri net representation is shown in Fig. 5, can be defined. It consists of the following steps:
1. all the vector equations are collected in a set and the substitution algorithm is applied to it;
2. the tearing algorithm is applied to the reduced set of vector equations;
3. every vector equation is expanded into scalar ones;
4. the substitution algorithm is applied to the set of scalar equations;
5. the BLT-partition of the scalar equations is performed by applying Tarjan's algorithm;
6. first the assignments of each block obtained through the vector tearing are put in the final set of assignments of the block 8, then the tearing algorithm is applied to the remaining scalar equations of the considered block.
When performing the BLT-partition and the tearing algorithm, the 1-elements of the incidence matrix corresponding to relations declared weak by the modeller are replaced by 0-elements, as discussed in Section 5. Throughout the model simplification process, zeros are exploited whenever an equation is modified or a vector equation is expanded, in order to reduce the complexity of the equations.

8 An equation is an assignment for a block only if it is solved for one of the variables of the considered block; as a consequence, not necessarily all the assignments obtained through the vector tearing are put in the final sets of assignments.
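The weakening step on the incidence matrix, including the structural-consistency check required by Remark 5, can be sketched as follows (a minimal encoding of ours; the matching search is a plain augmenting-path method):

```python
def apply_weakening(matrix, weak):
    # Replace with 0 the incidence-matrix 1-elements corresponding to
    # relations declared weak (weak is a set of (equation, variable)
    # index pairs), then check that a transversal, i.e. an output set,
    # still exists, so that structural consistency is preserved.
    n = len(matrix)
    weakened = [[0 if (i, j) in weak else matrix[i][j] for j in range(n)]
                for i in range(n)]
    eq_of_var = [-1] * n

    def augment(i, seen):               # augmenting-path matching
        for j in range(n):
            if weakened[i][j] and j not in seen:
                seen.add(j)
                if eq_of_var[j] == -1 or augment(eq_of_var[j], seen):
                    eq_of_var[j] = i
                    return True
        return False

    for i in range(n):
        if not augment(i, set()):
            raise ValueError("improper weak declarations: the system "
                             "became structurally singular")
    return weakened
```

Weakening the (0, 1) entry of a dense 2×2 incidence matrix succeeds, while weakening the whole first row raises the error message of Remark 5.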

Fig. 5. Petri net representing the model simplification process.

The proposed model simplification process has been implemented in MOSES [11], and the results achieved by applying it to serial multibody systems are presented in the following section.

7. Application to serial multibody systems in MOSES

In this section, the whole model simplification process is applied to two serial multibody systems (Section 7.1), different tearing rules are compared (Section 7.2) and the weakening criterion is explored (Section 7.3).

7.1. Applications of the whole model simplification process

The first example illustrates how the entire manipulation process operates, by considering simple 3-link and 6-link serial multibody systems, without motors and controllers and with no flexibility of the transmission gears. The starting numbers of equations of the considered models are:
3-link robot: 111 vector equations and 36 scalar equations,
6-link robot: 225 vector equations and 72 scalar equations.
The results obtained for each manipulation step are the following:
Step 1 – vector substitution
3-link robot: 48 trivial vector equations are eliminated (63 vector equations remain),
6-link robot: 96 trivial vector equations are eliminated (129 vector equations remain).
Step 2 – vector tearing
3-link robot: all the remaining 63 vector equations are transformed into assignments,
6-link robot: all the remaining 129 vector equations are transformed into assignments.
Step 3 – expansion of the vector equations
3-link robot: the total number of scalar equations obtained is 255 (189 are assignments),
6-link robot: the total number of scalar equations obtained is 519 (387 are assignments).
Step 4 – scalar substitution
3-link robot: 153 trivial scalar equations are eliminated (102 scalar equations remain),
6-link robot: 279 trivial scalar equations are eliminated (240 scalar equations remain).
Step 5 – BLT-partition
3-link robot: 35 blocks are obtained (1 of size 68 (the first) and 34 of size 1),
6-link robot: 74 blocks are obtained (1 of size 167 (the first) and 73 of size 1).
Step 6 – scalar tearing
3-link robot: the first block is split into five implicit equations and 63 assignments; as regards the other 34 blocks, one contains one implicit equation, while the remaining ones contain one assignment each,
6-link robot: the first block is split into 11 implicit equations and 156 assignments; as regards the other 73 blocks, one contains one implicit equation, while the remaining ones contain one assignment each.

Notice that in the outlined model simplification process the tearing algorithm has been executed, both in the scalar case and in the vector case, by using only general heuristic rules, as discussed in the sequel.

7.2. Comparison among different heuristic tearing rules

In order to compare the results obtained by applying the proposed tearing algorithm, with different heuristic rules, to the systems considered in the previous section, Table 1 can be analysed, where the

number of implicit equations obtained after the application of the algorithm is shown.

Table 1
Results achieved by performing the proposed tearing algorithm (number of implicit equations)

        Tearing applied to      Rules used             Robots
Case    Vector    Scalar        General   Specific     3-Link    6-Link
1       No        Yes           Yes       No
2       No        Yes           Yes       Yes          6         12
3       Yes       Yes           Yes       No           6         12

First, the algorithm is applied directly to the expanded system, i.e. treating only scalar equations, by using only general heuristic rules (row 1) and by also using domain-specific rules (row 2). Then, the algorithm is also applied to the vector equations, before expanding them (row 3), using only general heuristic rules. From the results shown in Table 1 it can be noticed that, if we perform the algorithm only on the expanded system (scalar case, rows 1 and 2), the use of domain-specific rules is very important, since by using these rules the optimal solution, i.e. the minimum number of implicit equations, is achieved. In fact, the number of resulting implicit equations is equal to the number of state variables of the corresponding model (row 2). On the other hand, when the algorithm is first applied to the DAE system in vector form and then to the expanded scalar form, the optimal solution is achieved even without domain-specific heuristic rules (row 3). A more in-depth discussion on this topic can be found in [5].

7.3. Testing the weakening criterion

In this section, the possible benefit of using the weakening criterion is illustrated through a simple example, and the results obtained by applying it to the model of a six-degree-of-freedom robot are reported. Consider a simple brushless motor directly coupled to a load inertia. A control loop with an analogue PI controller is used to control the motor torque and to improve its dynamical response. In this system, the electrical dynamics is much faster than the mechanical one.
The motor model is constituted by:

(a) the equations of the mechanical part:

    J dω_m/dt = τ_m − τ_l − τ_f                  (5)
    τ_f = τ_0 + k_f ω_m                          (6)
    τ_m = k_m i                                  (7)

(b) the equation of the electrical part:

    v = L di/dt + R i + k_v {ω_m}                (8)

(c) the PI controller equation:

    v = k_P (i_0 − i) + k_I ∫ (i_0 − i) dt       (9)

Fig. 6. Control scheme of the motor model ((- - -) weak relation).

where ω_m is the rotor angular velocity, τ_m, τ_l and τ_f are the active torque, the load torque and the friction torque, respectively, i and v are the terminal current and voltage, J, τ_0, k_f, k_m, L, R and k_v are suitable constants, while k_P and k_I are the PI parameters, and i_0 is the current setpoint. In Eq. (8) the ``mechanical variable'' ω_m is put within curly braces to indicate that ω_m is a weak variable in that equation; in fact ω_m, being a mechanical variable, is subject to slow variations with respect to the electrical dynamics. This way, the symbolic manipulation algorithms split the system of equations into two almost independent subsystems with the causal structure of Fig. 6. As a consequence, if the external perturbation is such that the fast electrical dynamics is excited, the solver will integrate the electrical equation with the last available value of ω_m, generally using a smaller integration step than the one for the mechanical part. Once the value of the current i is available, the mechanical subsystem is solved afterwards with its own integration step. Table 2 summarises the results of a system simulation⁹, in response to a square-wave excitation of i_0, in two cases: in the first one the DAE system is solved globally (row 1); in the second one ω_m is declared weak in Eq. (8) and, consequently, two cascaded DAE subsystems are solved (row 2). It is apparent that the step size of the mechanical part is larger in the second case (while having the same tolerances on the system variables). Here, the application of the weakening criterion does not give a definite advantage in terms of simulation time: this is due to the small overhead needed to realise this technique in computation, which is more visible in small-size systems.
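To make the mechanism concrete, the decoupled solution scheme can be sketched in a few lines of Python. This is a minimal fixed-step sketch, not the DASSL-based scheme used in MOSES, and all parameter values are illustrative assumptions: the electrical/control subsystem (Eqs. (8) and (9)) is advanced with a small step while the weak variable ω_m is frozen at its last available value; the mechanical subsystem (Eqs. (5)-(7)) is then advanced with a larger step using the latest current.

```python
# Sketch of "weakening": the fast electrical/control subsystem is
# integrated with a small step h_elec while the weak variable w_m is
# frozen; the slow mechanical subsystem then takes one large step h_mech.
# Parameter values are illustrative, not taken from the paper.

def simulate(t_end=0.1, h_mech=1e-3, h_elec=1e-5,
             J=0.01, tau0=0.0, kf=0.05, km=0.5,
             L=1e-3, R=1.0, kv=0.5, kP=10.0, kI=200.0,
             tau_l=0.0, i0=1.0):
    w_m = 0.0          # rotor angular velocity (mechanical state)
    i, s = 0.0, 0.0    # motor current and PI integral state (electrical states)
    t = 0.0
    n_sub = int(round(h_mech / h_elec))
    while t < t_end:
        # fast subsystem: many small explicit-Euler steps, w_m frozen (weak)
        for _ in range(n_sub):
            v = kP * (i0 - i) + kI * s           # PI controller, Eq. (9)
            di = (v - R * i - kv * w_m) / L      # electrical part, Eq. (8)
            i += h_elec * di
            s += h_elec * (i0 - i)               # integral of the current error
        # slow subsystem: one large step using the latest current
        tau_m = km * i                           # Eq. (7)
        tau_f = tau0 + kf * w_m                  # Eq. (6)
        w_m += h_mech * (tau_m - tau_l - tau_f) / J   # Eq. (5)
        t += h_mech
    return w_m, i
```

With a stiff implicit DAE solver the structure stays the same; only the per-subsystem integrators change. The point is purely structural: the two subsystems exchange i and ω_m once per large step instead of being solved as one coupled system.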
The benefit of weakening may be very relevant to industrial robot simulation, where the electrical part consists of a few simple equations (to be solved with small integration steps), while the mechanical part consists of complex 3D mechanical equations (to be solved almost independently with larger integration steps). In MOSES, the weakening criterion has been applied to the six-link robot SMART 6.12.R, which is a serial multibody system equipped with brushless motors and PI controllers. The global system has been split by the model simplification process into two subsystems, one related to the electrical and control part of the six joints, and one containing all the equations of the mechanical part.

Table 2
Weakening: simulation results with the simple motor model

                    Number of steps,    Number of steps,    Computation
                    mechanical part     electrical part     time (s)
Global system
Decoupled system

⁹ Performed with the numerical DAE solver DASSL [1].

Table 3
Weakening: simulation results with the SMART 6.12.R model

                    Number of steps,    Number of steps,    Computation
                    mechanical part     electrical part     time (s)
Global system
Decoupled system

The electrical and control subsystem consists of 12 state equations and 12 state variables (the motor current and the controller state of each link), while the mechanical one consists of 24 equations and 24 state variables (12 describing the joints' variables and 12 representing the flexibility of the transmission gears). The simulation results are presented in Table 3, where it can be noticed that the use of the weakening criterion allows a significant reduction of the computation time.

8. Concluding remarks

In the paper, symbolic manipulation techniques have been presented for the decoupling and order reduction of general nonlinear DAE systems, which have been assumed to be mathematically correct and of index 1. These techniques have been applied to simplify object-oriented models of large scale continuous systems, in order to perform efficient simulations of their behaviour. In particular, a new efficient substitution algorithm for eliminating (scalar and matrix) trivial equations has been introduced; the decomposition of the overall system into subsystems, which can be solved in sequence, has been discussed; and a flexible tearing algorithm has been proposed, which allows the use of both general and domain-specific heuristic rules and which works in the vector case too. Moreover, the weakening criterion has been studied. The complete model simplification process implemented in MOSES has been outlined, and applications to robotic systems have been shown, where the proposed manipulation techniques have been used to reduce the computational burden of the simulation process.
Since there is a growing consensus on using the object-oriented approach to model large scale and heterogeneous systems, and mathematical simulation is the means to practically use such models, model simplification techniques like the ones considered in this paper are becoming of great importance. In the near future it will be necessary to extend such techniques in order to deal with hybrid systems, which incorporate both continuous-time and discrete-event dynamics [2]. In particular, the use of heuristic criteria, such as weakening, will become a crucial point to simplify large complex models.

Appendix A. Efficient data structures for equations and incidence matrices

In order to execute the proposed manipulation algorithms, symbolic operations have to be performed on the equations of the considered DAE system and on its incidence matrix. For example, it is necessary to establish which variables can be made explicit in a certain equation, to identify the bold elements in the reduced incidence matrix, and to solve equations for one of their variables, to obtain the assignments. These operations are performed both in the scalar and in the vector case. The efficiency of the discussed algorithms, as regards the model simplification, the storage requirements and the computing time, strongly depends on the efficiency of the data structures implemented to represent equations and matrices.

Fig. 7. Equation and corresponding binary tree.

The symbolic manipulation of equations is quite easy, both in the scalar case and in the vector case, if an equation is represented as a binary tree, like in the example shown in Fig. 7. To the tree nodes that are not leaves there are associated operators, e.g. =, +, −, ×, /, der, sin, cos, etc.; to the leaves there are associated variables, parameters, or numerical constants [12]. As far as the incidence matrix is concerned, since this is usually sparse for large systems, different data structures can be used for storing, accessing and manipulating it without undue overhead [8].

Fig. 8. Data structure for the incidence matrix.
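As a concrete illustration, the binary-tree representation of an equation, and the way an incidence-matrix row can be derived from it, can be sketched in Python. This is a minimal sketch, not the actual MOSES data structure; the class and function names are illustrative.

```python
# Minimal sketch of an equation stored as a binary tree: inner nodes hold
# operators, leaves hold variables, parameters or numeric constants.
# Unary operators such as der() simply leave the right child as None.

class Node:
    def __init__(self, kind, value, left=None, right=None):
        self.kind = kind      # 'op', 'var', 'param' or 'const'
        self.value = value    # operator symbol, or a name/number
        self.left, self.right = left, right

def variables(node):
    """Collect the set of variables in the (sub)tree.

    Applied to a whole equation, this yields exactly one row of the
    incidence matrix: the variables the equation depends on.
    """
    if node is None:
        return set()
    if node.kind == 'var':
        return {node.value}
    return variables(node.left) | variables(node.right)

# Equation (6) of the motor example: tau_f = tau0 + kf * w_m
eq6 = Node('op', '=',
           Node('var', 'tau_f'),
           Node('op', '+',
                Node('param', 'tau0'),
                Node('op', '*',
                     Node('param', 'kf'),
                     Node('var', 'w_m'))))

print(sorted(variables(eq6)))   # prints ['tau_f', 'w_m']
```

Note that parameters and constants are skipped by the traversal, so only genuine unknowns enter the incidence row, which is what the substitution, BLT-partitioning and tearing algorithms need.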

In MOSES, a sparse matrix is stored through two arrays [3], one, say R, for the rows (equations) and one, say C, for the columns (variables). Each element of R, e.g. R_m, represents a row of the matrix and contains two pointers: one points to the equation associated with the row (Eq_m), while the other one points to an array (AR_m), whose elements point to the variables appearing in Eq_m. A similar description can be given for each element of C, as outlined in Fig. 8.

References

[1] K.E. Brenan, S.L. Campbell, L.R. Petzold, The Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, SIAM Classics in Applied Mathematics.
[2] E. Carpanzano, Model verification in object-oriented modelling of large scale nonlinear hybrid systems, CESA'98, IMACS Multiconference on Computational Engineering in Systems Applications, Nabeul-Hammamet, Tunisia, 1–4 April.
[3] E. Carpanzano, F. Formenti, Symbolic manipulation of DAE systems, Thesis (in Italian), Politecnico di Milano, April.
[4] E. Carpanzano, F. Formenti, Solution of symbolic linear systems in Omsim using Cramer's rule, Internal Report ISRN LUTFD2/TFRT-7524-SE, Department of Automatic Control, Lund Institute of Technology, October.
[5] E. Carpanzano, R. Girelli, The tearing problem: definition, algorithm and application to generate efficient computational code from DAE systems, Second MATHMOD, IMACS Symposium on Mathematical Modelling, Technical University Vienna, Austria, 5–7 February 1997, pp. 1039–1046.
[6] F. Casella, C. Maffezzoni, Exploiting weak interactions in object-oriented modelling, EUROSIM Simulation News Europe, February.
[7] F.E. Cellier, H. Elmqvist, Automated formula manipulation supports object-oriented continuous system modelling, IEEE Control Systems, April 1993, pp. 28–38.
[8] I.S. Duff, A.M. Erisman, J. Reid, Direct Methods for Sparse Matrices, Clarendon Press, Oxford, 1986, pp. 252–260.
[9] H. Elmqvist, M. Otter, Methods for tearing systems of equations in object-oriented modelling, in: Guasch, Huber (Eds.), Proceedings of the Conference on Modelling and Simulation, 1994, pp. 326–332.
[10] H. Elmqvist, M. Otter, F.E. Cellier, Inline integration: a new mixed symbolic/numeric approach for solving differential-algebraic equation systems, Keynote Address, Proceedings of ESM'95, European Simulation Multiconference, Prague, Czech Republic, 5–8 June 1995, pp. 23–34.
[11] C. Maffezzoni, R. Girelli, MOSES: modular modelling in an object-oriented database, Mathematical and Computer Modelling of Dynamical Systems 4 (2) (1998) 121–147.
[12] C. Maffezzoni, R. Girelli, P. Lluka, Generating efficient computational procedures from declarative models, Simulation Practice and Theory 4 (1996) 303–317.
[13] R.S. Mah, Chemical Process Structures and Information Flow, Butterworths Series in Chemical Engineering, 1993, pp. 151–180.
[14] S.E. Mattsson, Simulation of object-oriented continuous time models, Math. Comput. Simulation 39 (5–6) (1995) 513–518.
[15] S.E. Mattsson, M. Andersson, K.J. Åström, Object-oriented modelling and simulation, in: D.A. Linkens (Ed.), CAD for Control Systems, Marcel Dekker, New York, 1993, pp. 56–66.
[16] M. Otter, H. Elmqvist, F.E. Cellier, ``Relaxing'' – a symbolic sparse matrix method exploiting the model structure in generating efficient simulation code, in: Proceedings of the Symposium on Modelling, Analysis, and Simulation, CESA'96, IMACS Multiconference on Computational Engineering in Systems Applications, Lille, France, 1996, vol. 1, pp. 1–12.
[17] R.E. Tarjan, Depth-first search and linear graph algorithms, SIAM J. Comput. 1 (2) (1972) 146–160.


More information

Chemical Process Simulation

Chemical Process Simulation Chemical Process Simulation The objective of this course is to provide the background needed by the chemical engineers to carry out computer-aided analyses of large-scale chemical processes. Major concern

More information

3 Orthogonal Vectors and Matrices

3 Orthogonal Vectors and Matrices 3 Orthogonal Vectors and Matrices The linear algebra portion of this course focuses on three matrix factorizations: QR factorization, singular valued decomposition (SVD), and LU factorization The first

More information

Lecture 2 Linear functions and examples

Lecture 2 Linear functions and examples EE263 Autumn 2007-08 Stephen Boyd Lecture 2 Linear functions and examples linear equations and functions engineering examples interpretations 2 1 Linear equations consider system of linear equations y

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

A note on companion matrices

A note on companion matrices Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws.

Matrix Algebra. Some Basic Matrix Laws. Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Matrix Algebra A. Doerr Before reading the text or the following notes glance at the following list of basic matrix algebra laws. Some Basic Matrix Laws Assume the orders of the matrices are such that

More information

2.1 Introduction. 2.2 Terms and definitions

2.1 Introduction. 2.2 Terms and definitions .1 Introduction An important step in the procedure for solving any circuit problem consists first in selecting a number of independent branch currents as (known as loop currents or mesh currents) variables,

More information

2. Spin Chemistry and the Vector Model

2. Spin Chemistry and the Vector Model 2. Spin Chemistry and the Vector Model The story of magnetic resonance spectroscopy and intersystem crossing is essentially a choreography of the twisting motion which causes reorientation or rephasing

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Chapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way

Chapter 3. Distribution Problems. 3.1 The idea of a distribution. 3.1.1 The twenty-fold way Chapter 3 Distribution Problems 3.1 The idea of a distribution Many of the problems we solved in Chapter 1 may be thought of as problems of distributing objects (such as pieces of fruit or ping-pong balls)

More information

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

discuss how to describe points, lines and planes in 3 space.

discuss how to describe points, lines and planes in 3 space. Chapter 2 3 Space: lines and planes In this chapter we discuss how to describe points, lines and planes in 3 space. introduce the language of vectors. discuss various matters concerning the relative position

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

LINEAR ALGEBRA. September 23, 2010

LINEAR ALGEBRA. September 23, 2010 LINEAR ALGEBRA September 3, 00 Contents 0. LU-decomposition.................................... 0. Inverses and Transposes................................. 0.3 Column Spaces and NullSpaces.............................

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

Excel supplement: Chapter 7 Matrix and vector algebra

Excel supplement: Chapter 7 Matrix and vector algebra Excel supplement: Chapter 7 atrix and vector algebra any models in economics lead to large systems of linear equations. These problems are particularly suited for computers. The main purpose of this chapter

More information

Unified Lecture # 4 Vectors

Unified Lecture # 4 Vectors Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

Factorization Theorems

Factorization Theorems Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

More information

HSL and its out-of-core solver

HSL and its out-of-core solver HSL and its out-of-core solver Jennifer A. Scott j.a.scott@rl.ac.uk Prague November 2006 p. 1/37 Sparse systems Problem: we wish to solve where A is Ax = b LARGE Informal definition: A is sparse if many

More information

On the representability of the bi-uniform matroid

On the representability of the bi-uniform matroid On the representability of the bi-uniform matroid Simeon Ball, Carles Padró, Zsuzsa Weiner and Chaoping Xing August 3, 2012 Abstract Every bi-uniform matroid is representable over all sufficiently large

More information

MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY

MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY PLAMEN KOEV Abstract In the following survey we look at structured matrices with what is referred to as low displacement rank Matrices like Cauchy Vandermonde

More information

Introduction. 1.1 Motivation. Chapter 1

Introduction. 1.1 Motivation. Chapter 1 Chapter 1 Introduction The automotive, aerospace and building sectors have traditionally used simulation programs to improve their products or services, focusing their computations in a few major physical

More information

1 Sets and Set Notation.

1 Sets and Set Notation. LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Reconciliation and Rectification of Process Flow and Inventory Data

Reconciliation and Rectification of Process Flow and Inventory Data Reconciliation and Rectification of Process Flow and Inventory Data Richard S. Mah, Gregory M. Stanley*, and Dennis M. Downing Northwestern University, Evanston, Illinois 60201 This paper shows how information

More information

1 Review of Newton Polynomials

1 Review of Newton Polynomials cs: introduction to numerical analysis 0/0/0 Lecture 8: Polynomial Interpolation: Using Newton Polynomials and Error Analysis Instructor: Professor Amos Ron Scribes: Giordano Fusco, Mark Cowlishaw, Nathanael

More information

2 SYSTEM DESCRIPTION TECHNIQUES

2 SYSTEM DESCRIPTION TECHNIQUES 2 SYSTEM DESCRIPTION TECHNIQUES 2.1 INTRODUCTION Graphical representation of any process is always better and more meaningful than its representation in words. Moreover, it is very difficult to arrange

More information

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix 7. LU factorization EE103 (Fall 2011-12) factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization algorithm effect of rounding error sparse

More information

Towards a Benchmark Suite for Modelica Compilers: Large Models

Towards a Benchmark Suite for Modelica Compilers: Large Models Towards a Benchmark Suite for Modelica Compilers: Large Models Jens Frenkel +, Christian Schubert +, Günter Kunze +, Peter Fritzson *, Martin Sjölund *, Adrian Pop* + Dresden University of Technology,

More information

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test

Math Review. for the Quantitative Reasoning Measure of the GRE revised General Test Math Review for the Quantitative Reasoning Measure of the GRE revised General Test www.ets.org Overview This Math Review will familiarize you with the mathematical skills and concepts that are important

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM

FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM International Journal of Innovative Computing, Information and Control ICIC International c 0 ISSN 34-48 Volume 8, Number 8, August 0 pp. 4 FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT

More information

Power Electronics. Prof. K. Gopakumar. Centre for Electronics Design and Technology. Indian Institute of Science, Bangalore.

Power Electronics. Prof. K. Gopakumar. Centre for Electronics Design and Technology. Indian Institute of Science, Bangalore. Power Electronics Prof. K. Gopakumar Centre for Electronics Design and Technology Indian Institute of Science, Bangalore Lecture - 1 Electric Drive Today, we will start with the topic on industrial drive

More information

5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition

More information

The Quantum Harmonic Oscillator Stephen Webb

The Quantum Harmonic Oscillator Stephen Webb The Quantum Harmonic Oscillator Stephen Webb The Importance of the Harmonic Oscillator The quantum harmonic oscillator holds a unique importance in quantum mechanics, as it is both one of the few problems

More information