
Feedback Systems
An Introduction for Scientists and Engineers

Karl Johan Åström
Automatic Control LTH, Lund University
Control, Dynamical Systems and Computation, University of California, Santa Barbara

Richard M. Murray
Control and Dynamical Systems, California Institute of Technology

DRAFT v2.7a (17 July 2007)
Copyright 2007 Karl Johan Åström and Richard M. Murray. All rights reserved. This manuscript is for review purposes only and may not be reproduced, in whole or in part, without written consent from the authors.


Contents

Preface

Chapter 1. Introduction
  1.1 What is Feedback?
  1.2 What is Control?
  1.3 Feedback Examples
  1.4 Feedback Properties
  1.5 Simple Forms of Feedback
  1.6 Further Reading
  Exercises

Chapter 2. System Modeling
  2.1 Modeling Concepts
  2.2 State Space Models
  2.3 Modeling Methodology
  2.4 Modeling Examples
  2.5 Further Reading
  Exercises

Chapter 3. Examples
  3.1 Cruise Control
  3.2 Bicycle Dynamics
  3.3 Operational Amplifier Circuits
  3.4 Computing Systems and Networks
  3.5 Atomic Force Microscopy
  3.6 Drug Administration
  3.7 Population Dynamics
  Exercises

Chapter 4. Dynamic Behavior
  4.1 Solving Differential Equations
  4.2 Qualitative Analysis
  4.3 Stability
  4.4 Lyapunov Stability
  4.5 Parametric and Non-Local Behavior
  4.6 Further Reading
  Exercises

Chapter 5. Linear Systems
  5.1 Basic Definitions
  5.2 The Matrix Exponential
  5.3 Input/Output Response
  5.4 Linearization
  5.5 Further Reading
  Exercises

Chapter 6. State Feedback
  6.1 Reachability
  6.2 Stabilization by State Feedback
  6.3 State Feedback Design
  6.4 Integral Action
  6.5 Further Reading
  Exercises

Chapter 7. Output Feedback
  7.1 Observability
  7.2 State Estimation
  7.3 Control using Estimated State
  7.4 Kalman Filtering
  7.5 Feedforward and Implementation
  7.6 Further Reading
  Exercises

Chapter 8. Transfer Functions
  8.1 Frequency Domain Modeling
  8.2 Derivation of the Transfer Function
  8.3 Block Diagrams and Transfer Functions
  8.4 The Bode Plot
  8.5 Laplace Transforms
  8.6 Further Reading
  Exercises

Chapter 9. Frequency Domain Analysis
  9.1 The Loop Transfer Function
  9.2 The Nyquist Criterion
  9.3 Stability Margins
  9.4 Bode's Relations and Minimum Phase Systems
  9.5 The Notions of Gain and Phase
  9.6 Further Reading
  Exercises

Chapter 10. PID Control
  10.1 Basic Control Functions
  10.2 Simple Controllers for Complex Systems
  10.3 PID Tuning
  10.4 Integrator Windup
  10.5 Implementation
  10.6 Further Reading
  Exercises

Chapter 11. Frequency Domain Design
  11.1 Sensitivity Functions
  11.2 Feedforward Design
  11.3 Performance Specifications
  11.4 Feedback Design via Loop Shaping
  11.5 Fundamental Limitations
  11.6 Design Example
  11.7 Further Reading
  Exercises

Chapter 12. Robust Performance
  12.1 Modeling Uncertainty
  12.2 Stability in the Presence of Uncertainty
  12.3 Performance in the Presence of Uncertainty
  12.4 Robust Pole Placement
  12.5 Design for Robust Performance
  12.6 Further Reading
  Exercises

Glossary
Notation
Bibliography
Index


Preface

This book provides an introduction to the basic principles and tools for the design and analysis of feedback systems. It is intended to serve a diverse audience of scientists and engineers who are interested in understanding and utilizing feedback in physical, biological, information and social systems. We have attempted to keep the mathematical prerequisites to a minimum while being careful not to sacrifice rigor in the process. We have also attempted to make use of examples from a variety of disciplines, illustrating the generality of many of the tools while at the same time showing how they can be applied in specific application domains.

This book was originally developed for use in an experimental course at Caltech involving students from a wide set of backgrounds. The course consisted of undergraduates at the junior and senior level in traditional engineering disciplines, as well as first- and second-year graduate students in engineering and science. This latter group included graduate students in biology, computer science and physics, requiring a broad approach that emphasized basic principles and did not focus on applications in any one given area. Over the course of several years, the text has been classroom tested at Caltech and at Lund University, and the feedback from many students and colleagues has been incorporated to help improve the readability and accessibility of the material.

Because of its intended audience, this book is organized in a slightly unusual fashion compared to many other books on feedback and control. In particular, we introduce a number of concepts in the text that are normally reserved for second-year courses on control and hence are often not available to students who are not control systems majors. This has been done at the expense of certain traditional topics, which we felt the astute student could learn independently and which are often explored through the exercises. Examples of topics that we have included are nonlinear dynamics, Lyapunov stability, reachability and observability, and fundamental limits of performance and robustness. Topics that we have de-emphasized include root locus techniques, lead/lag compensation and detailed rules for generating Bode and Nyquist plots by hand.

Several features of the book are designed to facilitate its dual function as a basic engineering text and as an introduction for researchers in the natural, information and social sciences. The bulk of the material is intended to be used regardless of the audience and covers the core principles and tools in the analysis and design of feedback systems. Advanced sections, marked by the dangerous bend symbol shown to the right, contain material that requires a slightly more technical background, of the sort that would be expected of senior undergraduates in engineering.

A few sections are marked by two dangerous bend symbols and are intended for readers with more specialized backgrounds, identified at the beginning of the section. To keep the length of the text down, several standard results and extensions are given in the exercises, with appropriate hints toward their solutions. Finally, we have included a glossary and a notation section at the end of the book in which we define some of the terminology and notation that may not be familiar to all readers.

To further augment the printed material contained here, a companion web site has been developed: murray/amwiki. The web site contains a database of frequently asked questions, supplemental examples and exercises, and lecture materials for courses based on this text. The material is organized by chapter and includes a summary of the major points in the text as well as links to external resources. The web site also contains the source code for many examples in the book, as well as utilities to implement the techniques described in the text. Most of the code was originally written using MATLAB M-files but was also tested with LabVIEW MathScript to ensure compatibility with both packages. Many files can also be run using other scripting languages such as Octave, SciLab, SysQuake and Xmath. [Author's note: the web site is under construction as of this writing and some features described in the text may not yet be available.]

The first half of the book focuses almost exclusively on so-called state-space control systems. We begin in Chapter 2 with a description of modeling of physical, biological and information systems using ordinary differential equations and difference equations. Chapter 3 presents a number of examples in some detail, primarily as a reference for problems that will be used throughout the text. Following this, Chapter 4 looks at the dynamic behavior of models, including definitions of stability and more complicated nonlinear behavior. We provide advanced sections in this chapter on Lyapunov stability, because we find that it is useful in a broad array of applications (and is frequently a topic that is not introduced until later in one's studies). The remaining three chapters of the first half of the book focus on linear systems, beginning with a description of input/output behavior in Chapter 5. In Chapter 6, we formally introduce feedback systems by demonstrating how state space control laws can be designed. This is followed in Chapter 7 by material on output feedback and estimators. Chapters 6 and 7 introduce the key concepts of reachability and observability, which give tremendous insight into the choice of actuators and sensors, whether for engineered or natural systems.

The second half of the book presents material that is often considered to be from the field of classical control. This includes the transfer function, introduced in Chapter 8, which is a fundamental tool for understanding feedback systems. Using transfer functions, one can begin to analyze the stability of feedback systems using frequency domain analysis, including the ability to reason about the closed loop behavior of a system from its open loop characteristics.

This is the subject of Chapter 9, which revolves around the Nyquist stability criterion. In Chapters 10 and 11, we again look at the design problem, focusing first on proportional-integral-derivative (PID) controllers and then on the more general process of loop shaping. PID control is by far the most common design technique in control systems and a useful tool for any student. The chapter on frequency domain design introduces many of the ideas of modern control theory, including the sensitivity function. In Chapter 12, we pull together the results from the second half of the book to analyze some of the fundamental tradeoffs between robustness and performance. This is also a key chapter illustrating the power of the techniques that have been developed and serving as an introduction for more advanced studies.

The book is designed for use in a 10-15 week course in feedback systems that provides many of the key concepts needed in a variety of disciplines. For a 10-week course, Chapters 1-2, 4-6 and 8-11 can each be covered in a week's time, with some dropping of topics from the final chapters. A more leisurely course, spread out over more weeks, could cover the entire book, with two weeks on modeling (Chapters 2 and 3), particularly for students without much background in ordinary differential equations, and two weeks on robust performance (Chapter 12).

The mathematical prerequisites for the book are modest and in keeping with our goal of providing an introduction that serves a broad audience. We assume familiarity with the basic tools of linear algebra, including matrices, vectors and eigenvalues. These are typically covered in a sophomore-level course on the subject, and the textbooks by Apostol [Apo69], Arnold [Arn87] or Strang [Str88] serve as good references. Similarly, we assume basic knowledge of differential equations, including the concepts of homogeneous and particular solutions for linear ordinary differential equations in one variable. Apostol [Apo69] or Boyce and DiPrima [BD04] cover this material well. Finally, we also make use of complex numbers and functions and, in some of the advanced sections, more detailed concepts in complex variables that are typically covered in a junior-level engineering or physics course in mathematical methods. Apostol [Apo67] or Stewart [Ste02] can be used for the basic material, with Ahlfors [Ahl66], Marsden and Hoffman [MH99] or Saff and Snider [SS02] being good references for the more advanced material. We have chosen not to include appendices summarizing these various topics since there are a number of good books available and we believe that most readers will be familiar with material at this level.

One additional choice that we felt was important was the decision not to rely on knowledge of Laplace transforms in the book. While their use is by far the most common approach to teaching feedback systems in engineering, many students in the natural and information sciences may lack the necessary mathematical background. Since Laplace transforms are not required in any essential way, we have only included them in an advanced section intended to tie things together for students with that background. Of course, we make tremendous use of transfer functions, which we introduce through the notion of response to exponential inputs, an approach we feel is more accessible to a broad array of scientists and engineers.

For courses in which students have already had Laplace transforms, it should be quite natural to build on this background in the appropriate sections of the text.

Acknowledgments

The authors would like to thank the many people who helped during the preparation of this book. The idea for writing this book came in part from a report on future directions in control [Mur03] to which Stephen Boyd, Roger Brockett, John Doyle and Gunter Stein were major contributors. Kristi Morgansen and Hideo Mabuchi helped teach early versions of the course at Caltech on which much of the text is based, and Steve Waydo served as the head TA for the course taught at Caltech in 2003-04 and provided numerous comments and corrections. Charlotta Johnsson and Anton Cervin taught from early versions of the manuscript in Lund and gave very useful feedback. Other colleagues and students who provided feedback and advice include John Carson, K. Mani Chandy, Michel Charpentier, Per Hagander, Joseph Hellerstein, George Hines, Tore Hägglund and Dawn Tilbury. The reviewers for Princeton University Press and Tom Robbins at NI Press also provided valuable comments that significantly improved the organization, layout and focus of the book. Our editor, Vickie Kearn, was a great source of encouragement and help throughout the publishing process. Finally, we would like to thank Caltech, Lund University and the University of California at Santa Barbara for providing many resources, stimulating colleagues and students, and a pleasant working environment that greatly aided in the writing of this book.

Karl Johan Åström
Lund, Sweden and Santa Barbara, California

Richard M. Murray
Pasadena, California

Chapter One
Introduction

Feedback is a central feature of life. The process of feedback governs how we grow, respond to stress and challenge, and regulate factors such as body temperature, blood pressure and cholesterol level. The mechanisms operate at every level, from the interaction of proteins in cells to the interaction of organisms in complex ecologies.
Mahlon B. Hoagland and B. Dodson, The Way Life Works, 1995 [HD95]

In this chapter we provide an introduction to the basic concept of feedback and the related engineering discipline of control. We focus on both historical and current examples, with the intention of providing the context for current tools in feedback and control. Much of the material in this chapter is adapted from [Mur03], and the authors gratefully acknowledge the contributions of Roger Brockett and Gunter Stein to portions of this chapter.

1.1 WHAT IS FEEDBACK?

The term feedback is used to refer to a situation in which two (or more) dynamical systems are connected together such that each system influences the other and their dynamics are thus strongly coupled. By dynamical system, we refer to a system whose behavior changes over time, often in response to external stimulation or forcing. Simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular argument. This makes reasoning based on cause and effect tricky, and it is necessary to analyze the system as a whole. A consequence of this is that the behavior of feedback systems is often counterintuitive, and it is therefore necessary to resort to formal methods to understand them.

Figure 1.1 illustrates in block diagram form the idea of feedback. We often use the terms open loop and closed loop when referring to such systems. A system is said to be a closed loop system if the systems are interconnected in a cycle, as shown in Figure 1.1a. If we break the interconnection, we refer to the configuration as an open loop system, as shown in Figure 1.1b.

As the quote at the beginning of this chapter illustrates, a major source of examples for feedback systems is biology. Biological systems make use of feedback in an extraordinary number of ways, on scales ranging from molecules to cells to organisms to ecosystems. One example is the regulation of glucose in the bloodstream through the production of insulin and glucagon by the pancreas. The body attempts to maintain a constant concentration of glucose, which is used by the body's cells to produce energy.
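To make the closed loop idea of Figure 1.1 concrete, the short sketch below simulates two simple first-order systems interconnected as in Figure 1.1a. The equations and all coefficients are made up for illustration and are not taken from the text; they simply show how the output of each system becomes the input of the other.

    # Two first-order systems connected in a loop (as in Figure 1.1a),
    # simulated with explicit Euler steps.  All coefficients are illustrative.
    def simulate(t_end=10.0, dt=0.01):
        x1, x2 = 1.0, 0.0                # states of system 1 and system 2
        t = 0.0
        while t < t_end:
            y1, y2 = x1, x2              # each system's output...
            x1 += dt * (-x1 - 0.5 * y2)  # ...appears as an input to the other
            x2 += dt * (-x2 + 2.0 * y1)
            t += dt
        return x1, x2

    print("states after 10 s:", simulate())

The point of the sketch is not the particular numbers but the structure: neither state can be analyzed in isolation, because each equation contains the other system's output.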

Figure 1.1: Open and closed loop systems. (a) The output of system 1 is used as the input of system 2, and the output of system 2 becomes the input of system 1, creating a closed loop system. (b) The interconnection between system 2 and system 1 is removed, and the system is said to be open loop.

When glucose levels rise (after eating a meal, for example), the hormone insulin is released and causes the body to store excess glucose in the liver. When glucose levels are low, the pancreas secretes the hormone glucagon, which has the opposite effect. Referring to Figure 1.1, we can view the liver as system 1 and the pancreas as system 2. The output from the liver is the glucose concentration in the blood, and the output from the pancreas is the amount of insulin or glucagon produced. The interplay between insulin and glucagon secretions throughout the day helps to keep the blood glucose concentration constant, at about 90 mg per 100 ml of blood.

An early engineering example of a feedback system is the centrifugal governor, in which the shaft of a steam engine is connected to a flyball mechanism that is itself connected to the throttle of the steam engine, as illustrated in Figure 1.2. The system is designed so that as the speed of the engine increases (perhaps due to a lessening of the load on the engine), the flyballs spread apart and a linkage causes the throttle on the steam engine to be closed. This in turn slows down the engine, which causes the flyballs to come back together. We can model this system as a closed loop system by taking system 1 as the steam engine and system 2 as the governor. When properly designed, the flyball governor maintains a constant speed of the engine, roughly independent of the loading conditions. The centrifugal governor was an enabler of the successful Watt steam engine, which fueled the industrial revolution.

Feedback has many interesting properties that can be exploited in designing systems. As in the case of glucose regulation or the flyball governor, feedback can make a system resilient toward external influences. It can also be used to create linear behavior out of nonlinear components, a common approach in electronics. More generally, feedback allows a system to be insensitive both to external disturbances and to variations in its individual elements.

Feedback has potential disadvantages as well. It can create dynamic instabilities in a system, causing oscillations or even runaway behavior. Another drawback, especially in engineering systems, is that feedback can introduce unwanted sensor noise into the system, requiring careful filtering of signals. It is for these reasons that a substantial portion of the study of feedback systems is devoted to developing an understanding of dynamics and a mastery of techniques in dynamical systems.

Feedback systems are ubiquitous in both natural and engineered systems.

Figure 1.2: The centrifugal governor and the Watt steam engine. The centrifugal governor on the left consists of a set of flyballs that spread apart as the speed of the engine increases. The Watt engine on the right uses a centrifugal governor (above and to the left of the flywheel) to regulate its speed. Figures courtesy of Richard Adamek (copyright 1999) and Cambridge University.

Control systems maintain the environment, lighting and power in our buildings and factories; they regulate the operation of our cars, consumer electronics and manufacturing processes; they enable our transportation and communications systems; and they are critical elements in our military and space systems. For the most part they are hidden from view, buried within the code of embedded microprocessors, executing their functions accurately and reliably. Feedback has also made it possible to increase dramatically the precision of instruments such as atomic force microscopes and telescopes.

In nature, homeostasis in biological systems maintains thermal, chemical and biological conditions through feedback. At the other end of the size scale, global climate dynamics depend on the feedback interactions between the atmosphere, oceans, land and the sun. Ecosystems are filled with examples of feedback due to the complex interactions between animal and plant life. Even the dynamics of economies are based on the feedback between individuals and corporations through markets and the exchange of goods and services.

1.2 WHAT IS CONTROL?

The term control has many meanings and often varies between communities. In this book, we define control to be the use of algorithms and feedback in engineered systems. Thus, control includes such examples as feedback loops in electronic amplifiers, setpoint controllers in chemical and materials processing, fly-by-wire systems on aircraft and even router protocols that control traffic flow on the Internet.

Figure 1.3: Components of a computer-controlled system. The upper dashed box represents the process dynamics, which includes the sensors and actuators in addition to the dynamical system being controlled. Noise and external disturbances can perturb the dynamics of the process. The controller is shown in the lower dashed box. It consists of analog-to-digital (A/D) and digital-to-analog (D/A) converters, as well as a computer that implements the control algorithm. A system clock controls the operation of the controller, synchronizing the A/D, D/A and computing processes. The operator input is also fed to the computer as an external input.

Emerging applications include high-confidence software systems, autonomous vehicles and robots, real-time resource management systems and biologically engineered systems. At its core, control is an information science and includes the use of information in both analog and digital representations.

A modern controller senses the operation of a system, compares that against the desired behavior, computes corrective actions based on a model of the system's response to external inputs and actuates the system to effect the desired change. This basic feedback loop of sensing, computation and actuation is the central concept in control. The key issues in designing control logic are ensuring that the dynamics of the closed loop system are stable (bounded disturbances give bounded errors) and that they have additional desired behavior (good disturbance rejection, fast responsiveness to changes in operating point, etc.). These properties are established using a variety of modeling and analysis techniques that capture the essential dynamics of the system and permit the exploration of possible behaviors in the presence of uncertainty, noise and component failures.

A typical example of a modern control system is shown in Figure 1.3. The basic elements of sensing, computation and actuation are clearly seen. In modern control systems, computation is typically implemented on a digital computer, requiring the use of analog-to-digital (A/D) and digital-to-analog (D/A) converters. Uncertainty enters the system through noise in the sensing and actuation subsystems, external disturbances that affect the underlying system operation and uncertain dynamics in the system (parameter errors, unmodeled effects, etc.).
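The sense-compute-actuate loop of Figure 1.3 can be sketched in a few lines of code. Everything below is illustrative: the sampling period, the proportional control law and the toy plant stand in for the A/D converter, the control algorithm and the real process, none of which are specified in the text.

    # A clock-driven control loop: sense, compute, actuate, once per sample.
    DT = 0.01          # sampling period [s], illustrative value
    REFERENCE = 1.0    # operator input (desired output)

    plant_state = 0.0  # toy stand-in for the physical process

    def read_sensor():
        # Placeholder for the sensor plus A/D conversion.
        return plant_state

    def write_actuator(u):
        # Placeholder for the D/A conversion plus actuator; here it simply
        # advances the toy first-order plant by one sampling period.
        global plant_state
        plant_state += DT * (-plant_state + u)

    def control_law(y, r):
        # A simple proportional law stands in for the control algorithm.
        return 2.0 * (r - y)

    for _ in range(1000):              # in a real system, paced by the clock
        y = read_sensor()              # sense
        u = control_law(y, REFERENCE)  # compute
        write_actuator(u)              # actuate

    print("output after 10 s of simulated time:", round(read_sensor(), 3))

Note that this purely proportional law leaves a steady-state offset (the toy output settles at 2/3 rather than 1); the integral action discussed later in this chapter is one standard way to remove such an offset.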

The algorithm that computes the control action as a function of the sensor values is often called a control law. The system can be influenced externally by an operator who introduces command signals to the system.

Control engineering relies on and shares tools from physics (dynamics and modeling), computer science (information and software) and operations research (optimization, probability theory and game theory), but it is also different from these subjects in both insights and approach. Perhaps the strongest area of overlap between control and other disciplines is in the modeling of physical systems, which is common across all areas of engineering and science. One of the fundamental differences between control-oriented modeling and modeling in other disciplines is the way in which interactions between subsystems are represented. Control relies on a type of input/output modeling that allows many new insights into the behavior of systems, such as disturbance rejection and stable interconnection. Model reduction, where a simpler (lower-fidelity) description of the dynamics is derived from a high-fidelity model, is also naturally described in an input/output framework. Perhaps most importantly, modeling in a control context allows the design of robust interconnections between subsystems, a feature that is crucial in the operation of all large engineered systems.

Control is also closely associated with computer science, since virtually all modern control algorithms for engineering systems are implemented in software. However, control algorithms and software can be very different from traditional computer software due to the central role of the dynamics of the system and the real-time nature of the implementation.

1.3 FEEDBACK EXAMPLES

Feedback has many interesting and useful properties. It makes it possible to design precise systems from imprecise components and to make relevant quantities in a system change in a prescribed fashion. An unstable system can be stabilized using feedback, and the effects of external disturbances can be reduced. Feedback also offers new degrees of freedom to a designer by exploiting sensing, actuation and computation. In this section we survey some of the important applications and trends for feedback in the world around us.

Early Technological Examples

The proliferation of control in engineered systems has occurred primarily in the latter half of the 20th century. There are some important exceptions, such as the centrifugal governor described earlier and the thermostat (Figure 1.4a), designed at the turn of the century to regulate the temperature of buildings.

The thermostat, in particular, is a simple example of feedback control that everyone is familiar with. The device measures the temperature in a building, compares that temperature to a desired setpoint and uses the feedback error between these two to operate the heating plant, e.g., to turn heating on when the temperature is too low and to turn it off when the temperature is too high.
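A minimal sketch of such an on/off thermostat is given below. The setpoint, the hysteresis band (a small deliberate overlap that keeps the heater from switching rapidly) and the toy room model are all made-up values, not taken from the text.

    # On/off (bang-bang) temperature control with a small hysteresis band.
    SETPOINT = 20.0   # desired room temperature [C], illustrative
    BAND = 0.5        # half-width of the switching band [C], illustrative

    def thermostat(temperature, heater_on):
        """Return the new heater state for the measured temperature."""
        if temperature < SETPOINT - BAND:
            return True        # too cold: turn the heating plant on
        if temperature > SETPOINT + BAND:
            return False       # too warm: turn it off
        return heater_on       # inside the band: keep the previous state

    # Toy room: it loses heat to a 10 C exterior and gains heat while the
    # heater is on (coefficients are made up).
    temp, heater = 15.0, False
    for step in range(2000):
        heater = thermostat(temp, heater)
        temp += 0.01 * (0.1 * (10.0 - temp) + (5.0 if heater else 0.0))

    print("temperature:", round(temp, 2), "heater on:", heater)

A practical thermostat refines this scheme with the anticipation discussed below, switching slightly before the error changes sign.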

Figure 1.4: Early control devices. (a) Honeywell T86 thermostat, originally introduced in 1953. The thermostat controls whether a heater is turned on by comparing the current temperature in a room to a desired value that is set using a dial. (b) Chrysler cruise control system, introduced in the 1958 Chrysler Imperial [Row58]. A centrifugal governor is used to detect the speed of the vehicle and actuate the throttle. The reference speed is specified through an adjustment spring.

This explanation captures the essence of feedback, but it is a bit too simple even for a basic device such as the thermostat. Actually, because lags and delays exist in the heating plant and sensor, a good thermostat does a bit of anticipation, turning the heater off before the error changes sign. This avoids excessive temperature swings and cycling of the heating plant. This interplay between the dynamics of the process and the operation of the controller is a key element in modern control system design.

There are many other control system examples that have developed over the years with progressively increasing levels of sophistication. An early system with broad public exposure was the cruise control option introduced on automobiles in 1958 (see Figure 1.4b). Cruise control illustrates the dynamic behavior of closed loop feedback systems in action: the slowdown error as the system climbs a grade, the gradual reduction of that error due to integral action in the controller, the small overshoot at the top of the climb, etc. Later control systems on automobiles, such as emission controls and fuel metering systems, have achieved major reductions of pollutants and increases in fuel economy.

Power Generation and Transmission

Access to electrical power has been one of the major drivers of technological progress in modern society. Much of the early development of control was driven by the generation and distribution of electric power. Control is mission critical for power systems, and there are many control loops in individual power stations. Control is also important for the operation of the whole power network, since it is difficult to store energy and it is thus necessary to match production to consumption.

Figure 1.5: The European power network. By 2007 the European power suppliers will operate a single interconnected network covering a region from the Arctic to the Mediterranean and from the Atlantic to the Ural. In 2004 the installed power was more than 700 GW (7 x 10^11 W).

Power management is a straightforward regulation problem for a system with one generator and one power consumer, but it is more difficult in a highly distributed system with many generators and long distances between consumption and generation. Power demand can change rapidly in an unpredictable manner, and combining generators and consumers into large networks makes it possible to share loads among many suppliers and to average consumption among many customers. Large transcontinental and transnational power systems have therefore been built, such as the one shown in Figure 1.5.

Most electricity is distributed by alternating current (AC) because the transmission voltage can be changed with small power losses using transformers. Alternating current generators can only deliver power if the generators are synchronized to the voltage variations in the network. This means that the rotors of all generators in a network must be synchronized. Achieving this with local decentralized controllers and a small amount of interaction is a challenging problem. Sporadic low-frequency oscillations between distant regions have been observed when regional power grids have been interconnected [KW05].

Safety and reliability are major concerns in power systems. There may be disturbances due to trees falling down on power lines, lightning or equipment failures. There are sophisticated control systems that attempt to keep the system operating even when there are large disturbances.

Figure 1.6: Military aerospace systems. (a) The F-18 aircraft is one of the first production military fighters to use fly-by-wire technology. (b) The X-45 (UCAV) unmanned aerial vehicle is capable of autonomous flight, using inertial measurement sensors and the global positioning system (GPS) to monitor its position relative to a desired trajectory. Photographs courtesy of NASA Dryden Flight Research Center.

The control actions can be to reduce voltage, to break up the net into subnets or to switch off lines and power users. These safety systems are an essential element of power distribution systems, but in spite of all precautions there are occasionally failures in large power systems. The power system is thus a nice example of a complicated distributed system where control is executed on many levels and in many different ways.

Aerospace and Transportation

In aerospace, control has been a key technological capability tracing back to the beginning of the 20th century. Indeed, the Wright brothers are correctly famous not simply for demonstrating powered flight but for demonstrating controlled powered flight. Their early Wright Flyer incorporated moving control surfaces (vertical fins and canards) and warpable wings that allowed the pilot to regulate the aircraft's flight. In fact, the aircraft itself was not stable, so continuous pilot corrections were mandatory.

This early example of controlled flight was followed by a fascinating success story of continuous improvements in flight control technology, culminating in the high-performance, highly reliable automatic flight control systems we see on modern commercial and military aircraft today. Similar success stories for control technology have occurred in many other application areas. Early World War II bombsights and fire control servo systems have evolved into today's highly accurate radar-guided guns and precision-guided weapons. Early failure-prone space missions have evolved into routine launch operations, manned landings on the moon, permanently manned space stations, robotic vehicles roving Mars, orbiting vehicles at the outer planets and a host of commercial and military satellites serving various surveillance, communication, navigation and earth observation needs. Cars have advanced from manually tuned mechanical/pneumatic technology to computer-controlled operation of all major functions, including fuel injection, emission control, cruise control, braking and cabin comfort.

Figure 1.7: Materials processing. Modern materials are processed under carefully controlled conditions, using reactors such as the metal organic chemical vapor deposition (MOCVD) reactor shown on the left, which was used for manufacturing superconducting thin films. Using lithography, chemical etching, vapor deposition and other techniques, complex devices can be built, such as the IBM Cell processor shown on the right. Photographs courtesy of Caltech and IBM.

Current research in aerospace and transportation systems is investigating the application of feedback to higher levels of decision making, including logical regulation of operating modes, vehicle configurations, payload configurations and health status. These have historically been performed by human operators, but today that boundary is moving, and control systems are increasingly taking on these functions. Another dramatic trend on the horizon is the use of large collections of distributed entities with local computation, global communication connections, little regularity imposed by the laws of physics and no possibility of imposing centralized control actions. Examples of this trend include the national airspace management problem, automated highway and traffic management, and command and control for future battlefields.

Materials and Processing

The chemical industry is responsible for the remarkable progress in developing new materials that are key to our modern society. In addition to the continuing need to improve product quality, several other factors in the process control industry are drivers for the use of control. Environmental statutes continue to place stricter limitations on the production of pollutants, forcing the use of sophisticated pollution control devices. Environmental safety considerations have led to the design of smaller storage capacities to diminish the risk of major chemical leakage, requiring tighter control on upstream processes and, in some cases, supply chains. And large increases in energy costs have encouraged engineers to design plants that are highly integrated, coupling many processes that used to operate independently.

All of these trends increase the complexity of these processes and the performance requirements for the control systems, making control system design increasingly challenging.

As in many other application areas, new sensor technology is creating new opportunities for control. Online sensors, including laser backscattering, video microscopy, and ultraviolet, infrared and Raman spectroscopy, are becoming more robust and less expensive and are appearing in more manufacturing processes. Many of these sensors are already being used by current process control systems, but more sophisticated signal processing and control techniques are needed to use the real-time information provided by these sensors more effectively. Control engineers can also contribute to the design of even better sensors, which are still needed, for example, in the microelectronics industry. As elsewhere, the challenge is making use of the large amounts of data provided by these new sensors in an effective manner. In addition, a control-oriented approach to modeling the essential physics of the underlying processes is required to understand fundamental limits on the observability of the internal state through sensor data.

Instrumentation

The measurement of physical variables is of prime interest in science and engineering. Consider, for example, an accelerometer, where early instruments consisted of a mass suspended on a spring with a deflection sensor. The precision of such an instrument depends critically on accurate calibration of the spring and the sensor. There is also a design compromise because a weak spring gives high sensitivity but also low bandwidth.

A different way of measuring acceleration is to use force feedback. The spring is then replaced by a voice coil that is controlled so that the mass remains at a constant position. The acceleration is proportional to the current through the voice coil. In such an instrument, the precision depends entirely on the calibration of the voice coil and does not depend on the sensor, which is only used as the feedback signal. The sensitivity/bandwidth compromise is also avoided. This way of using feedback has been applied to many different engineering fields and has resulted in instruments with dramatically improved performance. Force feedback is also used in haptic devices for manual control.

Feedback is widely used to measure ion currents in cells using a device called the voltage clamp, which is illustrated in Figure 1.8. Hodgkin and Huxley used the voltage clamp to investigate the propagation of action potentials in the axon of the giant squid. In 1963 they shared the Nobel Prize in Medicine with Eccles for their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane. A refinement of the voltage clamp called the patch clamp later made it possible to measure exactly when a single ion channel is opened or closed. This was developed by Neher and Sakmann, who received the 1991 Nobel Prize in Medicine for their discoveries concerning the function of single ion channels in cells.
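For the force-feedback accelerometer described above, a simple force balance shows why the precision no longer depends on the spring or the deflection sensor. In the equation below, I is the current through the voice coil, m the suspended mass and k_I the coil's force constant; k_I is introduced here only for illustration and is not notation from the text. When the feedback holds the mass at a fixed position, the coil force must cancel the inertial force, so

\[
k_I I = m a \qquad \Longrightarrow \qquad a = \frac{k_I}{m}\, I .
\]

The measured acceleration therefore depends only on the coil constant and the mass; the spring stiffness and the deflection sensor enter only through the feedback error, which the loop drives to zero.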

Figure 1.8: The voltage clamp method for measuring ion currents in cells. A pipette is used to place an electrode in a cell (left and middle) and maintain the potential of the cell at a fixed level. The internal voltage in the cell is v_i and the voltage of the external fluid is v_e. The feedback system (right) controls the current I into the cell so that the voltage drop across the cell membrane, v = v_i - v_e, is equal to its reference value v_r. The current I is then equal to the ion current.

There are many other interesting and useful applications of feedback in scientific instruments. The development of the mass spectrometer is an early example. In a 1935 paper, Nier observed that the deflection of the ions depends on both the magnetic and the electric fields [Nie35]. Instead of keeping both fields constant, Nier let the magnetic field fluctuate and controlled the electric field to keep the ratio of the fields constant. The feedback was implemented using vacuum tube amplifiers. The scheme was crucial for the development of mass spectroscopy.

The Dutch engineer van der Meer invented a clever way to use feedback to maintain a good quality, high density beam in a particle accelerator [MPTvdM80]. The idea is to sense particle displacement at one point in the accelerator and apply a correcting signal at another point. The scheme, called stochastic cooling, was awarded the Nobel Prize in Physics in 1984. The method was essential for the successful experiments at CERN where the existence of the particles W and Z associated with the weak force was first demonstrated.

The 1986 Nobel Prize in Physics, awarded to Binnig and Rohrer for their design of the scanning tunneling microscope, is another example of an innovative use of feedback. The key idea is to move a narrow tip on a cantilever beam across the surface and to register the forces on the tip [BR86]. The deflection of the tip is measured using tunneling. The tunneling current is used by a feedback system to control the position of the cantilever base so that the tunneling current is constant, an example of force feedback. The accuracy is so high that individual atoms can be registered. A map of the atoms is obtained by moving the base of the cantilever horizontally. The performance of the control system is directly reflected in the image quality and scanning speed. This example is described in additional detail in Chapter 3.

Robotics and Intelligent Machines

The goal of cybernetic engineering, already articulated in the 1940s and even before, has been to implement systems capable of exhibiting highly flexible or intelligent responses to changing circumstances. In 1948, the MIT mathematician Norbert Wiener gave a widely read account of cybernetics [Wie48].

Figure 1.9: Robotic systems. (a) Spirit, one of the two Mars Exploratory Rovers that landed on Mars in January 2004. (b) The Sony AIBO Entertainment Robot, one of the first entertainment robots to be mass marketed. Both robots make use of feedback between sensors, actuators and computation to function in unknown environments. Photographs courtesy of the Jet Propulsion Laboratory and Sony.

A more mathematical treatment of the elements of engineering cybernetics was presented by H. S. Tsien in 1954, driven by problems related to the control of missiles [Tsi54]. Together, these works and others of that time form much of the intellectual basis for modern work in robotics and control.

Two accomplishments that demonstrate the successes of the field are the Mars Exploratory Rovers and entertainment robots such as the Sony AIBO, shown in Figure 1.9. The two Mars Exploratory Rovers, launched by the Jet Propulsion Laboratory (JPL), maneuvered on the surface of Mars for over three years starting in January 2004 and sent back pictures and measurements of their environment. The Sony AIBO robot debuted in June of 1999 and was the first entertainment robot to be mass marketed by a major international corporation. It was particularly noteworthy because of its use of AI technologies that allowed it to act in response to external stimulation and its own judgment. This higher level of feedback is a key element in robotics, where issues such as obstacle avoidance, goal seeking, learning and autonomy are prevalent.

Despite the enormous progress in robotics over the last half century, in many ways the field is still in its infancy. Today's robots still exhibit simple behaviors compared with humans, and their ability to locomote, interpret complex sensory inputs, perform higher-level reasoning and cooperate together in teams is limited. Indeed, much of Wiener's vision for robotics and intelligent machines remains unrealized. While advances are needed in many fields to achieve this vision (including advances in sensing, actuation and energy storage), the opportunity to combine the advances of the AI community in planning, adaptation and learning with the techniques in the control community for modeling, analysis and design of feedback systems presents a renewed path for progress.

Figure 1.10: A multi-tier system for services on the Internet. In the complete system, shown schematically on the left, users request information from a set of computers (tier 1), which in turn collect information from other computers (tiers 2 and 3). The individual server shown on the right has a set of reference parameters set by a (human) system operator, with feedback used to maintain the operation of the system in the presence of uncertainty (based on Hellerstein et al. [HDPT04]).

Networks and Computing Systems

Control of networks is a large research area spanning many topics, including congestion control, routing, data caching and power management. Several features of these control problems make them very challenging. The dominant feature is the extremely large scale of the system; the Internet is probably the largest feedback control system humans have ever built. Another is the decentralized nature of the control problem: decisions must be made quickly and based only on local information. Stability is complicated by the presence of varying time lags, as information about the network state can only be observed or relayed to controllers after a delay, and the effect of a local control action can be felt throughout the network only after a substantial delay. Uncertainty and variation in the network, through network topology, transmission channel characteristics, traffic demand and available resources, may change constantly and unpredictably. Other complicating issues are the diverse traffic characteristics, in terms of arrival statistics at both the packet and flow time scales, and the different requirements for quality of service that the network must support.

Related to the control of networks is the control of the servers that sit on these networks. Computers are key components of the systems of routers, web servers and database servers that are used for communication, electronic commerce, advertising and information storage. While hardware costs for computing have decreased dramatically, the cost of operating these systems has increased due to the difficulty in managing and maintaining these complex, interconnected systems. The situation is similar to the early phases of process control, when feedback was first introduced to control industrial processes. As in process control, there are interesting possibilities for increasing performance and decreasing costs by applying feedback. Several promising uses of feedback in the operation of computer systems are described in the book by Hellerstein et al. [HDPT04].

A typical example of a multi-layer system for e-commerce is shown in Figure 1.10a. The system has several tiers of servers. The edge server accepts incoming requests and routes them to the HTTP server tier, where they are parsed and distributed to the application servers. The processing for different requests can vary widely, and the application servers may also access external servers managed by other organizations.

Control of an individual server in a layer is illustrated in Figure 1.10b. A quantity representing the quality of service or cost of operation, such as response time, throughput, service rate or memory usage, is measured in the computer. The control variables might represent incoming messages accepted, priorities in the operating system or memory allocation. The feedback loop then attempts to maintain the quality-of-service variables within a target range of values.

Economics

The economy is a large dynamical system with many actors: governments, organizations, companies and individuals. Governments control the economy through laws and taxes, the central banks by setting interest rates and companies by setting prices and making investments. Individuals control the economy through purchases, savings and investments. Many efforts have been made to model the system both at the macro level and at the micro level, but this modeling is difficult because the system is strongly influenced by the behaviors of the different actors in the system.

Keynes [Key36] developed a simple model to understand relations among gross national product, investment, consumption and government spending. One of Keynes' observations was that under certain conditions, such as during the 1930s depression, an increase of investment or government spending could lead to a larger increase in the gross national product. This idea was used by several governments to try to alleviate the depression. Keynes' ideas can be captured by a simple model that is discussed in Exercise 2.4.

A perspective on the modeling and control of economic systems can be obtained from the work of some economists who have received the Sveriges Riksbank Prize in Economics in Memory of Alfred Nobel, popularly called the Nobel Prize in Economics. Paul A. Samuelson received the prize in 1970 for the scientific work through which he developed static and dynamic economic theory and actively contributed to raising the level of analysis in economic science. Lawrence Klein received the prize in 1980 for the development of large dynamical models with many parameters that were fitted to historical data [KG55], for example a model of the US economy in the period 1929-1952. Other researchers have modeled other countries and other periods. In 1997 Myron Scholes shared the prize with Robert Merton for a new method to determine the value of derivatives. A key ingredient was a dynamic model for the variation of stock prices that is widely used by banks and investment companies. In 2004 Finn E. Kydland and Edward C. Prescott shared the economics prize for their contributions to dynamic macroeconomics, namely the time consistency of economic policy and the driving forces behind business cycles, a topic that is clearly related to dynamics and control.
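As an illustration of the multiplier effect behind Keynes' observation, a standard textbook version of the model can be written as follows; this is a generic formulation and not necessarily the exact model of Exercise 2.4.

\[
Y_t = C_t + I_t + G_t, \qquad C_t = a\,Y_{t-1}, \qquad 0 < a < 1,
\]

where Y is the gross national product, C consumption, I investment, G government spending and a the propensity to consume. In steady state Y = aY + I + G, so

\[
Y = \frac{I + G}{1 - a},
\]

and an additional unit of investment or government spending increases the gross national product by 1/(1 - a) > 1 units, which is the multiplier effect referred to above.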

Figure 1.11: Supply chain dynamics (after Forrester [For61]). Products flow from the producer to the customer through distributors and retailers, as indicated by the solid lines. The dashed lines show the upward flow of orders. The numbers in the circles represent the delays in the flow of information or materials. Multiple feedback loops are present as each agent tries to maintain the proper inventory levels.

One of the reasons why it is difficult to model economic systems is that there are no conservation laws. A typical example is that the value of a company, as expressed by its stock, can change rapidly and erratically. There are, however, some areas with conservation laws that permit accurate modeling. One example is the flow of products from a manufacturer to a retailer, as illustrated in Figure 1.11. The products are physical quantities that obey a conservation law, and the system can be modeled simply by accounting for the number of products in the different inventories. There are considerable economic benefits in controlling supply chains so that products are available to the customers while minimizing the products that are in storage. The real problems are more complicated than indicated in the figure because there may be many different products, different factories that are geographically distributed and factories that require raw material or sub-assemblies.

Control of supply chains was proposed by Forrester in 1961 [For61]. Considerable economic benefits can be obtained by using models to minimize inventories. Their use accelerated dramatically when information technology was applied to predict sales, keep track of products and enable just-in-time manufacturing. Supply chain management has contributed significantly to the growing success of global distributors.

Advertising on the Internet is an emerging application of control. With network-based advertising it is easy to measure the effect of different marketing strategies quickly. The response of customers can then be modeled, and feedback strategies can be developed.
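The inventory bookkeeping behind a supply chain model like the one in Figure 1.11 is simply a conservation law for each stock of products. The sketch below tracks a single product through a distributor and a retailer; the order policies and all numbers are made up for illustration and are not the model of the figure.

    # Inventory balance for a two-stage supply chain (all numbers illustrative).
    # Each inventory changes only through what flows in minus what flows out.
    import random

    retailer, distributor = 20, 40     # current inventories
    TARGET_R, TARGET_D = 20, 40        # desired inventory levels

    for week in range(20):
        demand = random.randint(3, 7)                  # weekly customer demand
        sold = min(demand, retailer)                   # cannot sell more than stock
        order = max(0, TARGET_R - (retailer - sold))   # retailer reorders to target
        shipped = min(order, distributor)              # distributor ships what it has
        replenished = max(0, TARGET_D - (distributor - shipped))  # from the producer

        retailer = retailer - sold + shipped               # conservation at the retailer
        distributor = distributor - shipped + replenished  # conservation at the distributor

    print("retailer inventory:", retailer, "distributor inventory:", distributor)

Each agent's reorder rule closes a feedback loop around its own inventory, which is where the multiple loops in Figure 1.11 come from.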

Figure 1.12: The wiring diagram of the growth signaling circuitry of the mammalian cell [HW00]. The major pathways that are thought to play a role in cancer are indicated in the diagram. Lines represent interactions between genes and proteins in the cell. Lines ending in arrowheads indicate activation of the given gene or pathway; lines ending in a T-shaped head indicate repression.

Feedback in Nature

Many problems in the natural sciences involve understanding aggregate behavior in complex large-scale systems. This behavior emerges from the interaction of a multitude of simpler systems with intricate patterns of information flow. Representative examples can be found in fields ranging from embryology to seismology. Researchers who specialize in the study of specific complex systems often develop an intuitive emphasis on analyzing the role of feedback (or interconnection) in facilitating and stabilizing aggregate behavior. While sophisticated theories have been developed by domain experts for the analysis of various complex systems, the development of a rigorous methodology that can discover and exploit common features and essential mathematical structure is just beginning to emerge. Advances in science and technology are creating new understanding of the underlying dynamics and the importance of feedback in a wide variety of natural and technological systems. We briefly highlight three application areas here.

Biological Systems. A major theme currently underway in the biology community is the science of reverse (and eventually forward) engineering of biological control networks such as the one shown in Figure 1.12. There are a wide variety of biological phenomena that provide a rich source of examples for control, including gene regulation and signal transduction; hormonal, immunological and cardiovascular feedback mechanisms; muscular control and locomotion; active sensing, vision and proprioception; attention and consciousness; and population dynamics and epidemics.

Each of these (and many more) provides opportunities to figure out what works, how it works, and what we can do to affect it. One interesting feature of biological systems is the frequent use of positive feedback to shape the dynamics of the system. Positive feedback can be used to create switch-like behavior through auto-regulation of genes, and to create oscillations such as those present in the cell cycle, central pattern generators or circadian rhythm.

Ecosystems. In contrast to individual cells and organisms, emergent properties of aggregations and ecosystems inherently reflect selection mechanisms that act on multiple levels, and primarily on scales well below that of the system as a whole. Because ecosystems are complex, multiscale dynamical systems, they provide a broad range of new challenges for the modeling and analysis of feedback systems. Recent experience in applying tools from control and dynamical systems to bacterial networks suggests that much of the complexity of these networks is due to the presence of multiple layers of feedback loops that provide robust functionality to the individual cell. Yet in other instances, events at the cell level benefit the colony at the expense of the individual. Systems-level analysis can be applied to ecosystems with the goal of understanding the robustness of such systems and the extent to which decisions and events affecting individual species contribute to the robustness and/or fragility of the ecosystem as a whole.

Environmental Science. It is now indisputable that human activities have altered the environment on a global scale. Problems of enormous complexity challenge researchers in this area, and first among these is to understand the feedback systems that operate on the global scale. One of the challenges in developing such an understanding is the multiscale nature of the problem, with detailed understanding of the dynamics of microscale phenomena such as microbiological organisms being a necessary component of understanding global phenomena, such as the carbon cycle.

1.4 FEEDBACK PROPERTIES

Feedback is a powerful idea which, as we have seen, is used extensively in natural and technological systems. The principle of feedback is simple: base correcting actions on the difference between desired and actual performance. In engineering, feedback has been rediscovered and patented many times in many different contexts. The use of feedback has often resulted in vast improvements in system capability, and these improvements have sometimes been revolutionary, as discussed above. The reason for this is that feedback has some truly remarkable properties. In this section we will discuss some of the properties of feedback that can be understood intuitively. This intuition will be formalized in the subsequent chapters.

Figure 1.13: A feedback system for controlling the speed of a vehicle. In the block diagram on the left, the speed of the vehicle is measured and compared to the desired speed within the compute block. Based on the difference in the actual and desired speed, the throttle (or brake) is used to modify the force applied to the vehicle by the engine, drivetrain and wheels. The figure on the right shows the response of the control system to a commanded change in speed from 25 m/s to 30 m/s. The three different curves correspond to differing masses of the vehicle, between 1000 and 3000 kg, demonstrating the robustness of the closed loop system to a very large change in the vehicle characteristics.

Robustness to Uncertainty

One of the key uses of feedback is to provide robustness to uncertainty. By measuring the difference between the sensed value of a regulated signal and its desired value, we can supply a corrective action. If the system undergoes some change that affects the regulated signal, then we sense this change and try to force the system back to the desired operating point. This is precisely the effect that Watt exploited in his use of the centrifugal governor on steam engines.

As an example of this principle, consider the simple feedback system shown in Figure 1.13. In this system, the speed of a vehicle is controlled by adjusting the amount of gas flowing to the engine. A simple proportional plus integral feedback is used to make the amount of gas depend on both the error between the current and desired speed, and the integral of that error. The plot on the right shows the results of this feedback for a step change in the desired speed and a variety of different masses for the car, which might result from having a different number of passengers or towing a trailer. Notice that independent of the mass (which varies by a factor of 3!), the steady state speed of the vehicle always approaches the desired speed and achieves that speed within approximately 5 seconds. Thus the performance of the system is robust with respect to this uncertainty.

Another early example of the use of feedback to provide robustness is the negative feedback amplifier. When telephone communications were developed, amplifiers were used to compensate for signal attenuation in long lines. The vacuum tube was a component that could be used to build amplifiers. Distortion caused by the nonlinear characteristics of the tube amplifier together with amplifier drift were obstacles that prevented development of line amplifiers for a long time. A major breakthrough was the invention of the feedback amplifier in 1927 by Harold S. Black, an electrical engineer at the Bell Telephone Laboratories. Black used negative feedback, which reduces the gain but makes the amplifier insensitive to variations in tube characteristics.
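To see quantitatively why this helps, note that if the forward (tube) gain is A and a fraction F of the output is subtracted from the input, the closed loop gain is A/(1 + AF), which is close to 1/F whenever the loop gain AF is large. The short Python sketch below is only an illustration (it is not part of the book's examples), and the numerical values are arbitrary; it shows how insensitive the closed loop gain is to large changes in A.

# Illustration of gain desensitization by negative feedback.
# Closed loop gain G = A / (1 + A*F); for large loop gain A*F, G is close to 1/F.
def closed_loop_gain(A, F):
    return A / (1 + A * F)

F = 0.1                         # feedback fraction (illustrative value)
for A in (1e4, 5e3, 2e4):       # large variations in the forward (tube) gain
    G = closed_loop_gain(A, F)
    print(f"A = {A:8.0f}  ->  closed loop gain = {G:.4f}")
# The forward gain varies by a factor of four, yet the closed loop gain
# stays within about 0.2% of 1/F = 10.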

This invention made it possible to build stable amplifiers with linear characteristics despite the nonlinearities of the vacuum tube amplifier.

Design of Dynamics

Another use of feedback is to change the dynamics of a system. Through feedback, we can alter the behavior of a system to meet the needs of an application: systems that are unstable can be stabilized, systems that are sluggish can be made responsive and systems that have drifting operating points can be held constant. Control theory provides a rich collection of techniques to analyze the stability and dynamic response of complex systems and to place bounds on the behavior of such systems by analyzing the gains of linear and nonlinear operators that describe their components.

An example of the use of control in the design of dynamics comes from the area of flight control. The following quote, from a lecture by Wilbur Wright to the Western Society of Engineers in 1901 [McF53], illustrates the role of control in the development of the airplane:

Men already know how to construct wings or airplanes, which when driven through the air at sufficient speed, will not only sustain the weight of the wings themselves, but also that of the engine, and of the engineer as well. Men also know how to build engines and screws of sufficient lightness and power to drive these planes at sustaining speed... Inability to balance and steer still confronts students of the flying problem.... When this one feature has been worked out, the age of flying will have arrived, for all other difficulties are of minor importance.

The Wright brothers thus realized that control was a key issue to enable flight. They resolved the compromise between stability and maneuverability by building an airplane, the Wright Flyer, that was unstable but maneuverable. The Flyer had a rudder in the front of the airplane, which made the plane very maneuverable. A disadvantage was the necessity for the pilot to keep adjusting the rudder to fly the plane: if the pilot let go of the stick, the plane would crash. Other early aviators tried to build stable airplanes. These would have been easier to fly, but because of their poor maneuverability they could not be brought up into the air. By using their insight and skillful experiments, the Wright brothers made the first successful flight at Kitty Hawk in 1903.

Since it was quite tiresome to fly an unstable aircraft, there was strong motivation to find a mechanism that would stabilize an aircraft. Such a device, invented by Sperry, was based on the concept of feedback. Sperry used a gyro-stabilized pendulum to provide an indication of the vertical. He then arranged a feedback mechanism that would pull the stick to make the plane go up if it was pointing down and vice versa. The Sperry autopilot is the first use of feedback in aeronautical engineering and Sperry won a prize in a competition for the safest airplane

Figure 1.14: Aircraft autopilot system. The 1912 Curtiss (left) used an autopilot to stabilize the pitch of the aircraft. The Sperry Autopilot (right) contained a set of four gyros coupled to a set of air valves that controlled the wing surfaces. The Sperry Autopilot was able to correct for errors in roll, pitch and yaw [Hug93].

in Paris in 1914. Figure 1.14 shows the Curtiss seaplane and the Sperry autopilot. The autopilot is a good example of how feedback can be used to stabilize an unstable system and hence design the dynamics of the aircraft.

One of the other advantages of designing the dynamics of a device is that it allows for increased modularity in the overall system design. By using feedback to create a system whose response matches a desired profile, we can hide the complexity and variability that may be present inside a subsystem. This allows us to create more complex systems by not having to simultaneously tune the response of a large number of interacting components. This was one of the advantages of Black's use of negative feedback in vacuum tube amplifiers: the resulting device had a well-defined linear input/output response that did not depend on the individual characteristics of the vacuum tubes being used.

Higher Levels of Automation

A major trend in the use of feedback is its application to higher levels of situational awareness and decision making. This includes not only traditional logical branching based on system conditions, but optimization, adaptation, learning and even higher levels of abstract reasoning. These problems are in the domain of the artificial intelligence (AI) community, with an increasing role of dynamics, robustness and interconnection in many applications.

An example of this trend is the DARPA Grand Challenge, a series of competitions sponsored by the US government to build vehicles that can autonomously drive themselves in desert and urban environments. Caltech competed in the 2005 and 2007 Grand Challenges using a modified Ford E-350 offroad van, nicknamed Alice. It was fully automated, including electronically-controlled steering, throttle, brakes, transmission and ignition. Its sensing systems included multiple video cameras scanning at 10-30 Hz, several laser ranging units scanning at 10 Hz, and

Figure 1.15: DARPA Grand Challenge. Alice, Team Caltech's entry in the 2005 and 2007 competitions, and its networked control architecture [CFG+6]. The feedback system fuses data from terrain sensors (cameras and laser range finders) to determine a digital elevation map. This map is used to compute the vehicle's potential speed over the terrain and an optimization-based path planner then commands a trajectory for the vehicle to follow. A supervisory control module performs higher level tasks such as handling sensor and actuator failures.

an inertial navigation package capable of providing position and orientation estimates at 2.5 ms temporal resolution. Computational resources included 7 high speed servers connected together through a 1 Gb/s Ethernet switch. A picture of the vehicle is shown in Figure 1.15, along with a block diagram of its control architecture.

The software and hardware infrastructure that was developed enabled the vehicle to traverse long distances at substantial speeds. In testing, Alice drove itself over 500 kilometers in the Mojave Desert of California, with the ability to follow dirt roads and trails (if present) and avoid obstacles along the path. Speeds of over 50 km/hr were obtained in fully autonomous mode. Substantial tuning of the algorithms was done during desert testing, in part due to the lack of systems-level design tools for systems of this level of complexity. Other competitors in the race (including Stanford, which won the competition) used algorithms for adaptive control and learning, increasing the capabilities of their systems in unknown environments. Together, the competitors in the Grand Challenge demonstrated some of the capabilities for the next generation of control systems and highlighted many research directions in control at higher levels of decision making.

Drawbacks of Feedback

While feedback has many advantages, it also has some drawbacks. Chief among these is the possibility for instability if the system is not designed properly. We are all familiar with the effects of positive feedback when the amplification on a microphone is turned up too high in a room. This is an example of a feedback instability, something that we obviously want to avoid. This is tricky because we must not only design the system to be stable under nominal conditions, but to remain stable under all possible perturbations of the dynamics. In addition to the potential for instability, feedback inherently couples different

32 22 CHAPTER 1. INTRODUCTION parts of a system. One common problem is that feedback often injects measurement noise into the system. Measurements must be carefully filtered so that the actuation and process dynamics do not respond to them, while at the same time ensuring that the measurement signal from the sensor is properly coupled into the closed loop dynamics (so that the proper levels of performance are achieved). Another potential drawback of control is the complexity of embedding a control system into a product. While the cost of sensing, computation and actuation has decreased dramatically in the past few decades, the fact remains that control systems are often complicated and hence one must carefully balance the costs and benefits. An early engineering example of this is the use of microprocessor-based feedback systems in automobiles. The use of microprocessors in automotive applications began in the early 197s and was driven by increasingly strict emissions standards, which could only be met through electronic controls. Early systems were expensive and failed more often than desired, leading to frequent customer dissatisfaction. It was only through aggressive improvements in technology that the performance, reliability and cost of these systems allowed them to be used in a transparent fashion. Even today, the complexity of these systems is such that it is difficult for an individual car owner to fix problems. Feedforward Feedback is reactive: there must be an error before corrective actions are taken. However, in some circumstances it is possible to measure a disturbance before it enters the system and this information can be used to take corrective action before the disturbance has influenced the system. The effect of the disturbance is thus reduced by measuring it and generating a control signal that counteracts it. This way of controlling a system is called feedforward. Feedforward is particularly useful to shape the response to command signals because command signals are always available. Since feedforward attempts to match two signals, it requires good process models; otherwise the corrections may have the wrong size or may be badly timed. The ideas of feedback and feedforward are very general and appear in many different fields. In economics, feedback and feedforward are analogous to a marketbased economy versus a planned economy. In business a feedforward strategy corresponds to running a company based on extensive strategic planning while a feedback strategy corresponds to a reactive approach. Experience indicates that it is often advantageous to combine feedback and feedforward. Feedforward is particularly useful when disturbances can be measured or predicted. A typical example is in chemical process control where disturbances in one process may be due to other processes upstream. The correct balance of the approaches requires insight and understanding of their properties.

Positive Feedback

In most of this text, we will consider the role of negative feedback, in which we attempt to regulate the system by reacting to disturbances in a way that decreases the effect of those disturbances. In some systems, particularly biological systems, positive feedback can play an important role. In a system with positive feedback, the increase in some variable or signal leads to a situation in which that quantity is further increased through its dynamics. This has a destabilizing effect and is usually accompanied by a saturation that limits the growth of the quantity. Although often considered undesirable, this behavior is used in biological (and engineering) systems to obtain a very fast response to a condition or signal.

One example of the use of positive feedback is to create switching behavior, in which a system maintains a given state until some input has crossed a threshold. Hysteresis is often present so that noisy inputs near the threshold do not cause the system to jitter. This type of behavior is called bistability and is often associated with memory devices.

1.5 SIMPLE FORMS OF FEEDBACK

The idea of feedback to make corrective actions based on the difference between the desired and actual values of a quantity can be implemented in many different ways. The benefits of feedback can be obtained by very simple feedback laws such as on-off control, proportional control and PID control. In this section we provide a brief preview of some of the topics that will be studied more formally in the remainder of the text.

On-off Control

A simple feedback mechanism can be described as follows:

    u = \begin{cases} u_{max} & \text{if } e > 0 \\ u_{min} & \text{if } e < 0, \end{cases}    (1.1)

where e = r - y is the difference between the reference signal r and the output of the system y, and u is the actuation command. Figure 1.16a shows the relation between error and control. This control law implies that maximum corrective action is always used.

The feedback in equation (1.1) is called on-off control. One of its chief advantages is that it is simple and there are no parameters to choose. On-off control often succeeds in keeping the process variable close to the reference, such as the use of a simple thermostat to maintain the temperature of a room. It typically results in a system where the controlled variables oscillate, which is often acceptable if the oscillation is sufficiently small.

Notice that in equation (1.1) the control variable is not defined when the error is zero. It is common to make modifications either by introducing hysteresis or a dead zone (see Figure 1.16b and 1.16c).
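In code, the on-off law and its hysteresis modification are only a few lines. The fragment below is a sketch for illustration only (it is not taken from the book's companion software), and the hysteresis width and saturation levels are arbitrary placeholder values.

def on_off(e, u_max=1.0, u_min=-1.0):
    # Ideal on-off control, equation (1.1).  The law is undefined at e = 0;
    # here we arbitrarily return 0 in that case.
    if e > 0:
        return u_max
    if e < 0:
        return u_min
    return 0.0

def on_off_with_hysteresis(e, u_prev, width=0.1, u_max=1.0, u_min=-1.0):
    # Modified on-off control: switch only when the error leaves the band
    # [-width, width]; otherwise hold the previous control value.  This keeps
    # measurement noise near e = 0 from making the output chatter.
    if e > width:
        return u_max
    if e < -width:
        return u_min
    return u_prev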

Figure 1.16: Input-output characteristics of on-off controllers. Each plot shows the input on the horizontal axis and the corresponding output on the vertical axis. Ideal on-off control is shown in (a), with modifications for a dead zone (b) or hysteresis (c). Note that for on-off control with hysteresis, the output depends on the value of past inputs.

PID Control

The reason why on-off control often gives rise to oscillations is that the system overreacts, since a small change in the error will make the actuated variable change over the full range. This effect is avoided in proportional control, where the characteristic of the controller is proportional to the control error for small errors. This can be achieved with the control law

    u = \begin{cases} u_{max} & \text{if } e \ge e_{max} \\ k_p e & \text{if } e_{min} < e < e_{max} \\ u_{min} & \text{if } e \le e_{min}, \end{cases}    (1.2)

where k_p is the controller gain, e_{min} = u_{min}/k_p and e_{max} = u_{max}/k_p. The interval (e_{min}, e_{max}) is called the proportional band because the behavior of the controller is linear when the error is in this interval:

    u = k_p (r - y) = k_p e    \quad \text{if } e_{min} \le e \le e_{max}.    (1.3)

While a vast improvement over on-off control, proportional control has the drawback that the process variable often deviates from its reference value. In particular, if some level of control signal is required for the system to maintain a desired value, then we must have e \ne 0 in order to generate the requisite input. This can be avoided by making the control action proportional to the integral of the error:

    u(t) = k_i \int_0^t e(\tau)\, d\tau.    (1.4)

This control form is called integral control and k_i is the integral gain. It can be shown through simple arguments that a controller with integral action will have zero steady state error (Exercise 1.6). The catch is that there may not always be a steady state because the system may be oscillating.
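The effect of adding integral action can be seen in a few lines of simulation. The sketch below is illustrative only: the first order process dy/dt = -y + u and the gains k_p = 2 and k_i = 1 are arbitrary choices, not values used elsewhere in the text. The proportional controller settles with a residual error, while the controller with integral action drives the error toward zero.

def simulate(controller, r=1.0, dt=0.01, T=20.0):
    # Simulate the process dy/dt = -y + u in closed loop with the given
    # feedback law, using simple Euler steps, and return the final error.
    y, integral = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = r - y
        integral += e * dt
        u = controller(e, integral)
        y += dt * (-y + u)
    return r - y

def proportional(e, integral, kp=2.0):
    return kp * e                      # equation (1.3), inside the proportional band

def proportional_integral(e, integral, kp=2.0, ki=1.0):
    return kp * e + ki * integral      # proportional plus integral action

print("steady state error, P control: ", simulate(proportional))
print("steady state error, PI control:", simulate(proportional_integral))
# P control leaves an offset of about r/(1 + kp) = 0.33; PI control does not.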

Figure 1.17: Action of a PID controller. At time t, the proportional term depends on the instantaneous value of the error. The integral portion of the feedback is based on the integral of the error up to time t (shaded portion). The derivative term provides an estimate of the growth or decay of the error over time by looking at the rate of change of the error. T_d represents the approximate amount of time in which the error is projected forward (see text).

An additional refinement is to provide the controller with an anticipative ability by using a prediction of the error. A simple prediction is given by the linear extrapolation

    e(t + T_d) \approx e(t) + T_d \frac{de(t)}{dt},

which predicts the error T_d time units ahead. Combining proportional, integral and derivative control, we obtain a controller that can be expressed mathematically as follows:

    u(t) = k_p e(t) + k_i \int_0^t e(\tau)\, d\tau + k_d \frac{de(t)}{dt}.    (1.5)

The control action is thus a sum of three terms: the past as represented by the integral of the error, the present as represented by the proportional term and the future as represented by a linear extrapolation of the error (the derivative term). This form of feedback is called a proportional-integral-derivative (PID) controller and its action is illustrated in Figure 1.17.

The PID controller is very useful and is capable of solving a wide range of control problems. Over 95% of all industrial control problems are solved by PID control, although many of these controllers are actually PI controllers because derivative action is often not included [DM2]. There are also more advanced controllers, which differ from the PID controller by using more sophisticated methods for prediction.

1.6 FURTHER READING

The material in this section draws heavily from the report of the Panel on Future Directions on Control, Dynamics and Systems [Mur3]. Several additional papers and reports have highlighted successes of control [NS99] and new vistas

36 26 CHAPTER 1. INTRODUCTION in control [Bro, Kum1]. The early development of control is described by Mayr [May7] and the books by Bennett [Ben86a, Ben86b], which cover the period A fascinating examination of some of the early history of control in the United States has been written by Mindell [Min2]. A popular book that describes many control concepts across a wide range of disciplines is Out of Control by Kelly [Kel94]. There are many textbooks available that describe control systems in the context of specific disciplines. For engineers, the textbooks by Franklin, Powell and Emami-Naeini [FPEN5], Dorf and Bishop [DB4], Kuo and Golnaraghi [KG2], and Seborg, Edgar and Mellichamp [SEM3] are widely used. More mathematically oriented treatments of control theory include Sontag [Son98] and Lewis [Lew3]. The book by Hellerstein et al. [HDPT4] provides a description of the use of feedback control in computing systems. A number of books look at the role of dynamics and feedback in biological systems, including Milhorn [Mil66] (now out of print), J. D. Murray [Mur4] and Ellner and Guckenheimer [EG5]. The book by Fradkov [Fra7] and tutorial article by Bechhoefer [Bec5] cover many specific topics of interest to the physics community. EXERCISES 1.1 Perform the following experiment and explain your results: Holding your head still, move your right or left hand back and forth in front of your face, following it with your eyes. Record how quickly you can move your hand before you begin to lose track of your hand. Now hold your hand still and move your head back and forth, once again recording how quickly you can move before loosing track. 1.2 Identify 5 feedback systems that you encounter in your everyday environment. For each system, identify the sensing mechanism, actuation mechanism and control law. Describe the uncertainty with respect to which the feedback system provides robustness and/or the dynamics that are changed through the use of feedback. 1.3 Balance yourself on one foot with your eyes closed for 15 seconds. Using Figure 1.3 as a guide, describe the control system responsible for keeping you from falling down. Note that the controller will differ from the diagram (unless you are an android reading this in the far future). 1.4 Make a schematic picture of the system for supplying milk from the cow to your table. Discuss the impact of refrigerated storage. 1.5 Download the MATLAB code used to produce the simulations for the cruise control system in Figure 1.13 from the companion web site. Using trial and error, change the parameters of the control law so that the overshoot in the speed is not more than 1 m/s for a vehicle with mass m = 1 kg.

37 1.6. FURTHER READING We say that a system with a constant input reaches steady state if the output of the system approaches a constant value as time increases. Show that a controller with integral action, such as those given in equations (1.4) and (1.5), gives zero error if the closed loop system reaches steady state. 1.7 Search for the term voltage clamp on the Internet and explore why it is so advantageous to use feedback to measure the ion current in cells. You may also enjoy reading about the Nobel Prizes of Hodgkin and Huxley 1963 and Neher and Sakmann (see Search for the term force feedback and explore its use in haptics and sensing.


39 Chapter Two System Modeling... I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, How many arbitrary parameters did you use for your calculations? I thought for a moment about our cut-off procedures and said, Four. He said, I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk. Freeman Dyson on describing the predictions of his model for meson-proton scattering to Enrico Fermi in 1953 [Dys4]. A model is a precise representation of a system s dynamics used to answer questions via analysis and simulation. The model we choose depends on the questions we wish to answer, and so there may be multiple models for a single dynamical system, with different levels of fidelity depending on the phenomena of interest. In this chapter we provide an introduction to the concept of modeling, and provide some basic material on two specific methods that are commonly used in feedback and control systems: differential equations and difference equations. 2.1 MODELING CONCEPTS A model is a mathematical representation of a physical, biological or information system. Models allow us to reason about a system and make predictions about how a system will behave. In this text, we will mainly be interested in models of dynamical systems describing the input/output behavior of systems and we will often work in so-called state space form. Roughly speaking, a dynamical system is one in which the effects of actions do not occur immediately. For example, the velocity of a car does not change immediately when the gas pedal is pushed nor does the temperature in a room rise instantaneously when a heater is switched on. Similarly, a headache does not vanish right after an aspirin is taken, requiring time to take effect. In business systems, increased funding for a development project does not increase revenues in the short term, although it may do so in the long term (if it was a good investment). All of these are examples of dynamical systems, in which the behavior of the system evolves with time. In the remainder of this section we provide an overview of some of the key concepts in modeling. The mathematical details introduced here are explored more fully in the remainder of the chapter.

Figure 2.1: Spring-mass system, with nonlinear damping. The position of the mass is denoted by q, with q = 0 corresponding to the rest position of the spring. The forces on the mass are generated by a linear spring with spring constant k and a damper with force dependent on the velocity q̇.

The Heritage of Mechanics

The study of dynamics originated in the attempts to describe planetary motion. The basis was detailed observations of the planets by Tycho Brahe and the results of Kepler, who found empirically that the orbits of the planets could be well described by ellipses. Newton embarked on an ambitious program to try to explain why the planets move in ellipses and he found that the motion could be explained by his law of gravitation and the formula that force equals mass times acceleration. In the process he also invented calculus and differential equations.

One of the triumphs of Newton's mechanics was the observation that the motion of the planets could be predicted based on the current positions and velocities of all planets. It was not necessary to know the past motion. The state of a dynamical system is a collection of variables that characterizes the motion of a system completely for the purpose of predicting future motion. For a system of planets the state is simply the positions and the velocities of the planets. We call the set of all possible states the state space.

A common class of mathematical models for dynamical systems is ordinary differential equations (ODEs). In mechanics, one of the simplest such differential equations is that of a spring-mass system with damping:

    m\ddot{q} + c(\dot{q}) + kq = 0.    (2.1)

This system is illustrated in Figure 2.1. The variable q ∈ ℝ represents the position of the mass m with respect to its rest position. We use the notation q̇ to denote the derivative of q with respect to time (i.e., the velocity of the mass) and q̈ to represent the second derivative (acceleration). The spring is assumed to satisfy Hooke's law, which says that the force is proportional to the displacement. The friction element (damper) is taken as a nonlinear function, c(q̇), which can model effects such as stiction and viscous drag. The position q and velocity q̇ represent the instantaneous state of the system. We say that this system is a second order system since the dynamics depend on the second derivative of q.

The evolution of the position and velocity can be described using either a time plot or a phase plot, both of which are shown in Figure 2.2. The time plot, on the

41 2.1. MODELING CONCEPTS 31 Position [m], velocity [m/s] Position Velocity Time [s] Velocity [m/s] Position [m] Figure 2.2: Illustration of a state model. A state model gives the rate of change of the state as a function of the state. The plot on the left shows the evolution of the state as a function of time. The plot on the right shows the evolution of the states relative to each other, with the velocity of the state denoted by arrows. left, shows the values of the individual states as a function of time. The phase plot, on the right, shows the vector field for the system, which gives the state velocity (represented as an arrow) at every point in the state space. In addition, we have superimposed the traces of some of the states from different conditions. The phase plot gives a strong intuitive representation of the equation as a vector field or a flow. While systems of second order (two states) can be represented in this way, it is unfortunately difficult to visualize equations of higher order using this approach. The differential equation (2.1) is called an autonomous system because there are no external influences. Such a model is natural to use for celestial mechanics, because it is difficult to influence the motion of the planets. In many examples, it is useful to model the effects of external disturbances or controlled forces on the system. One way to capture this is to replace equation (2.1) by m q+c( q)+kq = u (2.2) where u represents the effect of external inputs. The model (2.2) is called a forced or controlled differential equation. The model implies that the rate of change of the state can be influenced by the input, u(t). Adding the input makes the model richer and allows new questions to be posed. For example, we can examine what influence external disturbances have on the trajectories of a system. Or, in the case when the input variable is something that can be modulated in a controlled way, we can analyze whether it is possible to steer the system from one point in the state space to another through proper choice of the input. The Heritage of Electrical Engineering A different view of dynamics emerged from electrical engineering, where the design of electronic amplifiers led to a focus on input/output behavior. A system was considered as a device that transformed inputs to outputs, as illustrated in Figure 2.3. Conceptually an input/output model can be viewed as a giant table

42 32 CHAPTER 2. SYSTEM MODELING Input System Output Figure 2.3: Illustration of the input/output view of a dynamical system. The figure on the left shows a detailed circuit diagram for an electronic amplifier; the one of the right its representation as a block diagram. of inputs and outputs. Given an input signal u(t) over some interval of time, the model should produce the resulting output y(t). The input/output framework is used in many engineering systems since it allows us to decompose a problem into individual components, connected through their inputs and outputs. Thus, we can take a complicated system such as a radio or a television and break it down into manageable pieces, such as the receiver, demodulator, amplifier and speakers. Each of these pieces has a set of inputs and outputs and, through proper design, these components can be interconnected to form the entire system. The input/output view is particularly useful for the special class of linear timeinvariant systems. This term will be defined more carefully later in this chapter, but roughly speaking a system is linear if the superposition (addition) of two inputs yields an output which is the sum of the outputs that would correspond to individual inputs being applied separately. A system is time-invariant if the output response for a given input does not depend on when that input is applied. Many electrical engineering systems can be modeled by linear, time-invariant systems and hence a large number of tools have been developed to analyze them. One such tool is the step response, which describes the relationship between an input that changes from zero to a constant value abruptly (a step input) and the corresponding output. As we shall see in the latter part of the text, the step response is very useful in characterizing the performance of a dynamical system and it is often used to specify the desired dynamics. A sample step response is shown in Figure 2.4a. Another possibility to describe a linear, time-invariant system is to represent the system by its response to sinusoidal input signals. This is called the frequency response and a rich, powerful theory with many concepts and strong, useful results has emerged. The results are based on the theory of complex variables and Laplace transforms. The basic idea behind frequency response is that we can completely

Figure 2.4: Input/output response of a linear system. The step response (a) shows the output of the system due to an input that changes from 0 to 1 at time t = 5 s. The frequency response (b) shows the amplitude gain and phase change due to a sinusoidal input at different frequencies.

characterize the behavior of a system by its steady state response to sinusoidal inputs. Roughly speaking, this is done by decomposing any arbitrary signal into a linear combination of sinusoids (e.g., by using the Fourier transform) and then using linearity to compute the output by combining the response to the individual frequencies. A sample frequency response is shown in Figure 2.4b.

The input/output view lends itself naturally to experimental determination of system dynamics, where a system is characterized by recording its response to a particular input, e.g. a step or a sweep across a range of frequencies.

The Control View

When control theory emerged as a discipline in the 1940s, the approach to dynamics was strongly influenced by the electrical engineering (input/output) view. A second wave of developments in control, starting in the late 1950s, was inspired by mechanics, where the state space perspective was used. The emergence of space flight is a typical example, where precise control of the orbit of a spacecraft is essential. These two points of view gradually merged into what is today the state space representation of input/output systems.

The development of state space models involved modifying the models from mechanics to include external actuators and sensors, and utilizing more general forms of equations. In control, the model given by equation (2.2) was replaced by

    \frac{dx}{dt} = f(x, u), \qquad y = h(x, u),    (2.3)

where x is a vector of state variables, u is a vector of control signals, and y is a vector of measurements. The term dx/dt represents the derivative of x with respect to time, now considered as a vector, and f and h are mappings of their arguments to vectors of the appropriate dimension. For mechanical systems, the state consists of

44 34 CHAPTER 2. SYSTEM MODELING the position and velocity of the system, so that x = (q, q) in the case of a damped spring-mass system. Note that in the control formulation we model dynamics as first order differential equations, but we will see that this can capture the dynamics of higher order differential equations by appropriate definition of the state and the maps f and h. Adding inputs and outputs has added to the richness of the classical problems and led to many new concepts. For example it is natural to ask if possible states x can be reached with the proper choice of u (reachability) and if the measurement y contains enough information to reconstruct the state (observability). These topics will be addressed in greater detail in Chapters 6 and 7. A final development in building the control point of view was the emergence of disturbance and model uncertainty as critical elements in the theory. The simple way of modeling disturbances as deterministic signals like steps and sinusoids has the drawback that such signals can be predicted precisely. A more realistic approach is to model disturbances as random signals. This viewpoint gives a natural connection between prediction and control. The dual views of input/output representations and state space representations are particularly useful when modeling uncertainty, since state models are convenient to describe a nominal model but uncertainties are easier to describe using input/output models (often via a frequency response description). Uncertainty will be a constant theme throughout the text and will be studied in particular detail in Chapter 12. An interesting experience in design of control systems is that feedback systems can often be analyzed and designed based on comparatively simple models. The reason for this is the inherent robustness of feedback systems. However, other uses of models may require more complexity and more accuracy. One example is feedforward control strategies, where one uses a model to precompute the inputs that will cause the system to respond in a certain way. Another area is in system validation, where one wishes to verify that the detailed response of the system performs as it was designed. Because of these different uses of models, it is common to use a hierarchy of models having different complexity and fidelity. Multi-Domain Modeling Modeling is an essential element of many disciplines, but traditions and methods from individual disciplines can be different from each other, as illustrated by the previous discussion of mechanical and electrical engineering. A difficulty in systems engineering is that it is frequently necessary to deal with heterogeneous systems from many different domains, including chemical, electrical, mechanical and information systems. To model such multi-domain systems, we start by partitioning a system into smaller subsystems. Each subsystem is represented by balance equations for mass, energy and momentum, or by appropriate descriptions of the information processing in the subsystem. The behavior at the interfaces is captured by describing how the variables of the subsystem behave when the subsystems are interconnected.

These interfaces act by constraining variables within the individual subsystems to be equal (such as mass, energy or momentum fluxes). The complete model is then obtained by combining the descriptions of the subsystems and the interfaces.

Using this methodology it is possible to build up libraries of subsystems that correspond to physical, chemical and informational components. The procedure mimics the engineering approach where systems are built from subsystems that are themselves built from smaller components. As experience is gained, the components and their interfaces can be standardized and collected in model libraries. In practice, it takes several iterations to obtain a good library that can be reused for many applications.

State models or ordinary differential equations are not suitable for component-based modeling of this form because states may disappear when components are connected. This implies that the internal description of a component may change when it is connected to other components. As an illustration we consider two capacitors in an electrical circuit. Each capacitor has a state corresponding to the voltage across the capacitors, but one of the states will disappear if the capacitors are connected in parallel. A similar situation happens with two rotating inertias, each of which is individually modeled using the angle of rotation and the angular velocity. Two states will disappear when the inertias are joined by a rigid shaft.

This difficulty can be avoided by replacing differential equations by differential algebraic equations, which have the form

    F(z, \dot{z}) = 0,

where z ∈ ℝⁿ. A simple special case is

    \dot{x} = f(x, y), \qquad g(x, y) = 0,    (2.4)

where z = (x, y) and F = (\dot{x} - f(x, y), g(x, y)). The key property is that the derivative ż is not given explicitly and there may be pure algebraic relations between the components of the vector z.

The model (2.4) captures the examples of the parallel capacitors and the linked rotating inertias. For example, when two capacitors are connected we simply add the algebraic equation expressing that the voltages across the capacitors are the same.

Modelica is a language that has been developed to support component-based modeling. Differential algebraic equations are used as the basic description and object-oriented programming is used to structure the models. Modelica is used to model the dynamics of technical systems in domains such as mechanical, electrical, thermal, hydraulic, thermo-fluid and control subsystems. Modelica is intended to serve as a standard format so that models arising in different domains can be exchanged between tools and users. A large set of free and commercial Modelica component libraries are available and are used by a growing number of people in industry, research and academia. For further information about Modelica, see the Modelica Association's web site.

2.2 STATE SPACE MODELS

In this section we introduce the two primary forms of models that we use in this text: differential equations and difference equations. Both make use of the notions of state, inputs, outputs and dynamics to describe the behavior of a system.

Ordinary Differential Equations

The state of a system is a collection of variables that summarize the past of a system for the purpose of predicting the future. For a physical system the state is composed of the variables required to account for storage of mass, momentum and energy. A key issue in modeling is to decide how accurately this storage has to be represented. The state variables are gathered in a vector, x ∈ ℝⁿ, called the state vector. The control variables are represented by another vector u ∈ ℝᵖ and the measured signal by the vector y ∈ ℝ^q. A system can then be represented by the differential equation

    \frac{dx}{dt} = f(x, u), \qquad y = h(x, u),    (2.5)

where f : ℝⁿ × ℝᵖ → ℝⁿ and h : ℝⁿ × ℝᵖ → ℝ^q are smooth mappings. We call a model of this form a state space model. The dimension of the state vector is called the order of the system. The system (2.5) is called time-invariant because the functions f and h do not depend explicitly on time t; there are more general time-varying systems where the functions do depend on time. The model consists of two functions: the function f gives the rate of change of the state vector as a function of state x and control u, and the function h gives the measured values as functions of state x and control u.

A system is called a linear state space system if the functions f and h are linear in x and u. A linear state space system can thus be represented by

    \frac{dx}{dt} = Ax + Bu, \qquad y = Cx + Du,    (2.6)

where A, B, C and D are constant matrices. Such a system is said to be linear and time-invariant, or LTI for short. The matrix A is called the dynamics matrix, the matrix B is called the control matrix, the matrix C is called the sensor matrix and the matrix D is called the direct term. Frequently systems will not have a direct term, indicating that the control signal does not influence the output directly.

A different form of linear differential equations, generalizing the second order dynamics from mechanics, is an equation of the form

    \frac{d^n y}{dt^n} + a_1 \frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_n y = u,    (2.7)

where t is the independent (time) variable, y(t) is the dependent (output) variable, and u(t) is the input. The notation d^k y/dt^k is used to denote the kth derivative of y with respect to t, sometimes also written as y^{(k)}. The system (2.7) is said to be an nth order system.
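To make the linear form (2.6) concrete, the following sketch builds the matrices for a damped spring-mass system written in state space form and steps the model forward in time. It is illustrative only (it is not the text's companion code), and the parameter values m = 1, c = 0.5 and k = 1 are arbitrary.

import numpy as np

# Linear state space model dx/dt = A x + B u, y = C x + D u (equation (2.6)),
# here for a damped spring-mass system with state x = (position, velocity).
m, c, k = 1.0, 0.5, 1.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt = 0.01
x = np.array([[0.0], [0.0]])            # start at rest
u = np.array([[1.0]])                   # constant (step) input
for _ in range(1000):                   # simulate 10 seconds
    x = x + dt * (A @ x + B @ u)        # simple forward Euler update
    y = C @ x + D @ u
print("output after 10 s:", y[0, 0])    # approaches the static gain 1/k = 1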

The system (2.7) can be converted into state space form by defining

    x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
      = \begin{pmatrix} d^{n-1}y/dt^{n-1} \\ d^{n-2}y/dt^{n-2} \\ \vdots \\ dy/dt \\ y \end{pmatrix},

and the state space equations become

    \frac{d}{dt} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix}
      = \begin{pmatrix} -a_1 x_1 - \cdots - a_n x_n \\ x_1 \\ \vdots \\ x_{n-2} \\ x_{n-1} \end{pmatrix}
        + \begin{pmatrix} u \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix},
    \qquad y = x_n.

With the appropriate definition of A, B, C and D, this equation is in linear state space form.

An even more general system is obtained by letting the output be a linear combination of the states of the system, i.e.

    y = b_1 x_1 + b_2 x_2 + \cdots + b_n x_n + d u.

This system can be modeled in state space as

    \frac{dx}{dt} = \begin{pmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} x
      + \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} u,
    \qquad y = \begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix} x + d u.    (2.8)

This particular form of a linear state space system is called reachable canonical form and will be studied in more detail in later chapters.

Example 2.1 Balance systems
An example of a class of systems that can be modeled using ordinary differential equations is the class of balance systems. A balance system is a mechanical system in which the center of mass is balanced above a pivot point. Some common examples of balance systems are shown in Figure 2.5. The Segway human transportation system (Figure 2.5a) uses a motorized platform to stabilize a person standing on top of it. When the rider leans forward, the vehicle propels itself along the ground, but maintains its upright position. Another example is a rocket (Figure 2.5b), in which a gimbaled nozzle at the bottom of the rocket is used to stabilize the body of the rocket above it. Other examples of balance systems include humans or other animals standing upright or a person balancing a stick on

Figure 2.5: Balance systems. (a) Segway human transportation system, (b) Saturn rocket and (c) inverted pendulum on a cart. Each of these examples uses forces at the bottom of the system to keep it upright.

their hand. Balance systems are a generalization of the spring-mass system we saw earlier. We can write the dynamics for a mechanical system in the general form

    M(q)\ddot{q} + C(q, \dot{q}) + K(q) = B(q)u,

where M(q) is the inertia matrix for the system, C(q, q̇) represents the Coriolis forces as well as the damping, K(q) gives the forces due to potential energy and B(q) describes how the external applied forces couple into the dynamics. The specific form of the equations can be derived using Newtonian mechanics. Note that each of the terms depends on the configuration of the system q and these terms are often nonlinear in the configuration variables.

Figure 2.5c shows a simplified diagram for a balance system. To model this system, we choose state variables that represent the position and velocity of the base of the system, p and ṗ, and the angle and angular rate of the structure above the base, θ and θ̇. We let F represent the force applied at the base of the system, assumed to be in the horizontal direction (aligned with p), and choose the position and angle of the system as outputs. With this set of definitions, the dynamics of the system can be computed using Newtonian mechanics and have the form

    \begin{pmatrix} M + m & -ml\cos\theta \\ -ml\cos\theta & J + ml^2 \end{pmatrix}
    \begin{pmatrix} \ddot{p} \\ \ddot{\theta} \end{pmatrix}
    + \begin{pmatrix} c\dot{p} + ml\sin\theta\,\dot{\theta}^2 \\ \gamma\dot{\theta} - mgl\sin\theta \end{pmatrix}
    = \begin{pmatrix} F \\ 0 \end{pmatrix},    (2.9)

where M is the mass of the base, m and J are the mass and moment of inertia of the system to be balanced, l is the distance from the base to the center of mass of the balanced body, c and γ are coefficients of viscous friction, and g is the acceleration due to gravity.

We can rewrite the dynamics of the system in state space form by defining the state as x = (p, θ, ṗ, θ̇), the input as u = F and the output as y = (p, θ). If we

define the total mass and total inertia as

    M_t = M + m, \qquad J_t = J + ml^2,

the equations of motion then become

    \frac{d}{dt} \begin{pmatrix} p \\ \theta \\ \dot{p} \\ \dot{\theta} \end{pmatrix}
    = \begin{pmatrix}
        \dot{p} \\
        \dot{\theta} \\
        \dfrac{-ml\,s_\theta\,\dot{\theta}^2 + mg(ml^2/J_t)\,s_\theta c_\theta - c\dot{p} - (\gamma/J_t)\,lm\,c_\theta\,\dot{\theta} + u}{M_t - m(ml^2/J_t)c_\theta^2} \\
        \dfrac{-ml^2\,s_\theta c_\theta\,\dot{\theta}^2 + M_t g l\,s_\theta - c l c_\theta\,\dot{p} - \gamma(M_t/m)\,\dot{\theta} + l c_\theta\,u}{J_t(M_t/m) - m(l c_\theta)^2}
      \end{pmatrix},
    \qquad y = \begin{pmatrix} p \\ \theta \end{pmatrix},

where we have used the shorthand c_θ = cos θ and s_θ = sin θ.

In many cases, the angle θ will be very close to 0 and hence we can use the approximations sin θ ≈ θ and cos θ ≈ 1. Furthermore, if θ̇ is small, we can ignore quadratic and higher terms in θ̇. Substituting these approximations into our equations, we see that we are left with a linear state space equation

    \frac{d}{dt} \begin{pmatrix} p \\ \theta \\ \dot{p} \\ \dot{\theta} \end{pmatrix}
    = \begin{pmatrix}
        0 & 0 & 1 & 0 \\
        0 & 0 & 0 & 1 \\
        0 & m^2 l^2 g/\mu & -c J_t/\mu & -\gamma l m/\mu \\
        0 & M_t m g l/\mu & -c l m/\mu & -\gamma M_t/\mu
      \end{pmatrix}
      \begin{pmatrix} p \\ \theta \\ \dot{p} \\ \dot{\theta} \end{pmatrix}
      + \begin{pmatrix} 0 \\ 0 \\ J_t/\mu \\ l m/\mu \end{pmatrix} u,
    \qquad
    y = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} x,

where μ = M_t J_t - m² l².

Example 2.2 Inverted pendulum
A variation of this example is one in which the location of the base, p, does not need to be controlled. This happens, for example, if we are only interested in stabilizing a rocket's upright orientation, without worrying about the location of the base of the rocket. The dynamics of this simplified system are given by

    \frac{d}{dt} \begin{pmatrix} \theta \\ \dot{\theta} \end{pmatrix}
    = \begin{pmatrix} \dot{\theta} \\ \dfrac{mgl}{J_t}\sin\theta - \dfrac{\gamma}{J_t}\dot{\theta} + \dfrac{l}{J_t}\cos\theta\, u \end{pmatrix},
    \qquad y = \theta,    (2.10)

where γ is the coefficient of rotational friction, J_t = J + ml² and u is the force applied at the base. This system is referred to as an inverted pendulum.
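As a quick numerical check of the model (this is not code from the text, and the parameter values m = 1 kg, l = 0.5 m, γ = 0.1 and J = 0 are arbitrary), the sketch below integrates equation (2.10) with u = 0 from a small initial angle. The upright equilibrium is unstable, so the small initial tilt grows and the pendulum falls away from vertical.

import numpy as np

m, g, l, gamma, J = 1.0, 9.8, 0.5, 0.1, 0.0   # illustrative parameter values
Jt = J + m * l**2                             # total inertia J_t = J + m l^2

def f(x, u=0.0):
    # Right hand side of equation (2.10) with state x = (theta, thetadot).
    theta, thetadot = x
    return np.array([
        thetadot,
        (m * g * l / Jt) * np.sin(theta) - (gamma / Jt) * thetadot
        + (l / Jt) * np.cos(theta) * u,
    ])

dt = 0.001
x = np.array([0.01, 0.0])              # start 0.01 rad from upright, at rest, u = 0
peak = 0.0
for _ in range(3000):                  # simulate 3 seconds with Euler steps
    x = x + dt * f(x)
    peak = max(peak, abs(x[0]))
print("largest angle reached in 3 s [rad]:", peak)
# The tiny initial tilt grows by orders of magnitude: the pendulum falls over.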

Difference Equations

In some circumstances, it is more natural to describe the evolution of a system at discrete instants of time rather than continuously in time. If we refer to each of these times by an integer k = 0, 1, 2, ..., then we can ask how the state of the system changes for each k. Just as in the case of differential equations, we define the state to be those sets of variables that summarize the past of the system for the purpose of predicting its future. Systems described in this manner are referred to as discrete time systems.

The evolution of a discrete time system can be written in the form

    x[k+1] = f(x[k], u[k]), \qquad y[k] = h(x[k], u[k]),    (2.11)

where x[k] ∈ ℝⁿ is the state of the system at time k (an integer), u[k] ∈ ℝᵖ is the input and y[k] ∈ ℝ^q is the output. As before, f and h are smooth mappings of the appropriate dimension. We call equation (2.11) a difference equation since it tells us how x[k+1] differs from x[k]. The state x[k] can either be a scalar or a vector valued quantity; in the case of the latter we write x_j[k] for the value of the jth state at time k.

Just as in the case of differential equations, it will often be the case that the equations are linear in the state and input, in which case we can write the system as

    x[k+1] = Ax[k] + Bu[k], \qquad y[k] = Cx[k] + Du[k].

As before, we refer to the matrices A, B, C and D as the dynamics matrix, the control matrix, the sensor matrix and the direct term. The solution of a linear difference equation with initial condition x[0] and input u[0], ..., u[t] is given by

    x[k] = A^k x[0] + \sum_{j=0}^{k-1} A^{k-j-1} B u[j],
    \qquad
    y[k] = C A^k x[0] + \sum_{j=0}^{k-1} C A^{k-j-1} B u[j] + D u[k],
    \qquad k > 0.    (2.12)

Difference equations are also useful as an approximation of differential equations, as we will show later.

Example 2.3 Predator-prey
As an example of a discrete time system, consider a simple model for a predator-prey system. The predator-prey problem refers to an ecological system in which we have two species, one of which feeds on the other. This type of system has been studied for decades and is known to exhibit interesting dynamics. Figure 2.6 shows a historical record taken over 90 years in a population of lynxes versus hares [Mac37]. As can be seen from the graph, the annual records of the populations of each species are oscillatory in nature.

A simple model for this situation can be constructed using a discrete time model by keeping track of the rate of births and deaths of each species. Letting H represent the population of hares and L represent the population of lynxes, we can describe the state in terms of the populations at discrete periods of time. Letting k

Figure 2.6: Predator versus prey. The photograph on the left shows a Canadian lynx and a snowshoe hare, the lynx's primary prey. The graph on the right shows the populations of hares and lynxes between 1845 and 1935 in a section of the Canadian Rockies [Mac37, MS93]. The data were collected on an annual basis over a period of 90 years. Photograph courtesy Rudolfo's Usenet Animal Pictures Gallery.

be the discrete time index (e.g., the day number), we can write

    H[k+1] = H[k] + b_r(u) H[k] - a L[k] H[k],
    \qquad
    L[k+1] = L[k] - d_f L[k] + c L[k] H[k],    (2.13)

where b_r(u) is the hare birth rate per unit period, as a function of the food supply u, d_f is the lynx death rate, and a and c are the interaction coefficients. The interaction term a L[k] H[k] models the rate of predation, which is assumed to be proportional to the rate at which predators and prey meet and is hence given by the product of the population sizes. The interaction term c L[k] H[k] in the lynx dynamics has a similar form and represents the rate of growth of the lynx population. This model makes many simplifying assumptions, such as the fact that hares only decrease in numbers through predation by lynxes, but it often is sufficient to answer basic questions about the system.

To illustrate the usage of this system, we can compute the number of lynxes and hares at each time point from some initial population. This is done by starting with x[0] = (H_0, L_0) and then using equation (2.13) to compute the populations in the following period. By iterating this procedure, we can generate the population over time. The output of this process for a specific choice of parameters and initial conditions is shown in Figure 2.7. While the details of the simulation are different from the experimental data (to be expected given the simplicity of our assumptions), we see qualitatively similar trends and hence we can use the model to help explore the dynamics of the system.
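Iterating equation (2.13) takes only a few lines of code. The fragment below is a sketch: the per-period rates and initial populations are placeholder values chosen only so that the iteration is well behaved, and they are not the parameters used to produce Figure 2.7.

# Iterate the predator-prey model (2.13).  All coefficients are per time
# period and are placeholder values, not those used in the text.
br, a, df, c = 0.02, 0.0005, 0.025, 0.00004

H, L = 600.0, 30.0                     # initial hare and lynx populations
hares, lynxes = [H], [L]
for k in range(3650):
    # Both updates use the populations from period k, as in equation (2.13).
    H, L = H + br * H - a * L * H, L - df * L + c * L * H
    hares.append(H)
    lynxes.append(L)

print("hare population range:", round(min(hares)), "to", round(max(hares)))
print("lynx population range:", round(min(lynxes)), "to", round(max(lynxes)))
# The two populations rise and fall cyclically, qualitatively as in Figure 2.6.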

Figure 2.7: Discrete time simulation of the predator-prey model (2.13). Using the parameters a = c = 0.7, b_r(u) = 0.7 and d_f = 0.5 in equation (2.13), the period and magnitude of the lynx and hare population cycles approximately match the data in Figure 2.6.

Example 2.4 Email server
The IBM Lotus server is a collaborative software system that administers users' email, documents and notes. Client machines interact with end users to provide access to data and applications. The server also handles other administrative tasks. In the early development of the system it was observed that the performance was poor when the CPU was overloaded because of too many service requests, and mechanisms to control the load were therefore introduced.

The interaction between the client and the server is in the form of remote procedure calls (RPCs). The server maintains a log of statistics of completed requests. The total number of requests being served, called RIS (RPCs in server), is also measured. The load on the server is controlled by a parameter called MaxUsers, which sets the total number of client connections to the server. This parameter is controlled by the system administrator. The server can be regarded as a dynamical system with MaxUsers as input and RIS as the output. The relationship between input and output was first investigated by exploring the steady state performance and was found to be linear.

In [HDPT4] a dynamic model in the form of a first order difference equation is used to capture the dynamic behavior of this system. Using system identification techniques, they construct a model of the form

    y[k+1] = a y[k] + b u[k],

where u = MaxUsers - \overline{MaxUsers} and y = RIS - \overline{RIS}. The parameters a = 0.43 and b = 0.47 describe the dynamics of the system around the operating point, and \overline{MaxUsers} = 165 and \overline{RIS} = 135 represent the nominal operating point of the system. The number of requests was averaged over the sampling period, which was 60 s.

Simulation and Analysis

State space models can be used to answer many questions. One of the most common, as we have seen in the previous examples, is to predict the evolution of the system state from a given initial condition. While for simple models this can be done in closed form, more often it is accomplished through computer simulation. One can also use state space models to analyze the overall behavior of the system, without making direct use of simulation.

Consider again the damped spring-mass system from Section 2.1, but this time with an external force applied, as shown in Figure 2.8. We wish to predict the motion of the system for a periodic forcing function, with a given initial condition, and determine the amplitude, frequency and decay rate of the resulting motion.

Figure 2.8: A driven spring-mass system, with damping. Here we use a linear damping element with coefficient of viscous friction c. The mass is driven with a sinusoidal force of amplitude A.

We choose to model the system with a linear ordinary differential equation. Using Hooke's law to model the spring and assuming that the damper exerts a force that is proportional to the velocity of the system, we have

    m\ddot{q} + c\dot{q} + kq = u,    (2.14)

where m is the mass, q is the displacement of the mass, c is the coefficient of viscous friction, k is the spring constant and u is the applied force. In state space form, using x = (q, q̇) as the state and choosing y = q as the output, we have

    \frac{dx}{dt} = \begin{pmatrix} x_2 \\ -\dfrac{c}{m}x_2 - \dfrac{k}{m}x_1 + \dfrac{u}{m} \end{pmatrix},
    \qquad y = x_1.

We see that this is a linear, second order differential equation with one input and one output.

We now wish to compute the response of the system to an input of the form u = A sin ωt. Although it is possible to solve for the response analytically, we instead make use of a computational approach that does not rely on the specific form of this system. Consider the general state space system

    \frac{dx}{dt} = f(x, u).

Given the state x at time t, we can approximate the value of the state at a short time h > 0 later by assuming that the rate of change f(x, u) is constant over the interval t to t + h. This gives

    x(t + h) = x(t) + h f(x(t), u(t)).    (2.15)

Iterating this equation, we can thus solve for x as a function of time. This approximation is known as Euler integration, and is in fact a difference equation if we let h represent the time increment and write x[k] = x(kh). Although modern simulation tools such as MATLAB and Mathematica use more accurate methods than Euler integration, it still illustrates some of the basic tradeoffs.

Returning to our specific example, Figure 2.9 shows the results of computing x(t) using equation (2.15), along with the analytical computation.
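The computation behind Figure 2.9 can be reproduced in a few lines. The sketch below is illustrative only (it is not the companion MATLAB code, and the parameter values m = 1, c = 2, k = 2, A = 1 and ω = 1 are arbitrary): it applies the Euler iteration (2.15) to the driven spring-mass system (2.14) for several step sizes.

import numpy as np

m, c, k, A, w = 1.0, 2.0, 2.0, 1.0, 1.0       # illustrative parameter values

def f(x, t):
    # State space form of m*qddot + c*qdot + k*q = u(t) with u = A sin(w t).
    q, v = x
    u = A * np.sin(w * t)
    return np.array([v, (-c * v - k * q + u) / m])

def euler(h, T=40.0):
    # Iterate x(t + h) = x(t) + h f(x(t), u(t)), equation (2.15).
    x, t = np.array([0.0, 0.0]), 0.0
    while t < T:
        x = x + h * f(x, t)
        t += h
    return x[0]                               # position at the final time

for h in (0.5, 0.1, 0.01):
    print(f"h = {h:4.2f}   position at t = 40 s: {euler(h): .4f}")
# As the step size h decreases, the computed position converges toward the
# exact (sinusoidal steady state) value.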

Figure 2.9: Simulation of the forced spring-mass system with different simulation time steps (h = 1, h = 0.5 and h = 0.1). The darker dashed line represents the analytical solution. The solid lines represent the approximate solution via the method of Euler integration, using decreasing step sizes.

We see that as h gets smaller, the computed solution converges to the exact solution. The form of the solution is also worth noticing: after an initial transient, the system settles into a periodic motion. The portion of the response after the transient is called the steady state response to the input.

In addition to generating simulations, models can also be used to answer other types of questions. Two that are central to the methods described in this text are stability of an equilibrium point and the input/output frequency response. We illustrate these two computations through the examples below, and return to the general computations in later chapters.

Returning to the damped spring-mass system, the equations of motion with no input forcing are given by

dx/dt = ( x2, −(c/m) x2 − (k/m) x1 ), (2.16)

where x1 is the position of the mass (relative to the rest position) and x2 its velocity. We wish to show that if the initial state of the system is away from the rest position, the system will return to the rest position eventually (we will later define this situation to mean that the rest position is asymptotically stable). While we could heuristically show this by simulating many, many initial conditions, we seek instead to prove that this is true for any initial condition. To do so, we construct a function V : Rⁿ → R that maps the system state to a positive real number. For mechanical systems, a convenient choice is the energy of the system,

V(x) = ½ k x1² + ½ m x2². (2.17)

If we look at the time derivative of the energy function, we see that

dV/dt = k x1 ẋ1 + m x2 ẋ2 = k x1 x2 + m x2 ( −(c/m) x2 − (k/m) x1 ) = −c x2²,

which is always either negative or zero. Hence V(x(t)) is never increasing and, using a bit of analysis that we will see formally later, the individual states must remain bounded.

If we wish to show that the states eventually return to the origin, we must use a slightly more detailed analysis. Intuitively, we can reason as follows: suppose that for some period of time, V(x(t)) stops decreasing. Then it must be true that V̇(x(t)) = 0, which in turn implies that x2(t) = 0 for that same period. In that case, ẋ2(t) = 0 and we can substitute into the second line of equation (2.16) to obtain

0 = ẋ2 = −(c/m) x2 − (k/m) x1 = −(k/m) x1.

Thus we must have that x1 also equals zero, and so the only time that V(x(t)) can stop decreasing is if the state is at the origin (and hence this system is at its rest position). Since we know that V(x(t)) is never increasing (because V̇ ≤ 0), we therefore conclude that the origin is stable (for any initial condition). This type of analysis, called Lyapunov analysis, is considered in detail in Chapter 4 but shows some of the power of using models for analysis of system properties.

Another type of analysis that we can perform with models is to compute the output of a system to a sinusoidal input. We again consider the spring-mass system, but this time keeping the input and leaving the system in its original form:

m q̈ + c q̇ + k q = u. (2.18)

We wish to understand what the response of the system is to a sinusoidal input of the form u(t) = A sin ωt. We will see how to do this analytically in Chapter 6, but for now we make use of simulations to compute the answer.

We first begin with the observation that if q(t) is the solution to equation (2.18) with input u(t), then applying an input 2u(t) will give a solution 2q(t) (this is easily verified by substitution). Hence it suffices to look at an input with unit magnitude, A = 1. A second observation, which we will prove in Chapter 5, is that the long term response of the system to a sinusoidal input is itself a sinusoid at the same frequency, and so the output has the form

q(t) = g(ω) sin(ωt + ϕ(ω)),

where g(ω) is called the gain of the system and ϕ(ω) is called the phase (or phase offset). To compute the frequency response numerically, we can simply simulate the system at a set of frequencies ω1, ..., ωN and plot the gain and phase at each of these frequencies. An example of this type of computation is shown in Figure 2.10.
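The sketch below illustrates this procedure numerically for the spring-mass system; it is a minimal example and not code from the text. The mass, damping and stiffness values are assumed for illustration, and the gain at each frequency is estimated from the peak output amplitude after the transient has died out.

```python
import numpy as np

# Spring-mass system m*q'' + c*q' + k*q = u, with assumed parameters
m, c, k = 250.0, 60.0, 40.0

def gain_at(omega, h=0.01, n_periods=20):
    """Estimate g(omega) by simulating the response to u = sin(omega*t)."""
    t_final = n_periods * 2 * np.pi / omega
    x = np.array([0.0, 0.0])
    t, peak = 0.0, 0.0
    while t < t_final:
        u = np.sin(omega * t)
        x = x + h * np.array([x[1], (-k * x[0] - c * x[1] + u) / m])
        t += h
        if t > 0.75 * t_final:          # ignore the initial transient
            peak = max(peak, abs(x[0]))
    return peak                          # output amplitude for a unit-amplitude input

for omega in (0.1, 0.2, 0.4, 0.8, 1.6):
    print(f"omega = {omega:4.1f} rad/s   gain = {gain_at(omega):.4f}")
```

Repeating the computation over a grid of frequencies and plotting the results gives a magnitude plot of the kind shown in Figure 2.10.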

56 46 CHAPTER 2. SYSTEM MODELING Output, y 2 2 Gain (log scale) Time [s] Frequency [rad/sec] (log scale) Figure 2.1: A frequency response (magnitude only) computed by measuring the response of individual sinusoids. The figure on the left shows the response of the system as a function of time to a number of different unit magnitude inputs (at different frequencies). The figure on the right shows this same data in a different way, with the magnitude of the response plotted as a function of the input frequency. The filled circles correspond to the particular frequencies shown in the time responses. 2.3 MODELING METHODOLOGY To deal with large complex systems, it is useful to have different representations of the system that capture the essential features and hide irrelevant details. In all branches of science and engineering it is common practice to use some graphical description of systems. They can range from stylistic pictures to drastically simplified standard symbols. These pictures make it possible to get an overall view of the system and to identify the individual components. Examples of such diagrams are shown in Figure Schematic diagrams are useful because they give an overall picture of a system, showing different subprocesses and their interconnection, and indicating variables that can be manipulated and signals that can be measured. Block Diagrams A special graphical representation called a block diagram has been developed in control engineering. The purpose of a block diagram is to emphasize the information flow and to hide details of the system. In a block diagram, different process elements are shown as boxes and each box has inputs denoted by lines with arrows pointing toward the box and outputs denoted by lines with arrows going out of the box. The inputs denote the variables that influence a process and the outputs denote signals that we are interested in or signals that influence other subsystems. Block diagrams can also be organized in hierarchies, where individual blocks may themselves contain more detailed block diagrams. Figure 2.12 shows some of the notation that we use for block diagrams. Signals are represented as lines, with arrows to indicate inputs and outputs. The first diagram is the representation for a summation of two signals. An input/output response is represented as a rectangle with the system name (or mathematical description) in the block. Two special cases are a proportional gain, which scales the

57 2.3. MODELING METHODOLOGY 47 (a) Power electronics (b) Cell biology (c) Process control (d) Networking Figure 2.11: Schematic diagrams in different disciplines. Each diagram is used to illustrate the dynamics of a feedback system: (a) electrical schematics for a power system, (b) a biological circuit diagram for a synthetic clock circuit [ASMN3], (c) process diagram for a distillation column and (d) Petri net description of a communication protocol [?]. u 2 u 1 u 1 + u 2 Σ u k ku u sat(u) (a) Summing junction t u u(t) dt (d) Integrator (b) Gain block u y System (e) Input/output system (c) Saturation u f(u) (f) Nonlinear map Figure 2.12: Standard block diagram elements. The arrows indicate the the inputs and outputs of each element, with the mathematical operation corresponding to the blocked labeled at the output. The system block (e) represents the full input/output response of a dynamical system.

58 48 CHAPTER 2. SYSTEM MODELING (d) Drag Aerodynamics Wind Ref Σ (a) Sensory Motor System (b) Wing Aerodynamics Σ (c) Body Dynamics 1 (e) Vision System Figure 2.13: A block diagram representation of the flight control system for an insect flying against the wind. The mechanical portion of the model consists of the rigid body dynamics of the fly, the drag due to flying through the air and the forces generated by the wings. The motion of the body causes the visual environment of the fly to change, and this information is then used to control the motion of the wings (through the sensory motor system), closing the loop. input by a multiplicative factor, and an integrator, which outputs the integral of the input signal. Figure 2.13 illustrates the use of a block diagram, in this case for modeling the flight response of a fly. The flight dynamics of an insect are incredibly intricate, involving a careful coordination of the muscles within the fly to maintain stable flight in response to external stimuli. One known characteristic of flies is their ability to fly upwind by making use of the optical flow in their compound eyes as a feedback mechanism. Roughly speaking, the fly controls its orientation so that the point of contraction of the visual field is centered in its visual field. To understand this complex behavior, we can decompose the overall dynamics of the system into a series of interconnected subsystems (or blocks ). Referring to Figure 2.13, we can model the insect navigation system through an interconnection of five blocks. The sensory motor system (a) takes the information from the visual system (e) and generates muscle commands that attempt to steer the fly so that the point of contraction is centered. These muscle commands are converted into forces through the flapping of the wings (b) and the resulting aerodynamic forces that are produced. The forces from the wings are combined with the drag on the fly (d) to produce a net force on the body of the fly. The wind velocity enters through the drag aerodynamics. Finally, the body dynamics (c) describe how the fly translates and rotates as a function of the net forces that are applied to it. The insect position, speed and orientation is fed back to the drag aerodynamics and vision system blocks as inputs. Each of the blocks in the diagram can itself be a complicated subsystem. For example, the fly visual system of a fruit fly consists of two complicated compound eyes (with about 7 elements per eye) and the sensory motor system has about

200,000 neurons that are used to process that information. A more detailed block diagram of the insect flight control system would show the interconnections between these elements, but here we have used one block to represent how the motion of the fly affects the output of the visual system and a second block to represent how the visual field is processed by the fly's brain to generate muscle commands. The choice of the level of detail of the blocks and what elements to separate into different blocks often depends on experience and the questions that one wants to answer using the model. One of the powerful features of block diagrams is their ability to hide information about the details of a system that may not be needed to gain an understanding of the essential dynamics of the system.

Modeling from Experiments

Since control systems are provided with sensors and actuators, it is also possible to obtain models of system dynamics from experiments on the process. The models are restricted to input/output models since only these signals are accessible to experiments, but modeling from experiments can also be combined with modeling from physics through the use of feedback and interconnection.

A simple way to determine a system's dynamics is to observe the response to a step change in the control signal. Such an experiment begins by setting the control signal to a constant value; then, when steady state is established, the control signal is changed quickly to a new level and the output is observed. The experiment will give the step response of the system, and the shape of the response gives useful information about the dynamics. It immediately gives an indication of the response time, and it tells if the system is oscillatory or if the response is monotone. By repeating the experiment for different steady state values and different amplitudes of the change of the control signal, we can also determine ranges where the process can be approximated by a linear system.

Example 2.5 Identification of a spring-mass system
Consider the spring-mass system from Section 2.1, whose dynamics are given by

m q̈ + c q̇ + k q = u. (2.19)

We wish to determine the constants m, c and k by measuring the response of the system to a step input of magnitude F. We will show in Chapter 5 that when c² < 4km, the step response for this system from the rest configuration is given by

q(t) = (F/k) [ 1 − (2√(km)/√(4km − c²)) exp(−ct/(2m)) sin(ω_d t + ϕ) ],
ω_d = √(4km − c²)/(2m),    ϕ = tan⁻¹( √(4km − c²)/c ).

From the form of the solution, we see that the shape of the response is determined by the parameters of the system. Hence, by measuring certain features of the step response we can determine the parameter values.

Figure 2.14: Step response for a spring-mass system. The magnitude of the step input is F = 20 N. The period of oscillation T is determined by looking at the time between two subsequent local maxima in the response. The period, combined with the steady state value q(∞) and the relative decrease between local maxima, can be used to estimate the parameters in a model of the system.

Figure 2.14 shows the response of the system to a step of magnitude F = 20 N, along with some measurements. We start by noting that the steady state position of the mass (after the oscillations die down) is a function of the spring constant k:

q(∞) = F/k, (2.20)

where F is the magnitude of the applied force (F = 1 for a unit step input). The parameter 1/k is called the gain of the system. The period of the oscillation can be measured between two peaks and must satisfy

T = 4πm/√(4km − c²). (2.21)

Finally, the rate of decay of the oscillations is given by the exponential factor in the solution. Measuring the amount of decay between two peaks, we have (using Exercise 2.5)

log( q(t1) − F/k ) − log( q(t2) − F/k ) = (c/(2m)) (t2 − t1). (2.22)

Using this set of three equations, we can solve for the parameters and determine that for the step response in Figure 2.14 we have m ≈ 250 kg, c ≈ 60 N s/m and k ≈ 40 N/m.

Modeling from experiments can also be done using many other signals. Sinusoidal signals are commonly used (particularly for systems with fast dynamics), and precise measurements can be obtained by exploiting correlation techniques. An indication of nonlinearities can be obtained by repeating experiments with input signals having different amplitudes.
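As a minimal illustration (not code from the text), the three relations (2.20)–(2.22) can be solved directly for k, m and c once the steady state value, the oscillation period and two successive peak values have been read off a response such as the one in Figure 2.14. The measured numbers below are illustrative values chosen to be roughly consistent with that figure.

```python
import numpy as np

# Features read off a measured step response (illustrative values)
F = 20.0                  # magnitude of the step input [N]
q_inf = 0.5               # steady state position [m]
T = 16.5                  # period between successive local maxima [s]
t1, q1 = 8.2, 0.69        # time and value of the first peak
t2, q2 = 24.7, 0.526      # time and value of the second peak

k = F / q_inf                                                    # equation (2.20)
sigma = (np.log(q1 - F / k) - np.log(q2 - F / k)) / (t2 - t1)    # = c/(2m), eq. (2.22)
omega_d = 2 * np.pi / T                                          # = sqrt(4km - c^2)/(2m), eq. (2.21)
m = k / (omega_d**2 + sigma**2)                                  # since k/m = omega_d^2 + (c/(2m))^2
c = 2 * m * sigma

print(f"k = {k:.1f} N/m,  m = {m:.0f} kg,  c = {c:.1f} N s/m")
```

With these illustrative measurements the script returns values close to the estimates quoted above (m about 250 kg, c about 60 N s/m and k = 40 N/m).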

61 2.3. MODELING METHODOLOGY 51 Normalization and Scaling Having obtained a model, it is often useful to scale the variables by introducing dimension free variables. Such a procedure can often simplify the equations for a system by reducing the number of parameters and reveal interesting properties of the model. Scaling can also improve the numerical conditioning of the model to allow faster and more accurate simulations. The procedure of scaling is straightforward: simply choose units for each independent variable and introduce new variables by dividing the variables with the chosen normalization unit. We illustrate the procedure with two examples. Example 2.6 Spring-mass system Consider again the spring-mass system introduced earlier. Neglecting the damping, the system is described by m q+kq = u. The model has two parameters m and k. To normalize the model we introduce dimension free variables x = q/l and τ = ω t, where ω = k/m and l is the chosen length scale. We scale force by mlω 2 and introduce v = u(mlω2 ). The scaled equation then becomes d 2 x dτ 2 = d2 q/l d(ω t) 2 = 1 = x+v, lm( kq+u) ω 2 which is the normalized undamped spring-mass system. Notice that the normalized model has no parameters while the original model had two parameters m and k. Introducing the scaled, dimension-free state variables z 1 = x = q/l and z 2 = dx/dτ = q/(lω ) the model can be written as d dt z 1 = 1 z v z 2 This simple linear equation describes the dynamics of any spring-mass system, independent of the particular parameters, and hence gives us insight into the fundamental dynamics of this oscillatory system. To recover the physical frequency of oscillation or its magnitude, we must invert the scaling we have applied. Example 2.7 Balance system Consider the balance system described in Section 2.1. Neglecting damping by putting c = and γ = in equation (2.9) the model can be written as (M + m) d2 q dt 2 ml cosθ d2 θ dt 2 + ml sinθ( dq) 2 = F dt ml cosθ d2 q dt 2 +(J + ml2 ) d2 θ mgl sinθ = dt2 Let ω = mgl/(j + ml 2 ), choose the length scale as l, the time scale as 1/ω, the force scale as (M + m)lω 2 and introduce the scaled variables τ = ω t, x = q/l z 2

62 52 CHAPTER 2. SYSTEM MODELING Output, y Amplitude u M Σ y Input, u (a) Frequency (b) M (c) Figure 2.15: Characterization of model uncertainty. Uncertainty of a static system is illustrated in (a). The uncertainty lemon in (b) is one way to capture uncertainty in dynamical systems emphasizing that a model is only valid in some amplitude and frequency ranges. In (c) a model is represented by a nominal model (M) and another model M representing the uncertainty analogous to representation of parameter uncertainty. and u = F/((M + m)lω 2 ). The equations then become d 2 x dτ 2 α cosθ d2 θ ( dθ ) 2 dτ 2 + α = u dτ β cosθ d2 x dτ 2 + d2 θ sinθ =, dτ2 where α = m/(m+m) and β = ml 2 /(J +ml 2 ). Notice that the original model has five parameters m, M, J, l and g but the normalized model has only two parameters α and β. If M m and ml 2 J we get α and β 1 and the model can be approximated by d 2 x dτ 2 = u, d 2 θ sinθ = ucosθ. dτ2 The model can be interpreted as a mass combined with an inverted pendulum driven by the same input. Model Uncertainty Reducing uncertainty is one of the main reasons for using feedback and it is therefore important to characterize uncertainty. When making measurements there is a good tradition to assign both a nominal value and a measure of uncertainty. It is useful to apply same principle to modeling, but unfortunately it is often difficult to express the uncertainty of a model quantitatively. For a static system whose input-output relation can be characterized by a function, uncertainty can be expressed by an uncertainty band as illustrated in In Figure 2.15a. At low signal levels there are uncertainties due to sensor resolution, friction and quantization. Some models for queuing systems or cells are based on averages that exhibit significant variations for small populations. At large signal levels there are saturations or even system failures. The signal ranges where a model is reasonably accurate varies dramatically between applications but it is rare to find models that are accurate for signal ranges larger than 1 4.

63 2.4. MODELING EXAMPLES 53 Characterization of uncertainty of dynamic model is much more difficult. We can try to capture uncertainties by assigning uncertainties to parameters of the model but this is often not sufficient. There may be errors due to phenomena that have been neglected, for example small time delays. In control the ultimate test is how well a control system based on the model performs and time delays can be important. There is also a frequency aspect. There are slow phenomena, such as aging, that can cause changes or drift in the systems. There are also high frequency effects: a resistor will no longer be a pure resistance at very high frequencies and a beam has stiffness and will exhibit additional dynamics when subject to high frequency excitation. The uncertainty lemon shown in Figure 2.15b is one way to conceptualize the uncertainty of a system. It illustrates that a model is only valid in certain amplitude and frequency ranges. We will introduce some formal tools for representing uncertainty in Chapter 12 using figures such as the one shown in Figure 2.15c. These tools make use of the concept of a transfer function, which describes the frequency response of an input/output system. For now, we simply note that one should always be careful to recognize the limits of a model and not to make use of models outside their range of applicability. For example, one can describe the uncertainty lemon and then check to make sure that signals remain in this region. 2.4 MODELING EXAMPLES In this section we introduce additional examples that illustrate some of the different types of systems for which one can develop differential equation and difference equation models. These examples are specifically chosen from a range of different fields to highlight the broad variety of systems to which feedback and control concepts can be applied. A more detailed set of applications that serve as running examples throughout the text are given in the next chapter. Motion Control Systems Motion control systems involve the use of computation and feedback to control the movement of a mechanical system. Motion control systems range from nanopositioning systems (atomic force microscopes, adaptive optics), to control systems for the read/write heads in a disk drive of CD player, to manufacturing systems (transfer machines and industrial robots), to automotive control systems (anti-lock brakes, suspension control, traction control), to air and space flight control systems (airplanes, satellites, rockets and planetary rovers). Example 2.8 Vehicle steering the bicycle model A common problem in motion control is to control the trajectory of a vehicle through an actuator that causes a change in the orientation. A steering wheel on an automobile or the front wheel of a bicycle are two examples, but similar dynamics occur in steering of ships or control of the pitch dynamics of an aircraft. In many

cases, we can understand the basic behavior of these systems through the use of a simple model that captures the basic geometry of the system.

Figure 2.16: Vehicle steering dynamics. The left figure shows an overhead view of a vehicle with four wheels. By approximating the motion of the front and rear pairs of wheels by a single front and rear wheel, we obtain an abstraction called the bicycle model, shown on the right. The wheel base is b and the center of mass is at a distance a forward of the rear wheels. The steering angle is δ and the velocity at the center of mass has the angle α relative to the length axis of the vehicle. The position of the vehicle is given by (x, y) and the orientation (heading) by θ.

Consider a vehicle with two wheels as shown in Figure 2.16. For the purpose of steering we are interested in a model that describes how the velocity of the vehicle depends on the steering angle δ. To be specific, consider the velocity v at the center of mass, a distance a from the rear wheel, and let b be the wheel base, as shown in Figure 2.16. Let x and y be the coordinates of the center of mass, θ the heading angle and α the angle between the velocity vector v and the centerline of the vehicle. Since b = r_a tan δ and a = r_a tan α, it follows that tan α = (a/b) tan δ and we get the following relation between α and the steering angle δ:

α(δ) = arctan( (a tan δ)/b ). (2.23)

Assume that the wheels are rolling without slip and that the velocity of the rear wheel is v0. The vehicle speed at its center of mass is v = v0/cos α, and we find that the motion of this point is given by

dx/dt = v cos(α + θ) = v0 cos(α + θ)/cos α,
dy/dt = v sin(α + θ) = v0 sin(α + θ)/cos α. (2.24)

To see how the angle θ is influenced by the steering angle, we observe from Figure 2.16 that the vehicle rotates with the angular velocity v0/r_a around the point O. Hence

dθ/dt = v0/r_a = (v0/b) tan δ. (2.25)

Equations (2.23)–(2.25) can be used to model an automobile under the assumptions that there is no slip between the wheels and the road and that the two front

65 2.4. MODELING EXAMPLES 55 θ y r F 2 (a) x (b) F 1 Figure 2.17: Vectored thrust aircraft. The Harrier AV-8B military aircraft (a) redirects its engine thrust downward so that it can hover above the ground. Some air from the engine is diverted to the wing tips to be used for maneuvering. As shown in (b), the net thrust on the aircraft can be decomposed into a horizontal force F 1 and a vertical force F 2 acting at a distance r from the center of mass. wheels can be a approximated by a single wheel at the center of the car. The assumption of no slip can be relaxed by adding an extra state variable, giving a more realistic model. Such a model also describes the steering dynamics of ships as well as the pitch dynamics of aircraft and missiles. It is also possible to place the coordinates of the car at the rear wheels (corresponding to setting α = ), a model which is often referred to as the Dubins car [Dub57]. The situation in Figure 2.16 represents the situation when the vehicle moves forward and has front-wheel steering. The case when the vehicle reverses is obtained simply by changing the sign of the velocity, which is equivalent to a vehicle with rear-wheel steering. Example 2.9 Vectored thrust aircraft Consider the motion of vectored thrust aircraft, such as the Harrier jump jet shown Figure 2.17a. The Harrier is capable of vertical takeoff by redirecting its thrust downward and through the use of smaller maneuvering thrusters located on its wings. A simplified model of the Harrier is shown in Figure 2.17b, where we focus on the motion of the vehicle in a vertical plane through the wings of the aircraft. We resolve the forces generated by the main downward thruster and the maneuvering thrusters as a pair of forces F 1 and F 2 acting at a distance r below the aircraft (determined by the geometry of the thrusters). Let (x,y,θ) denote the position and orientation of the center of mass of aircraft. Let m be the mass of the vehicle, J the moment of inertia, g the gravitational constant, and c the damping coefficient. Then the equations of motion for the

vehicle are given by

m ẍ = F1 cos θ − F2 sin θ − c ẋ,
m ÿ = F1 sin θ + F2 cos θ − m g − c ẏ,
J θ̈ = r F1. (2.26)

It is convenient to redefine the inputs so that the origin is an equilibrium point of the system with zero input. Letting u1 = F1 and u2 = F2 − mg, the equations become

m ẍ = −m g sin θ − c ẋ + u1 cos θ − u2 sin θ,
m ÿ = m g (cos θ − 1) − c ẏ + u1 sin θ + u2 cos θ,
J θ̈ = r u1. (2.27)

These equations describe the motion of the vehicle as a set of three coupled second order differential equations.

Information Systems

Information systems range from communication systems like the Internet to software systems that manipulate data or manage enterprise wide resources. Feedback is present in all these systems, and design of strategies for routing, flow control and buffer management are typical problems. Many results in queuing theory emerged from design of telecommunication systems and later from development of the Internet and computer communication systems [BG87, Kle75, Sch87]. Management of queues to avoid congestion is a central problem and we will therefore start by discussing modeling of queuing systems.

Figure 2.18: Schematic diagram of a queuing system. Messages arrive at rate λ and are stored in a queue. Messages are processed and removed from the queue at rate µ. The average size of the queue is given by x ∈ R.

Example 2.10 Queuing systems
A schematic picture of a simple queue is shown in Figure 2.18. Requests arrive and are then queued and processed. There can be large variations in arrival rates and service rates, and the queue length builds up when the arrival rate is larger than the service rate. When the queue becomes too large, service is denied using an admission control policy.

The system can be modeled in many different ways. One way is to model each incoming request, which leads to an event-based model where the state is

an integer that represents the queue length. The queue changes when a request arrives or a request is serviced. The statistics of arrival and servicing are typically modeled as random processes. In many cases it is possible to determine statistics of quantities like queue length and service time, but the computations can be quite complicated.

A significant simplification can be obtained by using a flow model. Instead of keeping track of each request, we instead view service and requests as flows, similar to what is done when replacing molecules by a continuum when analyzing fluids. Assuming that the average queue length x is a continuous variable and that arrivals and services are flows with rates λ and µ, the system can be modeled by the first order differential equation

dx/dt = λ − µ = λ − µ_max f(x),    x ≥ 0, (2.28)

where µ_max is the maximum service rate and f(x) is a number between 0 and 1 that describes the effective service rate as a function of the queue length. It is natural to assume that the effective service rate depends on the queue length because larger queues require more resources. In steady state we have f(x) = λ/µ_max, and we assume that the queue length goes to zero when λ/µ_max goes to zero and that it goes to infinity when λ/µ_max goes to 1. This implies that f(0) = 0 and that f(∞) = 1. In addition, if we assume that the effective service rate deteriorates monotonically with queue length, then the function f(x) is monotone and concave. A simple function that satisfies the basic requirements is f(x) = x/(1 + x), which gives the model

dx/dt = λ − µ_max x/(x + 1). (2.29)

This model was proposed by Agnew [Agn76]. It can be shown that if arrival and service processes are Poisson processes, the average queue length is given by equation (2.29) and that equation (2.29) is a good approximation even for short queue lengths; see Tipper [TS90].

To explore the properties of the model (2.29) we will first investigate the equilibrium value of the queue length when the arrival rate λ is constant. Setting the derivative dx/dt to zero in equation (2.29) and solving for x, we find that the queue length x approaches the steady state value

x_e = λ/(µ_max − λ). (2.30)

Figure 2.19a shows the steady state queue length as a function of λ/µ_max, the effective service rate excess. Notice that the queue length increases rapidly as λ approaches µ_max. To have a queue length less than 20 requires λ/µ_max < 0.95. The average time to service a request is T_s = (x + 1)/µ_max, and it also increases dramatically as λ approaches µ_max.

Figure 2.19b illustrates the behavior of the server in a typical overload situation.
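As a minimal sketch (not from the text), the flow model (2.29) can be integrated numerically through a temporary overload of the kind discussed next. Only the average (flow) behavior is computed, not the event-based simulation shown in the figure, and the arrival-rate profile simply mirrors the overload scenario described below.

```python
import numpy as np

mu_max = 1.0                         # maximum service rate

def arrival_rate(t):
    """Temporary overload: lambda = 0.5, except lambda = 4 during the interval [20, 25)."""
    return 4.0 if 20.0 <= t < 25.0 else 0.5

# Euler integration of the flow model dx/dt = lambda - mu_max * x/(x + 1), eq. (2.29)
h = 0.01
ts = np.arange(0.0, 80.0, h)
x, queue = 0.0, []
for t in ts:
    x = max(x + h * (arrival_rate(t) - mu_max * x / (x + 1.0)), 0.0)
    queue.append(x)

print(f"queue length just before the overload  : {queue[int(19.9/h)]:.2f}")
print(f"queue length at the end of the overload: {queue[int(25.0/h)]:.2f}")
print(f"queue length 30 s later                : {queue[int(55.0/h)]:.2f}")
print(f"steady state from eq. (2.30), lambda=0.5: {0.5/(mu_max - 0.5):.2f}")
```

The printout shows the behavior discussed in the text: the queue grows rapidly during the overload and then drains back toward its equilibrium value only slowly.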

Figure 2.19: Queuing dynamics. The figure on the left shows the steady state queue length as a function of λ/µ_max, and the figure on the right shows the behavior of the queue length when there is a temporary overload in the system. The full line shows a realization of an event based simulation, and the dashed line shows the behavior of the flow model (2.29).

The maximum service rate is µ_max = 1, and the arrival rate starts at λ = 0.5. The arrival rate is increased to λ = 4 at time 20, and it returns to λ = 0.5 at time 25. The figure shows that the queue builds up quickly and clears very slowly. Since the response time is proportional to queue length, it means that the quality of service is poor for a long period after an overload. This behavior is called the rush-hour effect and has been observed in web servers and many other queuing systems, such as automobile traffic. The dashed line in Figure 2.19b shows the behavior of the flow model, which describes the average queue length. The simple model captures the behavior qualitatively, but there are significant variations from sample to sample when the queue length is short.

Queuing problems of the type illustrated in Example 2.10 have been observed in many different situations. The following example describes an early instance of the difficulty and shows how it can be avoided by using a simple feedback scheme.

Example 2.11 Virtual memory paging control
An early example of the use of feedback in computer systems was applied in the operating system OS/VS for the IBM 370 [BG68, Cro75]. The system used virtual memory, which allows programs to address more memory than is physically available as fast memory. Data in current fast memory (RAM) is accessed directly, but data that resides in slower memory (disk) is automatically loaded into fast memory. The system is implemented in such a way that it appears to the programmer as a single large section of memory. The system performed very well in many situations, but very long execution times were encountered in overload situations, as shown in Figure 2.20a. The difficulty was resolved with a simple discrete feedback system. The load of the central processing unit (CPU) was measured together with the number of page swaps between fast memory and slow memory. The operating region was classified as being in one of three states: normal, underload or overload. The normal state is characterized by high CPU activity, the underload state is characterized by low CPU activity and few page replacements, the

69 2.4. MODELING EXAMPLES CPU load Execution time [s] 1 5 Underload Normal Overload Number of processes (a) Memory swaps (b) Figure 2.2: Illustration of feedback in the virtual memory system of IBM/37. The left figure (a) shows the effect of feedback on execution times in a simulation, following [BG68]. Results with no feedback are shown with o and with feedback with x. Notice the dramatic decrease in execution time for the system with feedback. The right figure (b) illustrates how the three states were obtained based on process measurements. overload state has moderate to low CPU load but many page replacements, see Figure 2.2a. The boundaries between the regions and the time for measuring the load were determined from simulations using typical loads. The control strategy was to do nothing in the normal load condition, to exclude a process from memory in an overload condition and to allow a new process or a previously excluded process in the underload condition. Figure 2.2a shows the effectiveness of the simple feedback system in simulated loads. Similar principles are used in many other situations, for example in fast, on-chip cache memory. Example 2.12 Consensus protocols in sensor networks Sensor networks are used in a variety of applications where we want to collect and aggregate information over a region of space using multiple sensors that are connected together via a communications network. Examples include monitoring environmental conditions in a geographical area (or inside a building), monitoring movement of animals or vehicles, or monitoring the resource loading across a group of computers. In many sensor networks the computational resources for the system are distributed along with the sensors and it can be important for the set of distributed agents to reach a consensus about a certain property, such as the average temperature in a region or the average computational load amongst a set of computers. To illustrate how such a consensus might be achieved, we consider the problem of computing the average value of a set of numbers that are locally available to the individual agents. We wish to design a protocol (algorithm) such that all agents will agree on the average value. We consider the case in which all agents cannot necessarily communicate with each other directly, although we will assume that the communications network is connected (meaning that no two groups of agents are completely isolated from each other). Figure 2.21a shows a simple situation of this type. We model the connectivity of the sensor network using a graph, with nodes

corresponding to the sensors and edges corresponding to the existence of a direct communications link between two nodes. For any such graph, we can build an adjacency matrix, where each row and column of the matrix corresponds to a node and a 1 in the respective row and column indicates that the two nodes are connected. For the network shown in Figure 2.21a, the corresponding adjacency matrix is

A = [ 0 1 0 0 0
      1 0 1 1 1
      0 1 0 1 0
      0 1 1 0 0
      0 1 0 0 0 ].

We also use the notation N_i to represent the set of neighbors of a node i. For example, N_2 = {1, 3, 4, 5} and N_3 = {2, 4}.

Figure 2.21: Consensus protocols for sensor networks. A simple sensor network with five nodes is shown on the left. In this network, node 1 communicates with node 2, node 2 communicates with nodes 1, 3, 4 and 5, etc. A simulation demonstrating the convergence of the consensus protocol (2.31) to the average value of the initial conditions is shown on the right.

To solve the consensus problem, we let x_i be the state of the ith sensor, corresponding to that sensor's estimate of the average value that we are trying to compute. We initialize the state to the value of the quantity measured by the individual sensor. Our consensus protocol can now be realized as a local update law of the form

x_i[k+1] = x_i[k] + γ Σ_{j ∈ N_i} ( x_j[k] − x_i[k] ). (2.31)

This protocol attempts to compute the average by updating the local state of each agent based on the value of its neighbors. The combined dynamics of all agents can be written in the form

x[k+1] = x[k] − γ (D − A) x[k], (2.32)

where A is the adjacency matrix and D is a diagonal matrix whose entries correspond to the number of neighbors of the corresponding node. The constant γ describes the rate at which we update our own estimate of the average based on the information from our neighbors.
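A minimal numerical sketch of this protocol is given below (not code from the text). The adjacency matrix matches the one above, while the gain γ and the initial sensor values are illustrative choices.

```python
import numpy as np

# Adjacency matrix for the five-node network of Figure 2.21a
A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 1, 1],
              [0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0],
              [0, 1, 0, 0, 0]])
D = np.diag(A.sum(axis=1))        # diagonal matrix of node degrees
L = D - A                         # the graph Laplacian L = D - A

gamma = 0.2                                      # update gain (illustrative)
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])     # initial local measurements (illustrative)
print("average of initial values:", x.mean())

# Consensus iteration x[k+1] = x[k] - gamma * (D - A) x[k], equation (2.32)
for k in range(100):
    x = x - gamma * (L @ x)
print("states after 100 iterations:", np.round(x, 3))
```

After a modest number of iterations every node's state is close to the average of the initial measurements, which is the convergence property discussed next.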

The matrix L := D − A is called the Laplacian of the graph. The equilibrium points of equation (2.32) are the set of states such that x_e[k+1] = x_e[k]. It is easy to show that x_e = (α, α, ..., α) is an equilibrium state for the system, corresponding to each sensor having an identical estimate α for the average. Furthermore, we can show that α is precisely the average value of the initial states. To see this, let

W[k] = (1/N) Σ_{i=1}^{N} x_i[k],

where N is the number of nodes in the sensor network. W[0] is the average of the initial states of the network, which is the quantity we are trying to compute. W[k] is given by the difference equation

W[k+1] = (1/N) Σ_{i=1}^{N} x_i[k+1] = (1/N) Σ_{i=1}^{N} ( x_i[k] + γ Σ_{j ∈ N_i} ( x_j[k] − x_i[k] ) ).

Since i ∈ N_j implies that j ∈ N_i, it follows that each term in the second summation occurs twice with opposite sign. Thus we can conclude that W[k+1] = W[k], and hence W[k] = W[0] for all k, which implies that at the equilibrium point α must be W[0], the average of the initial states. W is called an invariant, and the use of invariants is an important technique for verifying correctness of computer programs.

Having shown that the desired consensus state is an equilibrium point for our protocol, we still must show that the algorithm actually converges to this state. Since there can be cycles in the graph, it is possible that the state of the system could get into an infinite loop and never converge to the desired consensus state. A formal analysis requires tools that will be introduced later in the text, but it can be shown that for any connected graph we can always find a γ such that the states of the individual agents converge to the average. A simulation demonstrating this property is shown in Figure 2.21b.

Although we have focused here on consensus to the average value of a set of measurements, other consensus states can be achieved through choice of appropriate feedback laws. Examples include finding the maximum or minimum value in a network, counting the number of nodes in a network or computing higher order statistical moments of a distributed quantity [OSFM07].

Biological Systems

Biological systems provide perhaps the richest source of feedback and control examples. The basic problem of homeostasis, in which a quantity such as temperature or blood sugar level is regulated to a fixed value, is but one of the many types of complex feedback interactions that can occur in molecular machines, cells, organisms and ecosystems.

Figure 2.22: Biological circuitry. The cell on the left is a bovine pulmonary cell, stained so that the nucleus, actin and chromatin are visible. The figure on the right gives an overview of the process by which proteins in the cell are made. RNA is transcribed from DNA by an RNA polymerase enzyme. The RNA is then translated into a protein by an organelle called the ribosome.

Example 2.13 Transcriptional regulation
Transcription is the process by which mRNA is generated from a segment of DNA. The promoter region of a gene allows transcription to be controlled by the presence of other proteins, which bind to the promoter region and either repress or activate RNA polymerase (RNAP), the enzyme that produces an mRNA transcript from DNA. The mRNA is then translated into a protein according to its nucleotide sequence. This process is illustrated in Figure 2.22.

A simple model of the transcriptional regulation process is through the use of a Hill function [dJ02, Mur04]. Consider the regulation of a protein A with concentration given by p_A and corresponding mRNA concentration m_A. Let B be a second protein with concentration p_B that represses the production of protein A through transcriptional regulation. The resulting dynamics of p_A and m_A can be written as

dm_A/dt = α/(1 + k_B p_B^n) + α_0 − γ m_A,    dp_A/dt = β m_A − δ p_A, (2.33)

where α + α_0 is the unregulated transcription rate, γ represents the rate of degradation of mRNA, α, k_B and n are parameters that describe how B represses A, β represents the rate of production of the protein from its corresponding mRNA and δ represents the rate of degradation of the protein A. The parameter α_0 describes the leakiness of the promoter, and n is called the Hill coefficient and relates to the cooperativity of the promoter.

A similar model can be used when a protein activates the production of another protein, rather than repressing it. In this case, the equations have the form

dm_A/dt = α k_B p_B^n/(1 + k_B p_B^n) + α_0 − γ m_A,    dp_A/dt = β m_A − δ p_A, (2.34)

where the variables are the same as described previously. Note that in the case of the activator, if p_B is zero then the production rate is α_0 (versus α + α_0 for the repressor). As p_B gets large, the factor k_B p_B^n/(1 + k_B p_B^n) approaches 1 and the transcription rate becomes α + α_0 (versus α_0 for the repressor). Thus we see that the activator and repressor act in opposite fashion from each other.

As an example of how these models can be used, we consider the model of a repressilator, originally due to Elowitz and Leibler [EL00]. The repressilator is a synthetic circuit in which three proteins each repress another in a cycle. This is shown schematically in Figure 2.23a, where the three proteins are TetR, λ cI and LacI. The basic idea of the repressilator is that if TetR is present, then it represses the production of λ cI. If λ cI is absent, then LacI is produced (at the unregulated transcription rate), which in turn represses TetR. Once TetR is repressed, then λ cI is no longer repressed, and so on. If the dynamics of the circuit are designed properly, the resulting protein concentrations will oscillate. We can model this system using three copies of equation (2.33), with A and B replaced by the appropriate combination of TetR, cI and LacI. The state of the system is then given by x = (m_TetR, p_TetR, m_cI, p_cI, m_LacI, p_LacI). Figure 2.23b shows the traces of the three protein concentrations for parameters n = 2, α = 0.5, k = 6.25 × 10⁻⁴, α_0 = 5 × 10⁻⁴, γ = 5.8 × 10⁻³, β = 0.12 and δ = 1.2 × 10⁻³ with initial conditions x(0) = (1, 0, 0, 200, 0, 0) (following [EL00]).
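A minimal simulation sketch of the repressilator (not code from the text) is shown below. It connects three copies of equation (2.33) in a cycle and integrates them with simple Euler steps, using the parameter values quoted above; the printout simply summarizes the range over which one of the protein concentrations varies after the initial transient.

```python
import numpy as np

# Parameter values quoted above for Figure 2.23; time is measured in minutes
n, alpha, kB = 2, 0.5, 6.25e-4
alpha0, gamma, beta, delta = 5e-4, 5.8e-3, 0.12, 1.2e-3

def repressilator(x):
    """Three copies of eq. (2.33): LacI represses TetR, TetR represses cI, cI represses LacI."""
    m_tetR, p_tetR, m_cI, p_cI, m_lacI, p_lacI = x
    dm_tetR = alpha / (1 + kB * p_lacI**n) + alpha0 - gamma * m_tetR
    dm_cI   = alpha / (1 + kB * p_tetR**n) + alpha0 - gamma * m_cI
    dm_lacI = alpha / (1 + kB * p_cI**n)   + alpha0 - gamma * m_lacI
    return np.array([dm_tetR, beta * m_tetR - delta * p_tetR,
                     dm_cI,   beta * m_cI   - delta * p_cI,
                     dm_lacI, beta * m_lacI - delta * p_lacI])

x = np.array([1.0, 0.0, 0.0, 200.0, 0.0, 0.0])   # x(0) = (1, 0, 0, 200, 0, 0)
h, t_final = 0.1, 20000.0                        # Euler step and horizon [min]
tail = []
for step in range(int(t_final / h)):
    x = x + h * repressilator(x)
    if step * h > t_final / 2:                   # keep only the second half of the run
        tail.append(x[1])
print(f"TetR protein varies between {min(tail):.0f} and {max(tail):.0f} molecules per cell")
```

With these parameter values the protein levels do not settle to a constant value but keep cycling, which is the oscillatory behavior shown in Figure 2.23b.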

Figure 2.23: The repressilator genetic regulatory network. A schematic diagram of the repressilator is given on the left, showing the layout of the genes in the plasmid that holds the circuit as well as the circuit diagram (center). A simulation of a simple model for the repressilator is shown on the right, showing the oscillation of the individual protein concentrations. Parameter values are taken from [EL00].

Example 2.14 Wave propagation in neuronal networks
The dynamics of the membrane potential in a cell are a fundamental mechanism in understanding signaling in cells, particularly in neurons and muscle cells. The Hodgkin-Huxley equations give a simple model for studying propagation of waves in networks of neurons. The model for a single neuron has the form

C dV/dt = −I_Na − I_K − I_leak + I_input,

74 64 CHAPTER 2. SYSTEM MODELING where V is the membrane potential, C the capacitance, I Na and I K the current caused by transport of sodium and potassium across the cell membrane, I leak a leakage current and I input the external stimulation of the cell. Each current obeys Ohm s law, i.e. I = g(v E), where g is the conductance and E the equilibrium voltage. The equilibrium voltage is given by Nernst s law E = RT nf log(c e/c i ), where R is Boltzmann s constant, T the absolute temperature, F Faraday s constant, n is the charge (or valence) of the ion, and c i and c e are the ion concentrations inside the cell and in the external fluid. At 2 C we have RT/F = 2 mv. The Hodgkin-Huxley model was originally developed as a means to predict the quantitative behavior of the squid giant axon [HH52]. Hodgkin and Huxley shared the 1963 Nobel Prize in Physiology (along with J. C. Eccles) for analysis of the electrical and chemical events in nerve cell discharge. The voltage clamp described in Section 1.3 (see Figure 1.8) was a key element in Hodgkin and Huxley s experiments. 2.5 FURTHER READING Modeling is ubiquitous in engineering and science and has a long history in applied mathematics. For example, the Fourier series was introduced by Fourier when he modeled heat conduction in solids [Fou7]. Models of dynamics have been developed in many different fields, including mechanics [Arn78, Gol53], heat conduction [CJ59], fluids [BRS6], vehicles [Abk69, Bla91, Ell94], circuit theory [Gui63], acoustics [Ber54] and micromechanical systems [Sen1]. Control theory requires modeling from many different domains and most control theory texts contain several chapters on modeling using ordinary differential equations and difference equations (see, for example, [FPEN5]). A classic book on modeling of physical systems, especially mechanical, electrical and thermo-fluid systems, is Cannon [Can3]. The book by Aris [Ari94] is highly original and has a detailed discussion of the use of dimension free variables. Two of the authors favorite books on modeling of biological systems are J. D. Murray [Mur4] and Wilson [Wil99]. For readers interested in learning more about object oriented modeling and Modelica, Tiller [Til1] provides an excellent introduction.

75 2.5. FURTHER READING 65 EXERCISES 2.1 Consider the linear ordinary differential equation (2.7). Show that by choosing a state space representation with x 1 = y, the dynamics can be written as 1. A =..... B = 1. a n a n 1 a 1 1 C = This canonical form is called chain of integrators form. 2.2 Use the equations of motion for a balance system to derive a dynamic model for the inverted pendulum described in Example 2.2 and verify that for small θ the dynamics are approximated by equation (2.1). 2.3 Consider the following discrete time system x[k+ 1] = Ax[k]+Bu[k] y[k] = Cx[k] where x = x 1 A = a 11 a 12 B = C = 1 a 22 1 x 2 In this problem, we will explore some of the properties of this discrete time system as a function of the parameters, the initial conditions, and the inputs. (a) For the case when a 12 = and u =, give a closed for expression for the output of the system. (b) A discrete system is in equilibrium when x[k + 1] = x[k] for all k. Let u = r be a constant input and compute the resulting equilibrium point for the system. Show that if a ii < 1 for all i, all initial conditions give solutions that converge to the equilibrium point. (c) Write a computer program to plot the output of the system in response to a unit step input, u[k] = 1, k. Plot the response of your system with x[] = and A given by A = Keynes simple model for an economy is given by Y[k] = C[k]+I[k]+G[k], where Y, C, I and G are gross national product (GNP), consumption, investment and government expenditure for year k. Consumption and investment are modeled

76 66 CHAPTER 2. SYSTEM MODELING by difference equations of the form C[k+ 1] = ay[k], I[k+ 1] = b(c[k+ 1] C[k]), where a and b are parameters. The first equation implies that consumption increases with GNP but that the effect is delayed. The second equation implies that investment is proportional to the rate of change of consumption. Show that the equilibrium value of the GNP is given by Y e = 1 1 a (I e + G e ), where the parameter 1/(1 a) is the Keynes multiplier (the gain from I or G to Y ). With a =.25 an increase of government expenditure will result in a fourfold increase of GNP. Also show that the model can be written as the following discrete time state model C[k+ 1] = I[k+ 1] a a ab a ab Y[k] = C[k]+I[k]+G[k]. C[k] + a G[k] I[k] ab 2.5 (Second order system identification) Verify that equation (2.22) in Example 2.5 is correct and use this formula and the others in the example to compute the parameters corresponding to the step response in Figure (Least squares system identification) Consider a nonlinear differential equation that can be written in the form M dx dt = α i f i (x), i=1 where f i (x) are known nonlinear functions and α i are unknown, but constant, parameters. Suppose that we have measurements (or estimates) of the state x at time instants t 1,t 2,...,t N, with N > M. Show that the parameters α i can be determined by finding the least squares solution to a linear equation of the form Hα = b, where α R M is the vector of all parameters and H R N M and b R N are appropriately defined. 2.7 (Normalized oscillator dynamics) Consider a damped spring-mass system with dynamics m q+c q+kq = u. Let ω = k/m be the undamped natural frequency and ζ = c/(2 km) be the relative damping. (a) Show that by rescaling the equations, we can write the system dynamics in the form q+2ζ ω ż+ω 2 q = w (2.35)

77 2.5. FURTHER READING 67 where u = F/m. This form of the dynamics is that of a linear oscillator with natural frequency ω and damping coefficient ζ. (b) Show that the system can be further normalized and written in the form dz 1 dz 2 = z 2, = z 1 2ζ z 2 + v. dt dt We thus see that the essential dynamics of the system are governed by a single damping parameter, ζ. 2.8 An electric generator connected to a strong power grid can be modeled by a momentum balance for the rotor of the generator: J d2 ϕ dt 2 = P m P e = P m EV X sinϕ, where J is the effective moment of inertia of the generator, ϕ the angle of rotation, P m the mechanical power that drives the generator, E is the generator voltage, V the grid voltage and X the reactance of the line. P e is the active electrical power and, assuming that the line dynamics are much faster than the rotor dynamics, it is given by P e = V I = (EV/X)sinϕ, where I is the current component in phase with the voltage E and ϕ is the phase angle between voltages E and V. Show that the dynamics of the electric generate have the same normalized form as the inverted pendulum (note that damping has been neglected in the model above). 2.9 Show that the dynamics for a balance system using normalized coordinates can be written in state space form as x 3 x 4 dx dt = αx4 2 α sinx 2 cosx 2 + u 1 αβ cos 2, x 2 αβ cosx 2 x4 2 sinx 2 + β cosx 2 u 1 αβ cos 2 x 2 where x = (q/l,θ, q/l, θ). 2.1 Consider the dynamics of two repressors connected together in a cycle, as shown below: A u 1 u 2 B Using the models from Example 2.13, under the assumption that the parameters are the same for both genes, and further assuming that the mrna concentrations

78 68 CHAPTER 2. SYSTEM MODELING reach steady state quickly, show that the dynamics for this system can be written as dz 1 dτ = µ dz 2 1+z n z 1 v 1, 2 dτ = µ 1+z n z 2 v 2. (2.36) 1 where z 1 and z 2 represent scaled versions of the protein concentrations and the time scale has been changed. Show that µ 2.16 using the parameters in Example 2.13.

79 Chapter Three Examples... Don t apply any model until you understand the simplifying assumptions on which it is based, and you can test their validity. Catch phrase: use only as directed. Don t limit yourself to a single model: More than one model may be useful for understanding different aspects of the same phenomenon. Catch phrase: legalize polygamy. Saul Golomb in his 197 paper Mathematical Models Uses and Limitations [Gol7]. In this chapter we present a collection of examples spanning many different fields of science and engineering. These examples will be used throughout the text and in exercises to illustrate different concepts. First time readers may wish to focus only on a few examples with which they have the most prior experience or insight to understand the concepts of state, input, output and dynamics in a familiar setting. 3.1 CRUISE CONTROL The cruise control system of a car is a common feedback system encountered in everyday life. The system attempts to maintain a constant velocity in the presence of disturbances primarily caused by changes in the slope of a road. The controller compensates for these unknowns by measuring the speed of the car and adjusting the throttle appropriately. To model the system we start with the block diagram in Figure 3.1. Let v be the speed of the car and v r the desired (reference) speed. The controller, which typically is of the proportional-integral (PI) type described briefly in Chapter 1, receives the signals v and v r and generates a control signal u that is sent to an actuator that controls throttle position. The throttle in turn controls the torque T delivered by the engine, which is transmitted through gears and the wheels, generating a force F that moves the car. There are disturbance forces F d due to variations in the slope of the road, the rolling resistance and aerodynamic forces. The cruise controller also has a human-machine interface that allows the driver to set and modify the desired speed. There are also functions that disconnect the cruise control when the brake is touched. The system has many individual components actuator, engine, transmission, wheels and car body and a detailed model can be very complicated. In spite of this, the model required to design the cruise controller can be quite simple. To develop a mathematical mode we start with a force balance for the car body. Let v be the speed of the car, m the total mass (including passengers), F the force

generated by the contact of the wheels with the road, and F_d the disturbance force due to gravity and friction. The equation of motion of the car is simply

m dv/dt = F − F_d. (3.1)

Figure 3.1: Block diagram of a cruise control system for an automobile. The throttle-controlled engine generates a torque T that is transmitted to the ground through the gearbox and wheels. Combined with the external forces from the environment, such as aerodynamic drag and gravitational forces on hills, the net force causes the car to move. The velocity of the car v is measured by a control system that adjusts the throttle through an actuation mechanism. A human interface allows the system to be turned on and off and the reference speed v_r to be established.

The force F is generated by the engine, whose torque is proportional to the rate of fuel injection, which is itself proportional to a control signal 0 ≤ u ≤ 1 that controls the throttle position. The torque also depends on engine speed ω. A simple representation of the torque at full throttle is given by the torque curve

T(ω) = T_m ( 1 − β (ω/ω_m − 1)² ), (3.2)

where the maximum torque T_m is obtained at engine speed ω_m. Typical parameters are T_m = 190 Nm, ω_m = 420 rad/s (about 4000 RPM) and β = 0.4. Let n be the gear ratio and r the wheel radius.

Figure 3.2: Torque curves for a typical car engine. The graph on the left shows the torque generated by the engine as a function of the angular velocity of the engine, while the curve on the right shows torque as a function of car speed for different gears (n = 1 to n = 5).

The engine speed is related to the velocity through the expression

ω = (n/r) v =: α_n v,

and the driving force can be written as

F = (nu/r) T(ω) = α_n u T(α_n v).

Typical values of α_n for gears 1 through 5 are α_1 = 40, α_2 = 25, α_3 = 16, α_4 = 12 and α_5 = 10. The inverse of α_n has a physical interpretation as the effective wheel radius. Figure 3.2 shows the torque as a function of engine speed and vehicle speed. The figure shows that the effect of the gear is to flatten the torque curve so that almost full torque can be obtained over almost the whole speed range.

Figure 3.3: Car with cruise control encountering a sloping road: a schematic diagram is shown in (a), and (b) shows the response in speed and throttle when a slope of 4° is encountered. The hill is modeled as a net change in hill angle θ of 4 degrees, with a linear change in the angle between t = 5 and t = 6. The PI controller has proportional gain k_p = 0.5 and integral gain k_i = 0.1.

The disturbance force F_d has three major components: F_g, the forces due to gravity; F_r, the forces due to rolling friction; and F_a, the aerodynamic drag. Letting the slope of the road be θ, gravity gives the force F_g = m g sin θ, as illustrated in Figure 3.3a, where g = 9.8 m/s² is the gravitational constant. A simple model of rolling friction is

F_r = m g C_r sgn(v),

where C_r is the coefficient of rolling friction and sgn(v) is the sign of v (±1) or zero if v = 0. A typical value for the coefficient of rolling friction is C_r = 0.01. Finally, the aerodynamic drag is proportional to the square of the speed:

F_a = ½ ρ C_d A v²,

where ρ is the density of air, C_d is the shape-dependent aerodynamic drag coefficient and A is the frontal area of the car. Typical parameters are ρ = 1.3 kg/m³, C_d = 0.32 and A = 2.4 m².

Summarizing, we find that the car can be modeled by

m dv/dt = α_n u T(α_n v) − mg C_r sgn(v) − (1/2) ρ C_d A v² − mg sin θ, (3.3)

where the function T is given by equation (3.2). The model (3.3) is a dynamical system of first order. The state is the car velocity v, which is also the output. The input is the signal u that controls the throttle position, and the disturbance is the force F_d, which depends on the slope of the road. The system is nonlinear because of the torque curve and the nonlinear character of the aerodynamic drag. There can also be variations in the parameters; e.g., the mass of the car depends on the number of passengers and the load being carried in the car.

We add to this model a feedback controller that attempts to regulate the speed of the car in the presence of disturbances. We shall use a PI (proportional-integral) controller, which has the form

u(t) = k_p e(t) + k_i ∫₀ᵗ e(τ) dτ,

where e = v_r − v is the error between the desired (reference) speed v_r and the actual speed v. This controller can itself be realized as an input/output dynamical system by defining a controller state z and implementing the differential equation

dz/dt = v_r − v,   u = k_p (v_r − v) + k_i z. (3.4)

As discussed briefly in the introduction, the integrator (represented by the state z) ensures that in steady state the error will be driven to zero, even when there are disturbances or modeling errors. (The design of PI controllers is the subject of Chapter 10.)

Figure 3.3b shows the response of the closed loop system, consisting of equations (3.3) and (3.4), when it encounters a hill. The figure shows that even if the hill is so steep that the throttle changes from 0.17 to almost full throttle, the largest speed error is less than 1 m/s, and the desired velocity is recovered after 20 s.

Many approximations were made when deriving the model (3.3). It may seem surprising that such a seemingly complicated system can be described by the simple model (3.3). It is important to make sure that we restrict our use of the model to the uncertainty lemon conceptualized in Figure 2.15b. The model is not valid for very rapid changes of the throttle, because we have ignored the details of the engine dynamics; neither is it valid for very slow changes, because the properties of the engine will change over the years. Nevertheless the model is very useful for the design of a cruise control system. As we shall see in later chapters, the reason for this is the inherent robustness of feedback systems: even if the model is not perfectly accurate, we can use it to design a controller and make use of the feedback in the controller to manage the uncertainty in the system.
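The qualitative behavior in Figure 3.3b can be reproduced with a few lines of simulation code. The sketch below is my own, not the authors': it uses forward-Euler integration, an assumed car mass of 1600 kg and fourth gear, with the throttle clamped to the interval [0, 1]; only the parameter values quoted in the text are taken from the book.

```python
# Rough simulation sketch of the closed loop (3.3)-(3.4) encountering a 4 degree
# hill. Integration scheme, car mass and gear choice are assumptions.
import math

m, g = 1600.0, 9.8
C_r, rho, C_d, A = 0.01, 1.3, 0.32, 2.4
T_m, omega_m, beta = 190.0, 420.0, 0.4
alpha_n = 12.0                        # 4th gear
k_p, k_i, v_r = 0.5, 0.1, 20.0        # PI gains and reference speed

def torque(omega):
    return T_m * (1.0 - beta * (omega / omega_m - 1.0) ** 2)

def theta(t):                         # slope ramps linearly from 0 to 4 degrees over t = 5..6 s
    return math.radians(4.0) * min(max(t - 5.0, 0.0), 1.0)

# start in steady state on a flat road (throttle comes out near 0.17, as in the text)
F_d0 = m * g * C_r + 0.5 * rho * C_d * A * v_r ** 2
z = F_d0 / (alpha_n * torque(alpha_n * v_r)) / k_i
v, dt = v_r, 0.01
for k in range(3000):                 # simulate 30 s
    t = k * dt
    e = v_r - v
    u = min(max(k_p * e + k_i * z, 0.0), 1.0)        # saturated throttle
    F = alpha_n * u * torque(alpha_n * v)
    F_d = (m * g * C_r * math.copysign(1.0, v)
           + 0.5 * rho * C_d * A * v ** 2
           + m * g * math.sin(theta(t)))
    v += dt * (F - F_d) / m           # equation (3.3)
    z += dt * e                       # equation (3.4)
print(f"final speed {v:.2f} m/s, final throttle {u:.2f}")
```

Running the loop shows the speed dipping when the slope appears and then returning toward the reference as the integrator state z raises the throttle to the new steady-state value.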

The cruise control system also has a human-machine interface that allows the driver to communicate with the system. There are many different ways to implement this system; one version is illustrated in Figure 3.4. The system has four buttons: on-off, set/decelerate, resume/accelerate and cancel. The operation of the system is governed by a finite state machine that controls the modes of the PI controller and the reference generator. Implementation of controllers and reference generators will be discussed more fully in Chapter 10.

Figure 3.4: Finite state machine for the cruise control system. The figure on the left shows some typical buttons used to control the system. The controller can be in one of four modes, corresponding to the nodes in the diagram on the right. Transitions between the modes are controlled by pressing one of five buttons on the cruise control interface: on, off, set/accel, resume or cancel.

The use of control in automotive systems goes well beyond the simple cruise control system described here. Modern applications include emissions control, traction control and power control (especially in hybrid vehicles). Many automotive applications are discussed in detail in the book by Kiencke and Nielsen [KN] and the survey papers by Powers et al. [BP96, PN].

3.2 BICYCLE DYNAMICS

The bicycle is an interesting dynamical system with the feature that one of its key properties is due to a feedback mechanism that is created by the design of the front fork. A detailed model of a bicycle is complex because the system has many degrees of freedom and the geometry is complicated. However, a great deal of insight can be obtained from simple models.

To derive the equations of motion we assume that the bicycle rolls on the horizontal xy plane. Introduce a coordinate system that is fixed to the bicycle with the ξ-axis through the contact points of the wheels with the ground, the η-axis horizontal and the ζ-axis vertical, as shown in Figure 3.5. Let v be the velocity of the bicycle at the rear wheel, b the wheel base, ϕ the tilt angle and δ the steering angle. The coordinate system rotates around the point O with the angular velocity ω = vδ/b, and an observer fixed to the bicycle experiences forces due to the motion of the coordinate system.

The tilting motion of the bicycle is similar to an inverted pendulum, as shown in the rear view in Figure 3.5b. To model the tilt, consider the rigid body obtained when the wheels, the rider and the front fork assembly are fixed to the bicycle frame. Let m be the total mass of the system, J the moment of inertia of this body with respect to the ξ-axis, and D the product of inertia with respect to the ξζ axes. Furthermore, let the ξ and ζ coordinates of the center of mass with
