Advanced Control Systems

by Jntu Heroes
Jawaharlal Nehru Technological University Anantapur, College of Engineering
UNIT - I INTRODUCTION

Introduction

The classical control theory and methods (such as root locus) that we have been using in class to date are based on a simple input-output description of the plant, usually expressed as a transfer function. These methods do not use any knowledge of the interior structure of the plant, limit us to single-input single-output (SISO) systems, and, as we have seen, allow only limited control of the closed-loop behavior when feedback control is used. Modern control theory overcomes many of these limitations by using a much "richer" description of the plant dynamics. The so-called state-space description provides the dynamics as a set of coupled first-order differential equations in a set of internal variables known as state variables, together with a set of algebraic equations that combine the state variables into physical output variables.

In a state-space system, the internal state of the system is explicitly accounted for by an equation known as the state equation. The system output is given in terms of a combination of the current system state and the current system input, through the output equation. These two equations form a system of equations known collectively as the state-space equations. The state space is the vector space that consists of all the possible internal states of the system.

For a system to be modeled using the state-space method, the system must meet one requirement: it must be "lumped". "Lumped" in this context means that we can find a finite-dimensional state vector which fully characterizes all such internal states of the system. This text mostly considers linear state-space systems, where the state and output equations satisfy the superposition principle and the state space is linear. However, the state-space approach is equally valid for nonlinear systems, although some specific methods are not applicable to them.

Central to the state-space notation is the idea of a state. A state of a system is the current value of the internal elements of the system, which change separately from (but not independently of) the output of the system. In essence, the state of a system is an explicit account of the values of the internal system components. Here are some examples:

- Consider an electric circuit with both an input and an output terminal. This circuit may contain any number of inductors and capacitors. The state variables may represent the magnetic and electric fields of the inductors and capacitors, respectively.
- Consider a spring-mass-dashpot system. The state variables may represent the compression of the spring and the velocity of the mass.
- Consider a chemical reaction where certain reagents are poured into a mixing container, and the output is the amount of the chemical product produced over time. The state variables may represent the amounts of unreacted chemicals in the container, or other properties such as the quantity of thermal energy in the container (which can serve to facilitate the reaction).
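As a brief worked illustration added to these notes (the symbols m for the mass, c for the dashpot coefficient, k for the spring constant, u for the applied force, and p for the position of the mass are assumptions for this sketch), consider the spring-mass-dashpot example above. Newton's second law gives

m p'' + c p' + k p = u

Choosing the state variables x1 = p (spring compression) and x2 = p' (mass velocity) turns this single second-order equation into two coupled first-order state equations,

x1' = x2
x2' = -(k/m) x1 - (c/m) x2 + (1/m) u

with the output equation y = x1 if the measured output is the position. This is exactly the set of coupled first-order differential equations, plus an algebraic output equation, described above.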
When modeling a system using a state-space equation, we first need to define three vectors:

Input variables: A SISO (single-input single-output) system will have only a single input value, but a MIMO (multiple-input multiple-output) system may have multiple inputs. We need to define all the inputs to the system and arrange them into a vector.

Output variables: These are the system output values; in the case of a MIMO system, we may have several. Output variables should be independent of one another, and depend only on a linear combination of the input vector and the state vector.

State variables: The state variables represent values from inside the system that can change over time. In an electric circuit, for instance, the node voltages or the mesh currents can be state variables. In a mechanical system, the forces applied by springs, gravity, and dashpots can be state variables.

We denote the input variables with u, the output variables with y, and the state variables with x. In essence, we have the following relationship:

y = f(x, u)

where f(x, u) is our system. Also, the state variables can change with respect to the current state and the system input:

x' = g(x, u)

where x' is the rate of change of the state variables. We will define f(x, u) and g(x, u) in the sections that follow.

The state equations and the output equations of linear systems can be expressed in terms of matrices A, B, C, and D:

x' = Ax + Bu
y = Cx + Du

Because the form of these equations is always the same, we can use an ordered quadruplet to denote a system. We can use the shorthand (A, B, C, D) to denote a complete state-space representation. Also, because the state equation is very important for our later analysis, we can write the ordered pair (A, B) to refer to the state equation alone.

Obtaining the State-Space Equations

The beauty of state equations is that they can be used to transparently describe systems that are both continuous and discrete in nature. Some texts will differentiate notation between the discrete and continuous cases, but this text will not make such a distinction; instead, we will use the generic coefficient matrices A, B, C, and D for both continuous and discrete systems. Occasionally this book may employ the subscript C to denote a continuous-time version of a matrix and the subscript D to denote the discrete-time version of the same matrix. Other texts may use the letters F, H, and G for continuous systems and Γ and Θ for discrete systems. However, if we keep track of our time-domain system, we don't need to worry about such notations.
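As a small illustration added to these notes (not part of the original text), the following Python sketch uses numpy and scipy (a substitution, since the notes themselves mention MATLAB) to assemble the quadruplet (A, B, C, D) for the spring-mass-dashpot example worked above and to compute its step response. The numerical values m = 1, c = 0.5, and k = 2 are arbitrary assumptions for the sketch.

import numpy as np
from scipy import signal

# Assumed parameters for the spring-mass-dashpot sketch (not from the notes)
m, c, k = 1.0, 0.5, 2.0

# State vector x = [position, velocity]; input u = applied force; output y = position
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# The ordered quadruplet (A, B, C, D) fully describes this linear system
sys = signal.StateSpace(A, B, C, D)

# Step response of the state-space model
t, y = signal.step(sys)
print(y[-1])  # the position approaches the static value 1/k = 0.5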
IMPORTANCE

The state-space model of a continuous-time dynamic system can be derived either from the system model given in the time domain by a differential equation or from its transfer function representation. Both cases will be considered in this section. Four state-space forms that are often used in modern control theory and practice are presented: the phase variable form (controller form), the observer form, the modal form, and the Jordan form.

APPLICATIONS

The analysis and design of the following classes of systems can be carried out using state-space methods:
1. Linear systems
2. Non-linear systems
3. Time-varying systems
4. Multiple-input, multiple-output (MIMO) systems

State-space methods of feedback control system design and design optimization for time-invariant and time-varying deterministic, continuous systems include: pole placement, observability, controllability, modal control, observer design, the theory of optimal processes and Pontryagin's maximum principle, the linear quadratic optimal regulator problem, Lyapunov functions and stability theorems, and linear optimal open-loop control, with an introduction to the calculus of variations. The course is intended for engineers with a variety of backgrounds; examples are drawn from mechanical, electrical, and chemical engineering applications. MATLAB is used extensively during the course for analysis, design, and simulation.

Topics covered:
- Transfer functions ↔ state-space representations
- Solution of linear differential equations; linearization
- Canonical systems, modes, modal signal-flow diagrams
- Observability and controllability; rank tests
- Stability
- State feedback control; accommodating reference inputs
- Linear observer design
- Separation principle
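Since the notes state that a state-space model can be derived from a transfer function and list "Transfer functions ↔ state-space representations" as a topic, here is a short Python sketch of the conversion in both directions (again using scipy rather than the MATLAB mentioned in the notes). The transfer function G(s) = 1/(s^2 + 3s + 2) is an assumed example, not taken from the notes.

import numpy as np
from scipy import signal

# Assumed transfer function G(s) = 1 / (s^2 + 3s + 2)
num = [1.0]
den = [1.0, 3.0, 2.0]

# Transfer function -> one state-space realization (A, B, C, D).
# scipy returns one particular (controller-type) realization; the observer,
# modal, and Jordan forms mentioned above describe the same input-output behavior.
A, B, C, D = signal.tf2ss(num, den)
print(A)

# State space -> transfer function recovers the same G(s)
num2, den2 = signal.ss2tf(A, B, C, D)
print(np.round(num2, 6), np.round(den2, 6))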
UNIT - II INTRODUCTION

The importance of linear multivariable control systems is evidenced by the papers published in recent years. Despite the extensive literature, certain fundamental matters are not well understood. This is confirmed by numerous inaccurate stability analyses, erroneous statements about the existence of stable control, and overly severe constraints on compensator characteristics. The basic difficulty has been a failure to account properly for all dynamic modes of system response. This failure is attributable to a limitation of the transfer-function matrix: it fully describes a linear system if and only if the system is controllable and observable. The concepts of controllability and observability were introduced by Kalman and have been employed primarily in the study of optimal control.

In this unit, the primary objective is to determine the controllability and observability of composite systems which are formed by the interconnection of several multivariable subsystems. To avoid the limitations of the transfer-function matrix, the beginning sections deal with multivariable systems as described by a set of n first-order, constant-coefficient differential equations. Later, the extension to systems described by transfer-function matrices is made. Throughout, the emphasis is on the fundamental aspects of describing multivariable control systems; detailed design procedures are not treated.

Introduction

In the context of this course, the main objective of using state-space equations to model systems is the design of suitable compensation schemes to control these systems. Typically, the control signal u(t) is a function of several measurable state variables. Thus, a state-variable controller that operates on the measurable information is developed. State-variable controller design typically comprises three steps:
1. Assume that all the state variables are measurable and use them to design a full-state feedback control law.
2. In practice, only certain states or combinations of them can be measured and provided as system outputs. An observer is constructed to estimate the states that are not directly sensed and available as outputs. Reduced-order observers take advantage of the fact that certain states are already available as outputs and therefore do not need to be estimated.
3. Appropriately connecting the observer to the full-state feedback control law yields a state-variable controller, or compensator.

Definitions and notation: controllability and the controllability matrix

Definition: A control system is said to be (completely) controllable if, for all initial times t0 and all initial states x(t0), there exists some input function u(t) that drives the state vector x(t) to any desired final state in some finite time T > t0.

CONTROLLABILITY TEST: Given a system defined by the linear state equation

x' = Ax + Bu

the controllability matrix is defined as

Pc = [B  AB  A^2 B  ...  A^(n-1) B]

It can be proved that the system is controllable if and only if

rank(Pc) = n

For the general multiple-input case with m inputs, A is an n x n matrix and B is n x m. Then Pc consists of n matrix blocks B, AB, A^2 B, ..., A^(n-1) B, each with dimension n x m, stacked side by side. Thus Pc has dimension n x n·m, having more columns than rows. For the single-input case, B consists of a single column, yielding a square n x n controllability matrix Pc. Therefore, a single-input linear system is controllable if and only if the associated controllability matrix Pc is nonsingular, i.e. det(Pc) ≠ 0.
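As a numerical illustration added to these notes, the sketch below builds the controllability matrix Pc = [B  AB  ...  A^(n-1) B] defined above for an assumed two-state, single-input system and applies the rank test rank(Pc) = n. The specific A and B matrices are arbitrary examples.

import numpy as np

def controllability_matrix(A, B):
    """Build Pc = [B, AB, A^2 B, ..., A^(n-1) B] by stacking the blocks side by side."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Assumed two-state, single-input example (not from the notes)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

Pc = controllability_matrix(A, B)
n = A.shape[0]

# Rank test: the system is controllable if and only if rank(Pc) = n.
# For this single-input case Pc is square, so this is the same as Pc being nonsingular.
print(Pc)
print("controllable:", np.linalg.matrix_rank(Pc) == n)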
