
Regulation and Control Technology


H. Unbehauen, F. Ley

Regulation Technology
H. Unbehauen

1 Introduction

1.1 Classification of regulation and control technology

Automated industrial processes are characterized by automatically operating machines and devices, which often form very complex plants or systems. Their subsystems are coordinated by a higher-level, strongly information-oriented control technology, whose essential foundations include regulation and control technology as well as process data processing. A typical feature of regulation and control systems is that in them a targeted influencing of certain variables (signals) and an information processing take place. The regularities of these regulation and control processes (in technology, nature and society) led N. Wiener [1] to introduce the concept of cybernetics. Since regulation and control technology is largely device-independent, the following focuses more on the system-theoretical than on the device-technical fundamentals.

1.2 Representation in the block diagram

In a regulation or control system, signals are processed and transmitted. Such systems are therefore also referred to as transmission systems (or transmission elements). They have a clear direction of action, indicated by arrows on the input and output signals, and are non-reactive. A single-variable system has one input signal x_e(t) and one output signal x_a(t); multi-variable systems accordingly have several variables at the input or output of the transmission element (also called a subsystem). Individual transmission elements are represented by boxes that can be connected to one another via signals to form larger units (overall systems). The concept of the system ranges from the simple single-variable system through the multi-variable system to hierarchically structured multi-level systems. Figure 1-1 shows a simple example of a block diagram.
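The idea of non-reactive transmission elements connected by directed signals can be mirrored by plain function composition. The following sketch is purely illustrative (the block types and values are invented): each block maps an input signal, modeled as a function of time, to an output signal, and a series connection feeds the output of one block into the next.

```python
# Each block maps an input signal (a function of time) to an output signal.
def gain(K):
    """A proportional transmission element with gain K (illustrative example)."""
    return lambda x_e: (lambda t: K * x_e(t))

def series(*blocks):
    """Connect blocks so that the output of one feeds the input of the next."""
    def system(x_e):
        signal = x_e
        for block in blocks:
            signal = block(signal)
        return signal
    return system

x_e = lambda t: 2.0                    # constant input signal
x_a = series(gain(3.0), gain(0.5))(x_e)
print(x_a(0.0))                        # 2 * 3 * 0.5 = 3.0
```

Because the blocks are non-reactive, the composition is strictly one-directional: later blocks cannot influence earlier ones.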
The most important symbols used in block diagrams are listed in Table 1-1.

Table 1-1. The most important symbols for signal links and systems in the block diagrams of regulation and control technology

H. Unbehauen, F. Ley: The Engineering Knowledge: Regulation and Control Technology, Springer-Verlag Berlin Heidelberg 2014

Fig. 1-1. Example of a block diagram

1.3 Differentiation between regulation and control

According to DIN [2], regulation is a process in which one variable, the controlled variable, is continuously recorded (measured), compared with another variable, the reference variable, and influenced in the sense of an adjustment to the reference variable depending on the result of this comparison. The resulting action sequence takes place in a closed loop, the control loop. Control, in contrast, is the process in a system in which one or more variables as input variables influence other variables as output variables according to the regularities peculiar to the system. Characteristic of control is the open sequence of action via the individual transmission element or the control chain. From the block diagram (Fig. 1-2a) it is easy to see that regulation is characterized by the following steps: measurement of the controlled variable y; formation of the control deviation e = w − y by comparing the actual value of the controlled variable y with the setpoint w (reference variable); processing of the control deviation such that it is reduced or eliminated by changing the manipulated variable u. If one compares a regulation with a control, the following differences are easily identified. The regulation represents a closed sequence of action (control loop); because of the closed operating principle (negative feedback) it can counteract all disturbances z; it can, however, become unstable, i.e. oscillations in the loop no longer subside but (theoretically) grow beyond all limits even for bounded input variables w and z. The control represents an open sequence of action (control chain); it can only counteract the disturbance variables for which it was designed, while other disturbances cannot be eliminated; it cannot become unstable as long as the object to be controlled is itself stable.

According to Fig. 1-2a, a control loop consists of four main components: controlled system, measuring element, controller and actuator. This block diagram shows that the task of controlling a system or process (controlled system) is either to hold the controlled variable y(t), continuously recorded by the measuring element, at a constant setpoint w(t) regardless of external disturbances z(t) (fixed-value control or disturbance-rejection control), or to make y(t) track a variable setpoint w(t) (reference variable) (follow-up or servo control). This task is carried out by a computing device, the controller R. The controller forms the control deviation e(t) = w(t) − y(t), i.e. the difference between the setpoint w(t) and the actual value y(t) of the controlled variable, processes it according to its mode of operation (e.g. proportional, integral or differential action) and generates a signal u_R(t), which via the actuator acts as the manipulated variable u(t) on the controlled system and, for example in the case of disturbance-rejection control, counteracts the disturbance signal z(t).

Fig. 1-2. Comparison of a regulation (a) and a control (b) in the block diagram

The control loop is characterized by this closed signal path, the task of the regulation being to eliminate any control deviation e(t) that occurs as quickly as possible, or at least to keep it small. The symbols used here correspond to the common international terms used in the following.

1.4 Examples of regulation and control systems

Using a few typical applications, the mode of operation of a closed-loop and an open-loop control system is shown below, without explaining the internal functioning of the devices. Figure 1-3 shows a schematic comparison of a regulation and a control for a space heating system. With the control, Fig. 1-3a, the outside temperature ϑ_A is measured by a temperature sensor and fed to the control unit, which adjusts the heating flow Q via the motor M and the valve V as a function of ϑ_A. The slope of the characteristic curve Q = f(ϑ_A) can be preset on the control unit. As the block diagram shows, a well-adjusted control compensates only for the effects of a change in the outside temperature z₂ = ϑ_A, but not for disturbances of the room temperature caused, for example, by an open window or strong sunlight. If the room temperature ϑ_R is regulated, Fig. 1-3b, it is measured and compared with the setpoint w (e.g. w = 20 °C). If the room temperature deviates from the setpoint, the heating flow Q is changed by a controller (R) that processes the deviation. All changes of the room temperature ϑ_R are thus processed by the controller and, as far as possible, eliminated. The block diagrams again show the closed sequence of action of the regulation (control loop) and the open sequence of the control (control chain). Figure 1-4 shows some further application examples for controls.
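The different disturbance behavior of the two heating schemes can be illustrated numerically. The following sketch is not taken from the text and all numbers are invented: a hypothetical first-order room model is driven once by an open-loop control designed for the disturbance-free case, and once by a closed loop with an integral controller acting on the deviation e = w − ϑ.

```python
# Minimal sketch (all numbers invented): room model T * dtheta/dt = -theta + q + z,
# where q is the heating input and z an unmeasured disturbance.
def simulate(controller, T=5.0, dt=0.01, t_end=80.0, w=20.0):
    theta, q = 0.0, 0.0
    for k in range(int(t_end / dt)):
        z = -3.0 if k * dt > 40.0 else 0.0   # unforeseen disturbance (open window)
        q = controller(w, theta, q, dt)
        theta += dt / T * (-theta + q + z)
    return theta

def open_loop(w, theta, q, dt):
    # Feed-forward designed for the disturbance-free case: q = w gives
    # theta = w in the steady state only as long as z = 0.
    return w

def integral_controller(w, theta, q, dt, ki=0.5):
    return q + ki * (w - theta) * dt         # accumulates the control deviation

print(round(simulate(open_loop), 1))            # offset remains, near w + z = 17
print(round(simulate(integral_controller), 1))  # deviation eliminated, near 20
```

The open chain leaves a permanent offset once the unforeseen disturbance acts, while the closed loop drives the deviation back toward zero.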
This clearly shows the difference between fixed-value control and follow-up control. For example, in a steam turbine the speed can be held at a fixed setpoint (fixed-value control), whereas in the course control of a ship the setpoint may be changed, for instance when sailing around an obstacle, and the course control then has the task of tracking the ship to this setpoint (follow-up control).

Fig. 1-3. Comparison of a control (a) and a regulation (b) for a room heating system: schematic sketches and associated block diagrams

As these examples already show, the signal transmission in regulation and control systems can take various forms, i.e. with mechanical, hydraulic, pneumatic or electrical auxiliary energy. Independent of the technical implementation, however, the signals are considered in the following only with regard to their information content and are generally understood as pure (unitless) mathematical functions. The space heating control discussed at the beginning represents a certain type of control belonging to the group of command controls, which are characterized in the steady state by a fixed relationship between input and output variables, e.g. by the heating characteristic curve. In addition, there are so-called program controls, which include schedule controls, route plan controls and sequence controls as well as their combinations. Schedule controls run according to a fixed schedule without feedback. Route plan controls only continue with the individual steps when certain conditions have been reached that are reported by feedback signals (not to be confused with the feedback in control loops), realized e.g. by limit switches. Sequence controls are characterized by a specific fixed or variable program that runs step by step, the individual steps being triggered by feedback signals. A typical example of a combined schedule and sequence control is the washing machine. Since program controls are now largely implemented with digital technology, they are often also referred to as binary controls; they use signals that can assume only two values. The modern programmable logic controllers (PLC) are based on this principle, which is discussed in detail in Chapter 14. Chapters 2 to 13 deal with control engineering aspects.

2 Models and system properties

2.1 Mathematical models

Fig. 1-4 a-d. Application examples for regulations

The static and dynamic behavior of a regulation or control system can either be described analytically by physical or other regularities, or be determined on the basis of measurements and converted into a mathematical model, represented e.g. by differential equations, algebraic or logical equations. The special form of the model, in terms of its structure and parameters, depends essentially on the system properties. The most important properties of control systems are shown in Figure 2-1. Mathematical system models, which describe the behavior of a real system in abstract, possibly simplified form, but with sufficient accuracy, usually form the basis for the analysis or synthesis of the real technical system and often also for its computational simulation [1]. In this way, various modes of operation can easily be checked by a simulation of the system as early as the design stage.
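Such a design-stage simulation can be sketched in a few lines. The model below is purely illustrative (a first-order lag behind an invented square-root valve characteristic): its static behavior is "measured" by stepping the input and recording the steady-state output, exactly as one would probe a real plant.

```python
import math

# Illustrative model (invented): T * x_a' = -x_a + f(x_e), a first-order lag
# behind a nonlinear square-root valve characteristic f.
def model_step(x_a, x_e, dt=0.01, T=2.0):
    f = math.sqrt(max(x_e, 0.0))          # nonlinear static characteristic
    return x_a + dt / T * (-x_a + f)

def steady_state(x_e, n=5000):
    """Hold the input constant until the output has settled."""
    x_a = 0.0
    for _ in range(n):
        x_a = model_step(x_a, x_e)
    return x_a

# Static characteristic x_a,s = f(x_e,s) obtained from simulated step experiments:
characteristic = {x_e: round(steady_state(x_e), 3) for x_e in (0.0, 1.0, 4.0, 9.0)}
print(characteristic)                     # {0.0: 0.0, 1.0: 1.0, 4.0: 2.0, 9.0: 3.0}
```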

Fig. 2-1. Aspects for describing the properties of control systems

2.2 System properties

Linear and nonlinear systems

In systems, a distinction is usually made between dynamic and static behavior. The dynamic behavior, or time behavior, describes the time course of the system output variable x_a(t) for a given system input variable x_e(t); thus x_e(t) and x_a(t) represent two variables that are assigned to one another. As an example, Figure 2-2 shows the response x_a(t) of a system to a stepwise change of the input variable x_e(t). In this example x_a(t) describes the transition from a stationary initial state at time t₀ to a stationary final state x_a(∞) (reached theoretically for t → ∞). If one now varies the step height x_e,s = const as shown in Fig. 2-3 and plots the steady-state values of the output variable x_a,s = x_a(∞) over x_e,s, one obtains the static characteristic

x_a,s = f(x_e,s).  (2-1)

In the further use of (2-1), for the sake of simpler representation, the notation x_a,s = x_a and x_e,s = x_e is adopted, where x_a and x_e each represent stationary values of x_a(t) and x_e(t). If (2-1) describes the equation of a straight line, the system is called linear. For a linear system the superposition principle applies, which states the following: if one lets n arbitrary input variables x_ei(t) act on the input of a system one after the other and determines the system responses x_ai(t), then the system response to the sum of the n input variables is the sum of the n individual responses x_ai(t). If the superposition principle is not fulfilled, the system is nonlinear. Linear continuous systems can usually be described by linear differential equations. As an example, consider an ordinary linear differential equation:

Σ_{i=0}^{n} a_i(t) dⁱx_a(t)/dtⁱ = Σ_{j=0}^{n} b_j(t) dʲx_e(t)/dtʲ.  (2-2)

As is easily seen, the superposition principle applies here as well.
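The superposition principle can be checked numerically. In the sketch below (coefficients chosen arbitrarily, not from the text), a linear first-order system passes the test, while a system with a squared state term fails it.

```python
def response(f, x_e, dt=0.001, n=5000):
    """Euler-integrate x_a' = f(x_a, x_e(t)) from x_a(0) = 0; return x_a(n*dt)."""
    x_a = 0.0
    for k in range(n):
        x_a += dt * f(x_a, x_e(k * dt))
    return x_a

linear    = lambda x_a, x_e: -x_a + x_e        # x_a' = -x_a + x_e
nonlinear = lambda x_a, x_e: -x_a**2 + x_e     # squared term breaks superposition

u1 = lambda t: 1.0                             # two arbitrary input signals
u2 = lambda t: 0.5 * t

for f in (linear, nonlinear):
    sum_of_responses = response(f, u1) + response(f, u2)
    response_of_sum  = response(f, lambda t: u1(t) + u2(t))
    print(abs(sum_of_responses - response_of_sum) < 1e-9)
    # -> True for the linear system, False for the nonlinear one
```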
Since a largely complete theory is available today for the treatment of linear systems, one generally attempts to carry out a linearization when nonlinearities occur. In many cases the system behavior can be described with sufficient accuracy by a linearized approach. How the linearization is carried out depends on the nonlinear character of the system concerned. Therefore, a distinction is made in the following between the linearization of a static characteristic curve and the linearization of a nonlinear differential equation.

(a) Linearization of a static characteristic curve. The nonlinear characteristic curve describing the static behavior of a system in a certain working range is given by x_a = f(x_e); equation (2-1) thus gives the relationship between the signal values in the steady state.

Fig. 2-2. Example of the dynamic behavior of a system

Fig. 2-3. Example of a the dynamic and b the static behavior of a system

By expansion into a Taylor series about the respective operating point (x̄_e, x̄_a), the nonlinear equation (2-1) can be written as

x_a = f(x̄_e) + df/dx_e|_{x_e=x̄_e} (x_e − x̄_e) + (1/2!) d²f/dx_e²|_{x_e=x̄_e} (x_e − x̄_e)² + …  (2-3)

If the deviations (x_e − x̄_e) from the operating point are small, the terms with the higher derivatives can be neglected, and from (2-3) follows the linear relationship

x_a − x̄_a ≈ K (x_e − x̄_e), with x̄_a = f(x̄_e) and K = df/dx_e|_{x_e=x̄_e}.  (2-4)

The same procedure is also possible for a function of two or more independent variables, x_a = f(x_e1, x_e2). In this case, analogously to (2-4), one obtains the linear relationship

x_a − x̄_a ≈ K₁ (x_e1 − x̄_e1) + K₂ (x_e2 − x̄_e2).  (2-5)

(b) Linearization of nonlinear differential equations. A nonlinear dynamic system with the input variable x_e(t) = u(t) and the output variable x_a(t) = y(t) is described by the nonlinear first-order differential equation

ẏ(t) = f[y(t), u(t)],  (2-6)

which is to be linearized in the vicinity of a rest position (ȳ, ū). A rest position ȳ for a constant input variable ū is characterized by y(t) being constant in time, i.e. ẏ(t) = 0. For a given input variable ū, the rest positions of the system are therefore obtained by solving the equation 0 = f(ȳ, ū). If Δy(t) denotes the deviation of the variable y(t) from the rest position ȳ, then y(t) = ȳ + Δy(t), from which ẏ(t) = Δẏ(t) follows. Correspondingly, for the second variable, u(t) = ū + Δu(t). The Taylor series expansion of (2-6) about the rest position (ȳ, ū) yields, if the terms with the higher derivatives are neglected, approximately the linear differential equation

Δẏ(t) ≈ A Δy(t) + B Δu(t)  (2-7)

with

A = ∂f(y, u)/∂y|_{y=ȳ, u=ū} and B = ∂f(y, u)/∂u|_{y=ȳ, u=ū}.

One proceeds correspondingly for nonlinear vector differential equations

ẋ(t) = f[x(t), u(t)], x(t) = [x₁(t) … x_n(t)]ᵀ, u(t) = [u₁(t) … u_r(t)]ᵀ.  (2-8)

Here f(x, u), x(t) and u(t) are column vectors. The linearization yields the linear vector differential equation

Δẋ(t) ≈ A Δx(t) + B Δu(t),  (2-9)

where A and B contain the partial derivatives as Jacobian matrices:

A = [∂f₁(x, u)/∂x₁ … ∂f₁(x, u)/∂x_n; … ; ∂f_n(x, u)/∂x₁ … ∂f_n(x, u)/∂x_n], evaluated at x = x̄, u = ū.  (2-10)

B = [∂f₁(x, u)/∂u₁ … ∂f₁(x, u)/∂u_r; … ; ∂f_n(x, u)/∂u₁ … ∂f_n(x, u)/∂u_r], evaluated at x = x̄, u = ū.  (2-11)

Systems with lumped and distributed parameters

A transmission system can be imagined as composed of a finite number of idealized individual elements, e.g. ohmic resistances, capacitances, inductances, dampers, springs, masses, etc. Such systems are referred to as systems with lumped parameters; they are described by ordinary differential equations. If a system consists of an infinite number of infinitesimally small individual elements of the type mentioned above, it represents a system with distributed parameters, which is described by partial differential equations. A typical example is an electrical line: the voltage along the line is a function of position and time and can therefore only be described by a partial differential equation.

Time-variant and time-invariant systems

If the system parameters are not constant but change as a function of time, the system is time-variant (time-variable, non-stationary). If this is not the case, the system is called time-invariant. Examples of time-variant systems are: a rocket (its mass changes), a nuclear reactor (burn-up), chemical processes (fouling). Time-invariant systems, whose parameters are constant, are more common and more important. In these systems a temporal shift of the input signal x_e(t) results in an equal temporal shift of the output signal x_a(t), without x_a(t) otherwise changing.

Systems with continuous and discrete operation

If a system variable (signal) y, e.g. the input or output variable of a system, is given at every point in time and is continuously variable within its limits, one speaks of a continuous signal curve (Fig. 2-4a).

Fig. 2-4. Distinguishing features for continuous and discrete signals: a continuous, b quantized, c time-discrete, d time-discrete and quantized
If the signal can assume only certain discrete amplitude values, it is a quantized signal (Fig. 2-4b). If, on the other hand, the value of the signal is known only at certain discrete points in time, it is a time-discrete (or, in short, discrete) signal (Fig. 2-4c). If the signal values are given at equidistant points in time with the interval T, one speaks of a sampled signal with the sampling period T; systems in which such signals are processed are also referred to as sampled-data systems. In all control systems in which a digital computer takes over, e.g., the functions of a controller, only time-discrete quantized signals can be processed (Fig. 2-4d).

Systems with deterministic or stochastic variables

A system variable can have either a deterministic or a stochastic character. The

Fig. 2-5. a Stable and b unstable system behavior x_a(t) for a bounded input variable x_e(t)

deterministic or stochastic properties refer both to the signals occurring in a system and to the parameters of the mathematical system model. In the deterministic case the signals and the mathematical model of a system are uniquely defined, so the behavior of the system over time is reproducible. In the stochastic case, by contrast, the signals acting on the system or the system model, e.g. a coefficient of the system equation, have stochastic, i.e. irregular, character. The values of these variables occurring in the signals or in the system can then only be described by stochastic laws at any point in time and are therefore no longer reproducible.

Causal systems

In a causal system the output variable x_a(t₁) at any point in time t₁ depends only on the course of the input variable x_e(t) up to this point in time t₁. A cause must therefore appear first,

Fig. 2-6. Symbolic representation of the system concept: a single-variable system, b multi-variable system, c multi-level system

before an effect occurs. All real systems are therefore causal.

Stable and unstable systems

A system is stable if and only if every bounded admissible input signal x_e(t) results in a likewise bounded output signal x_a(t). If this is not the case, the system is unstable (Fig. 2-5).

Single-variable and multi-variable systems

A system that has exactly one input and one output variable is called a single-variable system. A system with several input and/or output variables is called a multi-variable system. Large systems are often arranged in several levels and are therefore also referred to as multi-level systems (Fig. 2-6). In addition to the system properties discussed here there are several more; for example, the controllability and observability of a system are essential properties that describe the internal system behavior.

3 Description of linear continuous systems in the time domain

3.1 Description by means of differential equations

The transfer behavior of linear continuous systems can be described by linear differential equations. For systems with lumped parameters this leads to ordinary linear differential equations of the form (2-2) in 2.2, while systems with distributed parameters have partial linear differential equations as mathematical models. In the following, examples of the differential equations describing various types of systems are given.

Fig. 3-1. An electrical oscillating circuit

Electrical systems

For the treatment of electrical networks, Kirchhoff's laws are required:
1. The sum of the currents in a node is equal to zero: Σ i_i = 0.
2. The sum of the voltages around a mesh is equal to zero: Σ u_i = 0.
If one applies these laws to the two meshes and the node A of the oscillating circuit shown in Figure 3-1 and assumes i₃ = 0, one obtains after a short calculation the linear second-order differential equation with constant coefficients

T₂² d²x_a/dt² + T₁ dx_a/dt + x_a = x_e + T₁ dx_e/dt,  (3-1)

with the abbreviations T₁ = RC and T₂² = LC. For an unambiguous solution the two initial conditions x_a(0) and ẋ_a(0) must be given.

Mechanical systems

To set up the differential equations of mechanical systems, one needs the following laws: Newton's law, force and moment equilibria, and the conservation laws of momentum, angular momentum and energy. As an example of a mechanical system, the differential equation of the damped oscillator of Figure 3-2 is to be determined. Here c denotes the spring constant, d the damping constant and m the mass. The quantities x₁ (= x_a), x₂ and x_e each describe the velocities at the marked points. The application of the above laws yields, after a short intermediate calculation, the same differential equation (3-1) as for the electrical oscillating circuit considered previously.
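Equation (3-1) can be checked quickly by simulation. The component values below are invented; since the input is held constant, the derivative term on the right-hand side vanishes, and the output must settle at x_a = x_e.

```python
R, L, C = 100.0, 0.1, 1e-4            # illustrative component values
T1, T2sq = R * C, L * C               # T1 = RC, T2^2 = LC

def simulate(x_e=1.0, dt=1e-5, n=200_000):
    """Euler simulation of T2^2 x_a'' + T1 x_a' + x_a = x_e (constant input,
    so the dx_e/dt term of (3-1) drops out), starting from rest."""
    x_a, v = 0.0, 0.0                 # v = dx_a/dt
    for _ in range(n):
        acc = (x_e - T1 * v - x_a) / T2sq
        x_a += dt * v
        v   += dt * acc
    return x_a

print(round(simulate(), 3))           # steady state: x_a = x_e = 1.0
```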

Fig. 3-2. Damped mechanical oscillator

For the mechanical oscillator the abbreviations are T₁ = m/d and T₂² = m/c; both systems are therefore analogous to each other.

Thermal systems

To determine the differential equations of thermal systems, one needs the conservation laws of internal energy or enthalpy as well as the laws of heat conduction and heat transfer. As an example, consider the mathematical model of the mass and heat transport in a thick-walled pipe through which a fluid flows, as shown in Figure 3-3. The following simplifying assumptions are made: the temperature, both in the fluid and in the pipe wall, depends only on the coordinate z; the entire heat transport in the direction of the pipe axis is brought about only by the mass transport, not by heat conduction within the fluid or the pipe wall; the flow velocity of the fluid is constant over the whole pipe and has only one component, in the z-direction; the physical properties of fluid and pipe are constant over the length of the pipe; the pipe is ideally insulated from the outside. The following terms are used:

ϑ(z, t) fluid temperature
Θ(z, t) pipe temperature
ṁ fluid mass flow
L pipe length
w_F fluid velocity
ϱ_F, ϱ_R density (fluid, pipe)
c_F, c_R specific heat capacity (fluid, pipe)
α heat transfer coefficient fluid/pipe
D_i, D_a inner and outer pipe diameter

The differential equations of the mathematical model are now to be derived. A pipe element of length dz is considered; the corresponding pipe wall volume is dV_R, the corresponding fluid volume dV_F. For the heat quantities shown in Figure 3-3 the following applies:

dq₁ = c_F ϑ ṁ dt,
dq₂ = c_F (ϑ + (∂ϑ/∂z) dz) ṁ dt,
dq₃ = α (ϑ − Θ) π D_i dz dt.
During the time interval dt, the amount of heat stored in the fluid element dV_F changes by

dq_F = ϱ_F (π/4) D_i² dz c_F (∂ϑ/∂t) dt.

The heat balance equation for the fluid in the considered time interval dt is thus

dq_F = dq₁ − dq₂ − dq₃.  (3-2)

For the heat stored in the pipe wall element dV_R, on the other hand, it follows in the same time interval:

dq_R = ϱ_R (π/4) (D_a² − D_i²) dz c_R (∂Θ/∂t) dt.

The heat balance equation for the pipe wall element can now also be given. Since ideal thermal insulation is assumed at the outer pipe wall, it reads

dq_R = dq₃.  (3-3)

Fig. 3-3. Section of the examined pipe

If the above relationships are inserted in (3-2) and (3-3), one obtains with the abbreviations

K₁ = α π D_i / ((π/4) D_i² ϱ_F c_F),  K₂ = α π D_i / ((π/4) (D_a² − D_i²) ϱ_R c_R)

and ṁ / (ϱ_F (π/4) D_i²) = w_F the two partial differential equations

∂ϑ/∂t + w_F ∂ϑ/∂z = K₁ (Θ − ϑ),  (3-4a)
∂Θ/∂t = K₂ (ϑ − Θ),  (3-4b)

which describe the system dealt with here. In addition to the two initial conditions ϑ(z, 0) and Θ(z, 0), the boundary condition ϑ(0, t) is required for the solution. A special case is the thin-walled pipe, for which dq₃ = 0, since no heat is stored in the wall. In this case (3-4a) becomes

∂ϑ/∂t + w_F ∂ϑ/∂z = 0.  (3-5)

In systems with spatially distributed parameters the input variable x_e(t) does not necessarily appear in the differential equations; it can also enter through the boundary conditions. In the present case the fluid temperature at the pipe inlet is taken as the input variable, x_e(t) = ϑ(0, t) for t > 0; correspondingly, the output variable x_a(t) = ϑ(L, t) is defined as the fluid temperature at the end of the pipe of length L. Under the additional assumption ϑ(z, 0) = 0, the solution of (3-5) is

x_a(t) = x_e(t − T_t) with T_t = L/w_F.  (3-6)

This equation describes a pure transport process in the pipe. The time T_t by which the output variable x_a(t) lags the input variable x_e(t) is referred to as the dead time.

3.2 Description using special output signals

The transition function (normalized step response)

For the further considerations the term step function (also unit step) is required:

σ(t) = 1 for t ≥ 0, 0 for t < 0.  (3-7)

The step response is defined as the reaction x_a(t) of the system to a stepwise change of the input variable x_e(t) = x̂_e σ(t) with x̂_e = const, see Fig. 3-4. The transition function is then the step response related to the step height x̂_e,

h(t) = (1/x̂_e) x_a(t),  (3-8)

which for a causal system has the property h(t) = 0 for t < 0.

The weight function (impulse response)

The weight function g(t) is defined as the response of the system to the impulse function (unit impulse or Dirac impulse) δ(t). Here δ(t) is not a function in the sense of classical analysis but has to be understood as a generalized function or distribution [1], see A 8.3. For the sake of simplicity, δ(t) is approximated by a square pulse function r_ε(t).

Fig. 3-4. On the definition of the transition function h(t) and the weight function g(t)

This square pulse function is defined as

r_ε(t) = 1/ε for 0 ≤ t ≤ ε, 0 otherwise,  (3-9)

with a small positive ε (see Fig. 3-5). The impulse function is then defined by

δ(t) = lim_{ε→0} r_ε(t)  (3-10)

with the properties δ(t) = 0 for t ≠ 0 and ∫ δ(t) dt = 1. Usually the δ-function is represented symbolically, as in Fig. 3-5b, by an arrow of length 1 at t = 0. The length 1 is called the impulse strength (note that for the height of the impulse δ(0) → ∞ still holds). In the sense of distribution theory, the relationship between the δ-function and the step function σ(t) is

δ(t) = dσ(t)/dt.  (3-11)

Correspondingly, the relationship between the weight function g(t) and the transition function h(t) is

g(t) = dh(t)/dt.  (3-12a)

If one denotes the value of h(t) for t = 0+ by h(0+), then h(t) can be written in the form h(t) = h₀(t) + h(0+) σ(t), where it is assumed that the step-free portion h₀(t) is continuous and piecewise differentiable on the entire t-axis. Thus (3-12a) can also be written as

g(t) = ḣ(t) = ḣ₀(t) + h(0+) δ(t).  (3-12b)

The convolution integral (Duhamel integral)

In the following considerations the controlled system with the input variable x_e(t) = u(t) and the output variable x_a(t) = y(t) is chosen as the dynamic system to be described; these considerations are, however, generally applicable. The transfer behavior of a causal linear time-invariant system is uniquely determined by the knowledge of one pair of functions [y_i(t); u_i(t)]. If in particular the weight function g(t) is known, then for an arbitrary input signal u(t) the output signal y(t) can be determined by means of the convolution integral

y(t) = ∫₀ᵗ g(t − τ) u(τ) dτ,  (3-13)

see A. Conversely, if the courses of u(t) and y(t) are known, the weight function g(t) can be computed by deconvolution.
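The convolution integral (3-13) can be evaluated numerically. The first-order lag with g(t) = e^(−t) used below is a standard textbook example, not taken from this text; convolving g with a unit step must reproduce the known transition function h(t) = 1 − e^(−t).

```python
import math

g = lambda t: math.exp(-t)            # weight function of a first-order lag
u = lambda t: 1.0                     # unit step input

def convolve(g, u, t, dt=0.001):
    """Midpoint-rule approximation of y(t) = integral_0^t g(t - tau) u(tau) dtau."""
    n = int(t / dt)
    return dt * sum(g(t - (k + 0.5) * dt) * u((k + 0.5) * dt) for k in range(n))

t = 2.0
print(round(convolve(g, u, t), 3))    # -> 0.865
print(round(1.0 - math.exp(-t), 3))   # transition function h(2) -> 0.865
```

This mirrors the statement in the text: knowing g(t) alone suffices to compute the response to any input signal.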
Both the weight function g (t) and the transition function h (t) are of great importance for the description of linear systems, since they contain all the information about their dynamic behavior. 3.3 State space representation State space representation for single-variable systems Using the example of the RLC network shown in Fig. 3-6, the system description in the form of the state space representation is to be dealt with in a brief introduction. The dynamic behavior of the system is completely defined for all times t t 0 if the initial values ​​u C (t 0), i (t 0) and the input variable u K (t) for t t 0 are known. With this information, Figs. 3-5. a approximation of the δ (t) function; b symbolic representation of the δ-function Fig. 3-6. RLC network

13 3 Description of linear continuous systems in the time domain 13 Determine quantities i (t) and u C (t) for all values ​​t t 0. The variables i (t) and u C (t) characterize the state of the network and are therefore referred to as its state variables. The following relationships apply to this network: L di dt + Ri + u C = u K, C du C dt From (3-14a, b) one obtains LC d2 u C dt 2 (3-14a) = i. (3-14b) + RC du C dt + u C = u K. This linear differential equation of the 2nd order completely describes the system with regard to the input-output behavior. The two original linear differential equations of the first order, i.e. (3-14a, b), can also be used to describe the system. To this end, these two equations are expediently summarized using the vector notation to form a linear vector differential equation of the first order di R dt L 1 [] 1 L i = + L du C 1 u C u K (3-15) dt 0 C 0 with the initial value vector [ ] i (t0) u C (t 0) together. This linear vector differential equation of the first order describes the relationship between the input variable and the state variables. However, you now need an equation that indicates the dependence of the output variable on the state variables and the input variable. In this example, as you can see directly, y (t) = u C (t) for the output variable. The output variable usually represents a linear combination of the state variables and the input variable. In general, the state space representation for single variable systems therefore has the following form: ẋ = Ax + bu, x (t 0) = x 0, (3-16) y = c T x + du (3-17) Here, (3-16) describes a linear system of differential equations of the first order for the state variables x 1, x 2, ..., xn, which are combined to form the state vector x = [x 1 ... xn] T , where the input variable u multiplied by the vector b appears as an interference term. 
Equation (3-17), on the other hand, is a purely algebraic equation that indicates the linear dependence of the output variable on the state variables and the input variable. Mathematically, the state space representation is based on the proposition that every linear differential equation of nth order can be converted into n coupled differential equations of the first order. Comparing the representation according to (3-16) and (3-17) with the equations of the example considered above yields:

x = [x_1, x_2]^T = [i, u_C]^T,   x_0 = [i(t_0), u_C(t_0)]^T,

A = [ −R/L  −1/L ]        [ 1/L ]
    [  1/C    0  ] ,  b = [  0  ] ;   u = u_K,   c^T = [0  1],   d = 0.

State space representation for multi-variable systems

For linear multi-variable systems with r input variables and m output variables, (3-16) and (3-17) take the general form

ẋ = A x + B u  with the initial condition x(t_0),   (3-18)
y = C x + D u,   (3-19)

where the following quantities apply: state vector x = [x_1 ... x_n]^T, input vector (control vector) u = [u_1 ... u_r]^T,

output vector y = [y_1 ... y_m]^T, system matrix A: (n × n) matrix, control matrix B: (n × r) matrix, output or observation matrix C: (m × n) matrix, feedthrough matrix D: (m × r) matrix. Of course, the general representation (3-18), (3-19) also includes the state space representation of the single-variable system. The use of the state space representation has various advantages, some of which are mentioned here:
1. Single- and multi-variable systems can be treated formally in the same way.
2. This representation is well suited for theoretical treatment (analytical solutions, optimization) as well as for numerical calculation.
3. The computation of the behavior of the homogeneous system from the initial condition x(t_0) is very simple.
4. Finally, this representation gives a better insight into the internal system behavior. General system properties such as the controllability or observability of the system can be defined and checked with this form of representation.

Equations (3-18) and (3-19) describe linear systems with lumped parameters. However, the state space representation can also be extended to nonlinear systems with lumped parameters:

ẋ = f_1(x, u, t)  (vector differential equation),   (3-20)
y = f_2(x, u, t)  (vector equation).   (3-21)

The state vector x(t) represents a point in an n-dimensional Euclidean space (the state space) at time t. With increasing time t, this state point changes its position and describes a curve that is called the state curve or trajectory of the system.

4 Description of linear continuous systems in the frequency domain

4.1 The Laplace transform [1]

The Laplace transform is an important tool for solving linear differential equations with constant coefficients. In control engineering tasks, the differential equations to be solved usually meet the requirements for applying the Laplace transformation.
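The state equations (3-16), (3-17) of the RLC example lend themselves directly to numerical integration. The following is a minimal sketch using forward Euler; the component values and the step input u_K(t) = 1 are illustrative assumptions, not taken from the text:

```python
# Minimal sketch: forward-Euler simulation of the RLC state-space model
# x' = A x + b u with x = [i, u_C] and output y = u_C (cf. (3-15)-(3-17)).
# R = L = C = 1 and the step input u_K = 1 are illustrative assumptions.

def simulate_rlc(R=1.0, L=1.0, C=1.0, u_K=1.0, dt=1e-3, t_end=20.0):
    """Integrate the two first-order state equations with forward Euler."""
    i, uC = 0.0, 0.0              # zero initial state x(t0) = 0
    for _ in range(int(t_end / dt)):
        di = (-R / L) * i + (-1.0 / L) * uC + (1.0 / L) * u_K
        duC = (1.0 / C) * i
        i, uC = i + dt * di, uC + dt * duC
    return i, uC                  # state at t_end

i_end, uC_end = simulate_rlc()
# For a constant input, (3-14a,b) imply that in steady state the current
# decays to zero and the capacitor voltage settles at u_K.
```

Since the poles of this example lie in the left half-plane, the simulated state approaches the steady state i = 0, u_C = u_K for large t.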
The Laplace transformation is an integral transformation that reversibly and unambiguously assigns an image function F(s) to a large class of original functions f(t), see A. This assignment is made via the Laplace integral of f(t), i.e. by

F(s) = ∫_0^∞ f(t) e^{−st} dt = L{f(t)},   (4-1)

where the complex variable s = σ + jω occurs in the argument of the Laplace transform F(s), and L represents the operator notation. The prerequisites for the validity of (4-1) are: (a) f(t) = 0 for t < 0; (b) the integral in (4-1) must converge. When dealing with dynamic systems, the original function f(t) is usually a function of time. Since the complex variable s contains the frequency ω, the image function F(s) is often also referred to as the frequency function. The Laplace transformation according to (4-1) enables the transition from the time domain (original domain) to the frequency domain (image domain). The so-called back transformation or inverse Laplace transformation, i.e. the extraction of the original function from the image function, is given by the inverse integral

f(t) = (1/2πj) ∫_{c−j∞}^{c+j∞} F(s) e^{st} ds = L^{−1}{F(s)},   t > 0,   (4-2)

where f(t) = 0 for t < 0 applies, see A. The Laplace transformation is a reversibly unambiguous assignment of original function and image function. Therefore, in many cases the inverse integral does not need to be calculated at all; instead, correspondence tables can be used that contain this assignment for many functions, see Table A. The solution of differential equations using the Laplace transformation is carried out in the following three steps, as shown in Figure 4-1:
1. transformation of the differential equation into the image domain,
2. solution of the algebraic equation in the image domain,
3. inverse transformation of the solution into the original domain.

Example: Given the differential equation

f̈(t) + 3ḟ(t) + 2f(t) = e^{−t}

with the initial conditions f(0+) = ḟ(0+) = 0. The solution proceeds in the steps given above.

1st step: s²F(s) + 3sF(s) + 2F(s) = 1/(s + 1).
2nd step: F(s) = 1/[(s + 1)(s² + 3s + 2)] = 1/[(s + 2)(s + 1)²].
3rd step: Before the inverse transformation, F(s) is decomposed into partial fractions, since the correspondence tables only contain certain standard functions:

F(s) = 1/(s + 2) − 1/(s + 1) + 1/(s + 1)².

Fig. 4-1. Scheme for solving differential equations with the Laplace transformation

Using the correspondences from Table A 23-2, the inverse Laplace transformation yields as the solution of the given differential equation:

f(t) = e^{−2t} − e^{−t} + t e^{−t}.

As can easily be seen from this example, the position of the poles s_1, s_2 and s_3 is decisive for the course of f(t). Since all poles of F(s) have a negative real part here, the course of f(t) is damped, i.e., it fades to zero for t → ∞. However, if the real part of a pole were positive, then f(t) would become infinitely large for t → ∞.
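The partial-fraction result of the worked example can be checked by hand: differentiating f(t) twice and substituting into the differential equation must reproduce the right-hand side e^{−t}. A minimal sketch of that check (the derivative expressions are hand-derived, not from the text):

```python
# Check of the worked Laplace example: verify that
# f(t) = exp(-2t) - exp(-t) + t*exp(-t) satisfies f'' + 3f' + 2f = exp(-t)
# with f(0) = f'(0) = 0.  fp and fpp are the hand-derived derivatives.
import math

def f(t):
    return math.exp(-2 * t) - math.exp(-t) + t * math.exp(-t)

def fp(t):    # f'(t)
    return -2 * math.exp(-2 * t) + 2 * math.exp(-t) - t * math.exp(-t)

def fpp(t):   # f''(t)
    return 4 * math.exp(-2 * t) - 3 * math.exp(-t) + t * math.exp(-t)

# Residual of the ODE at several sample times; should vanish numerically.
residual = max(abs(fpp(t) + 3 * fp(t) + 2 * f(t) - math.exp(-t))
               for t in [0.0, 0.5, 1.0, 2.0, 5.0])
```

The residual is zero up to floating-point error, and f(0) = ḟ(0) = 0 confirms the initial conditions.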
Since the original function f(t) always represents the temporal course of a system variable occurring in the control loop, the oscillation behavior of this system variable f(t) can be assessed directly by examining the position of the poles of the associated image function F(s). This crucial importance of the position of the poles of an image function is discussed in detail in Chapter 6.

4.2 The Fourier transformation [2]

The Laplace transformation treated above applies to time functions f(t) with the property f(t) = 0 for t < 0. Time functions with this property mainly occur in technical switch-on processes. For time functions on the entire t-range −∞ < t < +∞, the Fourier transform (F transform, spectral or frequency function)

F(jω) = F{f(t)} = ∫_{−∞}^{+∞} f(t) e^{−jωt} dt   (4-3)

and the inverse Fourier transform

f(t) = F^{−1}{F(jω)} = (1/2π) ∫_{−∞}^{+∞} F(jω) e^{jωt} dω   (4-4)

are defined, where the operator symbols F and F^{−1} formally denote the Fourier transform and its inverse. Since the Fourier transform is usually a complex function, the representations

F(jω) = R(ω) + jI(ω)   (4-5)

and

F(jω) = A(ω) e^{jϕ(ω)}   (4-6)

using the real and imaginary parts R(ω) and I(ω) or the amplitude and phase responses A(ω) and ϕ(ω) can be selected, where

A(ω) = |F(jω)| = √(R²(ω) + I²(ω))   (4-7)

is also referred to as the Fourier spectrum or the amplitude density spectrum of f(t), and the following applies to the phase response:

ϕ(ω) = arctan [I(ω)/R(ω)].   (4-8)

Similar to the Laplace transform, the Fourier transform creates a reversibly unambiguous assignment between the time function f(t) and the frequency or spectral function F(jω). The most important function pairs are listed in Table A 23.1. For the analogies between the Fourier and Laplace transforms see A 23.1 and A.

4.3 The concept of the transfer function

Definition. Linear, continuous, time-invariant systems with lumped parameters and without dead time are described by the differential equation

Σ_{i=0}^{n} a_i d^i x_a(t)/dt^i = Σ_{j=0}^{m} b_j d^j x_e(t)/dt^j,   m ≤ n.   (4-9)

If all initial values are zero and the Laplace transformation is applied to both sides of (4-9), then after a short transformation one obtains

X_a(s)/X_e(s) = (b_0 + b_1 s + ... + b_m s^m)/(a_0 + a_1 s + ... + a_n s^n) = Z(s)/N(s) = G(s),   (4-10)

where Z(s) and N(s) are the numerator and denominator polynomials of G(s). The function G(s), which completely characterizes the transfer behavior of the system, is called the transfer function of the system. If a dead time T_t has to be taken into account, instead of (4-9) one obtains

Σ_{i=0}^{n} a_i d^i x_a(t)/dt^i = Σ_{j=0}^{m} b_j d^j x_e(t − T_t)/dt^j.   (4-11)

In this case, the Laplace transformation yields the transcendental transfer function

G(s) = [Z(s)/N(s)] e^{−sT_t}.   (4-12)

The excitation of a linear system by a unit impulse δ(t) yields the weight function as output variable, x_a(t) = g(t), cf. above. Because of L{δ(t)} = 1 and with (4-10) it follows that

L{g(t)} = X_a(s) = X_a(s)/X_e(s) = G(s);   (4-13)
that is, the transfer function G(s) is identical to the Laplace transform of the weight function. The result (4-13) also follows from the relation (3-13) by Laplace transformation:

L{x_a(t)} = L{ ∫_0^t g(t − τ) x_e(τ) dτ } = G(s) X_e(s).   (4-14)

Poles and zeros of the transfer function

It is often useful to factor the rational transfer function G(s) according to (4-10) in the form

G(s) = Z(s)/N(s) = k_0 (s − s_N1)(s − s_N2)...(s − s_Nm) / [(s − s_P1)(s − s_P2)...(s − s_Pn)].   (4-15)

Since only real coefficients a_i, b_j occur for physical reasons, the zeros s_Nj and the poles s_Pi of G(s) can be real or conjugate complex. Poles and zeros can be clearly shown in the complex s-plane, as in Figure 4-2. A linear time-invariant system without dead time is thus fully described by specifying the pole and zero distribution and the factor k_0. In addition, the poles of the transfer function have another meaning. If one considers the unexcited system (x_e(t) ≡ 0) according to (4-9) and wants to determine the time course of the output variable x_a(t) from n given initial conditions, one has to solve the corresponding homogeneous differential equation

Σ_{i=0}^{n} a_i d^i x_a(t)/dt^i = 0.   (4-16)

Fig. 4-2. Pole and zero distribution of a transfer function in the s-plane
Fig. 4-4. Parallel connection of two transmission elements
Fig. 4-5. Feedback connection of two transmission elements

If the solution approach x_a(t) = e^{st} is made for (4-16), the characteristic equation

Σ_{i=0}^{n} a_i s^i = 0   (4-17)

is obtained as the determining equation for s. This relationship is also obtained directly by setting the denominator of G(s) to zero (N(s) = 0), provided that N(s) and Z(s) are coprime. The zeros s_k of the characteristic equation thus represent poles s_Pj of the transfer function. Since the free motion (x_e(t) ≡ 0) is described solely by the characteristic equation, the poles s_Pj of the transfer function contain this information completely. For the interconnection of transfer elements, simple calculation rules can now be derived for determining the overall transfer function.

a) Series connection: From the connection according to Fig. 4-3 follows Y(s) = G_2(s) G_1(s) U(s). This results in the overall transfer function of the series connection

G(s) = Y(s)/U(s) = G_1(s) G_2(s).   (4-18)

b) Parallel connection: For the output variable of the overall system according to Fig. 4-4 one obtains Y(s) = X_a(s) = X_a1(s) + X_a2(s) = [G_1(s) + G_2(s)] U(s), and this results in the overall transfer function of the parallel connection

G(s) = Y(s)/U(s) = G_1(s) + G_2(s).   (4-19)

c) Feedback connection: From Fig. 4-5 it follows immediately for the output variable Y(s) = X_a(s) = [U(s) ∓ X_a2(s)] G_1(s). With X_a2(s) = G_2(s) Y(s) one obtains the overall transfer function of the loop

G(s) = Y(s)/U(s) = G_1(s)/[1 ± G_1(s) G_2(s)].   (4-20)

Since the output variable of G_1(s) is fed back to the input via G_2(s), this arrangement is referred to as feedback.
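The interconnection rules (4-18) to (4-20) reduce to polynomial arithmetic on the numerator and denominator coefficients. A minimal sketch (the helper names and the coefficient convention, ascending powers of s, are my own; negative feedback is assumed in `feedback`):

```python
# Sketch of the interconnection rules (4-18)-(4-20) for rational transfer
# functions stored as (numerator, denominator) coefficient lists in
# ascending powers of s.  Helper names are illustrative, not standard API.

def polymul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def series(g1, g2):    # (4-18): G = G1 * G2
    (n1, d1), (n2, d2) = g1, g2
    return polymul(n1, n2), polymul(d1, d2)

def parallel(g1, g2):  # (4-19): G = G1 + G2
    (n1, d1), (n2, d2) = g1, g2
    return polyadd(polymul(n1, d2), polymul(n2, d1)), polymul(d1, d2)

def feedback(g1, g2):  # (4-20), negative feedback: G1 / (1 + G1*G2)
    (n1, d1), (n2, d2) = g1, g2
    return polymul(n1, d2), polyadd(polymul(d1, d2), polymul(n1, n2))

# Example: G1 = 1/(1+s) in a unity negative-feedback loop (G2 = 1)
# gives 1/(2+s), i.e. halved gain and time constant.
g = feedback(([1.0], [1.0, 1.0]), ([1.0], [1.0]))
```

The example illustrates a general effect of negative feedback: the closed loop is faster but has lower static gain than the forward path alone.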
A distinction is made between positive feedback, with positive connection of X_a2(s), and negative feedback, with negative connection of X_a2(s).

Relationship between G(s) and the state space representation

Fig. 4-3. Series connection of two transfer elements

If one subjects the state space representation of a single-variable system, described by (3-16)

and (3-17) with x(t_0) = 0, to the Laplace transformation, then it follows from

sX(s) = A X(s) + b U(s),
Y(s) = c^T X(s) + d U(s),

after elimination of X(s) and a short calculation, the transfer function

G(s) = Y(s)/U(s) = c^T (sI − A)^{−1} b + d.   (4-21)

I is the identity matrix. Equation (4-21) naturally agrees with (4-10) if both mathematical models describe the same system.

The complex G-plane

The complex transfer function G(s) describes a conformal mapping of the s-plane onto the G-plane, see A 19. Because of the angle preservation guaranteed by this mapping, the orthogonal network of axis-parallel straight lines σ = const and ω = const of the s-plane is mapped into an orthogonal but curvilinear network of the G-plane, as shown in Figure 4-6. In the infinitely small, the mapping is even true to scale. A very important special case is obtained for σ = 0 and ω ≥ 0. It represents the conformal image of the positive imaginary axis of the s-plane and is called the locus of the frequency response G(jω) of the system.

4.4 The frequency response representation

Definition. As already mentioned briefly, for σ = 0, i.e. for the special case s = jω, the transfer function G(s) goes over into the frequency response G(jω). While the transfer function G(s) is more of an abstract, non-measurable form of description for the mathematical treatment of linear systems, the frequency response G(jω) can also be interpreted physically in a clear and direct manner. For this purpose, the frequency response, as a complex variable

G(jω) = R(ω) + jI(ω),   (4-22)

with the real part R(ω) and the imaginary part I(ω), is expediently written in terms of its amplitude response A(ω) and its phase response ϕ(ω) in the form

G(jω) = A(ω) e^{jϕ(ω)}.   (4-23)

If one now imagines the system excited sinusoidally at the input with the amplitude x_e and the frequency ω, i.e.
by

x_e(t) = x_e sin ωt,   (4-24)

then in a linear continuous system the output variable will perform sinusoidal oscillations with the same frequency ω, with a different amplitude x_a and with a certain phase shift ϕ = ϕ(ω):

x_a(t) = x_a sin(ωt + ϕ).   (4-25)

If this experiment is carried out for different frequencies ω = ω_ν (ν = 1, 2, ...) with x_e = const, one finds a frequency dependence of the amplitude x_a of the output signal and of the phase shift, so that for the respective frequency ω_ν the values x_a,ν = x_a(ω_ν) and ϕ_ν = ϕ(ω_ν) apply.

Fig. 4-6. Conformal mapping of the straight lines σ = const and ω = const of the s-plane into the G-plane

The amplitude response of the frequency response can now be obtained from the ratio of the amplitudes x_a and x_e.

One defines

A(ω) = x_a(ω)/x_e = |G(jω)| = √(R²(ω) + I²(ω))   (4-26)

as a frequency-dependent quantity. Furthermore, the frequency-dependent phase shift ϕ(ω) is referred to as the phase response of the frequency response. Thus

ϕ(ω) = arg G(jω) = arctan [I(ω)/R(ω)]   (4-27)

applies. From these considerations it can be seen that, by using sinusoidal input signals x_e(t) of different frequencies, the amplitude response A(ω) and the phase response ϕ(ω) of the frequency response G(jω) can be measured directly. The entire frequency response G(jω) for all frequencies 0 ≤ ω < ∞ describes, similarly to the transfer function G(s) or the transition function h(t), the transfer behavior of a linear continuous system. If, for each frequency ω_ν, the value G(jω_ν) = A(ω_ν) e^{jϕ(ω_ν)} is plotted in the complex G-plane with the help of A(ω_ν) and ϕ(ω_ν), one obtains the locus of the frequency response, parameterized in ω, which is also known as the Nyquist locus. Figure 4-7 shows such a locus determined experimentally from 8 measured values. The locus representation of frequency responses has, among other things, the advantage that the frequency responses of both series-connected and parallel-connected transmission elements can be constructed graphically very easily. The pointers of the loci in question belonging to the same ω-values are picked out. In the case of parallel connection the pointers are added (parallelogram construction); in the case of series connection the pointers are multiplied, by multiplying the lengths of the pointers and adding their angles.

Representation of the frequency response by frequency characteristics (Bode diagram)

Plotting the magnitude A(ω) and the phase ϕ(ω) of the frequency response G(jω) = A(ω) e^{jϕ(ω)} separately over the frequency ω, one obtains the amplitude response (magnitude characteristic) and the phase response (phase characteristic) of the transmission element.
Both together constitute the frequency characteristics or the Bode diagram (Fig. 4-8). A(ω) (possibly after normalization to dimension 1) and ω are plotted logarithmically, ϕ(ω) linearly. It is customary to express A(ω) in the unit decibel (dB); by definition, A_dB(ω) = 20 lg A(ω). The logarithmic representation offers particular advantages for the series connection of transmission elements, especially since more complicated frequency responses, such as those that emerge with s = jω from

G(s) = K (s − s_N1)...(s − s_Nm) / [(s − s_P1)...(s − s_Pn)],   (4-28)

can be treated as a cascade arrangement of the frequency responses of simple transfer elements of the form

G_i(jω) = (jω − s_Ni)  for i = 1, 2, ..., m   (4-29)

and

G_{m+i}(jω) = 1/(jω − s_Pi)  for i = 1, 2, ..., n.   (4-30)

Then

G(jω) = K G_1(jω) G_2(jω) ... G_{n+m}(jω).   (4-31)

From the representation

G(jω) = K A_1(ω) A_2(ω) ... A_{n+m}(ω) e^{j[ϕ_1(ω) + ϕ_2(ω) + ... + ϕ_{n+m}(ω)]}   (4-32)

and from A(ω) = |G(jω)| one obtains the logarithmic amplitude response

A_dB(ω) = K_dB + A_{1,dB}(ω) + A_{2,dB}(ω) + ... + A_{n+m,dB}(ω)   (4-33)

and the phase response

ϕ(ω) = ϕ_0(ω) + ϕ_1(ω) + ... + ϕ_{n+m}(ω),   (4-34)

where ϕ_0(ω) = 0° for K > 0 and ϕ_0(ω) = −180° for K < 0. The overall frequency response of a series connection is thus obtained by adding the individual frequency characteristics.

Fig. 4-7. Example of an experimentally determined frequency response locus
Fig. 4-8. Representation of the frequency response by frequency characteristics (Bode diagram)

4.5 The behavior of the most important transfer elements

For the transfer elements treated below, the course of the transition function h(t) and of the frequency response G(jω) is compiled in Table 4-1.

The proportionally acting element (P element)

a) Representation in the time domain:
x_a(t) = K x_e(t).   (4-35)
K is referred to as the gain factor or transfer coefficient of the P element.
b) Transfer function:
G(s) = K.   (4-36)
c) Frequency response:
G(jω) = K.   (4-37)
For all frequencies, the locus of G(jω) represents a point on the real axis at distance K from the origin; the phase response ϕ(ω) is 0° for K > 0 and −180° for K < 0, while for the logarithmic amplitude response A_dB(ω) = 20 lg |K| = K_dB = const applies.

The integrating element (I element)

a) Representation in the time domain:
x_a(t) = (1/T_I) ∫_0^t x_e(τ) dτ + x_a(0).   (4-38)
T_I is a constant with the dimension of time and is therefore called the integration time constant.
b) Transfer function:
G(s) = 1/(sT_I).   (4-39)
c) Frequency response:
G(jω) = 1/(jωT_I) = [1/(ωT_I)] e^{−jπ/2},   (4-40)
with the amplitude and phase responses
A(ω) = 1/(ωT_I)  and  ϕ(ω) = −π/2 = const   (4-41)

and the logarithmic amplitude response

A_dB(ω) = −20 lg ωT_I = −20 lg (ω/ω_e),   (4-42)

where ω_e = 1/T_I is defined as the corner frequency.

The differentiating element (D element)

a) Representation in the time domain:
x_a(t) = T_D (d/dt) x_e(t).   (4-43)

Table 4-1. Transfer elements with transition function and frequency response

b) Transfer function:
G(s) = sT_D.   (4-44)
c) Frequency response:
G(jω) = jωT_D = ωT_D e^{jπ/2},   (4-45)
with the logarithmic amplitude response
A_dB(ω) = 20 lg ωT_D = 20 lg (ω/ω_e)   (4-46)
and the phase response
ϕ(ω) = +π/2 = const.   (4-47)
It is easy to see that the transfer functions of the I and D elements merge into one another through inversion. Therefore, the curves for the amplitude and phase response of the D element can be obtained by mirroring the corresponding curves of the I element at the 0 dB line and at the line ϕ = 0, respectively.

The 1st-order delay element (PT_1 element)

a) Representation in the time domain:
x_a(t) + T ẋ_a(t) = K x_e(t),  with x_a(0) = x_a0.   (4-48)
b) Transfer function:
G(s) = K/(1 + sT).   (4-49)
c) Frequency response:
G(jω) = K [1 − j(ω/ω_e)] / [1 + (ω/ω_e)²]   (4-50)
with the corner frequency ω_e = 1/T; T is defined as the time constant. The amplitude response is

A(ω) = |G(jω)| = K / √(1 + (ω/ω_e)²)   (4-51)

and the phase response

ϕ(ω) = arctan [I(ω)/R(ω)] = −arctan (ω/ω_e).   (4-52)

From (4-51) the logarithmic amplitude response

A_dB(ω) = 20 lg K − 20 lg √(1 + (ω/ω_e)²)   (4-53)

follows. Equation (4-53) can be approximated asymptotically by two straight lines:
α) In the range ω/ω_e ≪ 1 by the initial asymptote A_dB(ω) ≈ 20 lg K = K_dB, where ϕ(ω) → 0 applies.
β) In the range ω/ω_e ≫ 1 by the final asymptote A_dB(ω) ≈ 20 lg K − 20 lg (ω/ω_e), where ϕ(ω) → −π/2 applies.
The initial asymptote runs horizontally, while the final asymptote has a slope of −20 dB/decade. The intersection of the two straight lines lies at ω/ω_e = 1. The maximum deviation of the amplitude response from the asymptotes occurs at ω = ω_e and is ΔA_dB(ω_e) = −20 lg √2 ≈ −3 dB.

The 2nd-order delay element (PT_2 element and PT_2S element)

The 2nd-order delay element is characterized by two independent energy stores.
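The PT_1 relations (4-49) to (4-53) can be checked numerically in a few lines. A minimal sketch (the values K = 1 and T = 1 are illustrative):

```python
# Sketch: evaluate the PT1 frequency response G(jw) = K/(1 + jwT)
# numerically and check the -3 dB rule at the corner frequency w_e = 1/T.
# K = T = 1 are illustrative values, not prescribed by the text.
import cmath, math

def pt1(omega, K=1.0, T=1.0):
    """Return (A_dB, phase in degrees) of G(jw) = K/(1 + jwT)."""
    G = K / (1 + 1j * omega * T)
    return 20 * math.log10(abs(G)), math.degrees(cmath.phase(G))

A_corner, phi_corner = pt1(1.0)   # omega = omega_e = 1/T
# At the corner frequency the magnitude lies about 3 dB below the
# low-frequency asymptote 20 lg K = 0 dB, and the phase is -45 degrees.
```

This reproduces the maximum asymptote deviation ΔA_dB(ω_e) ≈ −3 dB and the phase value −45° halfway between the asymptotic limits 0 and −90°.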
Depending on the damping properties, i.e. the position of the poles of G(s), a distinction is made between oscillatory and aperiodic behavior of the 2nd-order delay element. If a 2nd-order delay element has a conjugate complex pole pair, it exhibits oscillatory behavior (PT_2S behavior). If both poles lie on the negative real axis, the transmission element has aperiodic delay behavior (PT_2 behavior).

a) Representation in the time domain:

T_2² d²x_a(t)/dt² + T_1 dx_a(t)/dt + x_a(t) = K x_e(t).   (4-54)

b) Transfer function:

G(s) = K/(1 + T_1 s + T_2² s²);   (4-55)

in the aperiodic case (two real poles) the denominator can be split into two linear factors of the form (1 + Ts). If one introduces quantities that characterize the time behavior, namely the degree of damping D = T_1/(2T_2) and the natural frequency (of the undamped oscillation) ω_0 = 1/T_2, one obtains from (4-55)

G(s) = K / (1 + 2D s/ω_0 + s²/ω_0²).   (4-56)

c) Frequency response:

G(jω) = K { [1 − (ω/ω_0)²] − j 2D(ω/ω_0) } / { [1 − (ω/ω_0)²]² + [2D(ω/ω_0)]² }.   (4-57)

Thus the associated amplitude response is

A(ω) = K / √{ [1 − (ω/ω_0)²]² + [2D(ω/ω_0)]² }   (4-58)

and the phase response is

ϕ(ω) = −arctan { 2D(ω/ω_0) / [1 − (ω/ω_0)²] }.   (4-59)

For the logarithmic amplitude response, (4-58) gives

A_dB(ω) = 20 lg K − 20 lg √{ [1 − (ω/ω_0)²]² + [2D(ω/ω_0)]² }.   (4-60)

The course of A_dB(ω) can be approximated by the following asymptotes:
α) In the range ω/ω_0 ≪ 1 by the initial asymptote A_dB(ω) ≈ 20 lg K with ϕ(ω) → 0.
β) In the range ω/ω_0 ≫ 1 by the final asymptote A_dB(ω) ≈ 20 lg K − 40 lg (ω/ω_0) with ϕ(ω) → −π.

Fig. 4-9. Bode diagram of a 2nd-order delay element (K = 1)

In the Bode diagram the final asymptote represents a straight line with a slope of −40 dB/decade. The intersection of both asymptotes lies at the normalized angular frequency ω/ω_0 = 1. The actual value of A_dB(ω) can deviate considerably from the asymptote intersection at ω = ω_0: for D < 0.5 the value lies above, for D > 0.5 below the asymptotes. Figure 4-9 shows the course of A_dB(ω) and ϕ(ω) in the Bode diagram for various values of D.

The poles of the transfer function are

s_{1,2} = −ω_0 D ± ω_0 √(D² − 1).   (4-64)

Depending on the position of the poles in the s-plane, the oscillation behavior of a 2nd-order delay element can now be clearly described. For this purpose, the course of the associated transition function h(t) is expediently considered. Table 4-2 shows, as a function of D, the position of the poles of the transfer function and the associated transition functions of this system.

Bandwidth of a transfer element

The bandwidth of a transfer element is an important concept. Delay elements, e.g. PT_1, PT_2 and PT_2S elements as well as PT_n elements (series connection of n PT_1 elements), have so-called low-pass properties, i.e., they transmit low frequencies preferentially, while high signal frequencies are attenuated in accordance with the steeply falling amplitude response. The term bandwidth is introduced to describe this transmission behavior. The bandwidth of a low-pass element is the frequency ω_b at which the logarithmic amplitude response has fallen by 3 dB compared with the horizontal initial asymptote, see figure.

Systems with minimum and non-minimum phase behavior

A transfer function that has no poles and zeros in the right half-plane describes a system with minimum phase behavior. It is characterized by the fact that, if the amplitude response A(ω) = |G(jω)| is known in the range 0 ≤ ω < ∞,

with the amplitude response

A(ω) = |G(jω)| = |cos ωT_t − j sin ωT_t| = 1   (4-69)

and the phase response (in radians)

ϕ(ω) = −ωT_t   (4-70)

is described. The locus of G(jω) thus represents a circle around the coordinate origin, which, starting at R(ω) = 1 on the real axis for ω = 0, is traversed continuously with increasing ω values, since the magnitude of the phase angle increases constantly. In systems with minimum phase behavior, the phase response ϕ(ω) can be determined unambiguously from the amplitude response A(ω). This does not apply to systems with non-minimum phase behavior. Whether or not a system exhibits minimum phase behavior can easily be estimated from the course of ϕ(ω) and A_dB(ω) for high frequencies. For a minimum phase system represented by the fractional rational transfer function G(s) = Z(s)/N(s), where the numerator Z(s) is of degree m and the denominator N(s) of degree n, one obtains for ω → ∞ the phase response

ϕ(∞) = −90°(n − m).   (4-71)

In a system with non-minimum phase behavior, the magnitude of this value is always larger. In both cases the logarithmic amplitude response for ω → ∞ has a slope of −20(n − m) dB/decade.

5 The behavior of linear continuous control loops

5.1 Dynamic behavior of the control loop

Figure 5-1 shows the block diagram of the closed control loop with the four classic components: controller, actuator, controlled system and measuring element. It is usually useful to combine the controller and actuator into the control device, while the measuring element is often included in the controlled system. This leads to the simplified representation of Figure 5-2.

Fig. 5-1. The basic components of a control loop
Fig. 5-2. Block diagram of the control loop

Since the disturbance variable z usually acts at a different point of the controlled system than
the manipulated variable u, the controlled system represents a system with at least two input variables (if there is only one disturbance). In general, each of these two input variables also has a different transfer behavior with respect to the controlled variable y. A distinction is therefore made between the actuating behavior and the disturbance behavior of the controlled system, described by the transfer functions G_SU(s) and G_SZ(s). Furthermore, the transfer function G_R(s) characterizes the transfer behavior of the control device (in the following mostly referred to simply as the controller). As can easily be seen from Figure 5-2, in the closed control loop the following applies for the controlled variable:

Y(s) = [G_SZ(s) / (1 + G_R(s)G_SU(s))] Z(s) + [G_R(s)G_SU(s) / (1 + G_R(s)G_SU(s))] W(s).   (5-1)

On the basis of this relationship, transfer functions for the two tasks of a control (cf. 1.3) can be distinguished:

a) For W(s) = 0 one obtains the transfer function of the closed control loop for disturbance behavior (fixed-setpoint or disturbance-rejection control):

G_Z(s) = Y(s)/Z(s) = G_SZ(s) / [1 + G_R(s)G_SU(s)].   (5-2)

b) For Z(s) = 0 it follows accordingly, as the transfer function of the closed control loop for command behavior (servo or follow-up control),

Fig. 5-3. Open control loop

G_W(s) = Y(s)/W(s) = G_R(s)G_SU(s) / [1 + G_R(s)G_SU(s)].   (5-3)

Both transfer functions G_Z(s) and G_W(s) contain the dynamic control factor

R(s) = 1/[1 + G_0(s)]   (5-4)

with

G_0(s) = G_R(s)G_SU(s).   (5-5)

If, for W(s) = 0 and Z(s) = 0, the control loop according to Figure 5-3 is cut at any point, and the input variable x_e(t) and the output variable x_a(t) are defined taking into account the direction of action of the transmission elements, the transfer function of the open control loop is

G_open(s) = X_a(s)/X_e(s) = −G_R(s)G_SU(s) = −G_0(s).   (5-6)

However, it has (strictly speaking, incorrectly) become established in control engineering to define G_0(s) itself as the transfer function of the open control loop. For the closed control loop, by setting the denominator expression in (5-2) and (5-3) to zero, the condition

1 + G_0(s) = 0   (5-7)

yields the characteristic equation in the form

a_0 + a_1 s + a_2 s² + ... + a_n s^n = 0,   (5-8)

if G_0(s) represents a fractional rational transfer function.

5.2 Stationary behavior of the control loop

Very often the transfer behavior of the open control loop can be described by a general standard transfer function of the form

G_0(s) = (K_0/s^k) · (1 + β_1 s + ... + β_m s^m)/(1 + α_1 s + ... + α_{n−k} s^{n−k}) · e^{−T_t s},   m ≤ n,   (5-9)

where the (integer) constant k = 0, 1, 2, ... essentially characterizes the type of the transfer function G_0(s). K_0 = K_R K_S represents the gain of the open control loop and is also referred to as the loop gain; K_R and K_S are the gain factors of the controller and the controlled system. G_0(s) thus has, e.g., for
k = 0: proportional behavior (P behavior),
k = 1: integral behavior (I behavior),
k = 2: double integral behavior (I² behavior).
It is now assumed that the fractional rational term occurring in (5-9) has only poles in the left half-plane.
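The closed-loop relations (5-3) to (5-5) can be evaluated pointwise by complex arithmetic. A minimal sketch; the P controller G_R = K_R and the PT_1 plant G_SU = K_S/(1 + sT), with the numeric values used, are illustrative assumptions:

```python
# Sketch: forming the command transfer function (5-3) numerically from
# G_R and G_SU.  A P controller K_R = 4 and a PT1 plant K_S/(1+sT) with
# K_S = T = 1 are illustrative assumptions, not prescribed by the text.

def G_R(s, K_R=4.0):
    return K_R                      # P controller

def G_SU(s, K_S=1.0, T=1.0):
    return K_S / (1 + s * T)        # PT1 plant

def G_W(s):
    """Command behavior (5-3): G0/(1 + G0) with G0 = G_R * G_SU (5-5)."""
    G0 = G_R(s) * G_SU(s)
    return G0 / (1 + G0)

dc_gain = G_W(0j).real
# With loop gain K0 = K_R*K_S = 4 the closed-loop DC gain is
# K0/(1 + K0) = 0.8, i.e. a remaining steady-state offset.
```

The DC gain below 1 illustrates the remaining control deviation of a P-type loop discussed in 5.2: it shrinks as the loop gain K_0 grows.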
In this way, the steady-state behavior of the closed control loop for t → ∞ can be investigated for the individual types of the transfer function G_0(s) and for different signal forms of the reference variable w(t) or the disturbance variable z(t). With E(s) = W(s) − Y(s), it follows from (5-1) and (5-5) for the control deviation

E(s) = [1/(1 + G_0(s))] [W(s) − Z(s)].   (5-10)

Assuming that the limit of the control deviation e(t) for t → ∞ exists, the stationary final value of the control deviation follows with the help of the final value theorem of the Laplace transformation:

lim_{t→∞} e(t) = lim_{s→0} s E(s).   (5-11)

In the event that all disturbance variables are referred to the system output, it follows from (5-10) that, apart from the sign, both types of input variables, i.e. reference and disturbance variables, can be treated equally. In the following, the designation X_e(s) is chosen as the input variable to represent both types of signal. With the help of (5-10) and (5-11), the stationary final values of the control deviation can now be calculated for the most varied signal forms of x_e(t) and for different types of the transfer function G_0(s) of the open control loop. These values characterize the static behavior of the closed control loop. They are determined below for the most important cases. The following test signals, shown in Figure 5-4, are used as a basis for the further considerations:

Fig. 5-4. Various input functions x_e(t), which are often used as disturbance variables z(t) and reference variables w(t): a step-shaped, b ramp-shaped and c parabolic signal curve

a) Step-shaped excitation:
X_e(s) = x_e0/s,   (5-12)
where x_e0 represents the step height.

b) Ramp-shaped excitation:
X_e(s) = x_e1/s^2,   (5-13)
where x_e1 describes the rate of the ramp-shaped rise of the signal x_e(t).

c) Parabolic excitation:
X_e(s) = x_e2/s^3,   (5-14)
where x_e2 is a measure of the acceleration of the parabolic signal rise x_e(t).

According to (5-10), the control deviation obeys

E(s) = ± X_e(s) / [1 + G_0(s)],   (5-15)

whereby the difference between reference and disturbance behavior appears only in the sign of X_e(s) (disturbance behavior: X_e(s) = Z(s); reference behavior: X_e(s) = W(s)). If (5-12) to (5-14) are inserted one after the other into this relationship, the corresponding permanent control deviation can be calculated for the different types of the transfer function G_0(s). The results are compiled in Table 5-1.

Table 5-1. Permanent control deviation for different system types of G_0(s) and different input variables x_e(t) (reference and disturbance variables, if all disturbance variables are referred to the output of the controlled system)

System type of G_0(s)    X_e(s) = x_e0/s    X_e(s) = x_e1/s^2    X_e(s) = x_e2/s^3
k = 0 (P behavior)       x_e0/(1 + K_0)     ∞                    ∞
k = 1 (I behavior)       0                  x_e1/K_0             ∞
k = 2 (I² behavior)      0                  0                    x_e2/K_0

It follows from this that the remaining control deviation e_∞, which characterizes the static behavior of the control loop, can, in all cases in which it assumes a finite value, be kept smaller the greater the loop gain K_0 is selected.
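Three entries of Table 5-1 can be cross-checked numerically. The sketch below uses a simple explicit-Euler simulation with assumed parameter values; `simulate` and its arguments are illustrative helpers, not part of the text:

```python
# Numerical cross-check of three Table 5-1 entries (a sketch with assumed
# parameter values). The closed loop is integrated with the explicit Euler
# method and the control deviation e(t) = w(t) - y(t) is read off after
# the transients have decayed.

def simulate(k, K0, w, T=1.0, dt=1e-3, t_end=50.0):
    """Loop with G_0(s) = K0/(1 + T s) for k = 0 and G_0(s) = K0/s for
    k = 1; w is the input signal as a function of time."""
    y, t = 0.0, 0.0
    while t < t_end:
        e = w(t) - y
        if k == 0:
            y += dt * (K0 * e - y) / T   # T y' + y = K0 e
        else:
            y += dt * K0 * e             # y' = K0 e
        t += dt
    return w(t) - y

K0, xe0, xe1 = 9.0, 1.0, 2.0
e_p_step = simulate(0, K0, lambda t: xe0)      # Table 5-1: xe0/(1 + K0) = 0.1
e_i_step = simulate(1, K0, lambda t: xe0)      # Table 5-1: 0
e_i_ramp = simulate(1, K0, lambda t: xe1 * t)  # Table 5-1: xe1/K0 = 0.222...
```

The simulated deviations approach x_e0/(1 + K_0) = 0.1, 0 and x_e1/K_0 ≈ 0.222, in agreement with the corresponding entries of Table 5-1.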
In the case of P behavior of the open control loop, this also means that the remaining control deviation e_∞ becomes smaller the smaller the static control factor

R = 1/(1 + K_0)   (5-16)

is. Often, however, too large a loop gain K_0 quickly leads to instability of the closed control loop, as discussed in detail in Chapter 6. When choosing K_0, a compromise must therefore usually be made, unless the remaining control deviation can be made to vanish entirely by selecting a suitable controller type.

5.3 The PID controller and the controller types that can be derived from it

The device-related design of a controller comprises the formation of the control deviation as well as its

further processing into the controller output variable u_R(t) according to Figure 5-1, or directly into the manipulated variable u(t) if the actuator is combined with the controller to form the control device according to Figure 5-2. Most of the linear controller types used in industry are standard controllers whose transfer behavior can be traced back to the three idealized linear basic forms of the P, I and D element. The most important standard controller exhibits PID behavior. The basic mode of operation of this PID controller can be explained clearly by the parallel connection of one P, one I and one D element each, as shown in Fig. 5-5.

Fig. 5-5. Two equivalent block diagrams of the PID controller

From this representation, the transfer function of the PID controller follows as

G_R(s) = U_R(s)/E(s) = K_P + K_I/s + K_D s.   (5-17)

By introducing the variables

K_R = K_P        gain factor,
T_I = K_P/K_I    integral time or reset time,
T_D = K_D/K_P    differential time or lead time,

(5-17) can be converted in such a way that, in addition to the dimensional gain factor K_R, only the two time constants T_I and T_D occur in the transfer function:

G_R(s) = K_R [1 + 1/(T_I s) + T_D s].   (5-18)

These three variables K_R, T_I and T_D can usually be set within certain value ranges; they are therefore also referred to as the setting values of the controller. By suitable selection of these setting values, a controller can be adapted to the behavior of the controlled system in such a way that the most favorable control behavior possible is achieved. From (5-18) the time course of the controller output variable follows as

u_R(t) = K_R [e(t) + (1/T_I) ∫₀ᵗ e(τ) dτ + T_D de(t)/dt].   (5-19)

From this, the transition function h(t) of the PID controller can easily be formed for a step-shaped change of e(t), i.e. e(t) = σ(t). It is shown in Figure 5-6a. In the previous considerations it was assumed that the D behavior can be realized exactly in the PID controller. In terms of equipment, however, the ideal D behavior cannot be achieved. In controllers that are actually implemented, the D behavior is always subject to a certain delay, so that instead of the D element in the circuit of Figure 5-5 a DT1 element with the transfer function

G_D(s) = K_D Ts/(1 + Ts)   (5-20)

must be taken into account. This gives the relationship

G_R(s) = K_P + K_I/s + K_D Ts/(1 + Ts)   (5-21)

as the transfer function of the real PID controller or, more precisely, of the PIDT1 controller, and by introducing the controller setting values K_R = K_P, T_I = K_R/K_I and T_D = K_D T/K_R it follows from this that

G_R(s) = K_R [1 + 1/(T_I s) + T_D s/(1 + Ts)].   (5-22)

The transition function h(t) of the PIDT1 controller is shown in Figure 5-6b.

Fig. 5-6. Transition function a of the ideal and b of the real PID controller

The following special cases of the PID controller are obtained:
a) for T_D = 0 the PI controller with the transfer function
G_R(s) = K_R [1 + 1/(T_I s)];   (5-23)
b) for T_I → ∞ the PD controller with the transfer function
G_R(s) = K_R (1 + T_D s)   (5-24)
or the PDT1 controller with the transfer function
G_R(s) = K_R [1 + T_D s/(1 + Ts)];   (5-25)
c) for T_D = 0 and T_I → ∞ the P controller with the transfer function
G_R(s) = K_R.   (5-26)

The transition functions of these controller types are shown in Fig. 5-7. In addition to the controller types treated here, which can be derived directly from a PID controller (universal controller) by selecting the appropriate setting values, a pure I controller is sometimes also used. Its transfer function is

G_R(s) = K_I/s = K_R/(T_I s).   (5-27)

It should also be mentioned that D elements are not used as controllers on their own, but occur only in conjunction with P elements in the PD and PID controllers. If the controllers presented here are applied, e.g., to controlled systems with P behavior and the resulting control loop is disturbed by a step of height z_0, the following qualitative statements can be made [1]:

Fig. 5-7.
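A discrete-time realization of the control law (5-19) can be sketched as follows. The class name, the sampling time dt and the rectangle-rule/backward-difference discretization are implementation assumptions, not part of the text; setting T_D = 0 gives the PI controller (5-23), and T_I → ∞ gives the PD controller (5-24):

```python
# Sketch of a discrete-time PID controller per (5-19), under assumed
# discretization choices: rectangle rule for the I part, backward
# difference for the (ideal) D part, sampling time dt.

class PIDController:
    def __init__(self, K_R, T_I, T_D, dt):
        self.K_R, self.T_I, self.T_D, self.dt = K_R, T_I, T_D, dt
        self.integral = 0.0   # running approximation of the integral in (5-19)
        self.e_prev = 0.0     # previous control deviation for the D part

    def update(self, e):
        """Controller output u_R for the current control deviation e."""
        self.integral += e * self.dt
        de = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.K_R * (e + self.integral / self.T_I + self.T_D * de)

# T_I = inf and T_D = 0 reduce the law to pure P action:
pid = PIDController(K_R=2.0, T_I=float('inf'), T_D=0.0, dt=0.01)
print(pid.update(1.0))   # -> 2.0
```

In a practical implementation, the backward difference would additionally be smoothed in the spirit of the DT1 element (5-20), and the integrator would usually be limited (anti-windup); both are omitted here for brevity.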
Transition functions of the controller types that can be derived from the PID controller: a P controller, b PI controller, c PD controller (ideal) and d PDT1 controller (real PD controller)

a) The P controller exhibits a relatively large maximum overshoot y_max/(K_S z_0) of the normalized controlled variable, a long settling time t_3% (the point in time from which the difference |y(t) - y(∞)| remains smaller than 3% of the stationary final value of the controlled system's transition function), as well as a permanent control deviation.
b) Because of the slow onset of the I behavior, the I controller exhibits an even greater maximum overshoot than the P controller, but no permanent control deviation.
c) The PI controller combines the properties of the P and I controllers. It delivers approximately the same maximum overshoot and settling time as the P controller, but has no permanent control deviation.
d) Because of the fast D component, the PD controller has a smaller maximum overshoot than the controller types listed under a) to c). For the same reason, it is also distinguished by the shortest settling time. But also