IDC Technologies Pty Ltd
PO Box 1093, West Perth, Western Australia 6872
Offices in Australia, New Zealand, Singapore, United Kingdom, Ireland, Malaysia, Poland, United States of America, Canada, South Africa and India
Copyright © IDC Technologies 2013. All rights reserved.
First published 2008
All rights to this publication, associated software and workshop are reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher. All enquiries should be made to the publisher at the address above.
Whilst all reasonable care has been taken to ensure that the descriptions, opinions, programs, listings, software and diagrams are accurate and workable, IDC Technologies do not accept any legal responsibility or liability to any person, organization or other entity for any direct loss, consequential loss or damage, however caused, that may be suffered as a result of the use of this publication or the associated workshop and software.
In case of any uncertainty, we recommend that you contact IDC Technologies for clarification or assistance.
All logos and trademarks belong to, and are copyrighted to, their companies respectively.
IDC Technologies expresses its sincere thanks to all those engineers and technicians on our training workshops who freely made available their expertise in preparing this manual.
1 Introduction 1
1.1 Objectives 1
1.2 Introduction 1
1.3 Basic definitions and terms used in process control 2
1.4 Process modeling 2
1.5 Process dynamics and time constants 6
1.6 Types or modes of operation of process control systems 15
1.7 Closed loop controller and process gain calculations 16
1.8 Proportional, integral and derivative control modes 17
1.9 An introduction to cascade control 18
2 Process Management and Transducers 21
2.1 Objectives 21
2.2 The definition of transducers and sensors 21
2.3 Listing of common measured variables 21
2.4 The common characteristics of transducers 22
2.5 Sensor dynamics 24
2.6 Selection of sensing devices 24
2.7 Temperature sensors 25
2.8 Pressure transmitters 31
2.9 Flow meters 39
2.10 Level transmitters 45
2.11 The spectrum of user models in measuring transducers 47
2.12 Instrumentation and transducer considerations 48
2.13 Selection criteria and considerations 50
2.14 Introduction to the smart transmitter 52
3 Basic Principles of Control Valves and Actuators 55
3.1 Objectives 55
3.2 An overview of eight of the most basic types of control valves 55
3.3 Control valve gain, characteristics, distortion and rangeability 70
3.4 Control valve actuators 75
3.5 Control valve positioners 80
3.6 Valve sizing 80
4 Fundamentals of Control Systems 83
4.1 Objectives 83
4.2 ON-OFF control 83
4.3 Modulating control 84
4.4 Open loop control 85
4.5 Closed control loop 86
4.6 Dead time processes 90
4.7 Process responses 91
4.8 Dead zone 92
5 Stability and Control Modes of Closed Loops 93
5.1 Objectives 93
5.2 The industrial process in practice 93
5.3 Dynamic behavior of the feed heater 94
5.4 Major disturbances of the feed heater 95
5.5 Stability 95
5.6 Proportional control 98
5.7 Integral control 101
5.8 Derivative control 104
5.9 Proportional, integral and derivative modes 107
5.10 ISA versus “Allen Bradley” 107
5.11 P I and D relationships and related interactions 107
5.12 Applications of process control modes 108
6 Digital Control Principles 111
6.1 Objectives 111
6.2 Digital vs analog: A revision of their definitions 111
6.3 Action in digital control loops 111
6.4 Identifying functions in the frequency domain 112
6.5 The need for digital control 114
6.6 Scanned calculations 117
6.7 Proportional control 117
6.8 Integral control 117
6.9 Derivative control 118
6.10 Lead function as derivative control 118
6.11 Example of incremental form (Siemens S5 - 100V) 119
7 Real and Ideal PID Controllers 121
7.1 Objectives 121
7.2 Comparative descriptions of real and ideal controllers 121
7.3 Description of the IDEAL or the non-interactive PID controller 121
7.4 Description of the real (interactive) PID controller 122
7.5 Lead function - derivative control with filter 123
7.6 Derivative action and effects of noise 124
7.7 Example of the KENT K90 controllers PID algorithms 125
8 Tuning of PID Controllers in Both Open and Closed Loop Control Systems 127
8.1 Objectives 127
8.2 Objectives of tuning 127
8.3 Reaction curve method (Ziegler Nichols) 129
8.4 Ziegler Nichols open loop tuning method (1) 131
8.5 Ziegler-Nichols open loop method (2) using POI 132
8.6 Loop time constant (LTC) method 134
8.7 Hysteresis problems that may be encountered in open loop tuning 136
8.8 Continuous cycling method (Ziegler Nichols) 136
8.9 Damped cycling tuning method 138
8.10 Tuning for no overshoot on start up (Pessen) 142
8.11 Tuning for some overshoot on start up (Pessen) 143
8.12 Summary of important closed loop tuning algorithms 143
8.13 PID equations: Dependent and independent gains 143
9 Controller Output Modes, Operating Equations and Cascade Control 147
9.1 Objectives 147
9.2 Controller output 147
9.3 Multiple controller output configurations 148
9.4 Saturation and non-saturation of output limits 149
9.5 Cascade control 150
9.6 Initialization of a cascade system 152
9.7 Equations relating to controller configurations 153
9.8 Application notes on the use of equation types 155
9.9 Tuning of a cascade control loop 156
9.10 Cascade control with multiple secondaries 157
10 Concepts and Applications of Feedforward Control 159
10.1 Objectives 159
10.2 Application and definition of feedforward control 159
10.3 Manual feedforward control 160
10.4 Automatic feedforward control 160
10.5 Examples of feedforward controllers 161
10.6 Time matching as feedforward control 161
11 Combined Feedback and Feedforward Control 165
11.1 Objectives 165
11.2 The feedforward concept 165
11.3 The feedback concept 166
11.4 Combining feedback and feedforward control 166
11.5 Feedback - Feedforward summer 166
11.6 Initialization of a combined feedback and feedforward control system 167
11.7 Tuning aspects 167
12 Long Process Dead-time in Closed Loop Control and the Smith Predictor 169
12.1 Objectives 169
12.2 Process deadtime 169
12.3 An example of process deadtime 170
12.4 The Smith Predictor model 171
12.5 The Smith Predictor in theoretical use 172
12.6 The Smith Predictor in reality 173
12.7 An exercise in deadtime compensation 173
13 Basic Principles of Fuzzy Logic and Neural Networks 175
13.1 Objectives 175
13.2 Introduction to fuzzy logic 175
13.3 What is fuzzy logic? 176
13.4 What does fuzzy logic do? 176
13.5 The rules of fuzzy logic 176
13.6 Fuzzy logic example using five rules and patches 178
13.7 The Achilles heel of fuzzy logic 179
13.8 Neural networks 180
13.9 Neural back-propagation networking 181
13.10 Training a neural network 183
13.11 Conclusions, and then the next step 184
14 Self-Tuning Intelligent Control and Statistical Process Control 185
14.1 Objectives 185
14.2 Self-tuning controllers 185
14.3 Gain scheduling controller 186
14.4 Implementation requirements for self tuning controllers 187
14.5 Statistical process control 187
14.6 Two ways to improve a production process 188
14.7 Obtaining the information required for SPC 189
14.8 Calculating control limits 194
14.9 The logic behind control charts 196
Appendix A Some Laplace Transform Pairs 199
Appendix B Block Diagram Transformation Theorems 203
Appendix C Getting started with PC-ControLAB 205
Appendix D Practical exercises 211
Appendix E Quiz 313
Appendix F Intelligent Trial and Error 317
As a result of studying this chapter, the student should be able to:
- Describe the three different types of processes;
- Indicate the meaning of a time constant;
- Describe the meaning of Process Variable, Set Point and Output;
- Outline the meaning of 1st and 2nd order systems;
- List the different modes of operation of a control system.
To succeed in process control, the designer must first establish a good understanding of the process to be controlled. Since we do not wish to become too deeply involved in chemical or process engineering we need to find a way of simplifying the representation of the process we wish to control. This is done by adopting a technique of block diagram modeling of the process.
All processes have some basic characteristics in common, and if we can identify these, the job of designing a suitable controller can follow a well proven and consistent path. The trick is to learn how to make a reasonably accurate mathematical model of the process and use this model to find out what typical control actions we can use to make the process operate at the desired conditions.
Let us then start by examining the component parts of the more important dynamics that are common to many processes. This will be the topic covered in the next few sections of this chapter, and upon completion we should be able to draw a block diagram model for a simple process, for example one that says: “It is a system with high gain and a 1st order dynamic lag and as such we can expect it to perform in the following way”, regardless of what the process is manufacturing or its final product.
From this analytical result the appropriate type of measuring transducer can be selected, as covered in Chapter 2, and likewise the correct final control element, as covered in Chapter 3.
From there on, Chapters 4 through 14 deal with all the other aspects of Practical Process Control, namely the controller(s), functions, actions and reactions, function combinations and various modes of operation. By way of introduction to the controller itself, the last sections of this chapter introduce the basic definitions of controller terms and the types of control modes that are available.
1.3 Basic definitions and terms used in process control
Most basic process control systems consist of a control loop as shown in Figure 1.1, having four main components, these being:
- A measurement of the state or condition of a process;
- A controller calculating an action based on this measured value against a pre-set or desired value (Set Point);
- An output signal resulting from the controller calculation which is used to manipulate the process action through some form of actuator;
- The process itself reacting to this signal, and changing its state or condition.
As we will see in Chapters 2 and 3, two of the most important signals used in process control are called:
PROCESS VARIABLE or PV.
MANIPULATED VARIABLE or MV.
In industrial process control, the Process Variable or PV is measured by an instrument in the field and acts as an input to an automatic controller which takes action based on the value of it. Alternatively the PV can be an input to a data display so that the operator can use the reading to adjust the process through manual control and supervision.
The variable to be manipulated, in order to have control over the PV, is called the Manipulated Variable or MV. If we control a particular flow for instance, we manipulate a valve to control the flow. Here, the valve position is called the Manipulated Variable and the measured flow becomes the Process Variable.
In the case of a simple automatic controller, the Controller Output Signal (OP) drives the Manipulated Variable. In more complex automatic control systems, a controller output signal may drive the target values or reference values for other controllers.
The ideal value of the PV is often called the Target Value; in the case of automatic control, the term Set Point Value is preferred.
1.4 Process modeling
To perform an effective job of controlling a process we need to know how the control input we are proposing to use will affect the output of the process. If we change the input conditions we shall need to know:
- Will the output rise or fall?
- How much response will we get?
- How long will it take for the output to change?
- What will be the response curve or trajectory of the response?
The answers to these questions are best obtained by creating a mathematical model of the relationship between the chosen input and the output of the process in question. Process control designers use a very useful technique of block diagram modeling to assist in the representation of the process and its control system. The following section introduces the principles that we should be able to apply to most practical control loop situations.
The process plant is represented by an input/output block as shown in Figure 1.2.
In Figure 1.2 we see a controller signal that will operate on an input to the process, known as the manipulated variable. We try to drive the output of the process to a particular value or set point by changing the input. The output may also be affected by other conditions in the process or by external actions such as changes in supply pressures or in the quality of materials being used in the process. These are all regarded as disturbance inputs and our control action will need to overcome their influences as best as possible.
The challenge for the process control designer is to maintain the controlled process variable at the target value or change it to meet production needs whilst compensating for the disturbances that may arise from other inputs. So for example if you want to keep the level of water in a tank at a constant height whilst others are drawing off from it you will manipulate the input flow to keep the level steady.
The value of a process model is that it provides a means of showing the way the output will respond to the actions of the input. This is done by having a mathematical model based on the physical and chemical laws affecting the process. For example in Figure 1.3 an open tank with cross sectional area A is supplied with an inflow of water Q1 that can be controlled or manipulated. The outflow from the tank passes through a valve with a resistance R to the output flow Q2. The level of water or pressure head in the tank is denoted as H. We know that Q2 will increase as H increases and when Q2 equals Q1 the level will become steady.
The block diagram version of this process is drawn in Figure 1.4.
Note that the diagram simply shows the flow of variables into function blocks and summing points so that we can identify the input and output variables of each block.
We want this model to tell us how H will change if we adjust the inflow Q1 whilst we keep the outflow valve at a constant setting. The model equations can be written as follows:
dH/dt = (Q1 − Q2)/A
Q2 = H/R
The first equation says the rate of change of level is proportional to the difference between inflow and outflow divided by the cross sectional area of the tank. The second equation says the outflow will increase in proportion to the pressure head divided by the flow resistance, R.
For turbulent flow conditions in the exit pipe and the valve, the effective resistance to flow, R, will actually change in proportion to the square root of the pressure drop, so we should also note that R = (a constant) × √H. This creates a non-linear element in the model which makes things more complicated. However, in control modeling it is common practice to simplify the nonlinear elements when we are studying dynamic performance around a limited area of disturbance. So for a narrow range of level we can treat R as a constant. It is important that this approximation is kept in mind, because in many applications it leads to problems when loop tuning is being set up on the plant at conditions away from the original working point.
The process input/output relationship is therefore defined by substituting for Q2 in the linear differential equation:
dH/dt = Q1/A − H/(RA)
which is rearranged into the standard form:
RA·(dH/dt) + H = R·Q1
When this differential equation is solved for H it gives:
H = R·Q1·(1 − e^(−t/RA))
Using this equation we can show that if a step change in flow ΔQ1 is applied to the system, the level will rise by the amount ΔQ1·R, following an exponential rise versus time. This is the characteristic of a first order dynamic process and is very commonly seen in many physical processes. These are sometimes called capacitive and resistive processes and include examples such as charging a capacitor through a resistance circuit (see Figure 1.5) and heating of a well mixed hot water supply tank (see Figure 1.6).
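The exponential step response described above is easy to sketch numerically. The gain and time constant below are illustrative assumptions only; the shape is what matters: about 63.2% of the final value after one time constant, about 98% after four.

```python
import math

def first_order_step(t, gain, tau):
    """Output of a first order lag, time t after a unit step input:
    gain * (1 - e^(-t/tau))."""
    return gain * (1.0 - math.exp(-t / tau))

gain, tau = 1.0, 0.8   # illustrative values, not from any specific process
for t in (0.0, tau, 2 * tau, 4 * tau):
    frac = first_order_step(t, gain, tau) / gain
    print(f"t = {t:4.1f}  fraction of final value = {frac:.3f}")
# after one time constant -> 0.632, after four -> 0.982
```

The same function describes the tank, the RC circuit and the heated vessel; only the physical meaning of gain and tau changes.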
1.5 Process dynamics and time constants
Resistance, capacitance and inertia are perhaps the most important effects in industrial processes involving heat transfer, mass transfer, and fluid flow operations. The essential characteristics of first and second order systems are summarized below and they may be used to identify the time constant and responses of many processes as well as mechanical and electrical systems. In particular it should be noted that most process measuring instruments will exhibit a certain amount of dynamic lag and this must be recognized in any control system application since it will be a factor in the response and in the control loop tuning.
1.5.1 First order process dynamic characteristics
The general version of the process model for a first order lag system is a linear first order differential equation:
T·dc(t)/dt + c(t) = Kp·m(t)
where:
T = the process response time constant
Kp = the process steady state gain (output change/input change)
t = time
c(t) = process output response
m(t) = process input response
The output of a first order process follows the step change in input with a classical exponential rise as shown in Figure 1.7.
Important points to note:
T is the time constant of the system: the time taken to reach 63.2% of the final value after a step change has been applied to the system. After four time constants the output response has reached about 98% of the final value at which it will settle.
The initial rate of rise of the output will be Kp/T.
Application to the tank example:
If we apply some typical tank dimensions to the response curve in Figure 1.7 we can predict the time that the tank level example in Figure 1.3 will need to stabilize after a small step change around a target level H.
For example: Suppose the tank has a cross sectional area of 2 m2 and operates at H = 2m when the outflow rate is 5m3 per hour. The resistance constant R will be H/Q2 = 2 m/5 m3 /hr = 0.4 hr/m2 and the time constant will be AR = 0.8 hrs. The gain for a change in Q1 will also be R.
Hence if we make a small corrective change at Q1 of say 0.1 m3/hr the resulting change in level will be: R·ΔQ1 = 0.4 × 0.1 = 0.04 m and the time to reach 98% of that change will be 3.2 hours.
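The arithmetic of this worked example can be checked in a few lines, using the values quoted above:

```python
A = 2.0      # tank cross-sectional area, m^2
H = 2.0      # operating head, m
Q2 = 5.0     # steady-state outflow, m^3/hr
dQ1 = 0.1    # small step change in inflow, m^3/hr

R = H / Q2          # flow resistance, hr/m^2
tau = A * R         # time constant, hr
dH = R * dQ1        # final change in level, m
t98 = 4 * tau       # time to ~98% of the change, hr

print(R, tau, round(dH, 3), t98)   # 0.4 0.8 0.04 3.2
```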
1.5.2 Resistance process
Now that we have seen how a first order process behaves we can summarize the possible variations that may be found by considering the equivalent of resistance, capacitance and inertia type processes.
If a process has very little capacitance or energy storage, the output response to a change in input will be instantaneous and proportional to the gain of the stage. For example: If a linear control valve is used to change the input flow in the tank example of Figure 1.3, the output flow will rise immediately to a higher value with a negligible lag.
1.5.3 Capacitance type processes
Most processes include some form of capacitance or storage capability, either for materials (gas, liquid or solids) or for energy (thermal, chemical, etc.). Those parts of the process with the ability to store mass or energy are termed 'capacities'. They are characterized by storing energy in the form of potential energy, for example, electrical charge, fluid hydrostatic head, pressure energy and thermal energy.
The capacitance of a liquid or gas storage tank is expressed in area units. These processes are illustrated in Figure 1.8. The gas capacitance of a tank is constant and is analogous to electrical capacitance.
The liquid capacitance equals the cross-sectional area of the tank at the liquid surface; if this is constant then the capacitance is also constant at any head.
Using Figure 1.8 consider now what happens if we have a steady condition where flow into the tank matches the flow out via an orifice or valve with flow resistance r. If we change the inflow slightly by Δv the outflow will rise as the pressure rises until we have a new steady state condition. For a small change we can take r to be a constant value. The pressure and outflow responses will follow the first order lag curve we have seen in Figure 1.7 and will be given by the equation Δp = r·Δv·(1 − e^(−t/rC)), where the time constant is r·C.
It should be clear that this dynamic response follows the same laws as those for the liquid tank example in Figure 1.3 and for the electrical circuit shown in Figure 1.5.
A purely capacitive process element can be illustrated by a tank with only an inflow connection such as Figure 1.9. In such a process, the rate at which the level rises is inversely proportional to the capacitance and the tank will eventually flood. For an initially empty tank with constant inflow, the level c is the product of the inflow rate m and the time period of charging t divided by the capacitance of the tank C, i.e. c = m·t/C.
1.5.4 Inertia type processes
Inertia effects are typically due to the motion of matter involving the storage or dissipation of kinetic energy. They are most commonly associated with mechanical systems involving moving components, but are also important in some flow systems in which fluids must be accelerated or decelerated. The most common example of a first-order lag caused by kinetic energy build up is when a rotating mass is required to change speed or when a motor vehicle is accelerated by an increase in engine power up to a higher speed until the wind and rolling resistances match the increased power input.
For example: Consider a vehicle of mass M moving at V = 60 km/hr where the driving force F of the engine matches the wind drag and rolling resistance forces. If B is the coefficient of resistance, the steady state is F = V·B and for a small change of force ΔF the final speed change will be ΔV = ΔF/B.
The speed change response will be given by:
ΔV = (ΔF/B)·(1 − e^(−tB/M))
This equation is directly comparable to the versions for the tank and the electrical RC circuit. In this case the time constant is given by M/B. Obviously the higher the mass of the vehicle the longer it will take to change speed for the same change in driving force. If the resistance to speed is high the speed change will be small and the time constant will be shorter.
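The same first order mathematics applies directly. The mass and resistance coefficient below are illustrative assumptions, chosen only to show the time constant M/B at work:

```python
import math

M = 1200.0   # vehicle mass, kg (illustrative assumption)
B = 24.0     # resistance coefficient, N/(m/s) (illustrative assumption)
dF = 120.0   # step change in driving force, N

tau = M / B              # time constant, s
dV_final = dF / B        # final change in speed, m/s

def dV(t):
    """Speed change t seconds after the force step."""
    return dV_final * (1.0 - math.exp(-t * B / M))

print(tau, dV_final)                   # 50.0 5.0
print(round(dV(tau) / dV_final, 3))    # 0.632 after one time constant
```

Doubling the mass doubles the time constant; doubling the resistance halves both the final speed change and the time constant, exactly as the text describes.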
1.5.5 Second-order response
Second order processes result in more complicated response curves. This is due to the exchange of energy between inertia effects and interactions between first order resistance and capacitance elements. They are described by the second order differential equation:
T²·d²c(t)/dt² + 2ξT·dc(t)/dt + c(t) = Kp·m(t)
where:
T = the time constant of the second-order process
ξ = the damping ratio of the system
Kp = the system gain
t = time
c(t) = process output response
m(t) = process input response
The solutions to the equation for a step change in m(t) with all initial conditions zero can be any one of a family of curves as shown in Figure 1.10. There are three broad classes of response in the solution, depending on the value of the damping ratio:
ξ < 1.0, the system is under damped and overshoots the steady-state value.
If ξ < 0.707, the system will oscillate about the final steady-state value.
ξ > 1.0, the system is over damped and will not oscillate or overshoot the final steady-state value.
ξ = 1.0, the system is critically damped. In this state it yields the fastest response without overshoot or oscillation. The natural frequency of oscillation will be ωn = 1/T and is defined in terms of the 'perfect' or 'frictionless' situation where ξ = 0.0. As the damping factor increases the oscillation frequency decreases or stretches out until the critical damping point is reached.
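These damping classes can be demonstrated with a small simulation of the second order equation. The step size, horizon and parameter values (T = 1, Kp = 1, simple Euler integration) are arbitrary choices for illustration:

```python
def second_order_step(zeta, T=1.0, Kp=1.0, t_end=20.0, dt=0.001):
    """Integrate T^2*c'' + 2*zeta*T*c' + c = Kp*m for a unit step in m,
    zero initial conditions, simple Euler stepping. Returns the peak output."""
    c, v = 0.0, 0.0           # output and its rate of change
    peak = 0.0
    for _ in range(int(t_end / dt)):
        a = (Kp * 1.0 - 2.0 * zeta * T * v - c) / (T * T)
        v += a * dt
        c += v * dt
        peak = max(peak, c)
    return peak

print(round(second_order_step(0.2), 2))   # under damped: peak well above 1
print(round(second_order_step(1.0), 2))   # critically damped: no overshoot
print(round(second_order_step(2.0), 2))   # over damped: no overshoot
```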
For practical application in control systems the most common form of second order system is found wherever two first order lag stages are in series, in which the output of the first stage is the input to the second. As we shall see in Section 1.5.9, where the lags are modeled using transfer functions, the time constants of the two first order lags are combined to calculate the equivalent time constant and damping factor for their overall response as a second order system.
Important note: When a simple feedback control loop is applied to a first order system or to a second order system, the overall transfer function of the combined process and control system will usually be equivalent to a second order system. Hence the response curves shown in Figure 1.10 will be seen in typical closed loop control system responses.
1.5.6 Multiple time constant processes
In multiple time constant processes, say where two tanks are connected in series, the process will have two or more time lags operating in series. As the number of time constants increases, the response curves of the system become progressively more retarded and the overall response gradually changes into an S-shaped reaction curve as can be seen in Figure 1.11.
1.5.7 High order response
Any process that consists of a large number of process stages connected in series can be represented by a set of series connected first order lags or transfer functions. When combined for the overall process they represent a high order response, but very often one or two of the first order lags will be dominant or can be combined. Hence many processes can be reduced to approximate first or second order lags, but they will also exhibit a dead time or transport lag as well.
1.5.8 Dead time or transport delay
For a pure dead time process whatever happens at the input is repeated at the output θd time units later, where θd is the dead time. This would be seen for example in a long pipeline if the liquid blend was changed at the input or the liquid temperature was changed at the input and the effects were not seen at the output until the travel time in the pipe has expired.
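A pure dead time is simply a delay buffer: the output repeats the input a fixed number of time units later. The sketch below models this in sampled form; the sample values are arbitrary illustrations.

```python
from collections import deque

def delayed(signal, dead_steps, initial=0.0):
    """Pure dead time: the output repeats the input dead_steps samples later.
    The buffer starts filled with the initial steady-state value."""
    buf = deque([initial] * dead_steps)
    out = []
    for u in signal:
        buf.append(u)          # new input enters the "pipeline"
        out.append(buf.popleft())  # oldest value emerges at the far end
    return out

inp = [0, 0, 1, 1, 1, 1, 1, 1]     # step change at sample 2
print(delayed(inp, 3))             # step emerges 3 samples later
```

Nothing about the step is distorted, only shifted in time, which is exactly why dead time gives a feedback controller no early information to act on.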
In practice, the mathematical analysis of uncontrolled processes containing time delays is relatively simple but a time delay, or a set of time delays, within a feedback loop tends to lend itself to very complex mathematics.
In general, the presence of time delays in control systems reduces the effectiveness of the controller. In well-designed systems the time delays (deadtimes) should be kept to the minimum.
1.5.9 Using transfer functions
In practice differential equations are difficult to manipulate for the purposes of control system analysis. The problem is simplified by the use of transfer functions.
Transfer functions allow the modeling blocks to be treated as simple functions that operate on the input variable to produce the output variable. They operate only on changes from a steady state condition, so they will show us the time response profile for step changes or disturbances around the steady state working point of the process.
Transfer functions are based on the differential equations for the time response being converted by Laplace transforms into algebraic equations which can operate directly on the input variable. Without going into the mathematics of transforms it is sufficient to note that the Laplace operator (symbol s) replaces the differential operator, such that d(variable)/dt becomes s × (variable).
A transfer function is abbreviated as G(s) and it represents the ratio of the Laplace transform of a process output Y(s) to that of an input M(s) as shown in Figure 1.12. From this, the simple relationship is obtained: Y(s) = G(s)·M(s).
When applied to the first order system we have already described, the transfer function representing the action of a first order system on a changing input signal is as shown in Figure 1.13, where T is the time constant.
As we have already seen, many processes involve the series combination of two or more first order lags. These are represented in the transfer function blocks as seen in Figure 1.14. If the two blocks are combined by multiplying the functions together they can be seen to form a second order system as shown here and as described in Section 1.5.5.
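The algebra of combining two first order lags can be sketched directly. Multiplying K1/(T1·s + 1) by K2/(T2·s + 1) gives a denominator T1·T2·s² + (T1 + T2)·s + 1, which matches the standard second order form T²·s² + 2ξT·s + 1:

```python
import math

def combine_first_order_lags(K1, T1, K2, T2):
    """Series combination of K1/(T1*s + 1) and K2/(T2*s + 1):
    returns the equivalent second order gain Kp, time constant T
    and damping ratio zeta of Kp / (T^2 s^2 + 2*zeta*T*s + 1)."""
    Kp = K1 * K2
    T = math.sqrt(T1 * T2)             # T^2 = T1*T2
    zeta = (T1 + T2) / (2.0 * T)       # 2*zeta*T = T1 + T2
    return Kp, T, zeta

Kp, T, zeta = combine_first_order_lags(2.0, 1.0, 0.5, 4.0)
print(Kp, T, zeta)   # 1.0 2.0 1.25 -> over damped
```

Note that two real first order lags in series always give ξ ≥ 1 (with equality only when T1 = T2), so a series of simple lags never oscillates by itself; oscillatory behavior arises once feedback is closed around the process.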
Block diagram modeling of the control system proceeds in the same manner as for the process and is shown by adding the feedback controller as one or more transfer function blocks. The most useful rule for constructing the transfer function of a feedback control loop is shown in Figure 1.15.
The feedback transfer function H(s) (typically the sensor response) and the controller transfer function Gc(s) are combined in the model to give an overall transfer function that can be used to calculate the overall behavior of the controlled process.
This allows the complete control system working with its process to be represented in an equation known as the closed loop transfer function. The denominator of the right hand side of this equation contains the open loop transfer function, Gc(s)G(s)H(s). You can see that if this denominator becomes equal to zero the output of the process approaches infinity and the whole process is seen to be unstable. Hence control engineering studies place great emphasis on detecting and avoiding the condition where the open loop transfer function approaches −1 and the control system becomes unstable.
1.6 Types or modes of operation of process control systems
There are five basic forms of control available in Process Control. These are:
- On-Off
- Modulating
- Open Loop
- Feed Forward
- Closed Loop
The next five sections, 1.6.1 to 1.6.5, examine each of these in turn.
1.6.1 On-off control
The most basic control concept is ON-OFF control, as found in a domestic electric iron. This is a very crude form of control, which nevertheless should be considered as a cheap and effective means of control if a fairly large fluctuation of the PV (Process Variable) is acceptable.
The wear and tear of the controlling element (solenoid valve etc) needs special consideration. As the bandwidth of fluctuation of a PV is increased, the frequency of switching (and thus wear and tear) of the controlling element decreases.
1.6.2 Modulating control
If the output of a controller can move through a range of values, we have modulating control. It is understood that modulating control takes place within a defined operating range (with an upper and lower limit) only.
Modulating control can be used in both open and closed loop control systems.
1.6.3 Open loop control
We have open loop control if the control action (Controller Output Signal OP) is not a function of the PV (Process Variable) or load changes. Open loop control does not self-correct when the PV drifts.
1.6.4 Feed forward control
Feed forward control is a form of control based on anticipating the correct manipulated variables required to deliver the required output variable. It is seen as a form of open loop control as the PV is not used directly in the control action. In some applications the feed forward control signal is added to a feedback control signal to drive the manipulated variable (MV) closer to its final value. In other, more advanced control applications a computer based model of the process is used to compute the required MV, and this is applied directly to the process as shown in Figure 1.16.
A typical application of this type of control is to combine it with feedback, or closed loop, control. The imperfect feedforward control can then correct up to 90% of the upsets, leaving the feedback system to correct the remaining 10% deviation.
1.6.5 Closed loop or feedback control
We have a Closed Loop Control System if the PV, the objective of control, is used to determine the control action. The principle is shown below in Figure 1.17.
The idea of Closed Loop Control is to measure the PV (Process Variable); compare this with the SP (Setpoint), which is the desired or target value; and determine a control action which results in a change of the OP (Output) value of an automatic controller.
In most cases, the ERROR (ERR) term is used to calculate the OP value.
ERR = PV - SP
If ERR = SP - PV has to be used, the controller has to be set for REVERSE control action.
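The two error conventions above can be sketched in a few lines of code; `controller_error` is a hypothetical helper name, not something defined in this manual:

```python
def controller_error(pv, sp, reverse=False):
    """Return the error term used by the controller.

    Direct action (the common case in this text): ERR = PV - SP.
    Reverse action: ERR = SP - PV.
    """
    return (sp - pv) if reverse else (pv - sp)

# A PV above setpoint gives a positive error in direct action...
print(controller_error(pv=52.0, sp=50.0))                 # 2.0
# ...and a negative error when the controller is set for reverse action.
print(controller_error(pv=52.0, sp=50.0, reverse=True))   # -2.0
```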
1.7 Closed loop controller and process gain calculations
In designing and setting up practical process control loops one of the most important tasks is to establish the true factors making up the loop gain and then to calculate the gain.
Typically the constituent parts of the entire loop will consist of a minimum of 4 functional items:
- The Controller Gain, KC
- The Process Gain, KP
- The measuring transducer or Sensor Gain (refer to Chapter 2), KS
- The Valve Gain, KV
The total loop gain is the product of these four operational blocks.
For simple loop tuning two basic methods have been in use for many years. The Ziegler and Nichols method is called the “Ultimate cycle method” and requires that the controller should first be set up with proportional only control. The loop gain is adjusted to find the ultimate gain, Ku. This is the gain at which the loop begins to sustain a permanent cycle (continuous oscillation). For a proportional only controller the gain is then reduced to 0.5 Ku. Therefore for this tuning the loop gain must be considered in terms of the product of the four gains given above and the tuning condition is given by the following equation:
KC = 0.5 Ku, where the total loop gain = KC x KP x KS x KV
Normally only the controller gain can be changed but it remains very important that the other gain components be recognized and calculated. In particular the valve gain and process gain may change substantially with the working point of the process and this is the cause of many of the tuning problems encountered on process plants.
Other gain settings are used in the Ziegler and Nichols method for PI and PID controllers to ensure stability when integral and derivative actions are added to the controller. See the next section for the meaning of these terms.
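The classic Ziegler and Nichols ultimate-cycle settings (0.5 Ku for P only; reduced gains plus integral and derivative times for PI and PID) can be captured in a small helper. This is a sketch of the commonly published rule table; `ziegler_nichols` and its argument names are our own:

```python
def ziegler_nichols(ku, pu, mode="pid"):
    """Classic Ziegler-Nichols ultimate-cycle settings.

    ku: ultimate controller gain (gain at sustained oscillation)
    pu: period of the sustained cycle (seconds)
    Returns (controller gain, integral time, derivative time);
    None marks an unused term.
    """
    rules = {
        "p":   (0.50 * ku, None,     None),
        "pi":  (0.45 * ku, pu / 1.2, None),
        "pid": (0.60 * ku, pu / 2.0, pu / 8.0),
    }
    return rules[mode]

kc, ti, td = ziegler_nichols(ku=4.0, pu=60.0, mode="pid")
print(kc, ti, td)  # 2.4 30.0 7.5
```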
The alternative tuning method is known as the 1/4 damping method. This suggests that the controller gain should be adjusted to obtain an under-damped response in which the first overshoot has a quarter of the amplitude of the initial step change in set point. Subsequent oscillations then decay to 1/4 of the amplitude of the previous overshoot. This method does not change the gain settings as integral and derivative terms (see next section) are added to the controller.
Rule of thumb guidelines for loop tuning should be treated with reservation since each application has its own special characteristics. There is no substitute for obtaining a reasonably complete knowledge of the type of disturbances that are likely to affect the controlled process and it is essential to agree with the process engineers on the nature of the controlled response that will best suit the process. In some cases the above tuning methods will lead to loop tuning that is too sensitive for the conditions, resulting in a high degree of instability.
1.8 Proportional, integral and derivative control modes
Most closed loop controllers are capable of controlling with three control modes, which can be used separately or together:
- Proportional Control (P)
- Integral, or Reset Control (I)
- Derivative, or Rate Control (D)
The purpose of each of these control modes is as follows:
- Proportional control
This is the main and principal method of control. It calculates a control action proportional to the ERROR (ERR). Proportional control alone cannot eliminate the ERROR completely.
- Integral Control ...(Reset)
This is the means to completely eliminate the remaining ERROR, or OFFSET value, left by the proportional action. It may result in reduced stability in the control action.
- Derivative Control ...(Rate)
This is sometimes added to introduce dynamic stability to the control LOOP.
Note: The terms RESET for integral and RATE for derivative control actions are seldom used nowadays.
- Derivative control has no functionality on its own.
The only combinations of the P, I and D modes are:
- P: For use as a basic controller
- PI: Where the offset caused by the P mode is removed
- PID: To remove instability problems that can occur in PI mode
- PD: Used in cascade control; a special application
- I: Used in the primary controller of cascaded systems
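As an illustration of how the three modes combine, here is a minimal positional PID sketch. The structure (P term, accumulated I term, difference-based D term) follows the descriptions above; the function names and simple dictionary state are our own, and a real controller would add output limiting and anti-windup:

```python
def make_pid(kc, ti=None, td=None, dt=1.0, reverse=False):
    """Return a simple positional PID function OP = f(PV, SP).

    kc: proportional gain; ti: integral time (None disables I);
    td: derivative time (None disables D); dt: sample period.
    """
    state = {"integral": 0.0, "last_err": 0.0}

    def step(pv, sp):
        err = (sp - pv) if reverse else (pv - sp)
        out = kc * err                            # P: proportional to ERR
        if ti is not None:                        # I: removes remaining offset
            state["integral"] += err * dt
            out += (kc / ti) * state["integral"]
        if td is not None:                        # D: adds dynamic stability
            out += kc * td * (err - state["last_err"]) / dt
        state["last_err"] = err
        return out

    return step

pid = make_pid(kc=2.0, ti=10.0, dt=1.0)
print(round(pid(48.0, 50.0), 3))  # -4.4 (direct action: ERR = PV - SP = -2)
```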
1.9 An introduction to cascade control
Controllers are said to be “In Cascade” when the output of the first or Primary controller is used to manipulate the set point of another or Secondary controller. When two or more controllers are cascaded, each will have its own measurement input or PV but only the primary controller can have an independent set point (SP) and only the secondary, or the most down-stream controller has an output to the process.
Cascade control is of great value where high performance is needed in the face of random disturbances, or where the secondary part of a process contains a significant time lag or has non-linearity.
The principal advantages of cascade control are:
Disturbances occurring in the secondary loop are corrected by the secondary controller before they can affect the primary, or main, variable.
The secondary controller can significantly reduce phase lag in the secondary loop thereby improving the speed of response of the primary loop.
Gain variations due to non-linearity in the process or actuator in the secondary loop are corrected within that loop.
The secondary loop enables exact manipulation of the flow of mass or energy by the primary controller.
Figure 1.18 shows an example of cascade control where the primary controller, TC, is used to measure the output temperature, T2, and compare this with the Setpoint value of the TC, and the secondary controller, FC, is used to keep the fuel flow constant against variables like pressure changes.
The primary controller’s output is used to manipulate the SP of the secondary controller thereby changing the fuel feed rate to compensate for temperature variations of T2 only. Variations and inconsistencies in the fuel flow rate are corrected solely by the secondary controller; the FC controller.
The secondary controller is tuned with a high gain to provide a proportional (linear) response to the set point range thereby removing any non-linear gain elements from the action of the primary controller.
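The cascade arrangement of Figure 1.18 can be sketched as two simple proportional-only controllers, with the primary's output wired in as the secondary's set point. All names and gains here are hypothetical illustration values:

```python
def primary_tc(t2, t2_sp, kc=3.0):
    """Primary temperature controller: its output becomes the flow set point."""
    return kc * (t2_sp - t2)   # reverse-acting error, for illustration

def secondary_fc(flow, flow_sp, kc=1.5):
    """Secondary flow controller: its output drives the fuel valve."""
    return kc * (flow_sp - flow)

# "In cascade": the primary's output is the secondary's SP.
flow_sp = primary_tc(t2=180.0, t2_sp=200.0)     # temperature 20 deg low
valve_op = secondary_fc(flow=50.0, flow_sp=flow_sp)
print(flow_sp, valve_op)  # 60.0 15.0
```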
Process Measurement and Transducers
2.1 Objectives
At the conclusion of this chapter, the student should:
- Be able to explain the meaning of the terms accuracy, precision, sensitivity, resolution, repeatability, rangeability, span and hysteresis;
- Be able to make an appropriate selection of sensing devices for a particular process;
- Be able to describe the sensors used for measurement of temperature, pressure, flow, and liquid level;
- Be able to list the methods of minimizing the interference effects of noise on an instrumentation system.
2.2 The definition of transducers and sensors
A transducer is a device that obtains information in the form of one or more physical quantities and converts this into an electrical output signal.
Transducers consist of two principal parts: a primary measuring element, referred to as a sensor, and a transmitter unit responsible for producing an electrical output that has some known relationship to the physical measurement.
In more sophisticated units, a third element may be introduced which is quite often microprocessor based. This is introduced between the sensor and the transmitter part of the unit and has amongst other things, the function of linearizing and ranging the transducer to the required operational parameters.
2.3 Listing of common measured variables
In descending order of frequency of occurrence, the principal controlled variables in process control systems comprise:
- Flow rate
- Liquid level
Sections 2.4 through to 2.6 of this chapter list and describe these different types of transducers; ending with a methodology of selecting sensing devices.
2.4 The common characteristics of transducers
All transducers, irrespective of their measurement requirements, exhibit the same characteristics, such as Range, Span etc. This section explains and demonstrates the interpretation of the most common of these characteristics.
2.4.1 Dynamic and static accuracy
The very first, and most common, term Accuracy is also the most misused and least understood.
It is nearly always quoted as “this instrument is +/- X% accurate”, when in fact it should be stated as “this instrument is +/- X% inaccurate”. In general, accuracy can best be described as how close the measurement’s indication is to the absolute or real value of the process variable. In order to obtain a clear understanding of this term, and all of the others that are associated with it, the term Error should first be defined.
The definition of error in process control
In everyday use, error means a mistake or transgression; in measurement, it is the difference between a perfect measurement and what was actually measured, at any point, time and direction of process movement within the process measuring range.
There are two types of accuracy, static or steady-state accuracy and dynamic accuracy.
Static accuracy is the closeness of approach to the true value of the variable when that true value is constant.
Dynamic accuracy is the closeness of approach of the measurement when the true value is changing. A measurement lag occurs here: by the time the measurement reading has been acted upon, the actual physical quantity may well have changed.
In addition to the term Accuracy, a sub-set of terms appears, these being Precision, Sensitivity, Resolution, Repeatability and Rangeability, all of which have a relationship and association with the term ERROR.
Precision is the accuracy with which repeated measurements of the same variable can be made under identical conditions.
In process control, precision is more important than accuracy, i.e. it is usually preferable to measure a variable precisely than it is to have a high degree of absolute accuracy. The difference between these two properties of measurement is illustrated in Figure 2.1.
Using a fluid as an example, the dashed curve represents the actual or real temperature. The upper measurement illustrates a precise but inaccurate instrument while the lower measurement illustrates an imprecise but more accurate instrument. The first instrument has the greater error, the latter has the greater drift.
(DRIFT: An undesirable change in the output to input relationship over a period of time)
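The distinction between accuracy and precision can be demonstrated numerically: the mean offset of repeated readings from the true value reflects (in)accuracy, while their scatter reflects (im)precision. The readings below are invented for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def accuracy_error(readings, true_value):
    """Systematic error: how far the average reading sits from the true value."""
    return mean(readings) - true_value

def precision_spread(readings):
    """Scatter of repeated readings (standard deviation): small = precise."""
    m = mean(readings)
    return (sum((x - m) ** 2 for x in readings) / len(readings)) ** 0.5

true_temp = 100.0
precise_but_inaccurate = [103.1, 103.0, 102.9, 103.0]   # tight group, offset
accurate_but_imprecise = [99.0, 101.5, 98.5, 101.0]     # centred, scattered

print(round(accuracy_error(precise_but_inaccurate, true_temp), 3))   # 3.0
print(round(accuracy_error(accurate_but_imprecise, true_temp), 3))   # 0.0
print(precision_spread(precise_but_inaccurate)
      < precision_spread(accurate_but_imprecise))                    # True
```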
Generally, sensitivity is defined as:
‘the amount of change in the output signal from a transducer’s transmitting element for a specified change in the input variable being measured’; i.e. it is the ratio of the output signal change to the change in the measured variable, and is a steady-state ratio, or the steady-state gain, of the element.
So, the greater the output signal change from the transducer’s transmitter for a given input change, the greater the sensitivity of the measuring element.
Highly sensitive devices, such as thermistors, may change resistance by as much as 5% per °C, while devices with low sensitivity, such as thermocouples, may produce an output voltage which changes by only 5 μV (5 x 10^-6 volts) per °C.
The second kind of sensitivity important to measuring systems is defined as:
‘the smallest change in the measured variable which will produce a change in the output signal from the sensing element’.
In many physical systems, particularly those containing levers, linkages, and mechanical parts, there is a tendency for these moving parts to stick and to have some free play.
The result of this is that small input signals may not produce any detectable output signal. To attain high sensitivity, instruments need to be well-designed and well-constructed. The control system will then have the ability to respond to small changes in the controlled variable; it is sometimes known as resolution.
Precision is related to Resolution, which is defined as the smallest change of input that results in a significant change in transducer output.
Repeatability is the closeness of agreement between a number of consecutive measurements of the output for the same value of input under identical operating conditions, approaching from the same direction, for full range traverses. It is usually expressed as a percentage of span. It does not include hysteresis.
2.4.6 Range
This is the region between the stated upper and lower range values over which the quantity is measured. Unless otherwise stated, input range is implied.
Example: If the range is stated as 50°C to 320°C then the Range is quoted as 50°C to 320°C.
Span should not be confused with range, although the same points of reference are used. Span is the ALGEBRAIC difference between the upper and lower range values.
If the range is stated, as in Section 2.4.6, as 50°C to 320°C, then the Span is 320 - 50 = 270°C.
Hysteresis is a dynamic measurement, and shows as the dependency of an output value, for a given excursion of the input, on the history of prior excursions and the direction of the traverse.
If the input to a system is moved from 0% to 100% and the resultant output recorded, and the input is then returned to 0% with the output again recorded, the difference between the two recorded outputs (0% ⇒ 100% ⇒ 0%) gives the hysteresis value of the system at all points in its range. Repetitive tests must be done under identical conditions.
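That test procedure can be expressed directly in code: record the output on the upscale sweep and again on the downscale sweep, then take the largest difference at matching input points. The recorded values below are hypothetical:

```python
def hysteresis(upscale, downscale):
    """Maximum output difference between an upscale (0 -> 100%) sweep and the
    matching downscale (100 -> 0%) sweep at the same input points."""
    return max(abs(u - d) for u, d in zip(upscale, downscale))

# Hypothetical recorded outputs (%) at inputs 0, 25, 50, 75, 100%:
up   = [0.0, 24.2, 49.1, 74.3, 100.0]
down = [0.8, 25.6, 50.4, 75.1, 100.0]
print(round(hysteresis(up, down), 2))  # 1.4
```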
2.5 Sensor dynamics
Process dynamics have been discussed in Chapter 1, and these same factors will apply to a sensor making it important to gain an understanding of sensor dynamics. The speed of response of the primary measuring element is often one of the most important factors in the operation of a feedback controller. As process control is continuous and dynamic, the rate at which the controller is able to detect changes in the process will be critical to the overall operation of the system.
Fast sensors allow the controller to function in a timely manner, while sensors with large time constants are slow and degrade the overall operation of the feedback loop. Due to their influence on loop response, the dynamic characteristics of sensors should be considered in their selection and installation.
2.6 Selection of sensing devices
A number of factors must be considered before a specific means of measuring the Process Variable (PV) can be selected for a particular loop:
- The normal range over which the PV may vary, and if there are any extremes to this.
- The accuracy, precision and sensitivity required for the measurement.
- The sensor dynamics required.
- The reliability that is required.
- The costs involved, including installation and operating costs as well as purchase costs.
- The installation requirements and problems, such as size and shape restraints, remote transmission, corrosive fluids, explosive mixtures, etc.
2.7 Temperature sensors
Temperature is the most common PV measured in process control. Due to the vast temperature range that needs to be measured (from absolute zero to thousands of degrees) with spans of just a few degrees and sensitivities down to fractions of a degree, there is a vast range of devices that can be used for temperature measurements.
The five most common sensors – Thermocouples, Resistance Temperature Detectors or RTDs, Thermistors, IC Sensors and Radiation Pyrometers – have been selected for this chapter as they illustrate most of the application, range, accuracy and linearity aspects that are associated with temperature measurements.
Thermocouples cover a range of temperatures, from –262°C up to +2760°C and are manufactured in many materials, are relatively cheap, have many physical forms, all of which make them a highly versatile device.
Thermocouples suffer from two major problems that cause errors when applying them to the process control environment.
The first is the small voltages generated by them; for example, a 1°C temperature change on a platinum thermocouple results in an output change of only 5.8 μV (5.8 x 10^-6 volts).
The second is their nonlinearity, requiring polynomial conversion, look-up tables or related calibration to be applied in the signaling and controlling unit. See Table 2.1.
|Type|Metal Composition|Temperature Span|Seebeck Coefficient|
|K|Chromel v Alumel|-190 to +1371°C|40 μV/°C|
|J|Iron v Constantan|-190 to +760°C|50 μV/°C|
|T|Copper v Constantan|-190 to +760°C|50 μV/°C|
|E|Chromel v Constantan|-190 to +1472°C|60 μV/°C|
|S|Platinum v 10% Rhodium/Platinum|0 to +1760°C|10 μV/°C|
|R|Platinum v 13% Rhodium/Platinum|0 to +1670°C|11 μV/°C|
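Using the Seebeck coefficients of Table 2.1, a rough linearised conversion from thermocouple emf to temperature can be sketched as below. Note the caveat from the text: real thermocouples are nonlinear, so this straight-line approximation is for illustration only:

```python
# Seebeck coefficients from Table 2.1, in microvolts per degC.
SEEBECK_UV_PER_C = {"K": 40.0, "J": 50.0, "T": 50.0,
                    "E": 60.0, "S": 10.0, "R": 11.0}

def thermocouple_temp(emf_uv, tc_type):
    """Linearised estimate of temperature above the reference junction.

    Real conversion uses polynomials or look-up tables; this divides the
    measured emf by a single average Seebeck coefficient.
    """
    return emf_uv / SEEBECK_UV_PER_C[tc_type]

# 2000 uV from a type K couple is roughly 50 degC above the reference:
print(thermocouple_temp(2000.0, "K"))  # 50.0
```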
Principles of thermocouple operation
A thermocouple could be considered as a heat operated battery, consisting of two different types of homogeneous (of the same kind and nature) metal or alloy wires joined together at one end, the measuring point and connected, usually via special compensating cable, to some form of measuring instrument. At the point of connection to the measuring device a second junction is formed, called the reference or cold junction, this completes the circuit.
The Peltier & Thomson effects on thermocouple operation
The PELTIER EFFECT is the cause of the emfs generated at every junction of dissimilar metals in the circuit. This effect involves the generation or absorption of heat at the junction as current flows through it; whether heat is generated or absorbed depends on the direction of current flow.
The THOMSON EFFECT, where a second emf can be generated along the temperature gradient of a single homogeneous wire, can also contribute to measurement errors. It is essential that all the wire in a thermocouple measuring circuit is homogeneous, as the emfs generated will then depend solely on the types of material used. Any thermal emfs generated in the wires as they pass through temperature gradients will then cancel each other out.
Additionally, if both junctions of a homogeneous metal are held at the same temperature, the metal will not contribute additional emfs to the circuit. It follows then that if all junctions in the circuit are held at a constant temperature, except the measuring one, measurement can be made of the hot, or measuring, junction value against the constant value or cold junction reference value.
Reference or cold junction compensation
As described in Section 2.7.1, we have to ensure that all the junctions in the measuring circuit, with the exception of the one being used for the actual process measurement, must either:
Be held at a constant known temperature, usually 0°C, and called a “Cold Junction”.
Or the temperature of these junctions should be measured and the measuring instrument take this into consideration when calculating its final output.
Both methods are commonly used, the first one, the cold junction, utilizes an Isothermal block held at a known temperature and in which the connections from the thermocouple wires to copper wires is made. The second method is to measure the temperature, usually by a thermistor, at the point of copper to thermocouple connections, feed this value into the measuring system and have that calculate a corrected output (see Figure 2.2).
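The second compensation method can be sketched as follows, again using a straight-line approximation of the type K response (real instruments use polynomial or look-up table conversion). The emf contributed by the reference junction is computed from its measured temperature and added back before converting:

```python
SEEBECK_K_UV_PER_C = 40.0   # type K, from Table 2.1 (linear approximation)

def compensated_temp(measured_uv, cold_junction_c):
    """Estimate the hot junction temperature with cold junction compensation.

    measured_uv: emf seen by the instrument (microvolts)
    cold_junction_c: temperature of the reference junction, e.g. read by
    a thermistor at the copper-to-thermocouple connection point.
    """
    cj_emf = cold_junction_c * SEEBECK_K_UV_PER_C   # emf "lost" at the CJ
    return (measured_uv + cj_emf) / SEEBECK_K_UV_PER_C

# 1600 uV measured with the cold junction at 25 degC -> hot junction 65 degC:
print(compensated_temp(1600.0, 25.0))  # 65.0
```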
2.7.2 Resistance temperature detectors (RTDs)
In the same year as the discovery of the thermocouple effect by Thomas Seebeck, Sir Humphry Davy noted the temperature/resistivity dependence of metals, but it was C H Meyers who developed the first RTD in 1932.
Construction of RTDs
RTDs consist of a platinum or nickel wire element encased in a protective housing, the platinum version having a base resistance of 100 ohms at 0°C and the nickel type a resistance of 1000 ohms, again at 0°C.
They come packaged in 2, 3 or 4 wire versions, the 3 and 4 wire being the most common. 2 wire versions can be very inaccurate as the lead resistance is in series with the measuring element, which relies on resistance change to indicate the temperature change.
Thermocouple/RTD inputs can be directly taken as low voltage signals in PLC/DCS systems with special types of Input Modules/Cards without adding transmitter hardware (see Figure 2.3).
Range sensitivity and spans of RTDs
RTDs operate over a narrower range than thermocouples, from -247°C to + 649°C. Span selection has to be made for correct operation as typically the sensitivity of a PT100 is 0.358 Ohms/°C about the nominal resistance of 100 Ohms at 0 °C.
This corresponds to a resistance of 12 ohms (100 - 88) at -247 °C and 332 ohms (100 + 232) at +649 °C, a range of 12 to 332 ohms, which is outside the span of a single transducer.
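The quoted figures give a simple linear conversion from PT100 resistance to temperature, sketched below. Real conversions use the Callendar-Van Dusen equation, so this straight-line version is illustrative only:

```python
PT100_R0 = 100.0     # ohms at 0 degC
PT100_SENS = 0.358   # ohms per degC (typical figure quoted in the text)

def pt100_temp(resistance_ohms):
    """Linear approximation: degrees away from 0 degC per 0.358 ohm."""
    return (resistance_ohms - PT100_R0) / PT100_SENS

print(round(pt100_temp(135.8), 1))  # 100.0
print(pt100_temp(100.0))            # 0.0
```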
Example of RTD Application in a Digital Environment
Figure 2.4 shows the configuration of a 3 wire RTD used in a digital process control application.
Modern digital controllers use these 3-wire RTDs in the following manner:
A constant current generator drives a current through the circuit [A to C] consisting of 2RL + RX . A voltage detector reads a voltage, VB, proportional to RX + RL between points [B and C] and a second voltage VA which is proportional to RX + 2RL between points [A and C].
As VA - VB is proportional to RL, VB - (VA - VB) = 2VB - VA is proportional to RX alone.
RL = The resistance of each of the three RTD’s leads.
RX = The measuring element of the RTD.
VA = The voltage measured between points A and C, proportional to RX + 2RL.
VB = The voltage measured between points B and C, proportional to RX + RL; the final measured output from the RTD (3 wire version).
The measurements are made sequentially, digitized and stored until differences can be computed.
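The lead-compensation arithmetic can be checked with a short sketch; the excitation current, element resistance and lead resistance below are hypothetical values:

```python
def rtd_element_voltage(v_a, v_b):
    """Recover the voltage across RX alone from the two sequential readings.

    v_b is proportional to RX + RL and v_a to RX + 2RL, so (v_a - v_b)
    is the drop across one lead and 2*v_b - v_a isolates RX.
    """
    return 2.0 * v_b - v_a

# With I = 1 mA, RX = 138 ohms and RL = 2 ohms per lead (hypothetical):
i = 1e-3
v_b = i * (138.0 + 2.0)        # reading B to C
v_a = i * (138.0 + 2.0 * 2.0)  # reading A to C
print(round(rtd_element_voltage(v_a, v_b) / i, 6))  # 138.0 ohms recovered
```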
RTDs are reasonably linear in operation, but this depends to a great extent on the area of operation being used within the total span of the particular transducer in question.
Self heating problems associated with RTDs
RTDs suffer also from an effect of self-heating, where the excitation current heats the sensing element, thereby causing an error, or temperature offset. Modern digital systems can overcome this problem by energizing the transducer just before a reading is taken. Alternatively the excitation current can be reduced but this is at the expense of lower measuring voltages occurring across the transducers output, and subsequently induced electrical noise can become a problem. Lastly the error caused by self-heating can be calculated and adjustment made to the measuring algorithms.
2.7.3 Thermistors
These elements are the most sensitive and fastest temperature measuring devices in common use; unfortunately the price paid for this is severe non-linearity (see Figure 2.6) and a very small temperature range.
Thermistors are manufactured from metallic oxides, and have a negative temperature coefficient, that is their resistance drops with temperature rise. They are also manufactured in almost any shape and size from a pin head to discs up to 25 mm diameter x 5 mm thick.
Thermistor values, range and sensitivity
Most thermistors have a nominal quoted resistance of about 5000 ohms and because of their sensitivity, this base resistance is quoted at a specific temperature; reference having to be made to the relative type in the manufacturer’s published specifications.
Thermistor values can change by as much as 200 ohms per °C which, in this case, would give a maximum range of only +25 degrees from the quoted base temperature.
2.7.4 IC sensors
Integrated Circuit Sensors have only recently begun to make their presence felt in the process control world. As such they are still limited in the variety of shape, size and packaging that is available.
Their main advantages are their low cost (below $10.00) along with their linear and high output signals.
IC sensor ranges and accuracy
As these sensors are formed from integrated silicon chips, their range is limited to -55 °C to +150 °C, but they readily achieve calibrated accuracies of 0.05 °C to 0.1 °C.
Cryogenic temperature measurements
An exception to the normal operating temperature range of IC Sensors is a version that can be used for cryogenic temperatures -271°C to +202°C by the application of special diodes designed exclusively to operate at these sub-normal temperatures (absolute zero = -273.16°C).
2.7.5 Selection of temperature transducer design and thermowells
Temperature measurement transducers, in particular thermocouples, need different housings and mountings depending on the application requirements.
Sensing devices are usually mounted in a sealed tube, more commonly known as a thermowell; this has the added advantages of allowing the removal or replacement of the sensing device without opening up the process tank or piping. Thermowells need to be considered when installing temperature-sensing equipment. The length of the thermowell needs to be sized for the temperature probe.
Consideration of the thermal response needs to be taken into account. If a fast response is required, and the sensor probe already has adequate protection, then a thermowell may impede system performance and response time. Note that when a thermowell is used, the response time is typically doubled.
Thermowells can provide added protection to the sensing equipment, and can also assist in maintenance and period calibration of equipment.
Thermopaste assists in the fast and effective transfer of heat from the process to the sensing element. Application and maintenance of this material needs to be considered; ageing between maintenance intervals and condensation can affect the operation of the paste.
Figure 2.5 shows the three typical designs of thermocouple probes:
- Open Ended; Subject to damage and should not be used in a hostile environment.
- Sealed and both thermally and electrically isolated from the outside world.
- Sealed but with thermal (and/or electrical) connection to the outside world.
2.7.6 Radiation pyrometers
At the other end of the scale is the requirement to measure high temperatures up to 4000 °C or more. Total radiation pyrometers operate by measuring the total amount of energy radiated by a hot body. Their temperature range is 0 °C to 3890 °C.
The infrared (IR) pyrometer is rapidly replacing this older type of measurement, and these work by measuring the dominant wavelength radiated by a hot body. The basis of this is in the fact that as temperature increases the dominant wavelength of hot body radiation gets shorter.
Developments in infrared optical pyrometry
Two recent developments in the world of Pyrometry that should be mentioned are the utilization of lasers and fiber optics.
Lasers are used to automatically correct errors occurring due to changes in surface emissivity as the object’s temperature changes.
Fiber optics can focus the temperature measurements on inaccessible or unfriendly areas. Some of these units are capable of very high accuracy, typically 0.1% @ 1000 °C and can operate from 500 °C up to 2000 °C. Multiplexing of the optics is also possible, reducing costs in multi-measuring environments (see Figure 2.6).
2.8 Pressure transmitters
Pressure is probably the second most commonly used and important measurement in process control. The most familiar pressure measuring devices are manometers and dial gauges, but these require an operator to read them.
For use in process control, a pressure-measuring device needs a pressure transmitter that will produce an output signal for transmission, e.g. an electric current proportional to the pressure being measured. A transmitter typically produces an output of a 4-20 mA signal, is rugged, and can be used in flammable or hazardous service.
2.8.1 Terms of pressure reference
Pressure is defined as force per unit area, and may be expressed in units of Newtons per square meter, millimeters of mercury, atmospheres, bars or Torrs.
There are three common references against which it can be measured:
- If measured against a vacuum, the measured pressure is called absolute pressure.
- Against local ambient pressure it is gauge pressure.
- If the reference pressure is user supplied, differential pressure is measured.
There are seven principal methods of electronically measuring pressure for use in process control; each of these is listed below and described under its numeric heading:
- 2.8.2 Strain Gauge (Bonded or Un-bonded wire or foil, bonded or diffused semiconductor)
- 2.8.3 Capacitive
- 2.8.4 Potentiometric
- 2.8.5 Resonant wire
- 2.8.6 Piezoelectric
- 2.8.7 Magnetic (Inductive and Reluctive)
- 2.8.8 Optical
2.8.2 Strain gauge
In process control applications, one of the most common ways to measure pressure is using a strain gauge sensor. There are two basic types of strain gauge, bonded and unbonded, each utilizing wire or foil, but both working in the same electrical manner. A thin wire (or foil strip), usually made from Chrome-Nickel alloys and sometimes Platinum, is subjected to stretching, and hence its resistance increases as its length increases.
Strain gauges are commonly made using a thin metal wire or foil that is only a few micrometers thick, so they can also be known as thin film based bonded pressure sensors.
The unbonded strain gauge consists of open wire wound around two parallel mounted posts which are flexed or pulled apart, imparting a stretching dynamic to the wire and so increasing its resistance; these are physically much larger units. See Figure 2.7.
Composition of strain gauges
Bonded strain gauges are the most common type in use. They comprise an insulated bonded sandwich usually made of two sheets of paper, with the gauge wire laid in a specific pattern between them. Strain gauge wires of less than 0.001 in. (0.025 mm.) diameter are used as they have a surface area several thousand times greater than the cross sectional area.
Foil gauges have been commercially made where the foil thickness can be as low as 0.0001 in. (0.0025 mm).
Semiconductor types are available which have sensitivities close to one hundred times greater than the wire types.
Strain gauge sensitivity and gauge factor
The ratio of the percentage change in resistance to the percentage change in length is a measure of the sensitivity (S) of the gauge:
S = (ΔR/R) / (ΔL/L)
where:
L is the initial length of the wire or foil, and
R is the resistance in the unstrained position.
Many things affect the axial strain and gauge resistance, such as the geometry of the wire or foil in the gauge, and the direction of strain. This is expressed by the constant called the Gauge Factor or GF:
GF = (ΔR/R) / ε, where ε = ΔL/L is the axial strain.
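With the gauge factor defined this way, the resistance change for a given strain follows directly; the 120 ohm gauge and GF of 2.0 below are typical textbook values, not figures from this manual:

```python
def delta_resistance(r_unstrained, gauge_factor, strain):
    """Change in gauge resistance for a given axial strain.

    From GF = (dR/R) / (dL/L):  dR = GF * R * strain.
    """
    return gauge_factor * r_unstrained * strain

# A 120 ohm gauge with GF = 2.0 under 1000 microstrain:
print(round(delta_resistance(120.0, 2.0, 1000e-6), 4))  # 0.24 ohm
```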
Typically, four strain gauges are bonded to a metal or plastic flexible diaphragm and connected into a Wheatstone bridge circuit to yield an electrical signal proportional to the strain caused by the displacement of the diaphragm by the pressure applied to it.
The changes of resistance in a strain gauge are very small and as such precise and accurate instrumentation is required in order to obtain useable and accurate readings. The most common form of measuring is in a Wheatstone bridge circuit. Figure 2.8 shows a typical arrangement using this type of instrument.
The effect of temperature on the gauges’ resistance is minimal, because its influence on the bridge output is subtractive. The active gauges are located on opposite arms of the bridge, making the strain effects additive; the other 2 gauges are for compensation and are either “dummy” gauges or resistors with resistance equal to the active gauges.
This circuit is suited to both static and dynamic strain measurements, the output is, however, a differential output and care must be exercised in “grounding” any part of this circuit.
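A sketch of the bridge arithmetic is below. The arm labelling (r1 and r2 forming one divider, r3 and r4 the other) is an assumption on our part, since Figure 2.8's exact layout is not reproduced here; the point is that a balanced bridge gives zero output and straining two opposite arms unbalances it:

```python
def bridge_output(v_ex, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge for one possible labelling:
    r1-r2 form one voltage divider and r3-r4 the other."""
    return v_ex * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Balanced bridge: all four arms equal -> zero differential output.
print(bridge_output(10.0, 120.0, 120.0, 120.0, 120.0))  # 0.0
# Two active gauges in opposite arms each rise by 0.24 ohm under strain:
print(round(bridge_output(10.0, 120.0, 120.24, 120.24, 120.0), 4))
```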
An alternative arrangement, called a ‘ballast’ or potentiometric circuit, is arrived at by removing R2 and making the value of R3 equal to 0, as indicated in Figure 2.8.
Unfortunately this circuit is best used for dynamic sensing only, however one of the signal leads can now be grounded or electrically referenced to 0 volts.
Measurement errors for strain gauges
A number of errors exist when strain is measured using the Wheatstone bridge arrangement. Typically these are:
- Gauge factor uncertainty (typically 1%).
- Bridge non-linearity (typically 1%). This is a result of the assumption that the change in strain gauge resistance is very small compared to the nominal gauge resistance.
- Matching of compensation resistors to the strain gauge (typically 0.5%).
- Measurement errors caused by the accuracy and resolution of the measuring device and lead resistances.
- Temperature effects. Resistance varies with changes from the temperature at which a bridge is calibrated.
- Self heating of gauges.
Strain gauge pressure transducer specifications
Strain gauge elements can detect absolute, gauge and differential spans from 30 in. H2O up to 200,000 psig (7.5 kPa to 1400 MPa). Their inaccuracy is between 0.2% and 0.5% of span. Units are available to work in the temperature range of –50 °C to +120 °C, with special units going up to 320 °C.
2.8.2 Vibrating wire or resonant wire transducers
This type of sensor consists of an electronic oscillator circuit, which causes a wire to vibrate at its natural frequency when under tension. The principle is similar to that of a guitar string. A thin wire inside the sensor is kept in tension, with one end fixed and the other attached to a diaphragm. As pressure moves the diaphragm, the tension on the wire changes thereby changing its resonant vibration frequency.
These frequency changes are a direct consequence of pressure changes and are detected and displayed as pressure.
The frequency can be sensed as digital pulses from an electromagnetic pickup or sensing coil. An electronic transmitter would then convert this into an electrical signal suitable for transmission.
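The taut-wire physics behind this can be sketched as follows: a wire fixed at both ends vibrates at a fundamental frequency that rises with tension, so a pressure-driven change in tension shifts the frequency. The wire dimensions and tensions below are invented for illustration.

```python
import math

# Fundamental frequency of a tensioned wire fixed at both ends:
# f = sqrt(T / mu) / (2 * L), where T is tension and mu is the
# wire's mass per unit length. Values are illustrative only.

def wire_frequency(tension_n, length_m, linear_density_kg_m):
    """Natural frequency of the wire in hertz."""
    return math.sqrt(tension_n / linear_density_kg_m) / (2.0 * length_m)

f_low = wire_frequency(tension_n=10.0, length_m=0.05, linear_density_kg_m=1e-4)
f_high = wire_frequency(tension_n=12.0, length_m=0.05, linear_density_kg_m=1e-4)
print(f_low, f_high)  # frequency increases as the diaphragm tightens the wire
assert f_high > f_low
```

Note the square-root relationship between tension and frequency: this is one source of the non-linear output mentioned in the limitations below.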
This type of pressure measurement can be used for differential, absolute or gauge installations. Absolute pressure measurement is achieved by evacuating the low-pressure diaphragm. A typical vacuum pressure for such a case would be about 0.5 Pa (see Figure 2.9).
- Good accuracy and repeatability
- Low hysteresis
- Absolute, gauge or differential measurement
Transducer limitations or disabilities
- Sensitivity to vibrations
- Temperature variations require temperature compensation within the sensor; this limits the sensitivity of the device
- The output generated is non-linear which can cause continuous control problems
- This technology is seldom used any more; being older technology, it is typically found with analog control circuitry.
2.8.3 Capacitive pressure transducers
Capacitive pressure measurement involves sensing the change in capacitance that results from the movement of a diaphragm. The sensor is energized electrically with a high-frequency oscillator. As the diaphragm is deflected by pressure changes, the resulting change in capacitance is measured by a bridge circuit.
Two designs are quite common. The first is the two-plate design and is configured to operate in the balanced or unbalanced mode. The other is a single capacitor design.
The balanced mode is where the reference capacitor is varied to give zero voltage on the output. The unbalanced mode requires measuring the ratio of output to excitation voltage to determine pressure.
This type of pressure measurement is quite accurate and has a wide operating range. Capacitive pressure measurement is also quite common for determining the level in a tank or vessel (see Figure 2.10).
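The principle can be sketched with the parallel-plate capacitance relation: the diaphragm forms one plate, so a deflection that narrows the gap raises the capacitance. The geometry values below are invented for illustration.

```python
# Parallel-plate capacitance C = e0 * er * A / d. A pressure-driven
# deflection of the diaphragm reduces the gap d and so raises C,
# which the bridge circuit then measures. Values are illustrative.

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance(area_m2, gap_m, relative_permittivity=1.0):
    """Ideal parallel-plate capacitance in farads."""
    return EPSILON_0 * relative_permittivity * area_m2 / gap_m

c_rest = plate_capacitance(area_m2=1e-4, gap_m=100e-6)       # undeflected
c_deflected = plate_capacitance(area_m2=1e-4, gap_m=90e-6)   # diaphragm pushed in
print(c_rest, c_deflected)
assert c_deflected > c_rest
```

The inverse dependence on the gap also hints at the stray-capacitance sensitivity listed among the limitations: picofarad-level signals are easily disturbed by nearby conductors.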
- Inaccuracy 0.01 to 0.2%
- Range of 80 Pa to 35 MPa
- Fast response
- Temperature sensitive
- Stray capacitance problems
- Limited over pressure capability
Many of the disadvantages above have been addressed and their problems reduced in newer designs.
Temperature controlled sensors are available for applications requiring a high accuracy.
After strain gauges, which are the most popular form of pressure measurement, capacitance sensors are the next most common solution.
2.8.4 Linear variable differential transformer
This type of pressure measurement relies on the movement of a high permeability core within transformer coils. The movement is transferred from the process medium to the core by use of a diaphragm, bellows or bourdon tube.
The LVDT operates on the inductance ratio between the coils. Three coils are wound onto the same insulating tube containing the high permeability iron core. The primary coil is located between the two secondary coils and is energized with an alternating current.
If the core is in the center, equal voltages are induced in the secondary coils by the magnetic flux. When the core is moved from the center position, the voltages induced in the two secondary windings differ. The secondary coils are usually wired in series (see Figure 2.11).
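In the usual arrangement the two secondary voltages oppose each other, so the net output is zero with the core centered and grows with displacement, its sign indicating direction. A minimal idealized sketch, with an invented sensitivity constant:

```python
# Idealized LVDT behaviour in its linear region: the differential
# secondary voltage is zero at the centre position and proportional
# to core displacement. The sensitivity constant is illustrative.

def lvdt_output(core_displacement_mm, sensitivity_v_per_mm=0.5):
    """Differential secondary voltage (volts), ideal linear region."""
    return sensitivity_v_per_mm * core_displacement_mm

assert lvdt_output(0.0) == 0.0                  # equal induced voltages cancel
assert lvdt_output(2.0) == -lvdt_output(-2.0)   # sign indicates direction
```

A real LVDT output is an AC signal whose amplitude and phase carry this information; demodulation electronics recover the signed displacement shown here.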
Transducer limitations or disabilities
- Mechanical wear
This is an older technology, used before strain gauges were developed. It is typically found on old weigh-frames or may be used for position control applications.
Very seldom used anymore; strain gauge types have superseded these transducers in most applications.
2.8.5 Optical pressure transducers
Optical sensors can be used to measure the movement of a diaphragm due to pressure. An opaque vane mounted on the diaphragm moves in front of an infrared light beam. As the light is interrupted, the light received at the measuring diode indicates the position of the diaphragm.
A reference diode is used to compensate for the aging of the light source. Also, by using a reference diode, the temperature effects are cancelled as they affect the sensing and reference diodes in the same way (see Figure 2.12).
- Temperature corrected
- Good Repeatability
- Negligible hysteresis and very good repeatability, as optical sensors require very little movement for accurate sensing
Transducer limitations or disabilities
2.8.6 Pressure measurement applications
There are a number of requirements to consider in pressure measurement applications. Some of the more important are listed below:
- Location of process connections
- Isolation valves
- Use of impulse tubing
- Test and drain valves
- Sensor construction
- Temperature effects
- Remote diaphragm seals
Remote diaphragm seals are typically used when:
- Corrosion may cause a problem to the transmitter and pressure sensing element
- The sensing fluid contains suspended solids or is sufficiently viscous to clog the piping
- The process temperature is outside of the normal operating range of the transmitter
- The process fluid may freeze or solidify in the transmitter or impulse piping
- The process medium needs to be flushed out of the process connections when changing batches
- Maintaining sanitary or aseptic conditions
- Eliminating the maintenance required with wet leg applications
- Making density or other measurements
2.9 Flow meters
In many industrial applications it is convenient and useful to measure flow and so a large percentage of transmitter sales are for measuring flow. As a result, there is a huge range of flowmeters to suit a variety of applications. The operation of these may conform to one of two approaches.
2.9.1 Energy extractive flowmeters
This is the older of the two approaches, and uses flow-measurement devices that reduce the energy of the system. The most common of these are the differential pressure producing flowmeters, such as the orifice plate, flow nozzle and venturi tube (see Figure 2.13).
A standard orifice plate is simply a smooth disc with a round, sharp-edged inflow aperture and mounting rings. In the case of viscous liquids, the upstream edge of the bore can be rounded. The shape of the opening and its location do vary widely, and this is dependent on the material being measured. Most common are concentric orifice plates with a round opening in the center. They produce the best results in turbulent flows when used with clean liquids and gases.
When measuring liquids, the bore can be positioned at the top of the pipeline to allow the passage of gases. Similarly, suspended solids can be allowed to pass by positioning the bore at the bottom, giving a more accurate liquid flow measurement.
Standard orifice meters are primarily used to measure gas and vapor flow. Measurement is relatively accurate; however, because of the obstruction to flow, there is a relatively high permanent residual pressure loss. They are well understood, rugged, relatively inexpensive for large pipe sizes, suited to most clean fluids, and not unduly influenced by high temperatures.
- Simple construction
- Easily fitted between flanges
- No moving parts
- Large range of sizes and opening ratios
- Suitable for most gases and liquids
- Well understood and proven
- Price does not increase dramatically with size
Transducer limitations or disabilities
- Inaccuracy, typically 1%
- Low rangeability, typically 4:1
- Accuracy is affected by density, pressure and viscosity fluctuations
- Erosion and physical damage to the restriction affects measurement accuracy
- Cause some unrecoverable pressure loss
- Viscosity limits measuring range
- Require straight pipe runs to ensure accuracy is maintained
- Pipeline must be full (typically for liquids)
- The inaccuracy with orifice-type measurement is due mainly to process conditions and temperature and pressure variations
- They are also affected by ambient conditions and upstream and downstream piping, as this affects the pressure and continuity of flow
Turbine or rotor flow transducer
Turbine meters have rotor-mounted blades that rotate when a fluid pushes against them. They work on the reverse of the propeller principle: in a propeller system the propeller drives the flow, whereas here the flow drives and rotates the blades. Since it is no longer propelling the fluid, it is now called a turbine.
The rotational speed of the turbine is proportional to the velocity of the fluid.
Different methods are used to convey rotational speed information. The usual method is by electrical means where a magnetic pick-up or inductive proximity switch detects the rotor blades as they turn. As each blade tip on the rotor passes the coil it changes the flux and produces a pulse. The rate of pulses indicates the flow rate through the pipe.
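The pulse counting described above is usually reduced to a K-factor (pulses per unit volume, established by calibration), so that flow rate follows directly from the pulse frequency. A minimal sketch with an invented K-factor:

```python
# Turbine meter readout: each blade passing the pickup coil produces
# one pulse, and the calibration K-factor converts pulse frequency
# to volumetric flow. The K-factor below is illustrative only.

def flow_rate(pulse_frequency_hz, k_factor_pulses_per_litre):
    """Volumetric flow in litres per second from the pickup frequency."""
    return pulse_frequency_hz / k_factor_pulses_per_litre

# A meter producing 250 pulses/s with K = 50 pulses per litre:
print(flow_rate(250.0, 50.0))  # 5.0 L/s
```

Because the output is a pulse train, totalizing flow is simply a matter of counting pulses, which is one reason turbine meters suit custody transfer duties.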
Turbine meters require a uniform, swirl-free flow profile: 10 pipe diameters of straight pipe upstream, and no less than 5 pipe diameters downstream of the meter, are required. They are therefore not accurate with swirling flows.
Turbine meters are specified with minimum and maximum linear flow rates that ensure the response is linear and the other specifications are met. For good rangeability, it is recommended that the meter be sized such that the maximum flow rate of the application be about 70 to 80% of that of the meter.
Density changes have little effect on the meter’s calibration.
- High accuracy, repeatability and rangeability for a defined viscosity and measuring range
- Temperature range of fluid measurement: -220°C to +350°C
- Very high pressure capability: 9300psi
- Measurement of non-conductive liquids
- Capability of heating measuring device
- Suitable for very low flow rates
Transducer limitations or disabilities
- Not suitable for high viscous fluids
- Viscosity must be known
- 10D upstream and 5D downstream of straight pipe is required
- Not effective with swirling fluids
- Only suitable for clean liquids and gases
- Pipe system must not vibrate
- Specifications critical for measuring range and viscosity
As turbine meters are driven by the flow, they absorb some pressure (energy) from the flow to propel the rotor.
The pressure drop is typically around 20 to 30 kPa at the maximum flow rate and does vary depending on flow rate.
It is a requirement in operating turbine meters that sufficient line pressure be maintained to prevent liquid cavitation.
The minimum pressure occurs at the rotor; however, the pressure recovers substantially after the turbine.
If the back-pressure is not sufficient, it should be increased, or a larger meter chosen to operate in a lower part of its range, although this has the limitation of reducing the meter's flow range and accuracy.
Turbine meters provide excellent accuracy, repeatability and rangeability for a defined viscosity and measuring range, and are commonly used for custody transfer applications of clean liquids and gases.
2.9.2 Energy additive flow meters
A common example of the energy additive approach is the magnetic flowmeter, illustrated in Figure 2.14. This device is used to make flow measurements on a conductive liquid. A charged particle moving through the magnetic field produces a voltage proportional to the velocity of the particle. A conductive liquid consisting of charged particles will then produce a voltage proportional to the volumetric flow rate.
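The underlying Faraday relationship can be sketched as follows. In the ideal case the electrode voltage is proportional to flux density, electrode spacing (the pipe diameter) and mean velocity; the field strength and pipe size below are invented for illustration, and a real meter's calibration constant is omitted.

```python
import math

# Ideal magnetic flowmeter relationships: induced voltage
# E = B * D * v, and volumetric flow Q = v * pi * D^2 / 4 for a
# full pipe. Values are illustrative only.

def induced_voltage(b_tesla, diameter_m, velocity_m_s):
    """Ideal electrode voltage in volts."""
    return b_tesla * diameter_m * velocity_m_s

def volumetric_flow(diameter_m, velocity_m_s):
    """Flow rate in m^3/s, assuming a full pipe."""
    return velocity_m_s * math.pi * diameter_m ** 2 / 4.0

v = 2.0                                  # mean velocity, m/s
e = induced_voltage(0.01, 0.1, v)        # 10 mT field, 100 mm pipe
q = volumetric_flow(0.1, v)
print(e, q)  # both scale linearly with velocity
```

Because both the voltage and the flow rate are linear in velocity, the meter's output is linear in flow, which is the basis of the linearity claims made below.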
The advantages of magnetic flowmeters are:
- They have no obstructions or restrictions to flow
- No pressure drop or differential
- No moving parts to wear out
- They can accommodate solids in suspension
- No pressure sensing points to block up
- They measure volume rate at the flowing temperature independent of the effects of viscosity, density, pressure or turbulence
- Many magmeters are capable of measuring flow in either direction
Most industrial liquids can be measured by magnetic flowmeters, including acids, bases, water and aqueous solutions. Exceptions are most organic chemicals and refinery products, which have insufficient conductivity for measurement. Pure substances, hydrocarbons and gases also cannot be measured.
In general the pipeline must be full, although with newer models, level sensing takes this factor into account when calculating a flow-rate.
Magnetic flowmeters are very accurate and have a linear relationship between the output and flow-rate. Alternatively, the flow rate can be transmitted as a pulse per unit of volume or time.
The accuracy of most magnetic flowmeter systems is 1% of full-scale measurement. This takes into account both the meter itself and the secondary instrument. Because of its linearity, the accuracy at low flow rates exceeds that of such devices as the Venturi tube. The magnetic flowmeter can be calibrated to an accuracy of 0.5% of full scale and is linear throughout.
Selection, sizing and liners
Sizing of magmeters is done from manufacturer’s nomographs to determine suitable diameter meters for flow rates.
The principle of operation of the magmeter requires the generation of a magnetic field, and the detection of the voltage across the flow.
If the pipe is made of a material with magnetic properties, then this will disrupt the magnetic field and effectively short circuits the magnetic field. Likewise if the inside of the pipe is conductive, then this will short circuit the electrodes used to detect the voltage across the flow.
The meter piping must be manufactured from a non-magnetic material such as stainless steel in order to prevent short-circuiting of the magnetic field.
The meter piping must also be lined with an insulating material to prevent short-circuiting of the electric field.
The liner has to be chosen to suit the application, particularly the resistance it has to the following:
- Chemical Corrosion
Teflon (PTFE – Polytetrafluoroethylene resin)
- Widely used due to its high temperature rating
- Anti-stick properties reduce problems with build-up
- Approved for food and beverage environments
- Resistant to many acids and bases
Good abrasion resistance
Good chemical resistance
High resistance to abrasion
Used mainly for slurry applications
Used mainly for water and soft slurries
High abrasion resistance
High corrosion resistance
High temperature rating
Less expensive to manufacture
Also suited to sanitary applications
Strong compressive strength, but poor tensile strength
May crack with sudden temperature changes, especially downward
Cannot be used with oxidizing acids or hot concentrated caustic
For correct operation of the magmeter, the pipeline must be full. This is generally done by maintaining sufficient back-pressure from downstream piping and equipment. Meters are available that make allowance for this problem, but are more expensive and are specialized. This is mainly a problem in gravity feed systems.
Magmeters are not greatly affected by the profile of the flow, and are not affected by viscosity or the consistency of the liquid. It is however recommended that the meter be installed with 5 diameters of straight pipe upstream and 3 diameters of straight pipe downstream from the meter.
Applications requiring reduction in the pipe diameter for the meter installation need to allow for the extra length of reducing pipe. It is also recommended that in those applications that the reducing angle not be greater than 8°, although manufacturers’ data should be sought.
Grounding is another important aspect when installing magmeters, and manufacturers’ recommendations should be adhered to. Such recommendations would require the use of copper braid between the meter flange and pipe flange at both ends of the meter. These connections provide a path for stray currents and should also be grounded to a suitable grounding point. Magmeters with built in grounding electrodes eliminate this problem, as the grounding electrode is connected to the supply ground.
- No restrictions to flow
- No pressure loss
- No moving parts
- Good resistance to erosion
- Independent of viscosity, density, pressure and turbulence
- Good accuracy
- Large range of flow rates and diameters
Transducer limitations or disabilities
- Most require a full pipeline
- Limited to conductive liquids
As mentioned earlier, a magnetic flowmeter consists of either a lined metal tube, usually stainless steel because of its non-magnetic properties, or an unlined non-metallic tube. A problem can arise if the insulating liners and electrodes of the magnetic flowmeter become coated with conductive residues deposited by the flowing fluid.
Erroneous voltages can be sensed if the lining becomes conductive.
Maintaining high flow rates reduces the chances of this happening. However, some manufacturers do provide magmeters with built in electrode cleaners.
Block valves are used on either side of AC-type magmeters to produce zero flow while maintaining a full pipe, so that the zero can be checked periodically. DC units do not have this requirement.
Ultrasonic flow measurement
There are two types of ultrasonic flow measurement:
- Transit time measurement, used for clean fluids
- Doppler effect, used for dirty, slurry type flows
Transit time ultrasonic flow measurement
The transit-time flowmeter device sends pulses of ultrasonic energy diagonally across the pipe. The transit-time is measured from when the transmitter sends the pulse to when the receiver detects the pulse.
Each location contains a transmitter and receiver. The pulses are sent alternately upstream and downstream, and the velocity of the flow is calculated from the time difference between the two directions.
Transit-time ultrasonic flow measurement is suited for clean fluids. Some of the more common process fluids consist of water, liquefied gases and natural gas.
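The transit-time calculation can be sketched as follows. Pulses sent with the flow arrive sooner than pulses sent against it, and the velocity falls out of the two measured times; usefully, the speed of sound cancels out of the result. The geometry and fluid values below are invented for illustration.

```python
import math

# Transit-time ultrasonic flow sketch. For a path of length L at
# angle theta to the pipe axis:
#   t_down = L / (c + v*cos(theta)),  t_up = L / (c - v*cos(theta))
# and the velocity recovers as
#   v = L * (t_up - t_down) / (2 * cos(theta) * t_up * t_down),
# with the sound speed c cancelling out.

def transit_times(velocity, sound_speed, path_length, angle_rad):
    """Simulated down- and upstream transit times across the pipe."""
    t_down = path_length / (sound_speed + velocity * math.cos(angle_rad))
    t_up = path_length / (sound_speed - velocity * math.cos(angle_rad))
    return t_down, t_up

def velocity_from_times(t_down, t_up, path_length, angle_rad):
    """Recover the flow velocity from the two measured times."""
    return path_length * (t_up - t_down) / (
        2.0 * math.cos(angle_rad) * t_up * t_down)

theta = math.radians(45.0)
t_down, t_up = transit_times(3.0, 1480.0, 0.15, theta)  # water at 3 m/s
print(velocity_from_times(t_down, t_up, 0.15, theta))   # recovers ~3.0 m/s
```

The cancellation of the sound speed is what makes the method robust against modest temperature and composition changes in the fluid.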
Doppler effect ultrasonic flow measurement
The Doppler effect device relies on objects of varying density in the flow stream to return the ultrasonic energy. With the Doppler effect meter, a beam of ultrasonic energy is transmitted diagonally through the pipe. Portions of this ultrasonic energy are reflected back from particles of varying density in the stream. Since the particles are moving, the reflected ultrasonic energy has a different frequency. The difference between the original and returned frequencies is proportional to the flow velocity.
Most ultrasonic flowmeters are mounted on the outside of the pipe and as such operate without coming into contact with the fluid. Apart from not obstructing the flow, they are not affected by corrosion, erosion or viscosity. Most ultrasonic flowmeters are bi-directional, sensing flow in either direction.
- Suitable for large diameter pipes
- No obstructions, no pressure loss
- No moving parts, long operating life
- Fast response
- Installed on existing installations
- Not affected by fluid properties
Transducer limitations or disabilities
- Accuracy is dependent on the flow profile
- Fluid must be acoustically transparent
- Errors caused by build-up in the pipe
- Only possible in limited applications
- Pipeline must be full
- Turbulence or even the swirling of the process fluid can affect the ultrasonic signals
In typical applications the flow needs to be stable to achieve good flow measurement, and typically this is done by allowing sufficient straight pipe up and downstream of the transducers.
The straight section of pipe upstream would need to be 10 to 20 pipe diameters with the downstream requirement of 5 pipe diameters.
For the transit time meter, the ultrasonic signal is required to traverse across the flow, therefore the liquid must be relatively free of solids and air bubbles.
Anything of a different density (higher or lower) than the process fluid will affect the ultrasonic signal.
Doppler flowmeters are not high-accuracy or high-performance devices, but do offer an inexpensive form of flow monitoring. They are intended for dirty fluids, and find applications in sewage, sludge and wastewater processes.
Being dependent on sound characteristics, ultrasonic devices are dependent on the flow profile, and are also affected by temperature and density changes.
2.10 Level transmitters
There are numerous ways to measure level, employing differing technologies and encompassing all the various units of measurement:
- Ultrasonic, transit time
- Pulse echo
- Pulse radar
- Pressure, hydrostatic
- Weight, strain gauge
For continuous measurement, the level is detected and converted into a signal that is proportional to the level. Microprocessor based devices can indicate level or volume.
Different techniques also have different requirements. For example, when detecting the level from the top of a tank, the shape of the tank is required to deduce volume.
When using hydrostatic means, which detect the pressure at the bottom of the tank, the density must be known and remain constant.
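The hydrostatic calculation itself is simple, as the sketch below shows: level follows from gauge pressure and density via h = P / (ρ·g). The tank and fluid values are invented for illustration.

```python
# Hydrostatic level measurement: the pressure at the bottom of the
# tank gives the level directly, provided the liquid density is
# known and constant. Values are illustrative only.

G = 9.81  # standard gravity, m/s^2

def level_from_pressure(pressure_pa, density_kg_m3):
    """Liquid level in metres from gauge pressure at the tank bottom."""
    return pressure_pa / (density_kg_m3 * G)

# 19.62 kPa of water head corresponds to 2 m of level:
print(level_from_pressure(19620.0, 1000.0))  # 2.0
```

The division by density is why a density change (temperature, composition, aeration) translates directly into a level error with this method.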
Level sensing is a simpler concept than most other process variables and allows a very simple form of control. The sensors can be roughly grouped into categories according to the primary level sensing principle involved.
The signals produced by these means must then be converted into a signal suitable for process control applications, such as an electrical, pneumatic or digital signal.
2.10.1 Installation considerations
The following outlines the more important considerations when installing level devices in either atmospheric or pressurized vessels.
Most instruments involved with level detection can be easily removed from the vessel. Top mounting of the sensing device also eliminates the possibility of process fluid entering the transducer or sensor housing should the nozzle or probe corrode or break off. Many level measurement devices have the added advantage that they can be manually gauged. This provides two important benefits:
- Measurements are still possible in the event of equipment failure
- Calibration and point checks can provide vital operational information
One common installation criteria for point detection devices is that they be mounted at the actuation level, which may present accessibility problems.
Two main considerations apply with level measurement devices in pressurized vessels:
- Facilities for removal and installation while the vessel is pressurized.
- The pressure rating of the equipment for the service.
Pressurized vessels can also be used to prevent fugitive emissions, where an inert gas such as nitrogen is used to pressurize the process materials. Compensation within the level device is also needed as the head pressure changes.
The accuracy of the measuring device may be dependent on the following:
- Gravity variations
- Temperature effects
- Dielectric constant
Also the presence of foam, vapor or accumulated scum on the transducer affects the performance.
Impact on the overall control loop
Level sensing equipment is generally fast responding, and in terms of automated continuous control, does not add much of a lag to the system.
It is good practice, though, to include any high and low switch limits in the control system, in addition to the hard-wired safety circuits. If the instrumentation fails or goes out of calibration, process information can still be acquired from the high and low limits.
The cost of sensing equipment is not a major consideration compared with the economics of controlling the process. There is therefore a growing demand for accuracy in level measuring equipment.
Newer models incorporate better means of compensation, but not necessarily new technologies. Incorporating a temperature compensation detector in the pressure-sensing diaphragm provides compensation and also an alternative to remote pressure seals and ensures the accuracy and stability of the measurement.
Greater demands in plant efficiency may require an improved accuracy of a device, not just for the actual measurement, but also to increase the range of operation. If the safety limits were set at 90% due to inaccuracies with the sensing device, then an increased range could be achieved by using more accurate equipment.
Demands are also imposed on processes to conform to environmental regulations. Accurate accounting of materials assists in achieving this. Technologies such as RF admittance or ultrasonic sensing minimize the expense of this environmental compliance.
Problems occur in trying to sense level in existing vessels that may be non-metallic. RF flexible cable sensors have an integral ground element which eliminates the need for an external ground reference when using the sensor to measure the level of process materials in non-metallic vessels.
2.11 The spectrum of user models in measuring transducers
As an example of the extremes that can occur between instruments of the same type, consider the thermocouple. In its simplest form, it consists of two dissimilar metal wires joined together to form a loop with two junctions or connections. The Seebeck effect (if the temperature of the two junctions is different, a current will flow in the loop) then comes into play. In practice, a thermocouple measuring circuit actually measures the difference in temperature between the two junctions forming the circuit.
Unfortunately three major problems occur with this form of temperature measurement:
2.11.1 Voltage generation of a thermocouple
Only very low emfs are generated, typically from approximately (1.8 to 6.0) x 10^-5 volts per °C (i.e. 18 to 60 microvolts per °C). Therefore, electrically induced noise, of either the normal mode or common mode type, can become a problem. Normal mode noise is the more difficult type to remove from a system; this is usually achieved by the introduction of guard lead wires or careful cable screening.
2.11.2 Thermocouple linearity
The output of any type of thermocouple is not linear relative to the applied or measured temperature range. This can cause linearity, scaling, ranging and calibration problems.
2.11.3 Cold junction compensation
Completing any electrical circuit requires the formation of a loop, so in the case of the thermocouple a second junction has to exist. This is called the "cold junction" and usually sits at ambient temperature, which of course varies and introduces measurement errors. These errors can be extremely large, especially if the measured quantity is close to the ambient or cold junction temperature.
The simplest form of thermocouple application is in the form of a galvanometer which has the sensitivity to measure the low voltages involved. This is equipped with a temperature sensitive compensating resistor, located next to the input terminals where the measuring circuit’s cold junction is. This resistor forms part of that measuring circuit and corrects the effect, by changing its resistance and hence the current flow in the circuit, of ambient temperature changes.
The problem with this arrangement is that it is direct reading, and hence does not easily lend itself to inclusion in process control systems, and the physical circuit from the indicator to the thermocouple measuring tip has to be “tuned” to a specific resistance for the cold junction compensator to be accurate. To overcome the non-linearity problem, the scale of the instrument is scaled to the “profile” of the related response curve of the thermocouple type being used.
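Modern instruments typically perform the cold-junction correction in software rather than with a compensating resistor. A much-simplified sketch of that idea follows, using a single linear sensitivity (roughly that of a type K couple, about 41 µV/°C) in place of the published thermocouple tables; a real instrument would also handle the non-linearity discussed above.

```python
# Simplified software cold-junction compensation. A single linear
# Seebeck coefficient stands in for the real thermocouple tables,
# so this is an illustration of the principle, not a usable converter.

SEEBECK_UV_PER_C = 41.0  # approximate type K sensitivity, microvolts/degC

def emf_uv(temperature_c):
    """Idealized junction emf relative to 0 degC (microvolts)."""
    return SEEBECK_UV_PER_C * temperature_c

def compensated_temperature(measured_emf_uv, cold_junction_c):
    """Add back the cold-junction emf, then invert to temperature."""
    total_emf = measured_emf_uv + emf_uv(cold_junction_c)
    return total_emf / SEEBECK_UV_PER_C

# Hot junction at 100 degC, cold junction at 25 degC: the circuit
# only sees the difference, but compensation recovers the true value.
measured = emf_uv(100.0) - emf_uv(25.0)
print(compensated_temperature(measured, 25.0))  # 100.0
```

The cold-junction temperature itself is usually measured with a small semiconductor sensor or RTD at the terminal block, playing the role the compensating resistor plays in the galvanometer arrangement.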
2.12 Instrumentation and transducer considerations
There are many considerations to take into account when selecting instruments and transducers. The following is an explanation and index of the more important aspects of the choice.
2.12.1 Signal transmission pneumatic versus electronic
Electronic means for signal transmission and control is becoming more favored, however pneumatic controls are still used and do have advantages in different applications.
Advantages - Electronic
- Lower installation cost
- Lower maintenance
- Higher accuracy (esp. smart instruments)
- Faster dynamic response
- Suitable for long distances
- Digital control system compatible
The primary reason for selecting electronic devices is their compatibility with the control system. With data exchange highways becoming more common it is also easier to obtain more information from the sensor with smart electronics.
Advantages - Pneumatic
- Lower initial hardware cost
- Simple design
- Less affected by corrosive environments
- Easily connected with control valves
Pneumatics have a prime advantage because of their safety in hazardous locations.
2.12.2 Signal conditioners
Signal conditioners change or alter signals so that different process devices can effectively and accurately communicate with each other. They are typically used to link process instruments with indicators, recorders, and microprocessor-based control and monitoring systems.
They consist of either:
- Signal conversion
A signal converter is used to change an analog signal from one form to another. This enables equipment with differing signals to communicate.
- Signal boosting
For analog signals (voltage) that are required to be transmitted over long distances, it is possible that the signal may attenuate, or fade. For analog signals (current) in loops that have a number of loop-powered devices, the signal may not be strong enough.
2.12.3 Electrical noise
Electrical noise, or interference, consists of unwanted electrical signals that cause disruptive errors in, or even completely disable, electronic control and measuring equipment.
There are two main categories of electrical measurement noise:
- Radio Frequency Interference (RFI)
- Electromagnetic Interference (EMI)
Some examples of the more commonly encountered sources of interference are:
- Hand-held radios (walkie-talkies)
- Cellular phones
- AC and DC motors
- Arc welders
- Large solenoids, contactors and relays
- High power cabling, both voltage and current
- High-speed power switching, such as SCRs and thyristors
- Variable frequency drives
- Static discharges
- Induction heating systems
- Radar devices
- Fluorescent lights
Radio frequency interference and electromagnetic interference can cause unpredictable performance in instrumentation. These types of interference are often non-repeatable, making the problem hard to detect, isolate and rectify. RFI and EMI can also degrade an instrument's performance and possibly cause the instrument to fail completely.
Any of these problems can result in reduced production rates, process inefficiency, plant shutdowns, and possibly even create dangerous safety hazards.
There are two basic approaches to protecting instrumentation systems from the harmful effects of RFI and EMI.
The first is to keep the interference from entering the system by:
- Proper grounding
- Terminal filters
The second is to design the system so that it is unaffected by RFI and EMI.
Noise reduction techniques
Some of the more common techniques for reducing or even eliminating electrically induced noise are:
- Use of transmitters, e.g. for thermocouples.
The transmitted signal, typically 4-20 mA, is more robust against noise over long distances.
- Shielded/twisted pair cable.
Twisting decouples the wires from currents induced by varying electric and magnetic fields. The principle is that equal voltages are induced in each loop of the twisted wires, but of opposite phase, so that they cancel.
AC inductive load circuits
For AC inductive loads, use a properly rated MOV across the load in parallel with a series RC snubber.
An effective RC snubber circuit would consist of a 0.1 µF capacitor of suitable voltage rating and a 47 Ω, 0.5 W resistor.
DC inductive load circuits
For DC inductive loads, use of a diode across the load is effective, provided the polarity is correct. An RC snubber circuit can be added as an enhancement.
2.12.4 Materials of construction
Often when selecting measurement or control equipment, options are available for the various materials of construction. The primary concern is that the process material will not cause deterioration or damage to the device.
Below is a brief list of other qualities or characteristics that assist in the selection of the material of construction.
- Hastelloy C-276
- Carbon steel
- Beryllium copper; good elastic qualities
- Ni-Span C; very low temperature coefficient of elasticity
- Inconel; extreme operating temperatures and corrosive processes
- Stainless steel; extreme operating temperatures and corrosive processes
- Quartz; minimum hysteresis and drift
2.12.5 Signal linearization
When the output of a device changes in proportion to changes in the input, the device is linear: there is a constant gain (output/input) over the full range of operation and the resolution remains constant. If the response of some device in a system is not linear, it may need to be linearized, because a non-linear device presents two main problems:
- The gain changes
- The resolution and accuracy change
In a control system there are three ways to account for non-linear equipment:
- Base application on the highest gain
- Measure the gain at a number of points
- Modify the gain as a function of the process variable
The simplest way to overcome any non-linearity is to linearize the signal before it enters the control system calculations.
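As an illustrative sketch of linearizing a signal before the control calculations, the common approach is a piecewise-linear lookup over measured calibration points. The sensor and its calibration table below are hypothetical, purely for illustration:

```python
# Piecewise-linear signal linearization: a minimal sketch using
# hypothetical calibration data (raw reading vs. true process value).
from bisect import bisect_right

# Calibration table for an imaginary non-linear sensor,
# sorted by raw reading.
CAL = [(0.0, 0.0), (1.0, 10.0), (2.0, 25.0), (3.0, 55.0), (4.0, 100.0)]

def linearize(raw):
    """Map a raw sensor reading to the true process value by
    interpolating between the two nearest calibration points."""
    xs = [x for x, _ in CAL]
    i = min(max(bisect_right(xs, raw) - 1, 0), len(CAL) - 2)
    (x0, y0), (x1, y1) = CAL[i], CAL[i + 1]
    return y0 + (raw - x0) * (y1 - y0) / (x1 - x0)

print(linearize(2.5))  # midway between 25 and 55 -> 40.0
```

With the signal linearized in this way, the controller sees a constant gain over the whole operating range, which is the point made above.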
2.13 Selection criteria and considerations
Reasons for selecting one type of measuring equipment over another vary, but typically the decisions are based on the perceived advantages and disadvantages of the range of devices available.
A comprehensive list would take into account the following:
- Purchase price
- Installed cost
- Cost of ownership
- Ease of use
- Process medium: liquid/steam/gas
- Degree of smartness
- Sizes available
- Sensitivity to vibration
In addition, particular requirements for flow measurement would include:
- Capability of measuring liquid, steam and gas
- Pressure drop
- Reynolds number
- Upstream and downstream piping requirements
A more systematic approach to selecting process measurement equipment would cover the following steps:
This is the requirement and purpose of the measurement.
- Point or continuous
2.13.2 Processed material properties
Many process-measuring devices are limited by the process material that they can measure.
- Solids, liquids, gas or steam
- Multiphase, liquid/gas ratio
This relates to the performance required in the application.
- Range of operation
- Linearity (Accuracy may include linearity effects)
- Repeatability (Accuracy may include repeatability effects)
- Response time
Mounting is one of the main concerns, but the installation also involves access and other environmental considerations.
- Line size
The associated costs determine whether the device is within the budget for the application.
- Purchase cost
- Installation cost
- Maintenance cost
- Reliability/ replacement cost
2.13.6 Environment and safety
This relates to the ability of the equipment to maintain its operational specifications; failure modes and redundancy should also be considered:
- Process emissions
- Hazardous waste disposal
- Leak potential
- Trigger system shutdown
2.13.7 Measuring devices and technology
At this stage the selection criteria are established and weighed against readily available equipment.
A typical example for flow is shown:
|1. DP||-||Orifice plate|
|3. Positive displacement||-||Oval gear|
|4. Mass flow||-||Coriolis|
2.13.8 Vendor supply
Limitations may be imposed, particularly in larger companies that have preferred suppliers; in this case the selection may be limited, or the procedure for purchasing new equipment may not warrant the time and effort for the application.
2.14 Introduction to the smart transmitter
The most elaborate form of thermocouple transducer, quite often referred to as a “smart transmitter” (see Figure 2.15), comprises:
- An electronic cold junction compensator
- A highly stable DC amplifier to raise the thermocouple’s low voltages to a reasonable operating level
- Some form of microprocessor that performs a linearization function on the thermocouple’s generated voltage
- A ranging function
- An output or transmitter part of the system, where the type of output can be selected
- And, of course, the thermocouple itself
This makes it possible to take a thermocouple type with a total range of 0 °C to 650 °C and range it, or set it, for 100 °C to 300 °C. With a 4 to 20 mA transmitter output, this gives an output sensitivity of (300 − 100) °C / (20 − 4) mA = 12.5 °C/mA, linear through the required or selected range.
In normal use, over the entire range of the thermocouple, the output sensitivity would be 650 °C / (20 − 4) mA, or 40.625 °C/mA, giving a sensitivity ratio of 3.25:1.
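The ranging arithmetic above can be verified with a short calculation. The helper names below are for illustration only; they simply restate the linear 4-20 mA scaling described in the text:

```python
# Worked check of the smart-transmitter ranging arithmetic:
# a linear 4-20 mA output spans the configured temperature range.

def sensitivity(t_low, t_high, i_low=4.0, i_high=20.0):
    """Output sensitivity in degrees C per mA for a linear transmitter."""
    return (t_high - t_low) / (i_high - i_low)

def current_to_temp(i_ma, t_low, t_high, i_low=4.0):
    """Convert a loop current back to temperature on the set range."""
    return t_low + (i_ma - i_low) * sensitivity(t_low, t_high)

print(sensitivity(100, 300))            # ranged 100-300 C -> 12.5 C/mA
print(sensitivity(0, 650))              # full 0-650 C range -> 40.625 C/mA
print(current_to_temp(12.0, 100, 300))  # mid-scale 12 mA -> 200.0 C
```

The ratio 40.625 / 12.5 = 3.25 confirms the 3.25:1 sensitivity gain obtained by ranging the transmitter down to the span of interest.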
This concept can be applied to most types of measurement transducer: the output of these devices can be ranged and made linear before being introduced to the controller’s input.
Basic Principles of Control Valves and Actuators
This chapter serves to review the basic types and principle of operation of process control valves and their associated actuator and positioner systems.
As a result of studying this chapter, the student should be able to:
- List the common types of process control valves, and briefly describe their design and basic construction;
- Explain the meanings of valve characteristics, rangeability and sizing;
- Describe the types of actuators commonly found in process control systems, and list their applications.
3.2 An overview of eight of the most basic types of control valves
In most process control systems the final control element, driven by the output of the process controller, is usually some form of valve. This chapter serves to introduce the student to eight of the most common types of control valves, flow throttling devices and the basic range of actuators used to control them.
Sections 3.2.1 through 3.2.8 describe the various types of valves in question, starting with an overview and general description, followed by the types and variants within their manufactured ranges, sizes, design pressures, temperature ranges and rangeabilities.
Any special attributes or uses a valve may have are also described.
Section 3.2.9 introduces the reader to some of the more unusual types of valves, their design and usages.
3.2.1 Ball valves
The rotary ball valve, which used to be considered an on-off shut-off valve, is now used quite extensively as a flow control device. Its advantages include lower cost and weight, high flow capacity, tight shut-off and fire-safe designs. The ball valve contains a spherical plug that controls the flow of fluid through the valve body. Ball and cage valves are close to linear in terms of % of flow (or CV) to % of stem or ball rotation. The three basic types of ball valve are listed below.
Types of ball valves
Conventional; or 1/4-turn pierced ball type.
Characterized; V- and U-notched, along with parabolic ball types.
Cage; the ball is positioned by means of a cage in relation to a seat ring and discharge port, and is used for control (see Figure 3.1).
Size and design pressure
1/2 in. to 42 in. (12.5 mm to 1.06 m) in ANSI Class 150; to 12 in. (300 mm) in ANSI Class 2500.
Segmented ball: 1 in. to 24 in. (50 mm to 600 mm) in ANSI Class 150; to 16 in. (400 mm) in ANSI Class 300.
Pressure up to 2500 psig (17 MPa).
Varies with design and material but typically -160 °C to +310 °C.
Special designs extend this range from -180 °C to > +1000 °C.
Generally claimed to be about 50:1. Refer to Section 3.3.3 (see Figure 3.2).
3.2.2 Butterfly valves
This is one of the oldest types of valve still in use, dating back to the 1920s. It acts as a damper or throttle valve in a pipe and consists of a disk turning on a diametral axis. Like the ball valve, its actuation rotation from fully closed to fully open is 90°. Because the disc can act like an airfoil in the main stream flow it is controlling, care must be exercised to ensure that any resultant increase in torque can be absorbed by the control actuator being used.
Types of butterfly valves
General purpose; aligned shaft, where the vane, disk, louver or flapper is rotated via the shaft to which it is attached.
High-performance butterfly valve (HPBV); offset (eccentric) shaft. This design combines the tight shut-off, reduced operating torque and good throttling capabilities of swing-through special disk shapes.
Size and design pressure
2 in. to 48 in. (51 mm to 1.22 m) are typical.
Units have been made from 0.75 in. to 200 in. (19 mm to 5 m).
Typically -260 °C to +540 °C.
Special designs extend this range up to > +1200 °C (see Figure 3.3).
Generally claimed to be about 50:1. Refer to Section 3.3.4.
The flow characteristics of butterfly valves are affected by the position of the shaft (aligned or eccentric) and the relative size of the shaft compared to the valve size. Due to the design of the butterfly valve plate and seat, the valve rotation varies between 80 and 87 degrees (not exactly 90 degrees). During the initial 0% to 20% of opening and the final 80% to 100%, the flow control is not linear (see Figure 3.4).
3.2.3 Digital valves
Digital valves comprise a group of valve elements, or ports, assembled into a common manifold. Each element has a binary relationship with its neighbor, which means that, starting with the smallest port, the next port is twice the size of the previous one. The main advantages of this type of valve are their high speed, high precision and practically unlimited rangeability. Their biggest disadvantage is their high cost.
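The binary port arrangement just described can be sketched numerically. The port areas and command word below are hypothetical; the point is only that n binary-weighted ports give 2^n − 1 distinct non-zero openings:

```python
# Sketch of how a digital valve's binary-weighted ports combine:
# port k has twice the area of port k-1, so an n-bit command word
# selects 2**n - 1 distinct non-zero openings.

def open_area(command, n_bits, smallest_area=1.0):
    """Total open area for an n-bit command word (hypothetical units).
    Bit k (LSB = bit 0) opens a port of area smallest_area * 2**k."""
    return sum(smallest_area * 2**k
               for k in range(n_bits) if command & (1 << k))

# An 8-bit valve resolves flow into 255 equal steps:
print(2**8 - 1)                 # 255
print(open_area(0b10000001, 8)) # largest + smallest port = 129.0
```

This is why rangeability is described as practically unlimited: adding bits multiplies the resolution without changing the control principle.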
3/4 in. to 10 in. (19 mm to 250 mm) in both line and angle patterns.
Cryogenic to +670 °C.
Design pressure limits
Up to 10,000 psig (690 bars) (see Figure 3.5).
|No. Of “Bits”||8||10||12||14||16|
Where high speed (25 to 100 milliseconds), accuracy, large rangeability and tight shut-off are needed.
3.2.4 Globe valves
Twenty or so years ago the majority of throttling control valves were of the globe type, characterized by linear plug movement and actuated by spring/diaphragm operators. The main advantages of the globe valve include the simplicity of the spring/diaphragm actuator, a wide range of characteristics, low cavitation and noise, a wide range of designs for corrosive, abrasive, high temperature and high pressure applications, a linear relationship between the control signal and valve stem movement, and relatively small dead band and hysteresis values (see Figure 3.6).
Types of globe valves
- Single Ported with Characterized Plug
- Single Ported, Cage Guided
- Single Ported, Split Body
- Double Ported, Top-Bottom or Skirt-Guided Plug
- Eccentric Disk, Rotary Globe
- Three way or Y type
These characteristics are determined by the valve plug profile:
- Equal Percentage
- Quick Opening
Size and design pressure
- Generally 1/2 in. to 14 in. (20 mm to 356 mm)
- Maximum size for type C is 6 in. (152 mm)
- Maximum size for type E is 12 in. (305 mm)
- Maximum size for type D is 16 in. (406 mm)
- Type F (the Angle Type) has been made in sizes up to 42 in. (1.05 m)
Typically all pressure ratings are available up to ANSI Class 1500, with types B and D available through ANSI Class 2500; Types C and E are limited to ANSI Class 600.
- Generally from -200 °C to +540 °C.
- Type B is limited to a maximum temperature of +400 °C.
- Type C can operate down to -260 °C.
If rangeability is defined as the region within which the valve gain remains within 25% of the theoretical value, it seldom exceeds 20:1.
Manufacturers using other definitions claim 35:1. Refer to Section 3.3.3 (see Figure 3.7).
3.2.5 Pinch valves
These types of valves are called either pinch or clamp valves, depending on the configuration of the flexible tube and the means used for tube compression. They are manufactured from a large range of materials such as Teflon, PVC, Neoprene and polyurethane, each type of elastomer or plastic having its own particular application.
This type of control valve, if carefully selected, has many advantages: high abrasion and corrosion resistance, packless construction, reasonable flow control rangeability, smooth flow, low replacement costs and a longer life than metal valves where abrasion and corrosion are present.
As with all things, though, this valve has limitations, such as pressure and temperature restraints due to the nature of the sleeve material, and the number of operations, or flexings, that a particular type of liner can withstand; a life span of more than 50,000 opening and closing cycles should be considered the minimum.
1 to 24 in. (25 mm to 610 mm).
Special units from 0.1 in. to 72 in. (2.5 mm to 1.8 m) (see Figure 3.8).
Generally up to ANSI class 150 with special units up to class 300.
Varies with design and material but typically -30 °C to +200 °C.
Generally claimed to be between 5:1 and 10:1. Refer to Section 3.3.3 (see Figure 3.9).
3.2.6 Plug valves
Plug valves are probably the oldest type of valve in existence, being used in water distribution systems in ancient Rome and they probably pre-date the butterfly valve.
The valve consists of a tapered vertical cylinder, with a horizontal opening or flow-way, inserted into the cavity of the valve body. Due to the taper and the lubricating system used, these valves are virtually leak-proof to both gases and liquids (see Figure 3.10).
A very common use for this type of valve is in the tapping of beer barrels. They afford quick opening and closing action with tight, leak-proof closure under working pressures from vacuum to as high as 10,000 psig (70 MPa). They can be used for liquids, gases and non-abrasive slurries, and can be styled with eccentric and lift plugs for use with sticky fluids. Again, like the butterfly and ball valves, they operate through an actuator with an angular motion of 90°.
1/2 in. to 36 in. (12.5 mm to 960 mm).
V-ported: This style is used for both ON-OFF and throttling control, utilizing a V-shaped plug and a V-shaped notched body. This is ideal for fibrous or viscous materials.
Three, Four and Five Way or Multiported designs are available (see Figure 3.10).
Typically from ANSI class 125 to ANSI class 300 ratings and up to 720 psig (5 MPa) pressure, with special units available for ANSI class 2500.
Typically -70 °C to +200 °C.
Special units are available -160 °C to +315 °C.
Generally claimed to be about 20:1. Refer to Section 3.3.3 (see Figure 3.11).
3.2.7 Saunders diaphragm valves
The Saunders or diaphragm valve is sometimes also referred to as a weir valve. It operates by moving a flexible diaphragm toward or away from a weir. It can be considered as half a pinch valve, since only one diaphragm is used, moving relative to a fixed weir; because of this, their flow characteristics are similar. The normal Saunders valve has a body that, in side section, is in the form of an inverted U shape, with the diaphragm closing the orifice at the top. A full-bore type is also available which, when fully open, has a fully rounded bore; this is an important feature for ball-brush cleaning, as required in applications such as the food industry. It should be noted that mechanical damage can occur when opening this type of valve against a process vacuum (see Figure 3.12).
1/2 in. to 12 in. (12.5 mm to 300 mm).
Special units manufactured up to 20 in. (500 mm).
- Full Bore
- Dual Range
- Sizes ≤ 4 in. (100 mm): 150 psig (10.3 bar)
- 6 in. (150 mm): 125 psig (8.6 bar)
- 8 in. (200 mm): 100 psig (6.9 bar)
- 10 in. to 12 in. (250 mm to 300 mm): 65 psig (4.5 bar)
With most elastomer diaphragms -12 °C to +65 °C
With PTFE diaphragms -34 °C to +175 °C
Generally claimed to be 10:1. Refer to Section 3.3.3 (see Figure 3.13).
3.2.8 Sliding gate valves
In this type of valve, the flow rate is controlled by sliding a plate with a hole in it past a stationary plate, usually placed at 90° to the line of flow, with a corresponding hole in it. These holes can be round, or shaped to profile the flow characteristic of the valve. This valve is sometimes used in automatic control but is not really considered a control valve. However, this type of valve can operate with pressures up to 10,000 psig (70 MPa). The accuracy of these valves, particularly in proportional control, depends solely on the accuracy of the chosen actuator.
- Knife Gate
- Plate and Disk (Multi-orifice)
- Positioned Disk
- ON-OFF: 2 in. to 120 in. (50 mm to 3.0 m).
- Throttling: 1/2 to 24 in. (12 mm to 600 mm).
- Throttling: 1/2 in. to 6 in. (12 mm to 150 mm).
- Throttling: 1 in. and 2 in. (25 mm and 50 mm) (see Figure 3.14).
Types A and B. Up to ANSI class 150.
Type C. Up to ANSI class 300.
Type D. Up to 10,000 psig (70 MPa).
Types A and B. Cryogenic to 260 °C
Type C and D. -30 °C to +600 °C
Type A. 10:1
Type B. 20:1
Type C. Up to 50:1 is claimed.
Refer to Section 3.3.3 (see Figure 3.15).
Special valve designs
This section is included to expose the student to some of the more uncommon types of valves currently in use. With the technical advances and stringent accuracy requirements now being made of process control systems, these valves are becoming more common. They are neither linear nor rotary in operation, but use other methods, such as fluidics or the static pressure of the process fluid, to throttle the valve.
Dynamically balanced plug valves
This family of valves is used where there is no external power available to operate the valve; the static pressure of the process fluid is therefore used to achieve throttling. The upstream, or back, pressure is used to move a plug against the force of a return spring. Variations in supply pressure affect the position of the plug relative to the spring tension. Control is achieved with a pilot valve poppet assembly (see Figure 3.16).
Diaphragm-operated cylinder in-line valves
This valve is used for high pressure gas services due to its low level of vibration, turbulence and noise. It consists of a low convolution diaphragm for positive sealing. Inlet to Outlet pressures of 1400 and 600 MPa respectively are possible in the 2 in. (50 mm) size (see Figure 3.17).
Expandable element in-line valves
Streamlined flow of gas occurs in a valve where a solid rubber cylinder is expanded or contracted to change the area of an annular space. Control occurs via a hydraulic actuator or piston that is used to vary the rubber cylinder’s expansion. Pressures up to 1200 psid can be controlled with this valve (see Figure 3.18).
An expandable element or diaphragm is stretched over a perforated dome, shutting off the flow through the valve when the pressure above the diaphragm is greater than the line pressure. By externally varying the pressure applied to the exterior of the element, control of the mainstream flow can be achieved.
Fluid interaction valve
The Coanda effect, the basis of fluidics, is used in this type of diverting valve, which comes in sizes from 1/2 in. to 4 in. (12.5 mm to 100 mm). This valve has a flip-flop type of action, used to divert a discharge from one port to another in a Y configuration by means of lateral control ports located at the base of the V intersection of the Y. This type of valve has numerous uses, particularly in the chemical industry; the ability to divert a flow rapidly, usually in less than 100 ms, makes it an important member of the control valve family (see Figure 3.19).
3.3 Control valve gain, characteristics, distortion and rangeability
The characteristics, rangeabilities and gains of control valves are interrelated and a good understanding of these is necessary to be able to relate to the “personality profile” of a process control valve.
3.3.1 Valve and loop gain
Gain is defined as the change in output divided by the change in input. For a LINEAR (constant gain) valve, the valve gain (KV) is defined as:
KV = maximum flow / 100% valve stroke
i.e. the maximum flow divided by the valve stroke in percentage.
The loop gain of a process control system (KLOOP) should ideally be 0.5 to obtain quarter-amplitude damping, an ideal and very stable state. Most process control loops consist of a minimum of four active units, as listed below, each with its respective abbreviation indicated in brackets:
- A process controller; P or proportional-mode controller [ KC ].
- The controller output driving a control valve [ KV ].
- The valve effecting a process [ KP ].
- A sensor/transducer measuring the process and feeding this as an input to the controller [ KS ].
For the system to be stable, all four components should have a linear gain and the overall product of their gains should equal 0.5 (for quarter-amplitude damping).
KLOOP = KC × KV × KP × KS = 0.5
When a linear controller and sensor are used, and the gain of the process is also linear, a linear (KV = Constant) valve is needed to maintain the overall total loop gain constant at a value of 0.5.
However, if the process is non-linear (KP varies with load) while the gains KC, KV and KS are constant, the value of KLOOP will vary about the optimum value of 0.5, resulting in either sluggish or unstable operation of the process. The only way to maintain stability is for another component in the loop to change its gain in the opposite direction, and by the same magnitude, as the process gain change.
This can be either the controller gain (KC) or the control valve gain (KV).
Here we will consider changes in the Valve gain (KV).
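The loop-gain relation can be checked numerically. The gain values below are hypothetical, chosen only to show how a load-dependent process gain KP pushes KLOOP away from 0.5 and how re-gaining the valve restores it:

```python
# Numerical check of K_loop = Kc * Kv * Kp * Ks = 0.5, and of how a
# load-dependent process gain Kp upsets it (all values hypothetical).

def loop_gain(kc, kv, kp, ks):
    """Overall loop gain as the product of the four component gains."""
    return kc * kv * kp * ks

KC, KV, KS = 2.0, 0.5, 1.0          # fixed controller, valve, sensor gains
print(loop_gain(KC, KV, 0.5, KS))   # Kp = 0.5 at design load -> 0.5 (stable)
print(loop_gain(KC, KV, 1.2, KS))   # Kp rises at low load -> 1.2 (oscillatory)

# Restoring K_loop = 0.5 by moving the valve gain in the opposite direction:
kv_needed = 0.5 / (KC * 1.2 * KS)
print(round(kv_needed, 4))          # the valve gain that compensates
```

This is exactly the compensation role assigned to the valve characteristic in the discussion that follows.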
When the control valve gain varies with its load (flow), the valve is named according to its characteristic:
|EQUAL PERCENTAGE||KV increases at a constant rate with flow|
|VARIABLE RATE||KV increases according to a profile; parabolic, hyperbolic, etc.|
|QUICK-OPENING||KV drops when the flow through the valve increases|
The theoretical valve gain invariably changes in actual use if the valve pressure differential varies with load. This is the case in most pumping systems, where the valve differential drops with increasing flow rate, thereby reducing the valve gain KV; this tends to shift the gain of equal percentage valves toward that of the linear type.
In such cases, installing an equal percentage valve often greatly assists in keeping the installed valve gain linear.
The inherent characteristic of a control valve describes the relationship between the controller output, as received by the actuator, and the flow through the valve, assuming that:
The Actuator is Linear (Valve travel is proportional with controller output).
The pressure difference ΔP across the valve is constant.
The process fluid is not flashing, subject to cavitation or at sonic velocity (see Figure 3.20).
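Under these assumptions, the inherent equal-percentage characteristic takes the standard form f = R^(x − 1), where x is the fractional travel and R the rangeability; its gain is then proportional to flow, as claimed above. A short sketch (R = 50 is the typical claimed figure, not a property of any particular valve):

```python
# Sketch of the inherent equal-percentage characteristic f = R**(x - 1),
# where x is fractional travel (0..1) and R is the rangeability.
# Its gain df/dx = ln(R) * f is directly proportional to flow.
import math

R = 50.0  # typical claimed rangeability for an equal-percentage valve

def fraction_of_max_flow(x):
    """Fraction of maximum flow at fractional travel x (0..1)."""
    return R ** (x - 1)

def gain(x):
    """Valve gain d(flow)/d(travel) at fractional travel x."""
    return math.log(R) * fraction_of_max_flow(x)

print(fraction_of_max_flow(1.0))        # 1.0 at fully open
print(fraction_of_max_flow(0.0))        # 1/R = 0.02 minimum controllable flow
print(round(gain(1.0) / gain(0.5), 2))  # gain grows sqrt(R)-fold per half stroke
```

The 1/R value at zero travel is why a 50:1 equal-percentage valve is said to control down to 2% of its rated CV.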
Selecting a valve characteristic can be a prolonged and complicated procedure; Driskell derived a general rule of thumb for selecting valve characteristics for the more common loops (see Table 3.1):
In many cases the choice of valve characteristic has minimal effect on the loop parameters, and just about any type is acceptable for:
- Process with short time constants, such as flow control, most pressure control loops and temperature control when mixing a hot and cold stream;
- Control loops operated by a narrow proportional band (high gain) controllers, such as most regulators;
- Processes with a load variation of less than 2:1.
3.3.2 Valve distortion
Fluid flow through a valve is subjected to frictional losses, the consequence of this is shown below in Figure 3.21. It can be seen from these curves that installation criteria can have substantial effects on a valve’s flow characteristics and rangeability.
The linear valve has a constant gain at all flow rates and an equal percentage valve has a gain directly proportional to flow.
Therefore, if a loop tends toward oscillation at low flow rates (indicating a loop gain > 1) and is sluggish at high flow (indicating a gain < 0.25), one should switch from a linear to an equal percentage valve. The opposite also applies: if oscillations occur at high flow rates and performance is sluggish at low flow rates, change from an equal percentage to a linear control valve.
3.3.3 Valve rangeability
Traditionally, rangeability has been defined as the ratio between the maximum and minimum “controllable” flows through a valve. The term “minimum flow” (FMIN) is defined as the flow below which the valve tends to close completely. Using this definition, manufacturers usually claim:
- 50:1 Rangeability for equal percentage valves.
- 33:1 Rangeability for linear valves.
- 20:1 Rangeability for quick opening valves.
This indicates that the flow through these valves can be controlled down to 2%, 3% and 5% respectively of their rated CV. However it can be seen in Figure 3.21 that the minimum controllable flow rises as the distortion coefficient (DC) drops.
Because at minimum valve opening the pressure drop through the valve, ΔP, is at a maximum, the valve will proportionally pass more flow.
Rangeability should be calculated as the ratio of the CV required at maximum flow (minimum pressure drop) and the CV required at minimum flow (maximum pressure drop) (see Table 3.2).
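This CV-ratio calculation can be sketched using the common liquid sizing relation Cv = Q·sqrt(SG/ΔP) (Q in US gpm, ΔP in psi, SG the specific gravity). The flow rates and pressure drops below are hypothetical, chosen to mimic a pumped system where the valve ΔP falls as flow rises:

```python
# Rangeability computed as the ratio of required CVs, using the
# liquid sizing relation Cv = Q * sqrt(SG / dP). Values hypothetical.
import math

def cv_required(q_gpm, dp_psi, sg=1.0):
    """CV needed to pass q_gpm of liquid at a valve drop of dp_psi."""
    return q_gpm * math.sqrt(sg / dp_psi)

# At maximum flow the pump curve leaves little dP for the valve;
# at minimum flow the valve takes nearly the whole head.
cv_max = cv_required(q_gpm=400.0, dp_psi=5.0)   # max flow, min dP
cv_min = cv_required(q_gpm=40.0, dp_psi=45.0)   # min flow, max dP
print(round(cv_max / cv_min, 1))  # effective (installed) rangeability
```

Note that the 10:1 flow turndown becomes a much larger CV ratio once the shifting pressure drop is included, which is the point of computing rangeability this way.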
3.4 Control valve actuators
An actuator is the part of a valve assembly that responds to the output signal of the process controller, causing a mechanical motion to occur which, in turn, results in modification of fluid motion through the valve.
An actuator has to be able to perform two basic functions:
- To respond to an external signal and cause a valve to move accordingly; with correct selection, other functions can be integrated into this assembly, such as desired fail-safe actions.
- To provide support (if required) for accessories such as positioners, limit switches, solenoid valves and local controllers.
There are five basic forms of valve actuator, and a description of each follows in Sections 3.4.1 and 3.4.2:
The first four have a totally different method of operation and application use compared with the last one, the pneumatic actuator.
3.4.1 Digital, electric, hydraulic and solenoid actuators
This section describes the common factors of these actuators:
- Stepping motors in smaller size valves.
- Reversible motors and gearboxes for larger size valves.
- Electrohydraulic (The pump being driven by stepping or servo motors.)
- Solenoid operation
Energy sources
- Electrical or electrohydraulic.
Speed reduction techniques
- Worm gear, spur gear or gearless.
0.5 to 30 ft.-lb. (0.6 to 40 N-m) for type 2a above.
1 to 75,000 ft.-lb. (1.3 to 100,000 N-m) for type 2b.
Speeds of rotation
From 5 to 300 seconds for a complete opening or closing cycle.
Linear thrust ranges
- Maximum of 500 lb. (225 kg) output force from type 2a actuators.
- 100 to 10,000 lb. (45 to 4500 kg) output force from type 2b actuators.
- 100,000 lb. (45,000 kg) output force from type 3 actuators.
Speeds of full stroke
- Small solenoids can close within 8 to 12 milliseconds.
- Throttling solenoids can stroke in about 1 second.
- Electromechanical motor-driven valves stroke in 5 to 300 seconds.
- Electrohydraulic actuators usually move at 0.25 in./sec. (6 mm/sec).
In essence, actuators have to be able to exert a higher torque to overcome the resistance of some types of valve to opening, usually described as a high breakaway force. Operation in the opposite direction, that is closing a valve, also sometimes requires extra torque to ensure firm seating; however, some form of spring-retentive clutch is also needed in case a foreign object is trapped in the valve body.
Limit switches of many types, ranging from reed and IR to micro-switches, can also be mounted within the mechanical assembly of these actuators to signal, and thereby prevent, over-run and excessive movement.
3.4.2 Pneumatic actuators
Pneumatic actuators respond to a control air signal by moving the valve trim into a corresponding throttling position. There are two basic types, linear and rotary; the specifications of both are listed below.
Types and applications of pneumatic actuators
- Spring Diaphragm
- Cylinder with scotch yoke
- Cylinder with Rack and Pinion
- Dual Cylinder
- Spline or Helix
- Air motor
The above actuators are applicable to the following valve sizes:
Type a1: 0.5 in. to 8 in. (12.5 mm to 200 mm).
Type a2: 0.5 in. to 16 in. (12.5 mm to 400 mm).
Type B: 2 in. to 30 in. (50 mm to 750 mm).
Maximum actuator pressure rating
Type a1: 60 PSI (414 kPa); some higher
Type a2: 150 PSI (1035 kPa)
Type B: 250 PSI (1725 kPa)
Type a1: 25 to 500 sq. in. (0.016 to 0.323 sq. m)
Types a2 and B: 10 to 600 sq. in. (0.006 to 0.38 sq. m)
Bore diameters from 2 to 44 ins. (50 mm to 1.1 m)
Strokes up to 24 in. (0.61 m)
Linear thrust (Stem force ranges)
Type a1: 200 to 45,000 lbf. ( 100 to 20,400 kgf.)
Type a2: 200 to 32,000 lbf. ( 100 to 14,500 kgf.)
Specials up to 186,000 lbf. ( 84,000 kgf.)
Speeds of full stroke
Type a1: 1 to 5 seconds.
Type a2: 0.33 to 6.0 in./sec (8 to 150 mm/sec).
3.4.3 The steady state equation
In pneumatic spring-and-diaphragm actuators, valve stem positioning is achieved by a balance of forces on the stem. Referring to Figure 3.22, the following equations can be derived from a summation of the forces involved. Adopting a positive direction downward (closing), with flow left-to-right (Figure 3.22a):
P·A − K·X − PV·AV = 0
With reverse flow, right-to-left (Figure 3.22a):
P·A − K·X + PV·AV = 0
The inverse of this, with the stem moving in the negative direction, upwards (opening), and flow left-to-right (Figure 3.22b):
−P·A + K·X − PV·AV = 0
With reverse flow, right-to-left (Figure 3.22b):
−P·A + K·X + PV·AV = 0
where:
|A||is the effective diaphragm area|
|AV||is the effective inner valve area|
|K||is the spring rate|
|P||is the diaphragm pressure|
|PV||is the valve pressure drop|
|X||is the stem travel|
These equations are simplified in that they do not consider friction in the valve stem packing, the actuator guide and the valve plug guide(s), or inertia (see Figure 3.22).
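Even in simplified form, the force balance can be solved directly for the stem travel. The sketch below rearranges the first (air-to-close, flow left-to-right) equation for X; all numerical values are hypothetical:

```python
# Solving the steady-state force balance P*A - K*X - Pv*Av = 0 for
# stem travel X (air-to-close case, flow left-to-right).
# All values are hypothetical, chosen only for illustration.

def stem_travel(p, a, k, pv, av):
    """Stem travel X from the diaphragm/spring/plug force balance.
    p: diaphragm pressure, a: diaphragm area, k: spring rate,
    pv: valve pressure drop, av: effective inner valve area."""
    return (p * a - pv * av) / k

# 9 psi on a 100 sq. in. diaphragm, 600 lbf/in spring rate,
# 50 psi valve drop acting on a 2 sq. in. inner-valve area:
x = stem_travel(p=9.0, a=100.0, k=600.0, pv=50.0, av=2.0)
print(round(x, 3))  # stem travel in inches
```

Note how the PV·AV term shifts the travel away from the value the spring and diaphragm alone would give; this is one of the sources of non-linearity mentioned in the next paragraph.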
Figure 3.23 serves to illustrate the relationship between pressure on a diaphragm and the amount of travel of the valve stem, showing yet another area that generates non-linearity and distortion.
Table 3.3 indicates the Advantages/Disadvantages and Application for the four most common types of actuator.
3.5 Control valve positioners
Probably the most significant accessory that can be used for valve control is the positioner, sometimes referred to as “smart valve electronics”, many of which are microprocessor controlled.
A positioner is a high gain proportional controller which measures the stem position, to within 0.1 mm, compares this position to a set-point, which should be considered as the output of the main process controller, and performs correction on any resultant error signal. The open-loop gain of these positioners ranges from 10 to 200, giving a proportional band of between 10% and 0.5%, and their periods of oscillation range from 0.3 to 10 seconds, a frequency response of 3 to 0.1 Hz. In other words, it is a very sensitive, proportional-only controller.
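The gain and proportional-band figures quoted above are two expressions of the same quantity, related by PB(%) = 100 / gain. A one-line check:

```python
# Relation between a positioner's open-loop gain and proportional band:
# PB(%) = 100 / gain, matching the quoted range of gain 10..200
# (proportional band 10% down to 0.5%).

def proportional_band(gain):
    """Proportional band in percent for a given open-loop gain."""
    return 100.0 / gain

print(proportional_band(10))   # 10.0 %
print(proportional_band(200))  # 0.5 %
```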
The STARPACK system manufactured by Valtek shows what could be considered a full-house positioner (Figure 3.24).
Not only will it control and measure the flow through the valve, but it will also measure upstream and downstream pressures (and hence the pressure differential), stem position and temperatures. It has the advantage of being able to store valve “profiles” to enable software correction or modification of flow characteristics.
3.5.1 When NOT to use positioners
Remembering that a positioner becomes an intrinsic part of the full control loop, very much like the secondary controller in a cascaded system, care must be exercised in its use. A rule of thumb is that the time constant of the slave should be 10 times shorter (open-loop gain 10 times higher), and the period of oscillation of the slave 3 times shorter (frequency response 3 times higher), than that of the primary or master controller.
3.6 Valve sizing
The methods that can be used to calculate valve size are many and varied, sometimes very complicated, and as such are beyond the scope of this publication. As a rule, though, the minimum and maximum CV requirements for the valve should be determined and taken into account.
Requirements such as process start-up, any abnormal process functions required and, very importantly, the reactions required under any emergency conditions must first be taken into account, and the valve should be selected to operate adequately over the range 0.8 CV Min to 1.2 CV Max. If this results in a rangeability which exceeds the capabilities of one valve, then two or more valves should be used.
Control valves should NOT be used outside their rangeability specification. Also, when summing all the pressure drops that can occur in a constant pumping speed application, care should be exercised that the result is not applied to the valve for correction, as this always results in OVERSIZING the valve, which then operates for most of its time in a nearly closed position.
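The sizing rule above can be sketched as a simple check. This is an illustrative helper, not a sizing calculation; the function names and the rangeability figure in the test are assumptions for the example:

```python
def required_cv_span(cv_min, cv_max):
    """Cv span the valve should cover per the 0.8*CVmin to 1.2*CVmax rule."""
    return 0.8 * cv_min, 1.2 * cv_max

def needs_multiple_valves(cv_min, cv_max, valve_rangeability):
    """True if the required turndown exceeds one valve's rangeability,
    in which case two or more valves should be used."""
    low, high = required_cv_span(cv_min, cv_max)
    return high / low > valve_rangeability
```

For example, a duty from CV 2 to CV 50 requires a span of 1.6 to 60, a turndown of about 37:1, which would exceed a typical single valve's rangeability.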
4 Fundamentals of Control Systems
This chapter reviews the basic principles of process control.
4.1 Objectives
As a result of studying this chapter, and after having completed the relevant exercises, the student should be able to:
- Clearly explain the concepts of:
  - On-off control
  - Modulating control
  - Open loop control
  - Ratio control
- List the ten most common acronyms and basic terms used in process control (e.g. PV, MV, OP);
- Describe the differences between a reverse and a direct acting controller;
- Indicate what dead time is and how it impacts on a process.
4.2 ON-OFF control
The oldest strategy for control is to use a switch giving simple on-off control, as illustrated in Figure 4.1. This is a discontinuous form of control action, and is also referred to as two-position control. The technique is crude, but can be a cheap and effective method of control if a fairly large fluctuation of the process variable (PV) is acceptable.
A perfect on-off controller is 'on' when the measurement is below the set point (SP) and the manipulated variable (MV) is at its maximum value. Above the SP, the controller is 'off' and the MV is a minimum.
On-off control is widely used in both industrial and domestic applications. Most people are familiar with the technique as it is commonly used in home heating systems and domestic water heaters. Consider the control action on a domestic gas fired boiler for example. When the temperature is below the Setpoint, the fuel is 'on'; when the temperature rises above the Setpoint, the fuel is 'off', as illustrated in Figure 4.2.
There is usually a dead zone due to mechanical delays in the process. This is often deliberately introduced to reduce the frequency of operation and wear on the components. The end result of this mode of control is that the temperature will oscillate about the required value.
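The on-off action with a dead zone can be sketched as follows. This is a minimal illustration of the switching logic, with hypothetical names; the dead zone straddles the set point as described in Section 4.8:

```python
def on_off_with_deadzone(pv, sp, half_band, heating_on):
    """Two-position control with a dead zone straddling the set point.

    The output only switches once PV crosses the edge of the dead zone,
    which reduces the switching frequency and wear on components.
    """
    if pv < sp - half_band:
        return True          # below the band: fuel on
    if pv > sp + half_band:
        return False         # above the band: fuel off
    return heating_on        # inside the band: hold the previous state
```

Because the state is held inside the band, the temperature oscillates about the required value rather than switching at every small fluctuation.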
4.3 Modulating control
If the output of a controller can move through a range of values, this is modulating control.
Modulating control takes place within a defined operating range only.
That is, it must have upper and lower limits. Modulating control is a smoother form of control than step control. It can be used in both open loop and closed loop control systems.
4.4 Open loop control
In open loop control, the control action (Controller Output Signal OP) is NOT a function of the Process Variable (PV). The open loop control does not self-correct when the PV drifts, and this may result in large deviations from the optimum value of the PV.
4.4.1 Use of open loop control
This control is often based on measured disturbances to the inputs to the system. The most common type of open loop control is feedforward control. In this technique the control action is based on the state of a disturbance input without reference to the actual system condition, i.e. the system output has no effect on the control action, and the input variables are manipulated to compensate for the impact of the process disturbances (see Figure 4.3).
4.4.2 Function of open loop or feed forward control
Feedforward control results in a much faster correction than feedback control but requires considerably more information about the effects of the disturbance on the system, and greater operator skill.
4.4.3 Examples of open loop control
A common domestic application that illustrates open loop control is a washing machine.
The system is pre-set and operates on a time basis, going through cycles of wash, rinse and spin as programmed.
In this case, the control action is the manual operator assessing the size and dirtiness of the load and setting the machine accordingly.
The machine does not measure the output signal, which is the cleanliness of the clothes, so the accuracy of the process, or success of the wash, will depend on the calibration of the system.
An open loop control system is poorly equipped to handle disturbances which will reduce or destroy its ability to complete the desired task.
Any control system operating on a time base is an open loop. Another example of this is traffic signals.
It is difficult to implement open loop control in a pure form in most process control applications, due to the difficulty in accurately measuring disturbances and in foreseeing all possible disturbances to which the process may be subjected.
As the models used and input measurements are not perfectly accurate, pure open loop control will accumulate errors and eventually the control will be inadequate (see Figure 4.4).
4.4.4 Introduction to ratio control
Ratio control, as its name implies, is a form of feedforward control that has the objective of maintaining the ratio of two variables at a specific value.
For example; if it is required to control the ratio of two process variables XPV and YPV the variable PVR is controlled rather than the individual PV’s (XPV and YPV).
Thus: PVR = XPV / YPV
A typical example of this is keeping the fuel-to-air ratio into a furnace constant, regardless of whether the furnace temperature is being maintained or changed. This is sometimes known as cross limiting control.
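The ratio relationship PVR = XPV / YPV is typically enforced by slaving the set point of one flow to the measurement of the other. The sketch below assumes, for illustration, that YPV is the "wild" (uncontrolled) flow and XPV the captive flow; the function name is hypothetical:

```python
def captive_flow_setpoint(wild_flow_pv, target_ratio):
    """Ratio control sketch: the captive flow's set point is slaved to the
    measured wild flow so that PVR = XPV / YPV holds at the target value.

    E.g. with air as the wild flow and fuel captive, a target fuel/air
    ratio of 0.1 gives a fuel set point of 0.1 * measured air flow.
    """
    return wild_flow_pv * target_ratio
```

The ratio is thus maintained through load changes without ever measuring the furnace temperature itself, which is what makes this a feedforward technique.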
4.5 Closed control loop
In closed loop control, the objective of control, the PV, is used to determine the control action. The concept of this is shown in Figure 4.6 and the principle is shown in Figure 4.7.
This is also known as feedback control and is more commonly used than feedforward control. Closed loop control is designed to achieve and maintain the desired process condition by comparing it with the desired condition, the Set Point Value (SP), to get an Error Value (Err).
4.5.1 Reverse or direct acting controllers
As the controller's corrective action is based on the magnitude-in-time of the error (ERR), which is derived from either SP - PV or PV - SP, it is of no concern to the P, I or D functions of the controller which algorithm is used, as the two forms differ only in the sign of the error term.
However; if we refer to Figure 4.5, which illustrates a controller, performing the same function, but in different ways:
In case one we Manipulate the OUTLET flow through V2 to control the tank level; this is DIRECT action.
Here, as the PV increases (tank filling), the OP increases (opening the outlet valve more) to drain the tank faster.
Direct Acting = PV ⇑ → OP ⇑ then Err = PV - SP
In case two we control the INLET flow through V1 to control the tank level; this is REVERSE action.
Here, as the PV increases (tank filling), the OP decreases (closing the inlet valve more) to reduce the filling rate.
Reverse Acting = PV ⇑ → OP ⇓ then Err = SP - PV
The controller output changes, by the same magnitude and sign, based on the resultant Error value and sign.
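The two error conventions can be captured in a one-line helper. This is an illustrative sketch of the sign convention only:

```python
def controller_error(pv, sp, direct_acting):
    """Error term for the P, I and D calculations.

    Direct acting  (PV up -> OP up):   ERR = PV - SP
    Reverse acting (PV up -> OP down): ERR = SP - PV
    """
    return pv - sp if direct_acting else sp - pv
```

With the tank at 60% and the set point at 50%, a direct-acting controller sees a positive error (open the outlet valve further), while a reverse-acting controller sees a negative error (close the inlet valve).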
4.5.2 Control modes in closed loop control
Most closed loop controllers can be controlled with three control modes, either combined or separately.
These modes, Proportional (P), Integral(I), and Derivative (D), are discussed in depth in the next chapter.
4.5.3 Illustration of the concepts of open and closed loop control
The diagrams in Figures 4.4 and 4.6 illustrate the concepts of open loop and closed loop controls in a water heating system.
In the open loop, feedforward example, the steam flow rate is varied according to the temperature of the cool water entering the system. The operator must have the skills to determine what change in the valve position will be sufficient to bring the cool water entering the system to the desired temperature when it leaves the system.
In the closed loop, feedback example, the steam flow rate is varied according to the temperature of the heated water leaving the system. The operator must determine the difference between that measurement and the desired temperature and change the valve position until this error is eliminated.
The above example is for manual control but the concept is identical to that used in automatic control, which should allow greater accuracy of control.
4.5.4 Combination of feedback and feedforward control
The advantages of feedback control are its relative simplicity and its potentially successful operation in the event of unknown disturbances. Feedforward control has the advantage of faster response to a disturbance in the input which may result in significant cost savings in a large-scale operation (see Figure 4.8).
In general, the best industrial process control can be achieved through the combination of both open and closed loop controls. If an imperfect feedforward model corrects for 90% of the upset as it occurs and the remaining 10% is corrected by the bias generated by the feedback loop, then the feedforward component is not pushed beyond its abilities, the load on the feedback loop is reduced, and much tighter control can be achieved.
4.6 Dead time processes
In processes involving the movement of mass, dead time is a significant factor in the process dynamics. It is a delay in the response of a process after some variable is changed, during which no information is known about the new state of the process. It may also be known as the transportation lag or time delay.
Dead time is the worst enemy of good control and every effort should be made to minimize it.
All process response curves are shifted to the right by the presence of dead time in a process (see Figure 4.9).
Once the dead time has passed, the process starts responding with its characteristic speed, called the process sensitivity.
4.6.1 Reduction of dead time
The aim of good control is to minimize dead time, and to minimize the ratio of dead time to the time constant. The higher this ratio, the less likely it is that the control system will work properly.
Dead time can be reduced by reducing transportation lags, which can be done by increasing the rates of pumping or agitation, reducing the distance between the measuring instrument and the process, etc.
4.6.2 Dead time effects on P, I and D modes and sample-and-hold algorithms
If the nature of the process is such that the dead time of a loop exceeds its time constant, then traditional PID (proportional-integral-derivative) control is unlikely to work, and a sample-and-hold control is used. This form of control is based on enabling the controller so that it can make periodic adjustments, then effectively switching the output to a hold state and waiting for the process dead time to elapse before re-enabling the controller output. The algorithms used are identical to the normal process control ones, except that they are only enabled for short periods of time. Figure 4.10 illustrates this action.
The only problem is that the controller has far less time to make adjustments, and therefore it needs to do them faster. This means that integral setting must be increased in proportion to the reduction in time when the loop is in automatic.
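A minimal sketch of the sample-and-hold idea follows. The PI update shown is a simplified stand-in for the normal control algorithm, and the enable pattern (one adjustment, then hold for the dead time) is modelled in whole steps; all names are illustrative:

```python
def sample_and_hold_pi(pv_samples, sp, kc, ki, dead_time_steps):
    """Sample-and-hold control sketch: the PI algorithm is enabled for one
    step, then the output is held while the process dead time elapses.

    pv_samples      : PV measurement at each time step
    dead_time_steps : number of steps the output is held after each update
    """
    op, integral = 0.0, 0.0
    outputs = []
    for i, pv in enumerate(pv_samples):
        if i % (dead_time_steps + 1) == 0:   # controller enabled this step
            err = sp - pv
            integral += ki * err             # integral acts only while enabled
            op = kc * err + integral
        outputs.append(op)                   # held constant between enables
    return outputs
```

Because the integral term only acts during the enabled steps, the integral setting must be increased in proportion to the reduction in active time, as noted above.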
4.7 Process responses
The dynamic response of a process can usually be characterized by three parameters; process gain, dead time and process lag (time constant) (see Figure 4.11).
The following sections define the three constituent parts of the process response curve, as illustrated in Figure 4.11.
4.7.1 Response process gain
The process gain is the ratio of the change in the output (once it has settled to a new steady state) to the change in the input. This is the ratio of the change in the process variable to the change in the manipulated variable.
It is also referred to as the process sensitivity, as it describes the degree to which a process responds to an input.
A slow process is one with low gain, where it takes a long time for a change in the MV to cause a small change in the PV. An example of this is home heating, where it takes a long time for the heat to accumulate and cause a small increase in the room temperature. A high gain controller should be used for such a process.
A fast process has a high gain, i.e. the PV responds rapidly. This occurs in systems such as a flow process, or a pH process near neutrality where only a droplet of reagent will cause a large change in pH. For such a process, a low gain controller is needed.
From the controller's perspective, the process gain is the product of three component gains: that of the measuring transducer (KS), that of the process itself (KP), and that of whatever the controller output drives (KV).
This becomes: Process Gain = KS × KP × KV
4.7.2 Response dead time
The dead time (L) is the delay between the manipulated variable changing and a noticeable change in the process variable.
Dead time exists in most processes because few, if any, real world events are instantaneous. A simple example of this is a hot water system. When the hot tap is switched on there will be a certain time delay as hot water from the heater moves along the pipes to the tap. This is the dead time.
4.7.3 Response process lag
The process lag (T) is caused by the system’s inertia and affects the rate at which the process variable responds to a change in the manipulated variable.
It is equivalent to the time constant.
4.8 Dead zone
In most practical applications, there is a narrow bandwidth due to mechanical friction or arcing of electrical contacts through which the error must pass before switching will occur.
This may be known as the dead zone, differential gap, or neutral zone.
The size of the dead zone is generally 0.5% to 2% of the full range of the PV fluctuation, and it straddles the Setpoint.
When the PV lies within the dead zone no control action takes place, thus its presence is usually desirable to minimize the cycling of the process. One problem with on-off control is wear and tear of the controlling element. This is reduced as the bandwidth of fluctuation of the process is increased and thus frequency of switching decreased.
5 Stability and Control Modes of Closed Loops
5.1 Objectives
As a result of studying this chapter, and after having completed the relevant exercises, the student should be able to:
- Indicate what stability is and mathematically what causes instability;
- Describe the function and use of Proportional, Integral and Derivative control and various combinations of these terms;
- Indicate what causes problems in closed loop control and how to correct them.
5.2 The industrial process in practice
We have seen the basic principles of closed loop control in the previous chapter. A control action is calculated, based on the deviation of the PV from the desired value of control as defined by the SP (ERR = SP - PV).
We have to consider the industrial process as it works in the real world. As an example of this, which we will now review, is a feed heater which is used to heat up material before it is fed into a distillation column (see Figure 5.1).
The objective of the system is temperature control of the outlet temperature (T2) that should be kept constant. The manipulated variable is the fuel valve position.
It should be noted, that for economic and environmental reasons, cross limiting control of the combustion is normally required to minimize the output of carbon monoxide. In this example for simplicity, we will neglect cross limiting control totally and manipulate the valve position directly.
This example of feed heater control will serve as an example for us to look into the practical implications of stability, different control modes, control strategies and practical exercises. For this reason we will first have a closer look into the basic dynamic behavior and the most common disturbances of the process which affect this control system.
5.3 Dynamic behavior of the feed heater
There are two major types of system lag, control lag and disturbance lag, that affect the dynamic behavior of this heater system.
5.3.1 Control lag
A Lag between positioning of the fuel valve and the outlet temperature exists. The main reason for this Lag can be seen by virtue of the fact that not all feed material in the heater will be heated up at the same time after a change of the fuel valve position. Some part of the feed material in the heater at the time of fuel valve change will leave the heater shortly after; some other part later. A minor dead time is also a part of the control reaction.
5.3.2 Disturbance lags
The impact of disturbances on the outlet temperature also has a lag action. Every disturbance has its own Lag Time constant. Most disturbances have a minor dead time as well.
Note: There is no measurable difference between two high-order lags, one with a minor dead time and the other without.
5.4 Major disturbances of the feed heater
There are four MAJOR disturbances that can, and will, be considered critical to the stable operation of the system, these being:
5.4.1 Fuel flow pressure changes
Increasing pressure increases the fuel flow and results in a higher outlet temperature (T2) and vice versa.
5.4.2 Feed flow changes
Since the feed heater serves another (unpredictable) process downstream of it, there is no way of keeping the feed flow constant. The feed flow depends totally on the need for material by the following process. An increase in the feed flow (demanded by the downstream process) decreases the outlet temperature and vice versa.
5.4.3 Feed inlet pressure changes
If the feed material is in the form of gas, this becomes an important issue. It is important to know the mass-flow rather than the volumetric flow of the feed material. With increasing pressure we increase the mass-flow which results in a decrease of the outlet temperature and vice versa.
5.4.4 Feed inlet temperature changes
The higher the inlet temperature, the less we have to heat. An increase in inlet temperature results in an increase of the outlet temperature and vice versa.
5.5 Stability
We have stability in a closed loop control system if there is no continuous oscillation. We must not confuse the problems and the different effects that disturbances, noise signals and instability have on a system. A noisy and disturbed signal may show up as a varying trend, but it should never be confused with loop instability.
The criteria for stability are these two conditions:
- The loop gain (KLOOP) at the critical frequency is < 1;
- The loop phase shift at the critical frequency is < 180°.
5.5.1 Loop gain for critical frequency
Consider a signal at the frequency for which the total loop phase shift is 180°. A signal at this frequency decays in magnitude if the loop gain for it is below 1. The other two alternatives are:
Continuous oscillations which remain steady (Loop Gain = 1);
Continuous oscillations which are increasing, or getting worse (Loop Gain > 1).
5.5.2 Loop phase shift for critical frequency
Now consider a signal for which the total loop gain is 1. A signal with a phase shift of 180° will generate sustained oscillations, and these grow if the loop gain is greater than 1. This situation is illustrated in Figure 5.2.
Increasing the Gain or Phase Shift destabilizes a closed loop, but makes it more responsive or sensitive.
Decreasing the Gain or Phase Shift stabilizes a closed loop at the expense of making it more sluggish.
The gain of the loop (KLOOP) determines the OFFSET value of the controller; AND offset varies with Setpoint changes.
5.6 Proportional control
This is the principal means of control. The automatic controller corrects the controller's OP with an action proportional to ERR. The correction starts from the OP value at the beginning of automatic control action.
5.6.1 Proportional error and manual value
We will call this starting value MANUAL. In the past, this has been referred to as "MANUAL RESET". In order to have an automatic correction made, that is, a correction from the MANUAL starting term, we always need a value of ERR. Without an ERR value there is no correction and we go back to the value of MANUAL. We therefore always need a small "left over" error to keep the corrective control up.
This left over ERROR is called the OFFSET.
ERR0 is the ERROR value we would have without any control at all.
KC is the gain applied to scale the size of the control action based on ERR. KLOOP is the total loop gain, which is the product of the controller gain (KC) and the process gain (KP).
The only tuning constant for proportional control is KC (Controller Gain). The larger we make the value of KC, the more difficult or sensitive (reduced stability) is the control of the system.
With larger values of KC, the OFFSET value becomes smaller. If the Gain is made too large, we may face a stability problem.
The following relationships follow from the above:
5.6.2 Proportional relationships
1. OP = KC × ERR + MANUAL
2. KLOOP = KC × KP
3. OFFSET = ERR0 / (KLOOP + 1)
The offset relationship follows from the loop algebra:
ERR = SP - PV
OP = KC × ERR
PV = ERR × KLOOP
ERR = SP - PV
    = SP - ERR × KLOOP
∴ ERR + ERR × KLOOP = SP
ERR × (1 + KLOOP) = SP
At steady state:
ERR = SP / (1 + KLOOP)
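The offset formula can be checked numerically by relaxing the same loop algebra to steady state. This is a sketch with illustrative names and values; the relaxation factor of 0.1 is only there to make the iteration converge:

```python
def steady_state_offset(err0, kloop):
    """Offset left by proportional-only control: ERR0 / (KLOOP + 1)."""
    return err0 / (kloop + 1.0)

def settle(sp, kloop, steps=500):
    """Relax the loop algebra PV = KLOOP * ERR, ERR = SP - PV to steady
    state, and return the error (offset) that remains."""
    pv = 0.0
    for _ in range(steps):
        pv += 0.1 * (kloop * (sp - pv) - pv)   # damped update toward PV = KLOOP*ERR
    return sp - pv
```

With SP = 10 and KLOOP = 4, both give an offset of 2: a larger KC (hence larger KLOOP) shrinks the offset, at the cost of reduced stability, exactly as stated above.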
The error term (ERR) is defined as "Error = Ideal - Indicated" and is produced as:
ERR (t) = SP (t) - PV (t)
Although this indicates that the Set-point (SP) can be time-variable, in most process control problems it is kept constant for long periods of time. For a proportional controller the output is proportional to this error signal, being derived as:
OPC(t) = P + KC × ERR(t)
where:
OPC is the controller output
P is the controller output bias, or MANUAL starting value
KC is the controller gain (usually dimensionless)
ERR is the error value
This leads the way to evaluating a set of concepts for proportional control.
5.6.3 Evaluation of proportional control concepts
The controller gain (KC) can be adjusted to make the controller output (OPC) as sensitive as desired to differences between the SP and PV values.
The sign of KC can be chosen (+ or -) to make OPC either increase or decrease as the deviation or ERR value increases.
In proportional controllers, the MANUAL or starting value of the OUTPUT is adjustable. Since the controller output equals the value of MANUAL when the error value is zero (SP = PV), the value of MANUAL is adjusted so that the controller output, (and consequently the manipulated variable, MV) are at their nominal steady state values.
For example, if the controller output drives a valve, MANUAL is adjusted so that the flow through the valve is equal to the nominal steady state value when ERR = 0.
The gain KC is then adjusted; for general controllers it is dimensionless, that is, the terms MANUAL and ERR are expressed in the same units of measurement.
The disadvantage of Proportional controllers is that they are unable to eliminate the steady-state errors that occur after a set-point or a sustained load change.
5.6.4 Proportional band
A controller's proportional band (PB) is usually defined, in percentage terms, as the change in the input (PV) required to produce a full (100%) change in the controller output (MV).
Its relationship to the proportional, or controller, gain (KC) is given by:
Proportional: ΔMV = KC × ΔPV
Proportional Band %: PB = (ΔPV / ΔMV) × 100% = 100% / KC
As shown in Figure 5.3, if the PB, or proportional band, of a controller is set at 100% (KC = 1), then a full change of the PV, or input, from 0 to 100% will result in a change of the MV, or output, from 0 to 100%, resulting in 100% of valve motion or operation.
If the PB is set at 20% (KC = 5), then a change in the PV, or input, from 40% to 60% will result in the same change of the MV, or output, from 0 to 100%, with the same resultant motion of the valve from fully closed to fully open.
Likewise, a PB value of 500% (KC = 0.2) will result in the MV, or output, changing from 40% to 60% when the PV, or input, changes from 0 to 100%.
High percentage values of the PB therefore constitute a less sensitive response from the controller while low percentage values result in a more sensitive response.
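The PB-to-gain relationship is a simple reciprocal, sketched below with illustrative function names:

```python
def pb_to_gain(pb_percent):
    """Controller gain from proportional band: KC = 100% / PB%."""
    return 100.0 / pb_percent

def gain_to_pb(kc):
    """Proportional band from controller gain: PB% = 100% / KC."""
    return 100.0 / kc
```

This reproduces the three cases in the text: PB 100% gives KC = 1, PB 20% gives KC = 5 (more sensitive), and PB 500% gives KC = 0.2 (less sensitive).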
5.7 Integral control
Integral action is used to control towards no OFFSET in the output signal.
This means that it controls towards NO ERROR (ERR = 0). Integral control is normally used to assist proportional control. We call the combination of both PI-control.
5.7.1 Integral and proportional with integral formula
Formula for I-Control:
OP = (KC / TINT) × ∫ ERR dt
Formula for PI-Control:
OP = KC × (ERR + (1 / TINT) × ∫ ERR dt)
TINT is the Integral Time Constant.
Since integral control (I-Control) integrates the error over time, the control action grows larger the longer the error persists. This integration of the error takes place until no error exists. Every integral action has a phase lag of 90°. This phase shift has a destabilizing effect. For this reason, we rarely use I-Control without P-Control.
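A discrete simulation shows the integral term removing the offset. This is a sketch on an assumed first-order process (unit time constant, gain KP), with forward-Euler updates; tuning values in the test are illustrative:

```python
def simulate_pi(sp, kc, tint, kp, dt=0.1, steps=2000):
    """Run a discrete PI loop, OP = KC * (ERR + integral(ERR)/TINT),
    on a first-order process dPV/dt = KP*OP - PV. Returns the final PV.

    The integral of the error keeps growing while any error persists,
    so at steady state the error must be zero (no offset)."""
    pv, integral = 0.0, 0.0
    for _ in range(steps):
        err = sp - pv
        integral += err * dt                 # integrate the error over time
        op = kc * (err + integral / tint)    # PI control law
        pv += dt * (kp * op - pv)            # first-order process response
    return pv
```

A proportional-only loop with the same KC would settle short of the set point; here the PV settles at the set point, at the cost of the 90° phase lag (and hence reduced stability) that the integral action introduces.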
5.7.2 Integral action
Let us review a few principles of calculus and trigonometry in relation to integral calculation, especially the integration of a sine wave. Figure 5.4 shows the phase lag of the integral calculation on a sine wave. The same effect exists if integral action is used in a closed loop control system.
The integral action adds to the existing phase lag. The maximum of the integrated sine wave is when the sine wave swings back.
If we consider that a "steady-state" value exists for the ERR term, then the integral output will, at the completion of each of its time constants (TINT), increase its output value by ERR × KC, in the form of a ramp as shown in Figure 5.5(a).
5.7.3 Integral action in practice
In practice, as the Integral output increases and passes through the process the PV will move towards the SP value and the ERR term will reduce in magnitude. This will reduce the rate-of-change during the integral time interval, resulting in the classic first-order “curve” response shown in Figure 5.5(b).
If the rate of change is too high, or the value of TINT too small, then, combined with the 90° phase lag in the integral action, oscillations may occur. That is, in effect, over-correction-in-time is being applied to the offset term.
If this happens with a closed loop control system in the industry, we have a stability problem.
The conclusion that we get from this is that we have to be careful in the use of Integral-Control if we have a closed loop control system which has a tendency towards instability.
Integral-Control eliminates OFFSET at the expense of stability.
5.8 Derivative control
The only purpose of derivative control is to add stability to a closed loop control system.
The magnitude of derivative control (D-Control) is proportional to the rate of change (or speed) of the PV.
Since the rate of change of noise can be large, using D-Control as a means of enhancing the stability of a control loop is done at the expense of amplifying noise. As D-Control on its own has no purpose, it is always used in combination with P-Control or PI-Control. This results in a PD-Control or PID-Control. PID-Control is mostly used if D-Control is required.
5.8.1 Derivative formula
Formula for D-Control:
OP = K × TDER × (dERR / dt)
Tder is the Derivative Time Constant.
Again, using the principles of calculus and trigonometry in relation to the derivative calculation, especially the case of differentiation of a sine wave we can derive the following principles.
Figure 5.6 shows the phase lead of derivative calculation on a sine wave. The same effect exists if derivative action is used in a closed loop control system.
Derivative action can remove part or all of an existing phase lag.
This is theoretically achieved by the output of the derivative function going immediately to an infinite value when the ERR value is seen to change.
5.8.2 Derivative action in practice
In practice the output will change to about 8 times the value of the change in the ERR value. The output will then decay by 63.2% in every derivative time unit, as shown in Figure 5.7(a) and (b).
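This practical behavior can be modelled directly. The sketch below assumes the factor of 8 is a derivative gain limit (a common design choice, not stated explicitly in the text) and models the 63.2% decay per derivative time unit as an exponential with time constant TDER:

```python
import math

def derivative_output(err_step, kc, tder, t, gain_limit=8.0):
    """Practical derivative response to a step change in ERR at t = 0.

    Instead of the theoretical infinite spike, the output jumps to
    gain_limit * KC * step, then decays exponentially with time constant
    Tder, i.e. it loses 63.2% (1 - 1/e) of its value every Tder."""
    return gain_limit * kc * err_step * math.exp(-t / tder)
```

After one derivative time unit the output has fallen to about 37% of its initial jump, matching the 63.2% decay described above.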
5.8.3 Summary of integral and derivative functional relationships
Integration can be considered as charging a capacitor, from a constant voltage source, via a resistor. The voltage across the capacitor rises from a zero value in an exponential form. This is caused by the difference between the supply and capacitor voltage reducing in time.
Derivative action is in essence the inverse of the example for integral action. Taking a fully charged capacitor and discharging it through a resistor results in an exponential decay, as the difference in capacitor voltage reduces from its maximum value to zero.
At first glance, it would appear that the Integral and Derivative functions, one being the inverse of the other, would effectively cancel each other out. However it has to be remembered that the ERR term is dynamic and constantly changing.
There is a fairly strict ratio between TINT and TDER and the process or loop time TPROC. These relationships are explained in Chapter 8 (System Tuning Procedures).
5.9 Proportional, integral and derivative modes
Most controllers are designed to operate as PID-Controllers.
5.9.1 Enabling/disabling integral and derivative functions
If no derivative action is wanted, TDER (Derivative Time Constant) has to be set to zero.
If no integral action is wanted, TINT (Integral Time Constant) has to be set to a large value (999 minutes, for example).
Most controllers work as an I-Controller only if KC is set to zero. In such cases, a unity gain of 1 is applied to the integral action alone. The concept of a PID-Controller is shown in Figure 5.8.
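One discrete update of the textbook (non-interacting) PID law can be sketched as follows; names and the tuning values in the test are illustrative. Setting TDER to zero disables derivative action, and a very large TINT effectively disables integral action, as described in Section 5.9.1:

```python
def pid_step(err, state, kc, tint, tder, dt):
    """One update of OP = KC * (ERR + integral/TINT + TDER * dERR/dt).

    state is (integral, previous_error); returns (OP, new_state).
    TDER = 0 removes D-action; a very large TINT removes I-action."""
    integral, prev_err = state
    integral += err * dt                      # accumulate the error
    derivative = (err - prev_err) / dt        # rate of change of the error
    op = kc * (err + integral / tint + tder * derivative)
    return op, (integral, err)
```

With TINT set to a very large value (e.g. 999 minutes) and TDER = 0, the output reduces to essentially KC × ERR, i.e. a P-only controller.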
In Chapter 8, we will review the most common methods for tuning P-Controllers, PI-Controllers and PID-Controllers. At this stage you should be aware of the balancing act necessary to optimize the control action.
5.10 ISA versus “Allen Bradley”
The PID functions, considered within a digital (PLC) system, equate to a process where the output of a controller is designed to drive the process variable (PV) toward the Setpoint (SP) value. The difference between the PV and SP values is the system error value, upon which the PID functions operate. The greater the error value the greater the output signal.
The ISA (Instrument Society of America) defines a set of rules that make the P, I and D functions dependent on each other; the Allen Bradley PLC system, for example, can operate with either ISA (dependent) or independent gains.
Chapter 8 illustrates the differences.
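The standard relationship between the two gain conventions can be sketched as below. This shows the widely used conversion between the dependent (ISA) and independent forms; exact parameter names inside a given Allen Bradley instruction may differ, so treat the names here as illustrative:

```python
def dependent_to_independent(kc, ti, td):
    """Convert ISA 'dependent' gains,
        OP = KC * (E + (1/TI) * integral(E) + TD * dE/dt),
    to 'independent' gains,
        OP = Kp*E + Ki * integral(E) + Kd * dE/dt.

    Kp = KC, Ki = KC / TI, Kd = KC * TD."""
    return kc, kc / ti, kc * td
```

In the dependent form, changing KC rescales all three actions at once; in the independent form each action is tuned separately, which is the practical difference Chapter 8 explores.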
5.11 P I and D relationships and related interactions
P-Control is the principal method of control and should do most of the work.
I-Control is added carefully just to remove the OFFSET left behind by P-Control.
D-Control is there for stability only. It should be set up so that its stabilizing effect is larger than the destabilizing effect of I-Control (see Figure 5.8).
In cases where there is no tendency towards instability, D-Control is not used. This includes most flow applications.
5.12 Applications of Process Control Modes
5.12.1 Proportional mode (P)
The most basic form of control. This can be used if the resultant offset in the output is constant and acceptable. Varied by the controller gain KC.
5.12.2 Proportional and integral mode (PI)
Integral control can be added to the proportional control to remove the offset from the output. This can be used if there are no stability problems such as in a tight flow control loop.
5.12.3 Proportional, integral and derivative mode (PID)
This is a full 3-term controller, used where there is instability caused by the integral mode being used. The derivative function amplifies noise and this must be considered when using the full three terms.
5.12.4 Proportional and derivative mode (PD)
This mode is used when there are excessive lag or inertia problems in the process.
5.12.5 Integral mode (I)
This mode is used almost exclusively in the primary controller in a cascaded configuration. This is to prevent the primary controller's output from performing a “STEP CHANGE” in the event of the controller's Setpoint being moved.
5.12.6 Typical PID controller outputs
See Figure 5.9.
Digital Control Principles
6.1 Objectives
As a result of studying this chapter, and after having completed the relevant exercises, the student should be able to:
- Identify and describe the mathematical form of the most important building blocks used in industrial control;
- Describe the principles applied in computer based digital controllers;
- Indicate what a real time program is.
In order to best understand the control algorithms used in industrial control, it is appropriate to look at the building blocks first.
When selecting the type of control system required one must examine the alternatives that exist between digital and analogue systems. Digital systems are compatible with computers, distributed control systems, programmable controllers and digital controllers.
6.2 Digital vs analog: a revision of their definitions
6.2.1 Analog definition
Quantities or representations that vary over a continuous range. These variables can take an infinite number of values, whereas a digital representation of the same variables is limited to defined states or values. Analog systems represent a value more accurately, but at a cost: induced or additive noise and difficulty in accurate transmission are two of the major problems associated with this type of system.
6.2.2 Digital definition
This is a term used to define a quantity with discrete levels rather than over a continuous range.
6.3 Action in digital control loops
Digital control loops differ from continuous control loops, their analog cousins, in that the continuous controller is replaced by a SAMPLER: some form of computer performing discrete control algorithms and storing the individual results.
Action is based on comparing the difference between previous sampled value(s) and the current value, and generating an output which is used to increment or decrement the final controller output, in conjunction with any other existing digital function (P, or P + I, or P + I + D, etc.).
6.4 Identifying functions in the frequency domain
As control algorithms are often expressed in terms of f(s) which refers to a function in the frequency domain, we will review these expressions.
This text is not intended to go into the theory of the Laplace Transforms, but to provide a basic understanding of the expressions needed to understand the composition of most control algorithms. However a quick and simple revision and overview follows:
6.4.1 Laplace conceptual revision
The principle of a transform operation is to change a difficult problem into an easier problem or form that is more convenient to handle. Once the result from a transformation has been obtained an inverse transformation can be made to determine the solution to the original problem. For example: Logarithms are a transform operation by which problems of multiplication and division can be transformed into summing and negation operations.
Laplace transforms perform a similar function in the solution of differential equations. The Laplace transform of a LINEAR ORDINARY DIFFERENTIAL EQUATION results in a LINEAR ALGEBRAIC EQUATION. This is usually much simpler to solve than the corresponding differential equation.
Once the Laplace domain solution has been found, the corresponding TIME DOMAIN solution can be determined by using an inverse transformation.
The Laplace transform of a time domain function f(t) is denoted by the symbol F(s), and is defined as follows:
F(s) = λ[f(t)] = ∫ (from 0 to ∞) f(t) e^(-st) dt
where:
λ[f(t)] is the symbol for the Laplace transformation of the function in the brackets;
the variable s is a complex variable (s = a + jb) introduced by the transformation.
All time dependant functions in the time domain become functions of s in the Laplace domain (s domain).
The following example illustrates an Integrator as an integral block with its step function input in the frequency domain being represented as an Integral Calculation.
Appendix A illustrates some of the Laplace Transform Pairs.
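As a quick numerical illustration of the definition (a crude rectangular-rule approximation; the transform of the unit step f(t) = 1 is 1/s):

```python
# Numerical check of the Laplace definition F(s) = ∫(0..∞) f(t) e^(-st) dt
# for the unit step f(t) = 1, whose transform is 1/s.
import math

def laplace_numeric(f, s, t_max=50.0, dt=1e-3):
    """Crude rectangular-rule approximation of the Laplace integral."""
    total, t = 0.0, 0.0
    while t < t_max:
        total += f(t) * math.exp(-s * t) * dt
        t += dt
    return total

approx = laplace_numeric(lambda t: 1.0, s=2.0)
# approx is close to 1/s = 0.5
```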
6.4.2 Common building blocks
The most commonly used building blocks are:
Ts             Derivative Block, with Derivative Time Constant T
1 / Ts         Integral Block, with Integral Time Constant T
1 / (1 + Ts)   First Order Lag Block, with Lag Time Constant T
We can work with these blocks using the BLOCK DIAGRAM TRANSFORMATION THEOREMS also referred to as BLOCK DIAGRAM ALGEBRA.
An example of this is the building of a Lead Algorithm. The Lead Algorithm is the derivative of a Lag Algorithm, where the Derivative Time Constant (Tder) has to be significantly larger than the Lag Time Constant (Tlag).
LEAD = DERIVATIVE * LAG
LEAD = s Tder * 1 / (1 + s Tlag)
LEAD = s Tder / (1 + s Tlag)
Approaching the problem from the other direction, we will analyze existing Control Algorithms by building block diagrams with blocks using the above terms. Then we will review the way these blocks are implemented in digital computers.
In Figure 6.1 we see the block diagram of a REAL controller as used as an ultimate secondary, or field controller, driving the actual variable of the process.
The formula in terms of f(s) for the Control Algorithm of controllers, based on the block diagram in Figure 6.1 can be stated as (see Figure 6.2):
Tint    Integral Time Constant
Tder    Derivative Time Constant (Lead = α times Lag)
Alpha   α (α = 8 for training applications)
Industrial controllers use a value between 8 and 12 for α.
6.4.3 Algorithms in the frequency domain
Algorithms expressed in the frequency domain do not show any static constants. Therefore, the algorithms have to be calculated independently of any constant.
For example, such a constant could be the manual starting position of an OP value.
This coincides with the need for all dynamic control calculations to be independent of the absolute value of OP.
This requirement exists because the OP value has to be modifiable from its destination (the slave controller) if that destination is capable of Initialization.
We will review Initialization in the chapter on Cascade Control.
If no Initialization takes place, the OP value is calculated by the Controller Algorithms (Automatic Control). Every time we change from the Initialization state into Automatic Control, the OP value has to be accepted as it is. Otherwise there would be a “bump” in the OP value in changing from the initial manual state into automatic mode which could cause a process upset.
6.5 The need for digital control
There is a requirement to modify the OP value from different independent calculations, such as Initialization and automatic control, and so neither of these calculations may have control over the absolute value of OP.
These calculations are allowed to increment and decrement an existing OP value only. They do not determine the absolute value of OP.
Therefore the absolute value of OP reflects the destination value only.
6.5.1 Incremental algorithms
The OP value, for example, can show the true valve position, and no calculation is permitted to force an absolute value on OP. Only changes, that is movements of the valve position, are permitted.
This approach uses what we call an Incremental Algorithm where the control calculations calculate CHANGES and NOT ABSOLUTE VALUES.
Once this principle is established, it can be used to calculate P, I and D control in separate calculations, each incrementing (or decrementing) the OP value without knowledge of the other control mode calculations. Every calculation merely increments (or decrements) the OP and does not care about its absolute value (see Figure 6.3).
The principle of incremental OP calculation for Automatic Control based on the block diagram in Figure 6.2: the IDEAL controller
OPn = OPn-1 + ΔOPP + ΔOPI + ΔOPD
The principle of incremental OP calculation for Automatic Control based on block diagram in Figure 6.1: the REAL controller
OPn = OPn-1 + ΔOPp + ΔOPi
OPn = Output Value after current scan
OPn-1 = Output Value after the last scan time
ΔOPP = Change to output value required by the Proportional Action
ΔOPI = Change to output value required by the Integral Action
ΔOPD = Change to output value required by the Derivative Action
If in CASCADE Control (see chapter 'Cascade Control') and Initialization, the SP of a Secondary Controller drives the OP of a Primary Controller:
OP = SPS
SPS = Setpoint of Secondary Controller
The letter D (or the delta symbol Δ) has been used as a prefix for parameter names to represent the changes of parameters from one calculation to the next, as in DERR, DOP or DPV.
The time from one calculation to the next is called the scan time.
For full value representation of the parameters, no prefix has been used, as in ERR, OP or PV.
6.6 Scanned calculations
A digital computer cannot perform a number of related calculations simultaneously. A series of repeated calculations is thus made.
If the repetition interval between calculations is constant, we call it a fixed scan time.
A fixed scan time is used in all controllers designed for continuous (modulating) control.
If the scan time is not constant as with some Programmable Logic Controllers (PLCs), the scan time has to be calculated for each scan of the computer system.
This is especially important, since all time constants used for the actual scanned (repetitive) calculation have to be used in units of scan.
Therefore to summarize for Scanned (repetitive) Calculations:
All Time Constants are in UNITS OF SCAN.
All Time Constants must be far greater than the scan time to ensure that the digital calculation is equivalent, or a good approximation, to an analog calculation.
6.7 Proportional control
Let us compare the general formula shown before with the formula used for Incremental P-Control:
OP = K * ERR + MANUAL
dOP/dt = K * dERR/dt
Note that we have lost our constant MANUAL. This makes this algorithm a dynamic calculation only. If the process reaction is insignificant between scan times, we can simplify the calculation into a difference calculation with the interval of scan time:
ΔOP = K * ΔERR
ΔERR is the change of error from the last scan to the present scan. ΔERR in a difference equation is the equivalent of dERR/dt in a differential equation.
6.8 Integral control
Let us compare the general formula shown before with the formula used for Incremental I-Control:
dOP/dt = (K/TINT) * ERR
If the process reaction is insignificant between scan times, we can simplify the calculation into a difference calculation with the interval of scan time:
ΔOP = (K / TINT) * ERR
TINT[scan units] = (TINT[min] * 60) / SCAN[sec]
TINT has to be in units of scan, (or number of scans); not in minutes or seconds. For example, if the interval of repeated calculation (scan time) is 0.5 seconds and TINT is 1.5 minutes or 90 seconds, then TINT in units of scan is 180. Put another way, TINT is 180 units each of 0.5 seconds duration.
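The conversion above can be expressed as a one-line helper (a sketch using this section's own example figures):

```python
# Convert a time constant given in minutes into "units of scan".
def to_scan_units(t_minutes, scan_seconds):
    return (t_minutes * 60.0) / scan_seconds

# The example in the text: scan time 0.5 s, TINT = 1.5 min (90 s)
tint_scans = to_scan_units(1.5, 0.5)   # 180 units of scan
```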
6.9 Derivative control
Let us compare the general formula shown before with the formula used for Incremental D-Control:
OP = K * TDER (dERR/dt)
dOP/dt = K * TDER (d2ERR/dt2)
If the process reaction is insignificant between scan times, we can simplify the calculation into a difference calculation with the interval of scan time:
ΔOP = K * TDER * Δ(ΔERR)
TDER[scan units] = (TDER[min] * 60) / SCAN[sec]
TDER has to be in units of scan, (or number of scans).
Δ(ΔERR) is the change of the change of error from the last scan to the present scan.
Δ(ΔERR) in a difference equation is the equivalent of d2ERR/dt2 in a differential equation.
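The three difference equations of sections 6.7 to 6.9 can be combined into one incremental (velocity-form) update. The sketch below follows this chapter's notation and assumes the time constants have already been converted into units of scan:

```python
# Incremental (velocity-form) PID, per the difference equations of
# sections 6.7-6.9: the algorithm computes CHANGES to OP, never an
# absolute value, which is what allows bumpless mode changes.

def incremental_pid(op_prev, err, err_1, err_2, k, tint, tder):
    """
    op_prev : OP after the last scan
    err     : current error, ERRn
    err_1   : error one scan ago, ERRn-1
    err_2   : error two scans ago, ERRn-2
    k, tint, tder : gain and time constants (tint, tder in units of scan)
    """
    d_op_p = k * (err - err_1)                     # ΔOPP = K * ΔERR
    d_op_i = (k / tint) * err                      # ΔOPI = (K/TINT) * ERR
    d_op_d = k * tder * (err - 2*err_1 + err_2)    # ΔOPD = K*TDER*Δ(ΔERR)
    return op_prev + d_op_p + d_op_i + d_op_d

# With a constant error only the integral term moves the output:
op = incremental_pid(op_prev=50.0, err=2.0, err_1=2.0, err_2=2.0,
                     k=1.0, tint=100.0, tder=10.0)
# op = 50.0 + 0 + 0.02 + 0 = 50.02
```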
6.10 Lead function as derivative control
The real algorithm used for the field controller does not use the idealistic and mathematically simplest approach. Instead of a mathematically defined derivative action, the field controller uses a Lead Algorithm for derivative control. The block diagram is shown in Figure 6.5. The formula in terms of f(s) for the Control Algorithm of a field controller using a Lead Algorithm is shown below (see Figure 6.4):
The LEAD part, acting for DERIVATIVE control is explained in detail in Figure 6.5 below:
If we consider α = 8, this means the derivative action is 8 times more powerful than the low-pass filter.
This approach keeps the adverse effect noise has on the derivative term to an acceptable minimum.
6.11 Example of incremental form (Siemens S5 - 100V)
Change of output for Proportional Action: ΔOPP = K x R x (XWK - XWK-1)
(see Figure 6.6)
Change of output for Integral Action: ΔOPI = (K / TINT) x XWK
Change of output for Derivative Action:
ΔOPD = K x TDER x ((XWK - XWK-1) - (XWK-1 - XWK-2))
     = K x TDER x (XWK - 2XWK-1 + XWK-2)
Real and Ideal PID Controllers
7.1 Objectives
As a result of studying this chapter, and after having completed the relevant exercises, the student should be able to:
- Select the correct PID-Control algorithm for field interaction and for computer optimized calculations;
- Clearly distinguish between process noise and control loop instability, which are often similar in appearance;
- List the correct sequence of steps to handle the different problems of noise and instability.
7.2 Comparative descriptions of real and ideal controllers
The Ideal PID-Controller is not suitable for direct field interaction and is therefore called the Non-Interactive PID-Controller. It is highly responsive to electrical noise on the PV input if the Derivative function is enabled.
The Real PID-Controller is especially designed for direct field interaction and is therefore called the Interactive PID-Controller. Due to internal filtering in the Derivative block, the effects of electrical noise on the PV input are greatly reduced.
7.3 Description of the IDEAL or the non-Interactive PID controller
The Non-Interactive form of controller is the classical teaching model of PID algorithms. It gives a student a clear understanding of P, I and D control, since P-Control, I-Control and D-Control can be seen independently of each other.
Then, PID is effectively a combination of independent P, I and D-control actions. This can be seen in Figure 7.1.
Since P, I and D algorithms are calculated independently in an Ideal PID-Controller, this form of controller is recommended if an ideal process variable exists.
7.3.1 Ideal process variables
An ideal process variable is a noise-free, refined and optimized variable, typically the result of computer optimization, process modeling, statistical filtering and value prediction algorithms.
These types of ideal process variables do not come from field sensors.
In these cases, it is of great benefit that the actual formula of the Ideal PID algorithm is simple, as shown in Figure 7.2 below:
7.4 Description of the real (interactive) PID controller
The Interactive form is the PID algorithm used for direct field control; that is, either or both of its input (PV) and output (MV) are directly connected to field or process equipment. It is designed to cope with any electrical noise induced into its circuits by equipment in the plant or factory.
Full understanding of the Interactive PID algorithm is rather difficult, since P-Control, I-Control and D-Control cannot be seen independently from each other. Therefore, Interactive PID is not just a sum of independent P, I and D control.
This can be seen in Figure 7.3.
Since the Interactive PID-Controller makes use of a Lead algorithm rather than using the classical mathematical derivative, it is best suited for real (field) process variables.
7.4.1 Real process variables (field originated)
A real process variable carries electrical noise that comes from field sensors or the connecting cables.
It is therefore of great benefit that the PID algorithm has some noise reduction built in.
The formula below represents an Interactive PID algorithm (see Figure 7.4):
7.5 Lead function - derivative control with filter
The following is an extract from chapter 6 (Digital Control Principles) to remind us of the LEAD part acting as a Derivative function.
The field controller uses a Lead Algorithm for derivative control. The block diagram is shown in Figure 7.5.
7.6 Derivative action and effects of noise
The most important difference between Non-Interactive and Interactive PID controllers is the different impact noise has on a controller’s output.
It must be remembered that derivative control multiplies noise.
7.6.1 Introduction to filter requirements
Both Non-Interactive PID and Interactive PID controllers make use of a noise filter for process noise (known as the process variable filter time constant TD).
Since the derivative control of a Non-Interactive PID has no noise suppression of its own, noise will always be a major problem, even though a Process Variable filter may be used.
Since the derivative control of an Interactive PID already has some noise suppression of its own, noise is not so much of a problem, and is even less if a process variable filter is used.
It is recommended that a PV filter should be used in all cases where derivative control is being used. The author has observed numerous derivative control systems having excessive movement of the controller outputs due to the lack of PV filters.
This type of problem is often incorrectly interpreted by personnel (in industrial plants) as being a problem of stability.
Hence an important rule is:
Make a clear distinction between noise and instability in industrial control applications.
As discussed earlier, noise and instability require treatment with different methodologies, as they are totally different problems.
Remember, a process variable filter, due to its LAG action, reduces noise but may add to loop instability.
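A first-order (lag) PV filter of the kind described here can be sketched as an exponential smoother; the relation alpha = scan / (TD + scan) is a common discretization of a first-order lag and is assumed here:

```python
# First-order (lag) PV filter: reduces noise at the cost of added lag.
import random

def first_order_filter(samples, td, scan):
    """td: filter time constant TD, scan: scan time (same units)."""
    alpha = scan / (td + scan)          # 0 < alpha <= 1; smaller = heavier filtering
    out, filtered = [], samples[0]
    for x in samples:
        filtered += alpha * (x - filtered)   # move a fraction toward the new sample
        out.append(filtered)
    return out

random.seed(1)
noisy = [10.0 + random.uniform(-1.0, 1.0) for _ in range(200)]  # PV = 10 plus noise
smooth = first_order_filter(noisy, td=5.0, scan=0.5)
# The filtered trace stays much closer to 10.0 than the raw noisy signal.
```

The same LAG that suppresses the noise also delays the PV seen by the controller, which is the instability trade-off noted above.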
7.7 Example of the KENT K90 controllers PID algorithms
Derivative Control =
Tuning of PID Controllers in Both Open and Closed Loop Control Systems
8.1 Objectives
As a result of studying this chapter, and after having completed the relevant exercises, the student should be able to:
- Apply the procedures for open and closed loop tuning;
- Calculate the tuning constants according to Ziegler & Nichols and according to Pessen.
- Demonstrate how to perform fine-tuning of closed loop control systems.
8.2 Objectives of tuning
There are often many and sometimes contradictory objectives, when tuning a controller in a closed loop control system. The following list contains the most important objectives for tuning of a controller:
Minimization of the integral of the error
The objective here is to keep the area enclosed by the two curves (the SP and PV trends) to a minimum.
This is the aim of tuning, using the methods developed by Ziegler and Nichols as illustrated in Figure 8.1.
Minimization of the integral of the error squared
As Figure 8.2 shows, it is possible to have a small area of error but an unacceptable deviation of PV from SP for a short time. In such cases special weight must be given to the magnitude of the deviation of PV from SP. Since the weight given is proportional to the magnitude of the deviation, the error is multiplied by itself, giving error squared (error squared = error x weight). Many modern controllers with automatic and continuous tuning work on this basis.
In most cases fast control is a principal requirement from an operational point of view. However, this is mainly achieved by operating the controller with a high gain, which quite often results in instability or prolonged settling times following process disturbances. A careful balance needs to be struck between the Proportional gain (KC) and the settings of the Integral and, particularly, the Derivative time constants (TINT and TDER respectively).
Minimum wear and tear of controlled equipment
A valve or servo system, for instance, should not be moved unnecessarily often, unnecessarily fast, or into extreme positions. In particular, the effects of noise, excessive process disturbances and unrealistically fast control have to be considered here.
Continual “hunting” of the PV against the SP can cause a proportion of this movement (its magnitude depending on the controller gain) to appear on the controller's output. In many cases this causes the driven actuator to “vibrate”, which is quite often misconstrued as “noise” when in fact it is caused by the gain of the controller, and hence of the entire loop, being set too high in an attempt to “speed up” the response to the process (see 8.2.1).
No overshoot at start up
The most critical time for overshoot is the time of start up of a system. If we control an open tank, we do not want the tank to overflow as a result of overshoot of the level. More dramatically, if we have a closed tank, we do not want the tank to burst. Similar considerations exist everywhere, where danger of some sort exists. A situation of a tank having a maximum permissible pressure that may not be exceeded under any circumstances is an example here.
Start Up is not the equivalent of a change of Setpoint.
Minimizing the effect of known disturbances
If we can measure disturbances, we may have a chance to control these before the effect of them becomes apparent. See Feed Forward Control for an example of an approach to this problem.
8.3 Reaction curve method (Ziegler Nichols)
The reaction curve method of tuning relies on making a step change to the output of a controller and recording the process response. This method can be considered as an OPEN LOOP approach, as the controller is NOT used in any way except for changing the OP value (in Manual Mode) to give the process the required step-change to the MV.
The criteria we need to record are:
- The Effective LAG, or how long after the step change is made does a noticeable change occur in the PV.
- The process reaction rate, or the maximum rate of change that occurs, as represented by the change in the PV value.
- The time taken for the PV to reach 63.2% of its maximum value.
There are many variants of this tuning method, all utilizing the results from this reaction curve record. Three of the most common are discussed following the next section on how to generate a record of a system's reaction.
8.3.1 The procedure to obtain an OPEN LOOP reaction curve
Recording the PV response
Connect some form of recorder to the input (PV) signal to the controller. The recorder should ideally be capable of displaying 2 channels of information, the PV from the system into the controller, and the SP movement of the controller.
The record has to be plotted against a 0 to 100% vertical PV scale and a reasonably fast horizontal scale calibrated in minutes and FRACTIONS of minutes (not seconds). The vertical scale should be adjustable if using a paper strip recorder, so that the resultant change of the PV value covers as big a span as possible across the chart; this is required for measurement accuracy.
Place the controller in MANUAL MODE. This ensures that we have an open loop in which the controller's action has no influence whatsoever when the PV value moves; we are not interested in the controller's behavior, but only in the process's reaction characteristics.
Changing the process
When we make a step change to the output value of the controller, an appropriate reaction from the process will occur, appearing as a change-in-time of the PV value. This is the reaction characteristic of the process (see Figure 8.3).
We must have enough process knowledge to know by how much we can change the output value of the controller without danger to the process itself.
Obtaining and analyzing the reaction curve
Observe the record of the reaction of the process. The plot we require is shown in Figure 8.3, where we can observe and measure the indicated parameters that are required to enable calculation of the P, I and D components of the controller; these being some or all of the ones listed below depending on which analysis method you select to use.
The point in time when the OP value was changed (the exact amount of this change is NOT important; it should simply be as large as possible without adversely affecting the process).
The time (in minutes and fractions of minutes) that elapses before a NOTICEABLE change is seen in the PV; this being measured as L or Effective Lag.
The point of Inflection (POI) on the PV curve.
The point where the PV has changed by 63.2% (which is NOT necessarily the POI), to enable calculation of the LTC (Loop Time Constant).
We cannot calculate the tuning constants before we have analyzed the curve using a few common-sense considerations. The effective lag time (L) will be the principal component of the Integral time (TINT) value. The slope, or rate of change, of the process (N) will be the major factor influencing the controller gain setting KC, as it represents the gain or sensitivity of the process itself.
This leaves the derivative time constant to be determined (TDER) and as this is introduced to correct the destabilizing effect of the Integral action, a relationship between TDER and TINT must exist.
Ziegler and Nichols have derived formulae for optimum tuning that take into account, and relate, the P, I and D values to each other. The optimum tuning obtained with these formulae is aimed at minimizing the integral of the error (minimum area of error).
It does not take into account the magnitude of the error. The optimum tuning constants are invariably based on processes with a small dead time and a first order lag.
As mentioned at the beginning of this section, there are three variations to this tuning method, sections 8.4, 8.5 and 8.6 describe each of these in detail.
8.4 Ziegler Nichols open loop tuning method (1)
From Figure 8.4, we have to derive a value for the effective Lag (L), the time taken in decimal minutes until a noticeable rate of change is observed, and a value of N (the slope of the PV at the point of maximum rate of change).
From these two values we can calculate the tuning constants for P, PI and PID controllers according to the following Ziegler-Nichols formulae.
Ziegler Nichols P control algorithm
Note that we obtain different tuning constants with the different combinations of control modes; and that a relationship exists between them that is echoed through the different modes shown here.
8.4.1 Ziegler Nichols PI control algorithm
If we need integral action, the gain of the controller is reduced by 10%, and the integral time constant (introduced to help eliminate the “offset” between SP and PV in the ERR term) is set at 3 times the lag period (L, in minutes). As the Integral output is summed with the Proportional output, the controller gain KC can be reduced slightly, making the loop more stable. The loss in output resulting from this is gradually made up, over the integral time TINT, by the integral action.
8.4.2 Ziegler Nichols PID control algorithm
Next, if we need to introduce some help in stabilizing the loop we introduce the derivative control. In doing this we see that the controller gain is increased by 20%. The integral time is made 33% faster (or shorter) and the derivative time constant is four times faster, or shorter, than the Integral time.
Put another way, the relationship between TINT and TDER is 4:1.
Tint = 2 x L (mins.)
Tder = 0.5 x L (mins.)
8.4.3 Examples of Ziegler Nichols P, I and D open loop control algorithms
If we substitute the following values:
ΔOP = 12.5%
N = 35% per minute
L = 0.65 Minutes.
The Settings for P, I and D can be summarized as follows:
MODE KC TINT TDER
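The settings can be reproduced from the rules stated above. The base P-mode gain formula KC = ΔOP / (N x L) is the standard Ziegler-Nichols reaction-curve form and is assumed here, since this section states only the relative changes between modes (PI: gain x 0.9, TINT = 3L; PID: gain x 1.2, TINT = 2L, TDER = 0.5L):

```python
# Ziegler-Nichols open-loop (reaction curve) settings.
# Base formula KC = dOP / (N * L) is the standard Z-N form (assumed);
# the mode-to-mode relationships follow the rules in this section.

def zn_open_loop(d_op, n, l):
    kc = d_op / (n * l)                     # base proportional-only gain
    return {
        "P":   {"KC": kc},
        "PI":  {"KC": 0.9 * kc, "TINT": 3.0 * l},
        "PID": {"KC": 1.2 * kc, "TINT": 2.0 * l, "TDER": 0.5 * l},
    }

# Using the example values above: dOP = 12.5%, N = 35%/min, L = 0.65 min
settings = zn_open_loop(d_op=12.5, n=35.0, l=0.65)
# P gain ~0.55; PID: KC ~0.66, TINT = 1.3 min, TDER = 0.325 min
```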
8.5 Ziegler-Nichols open loop method (2) using POI
This version of the method of deriving the Gain, Integral and Derivative times uses the same response curve, but obtained in a slightly different manner to the previous example.
It is used where the process is controlled by a valve.
To obtain the process curve, the following procedure is used:
- Bring the process to a desired setpoint on MANUAL control.
- Change the VALVE POSITION by a small amount. The change should be large enough to produce a measurable response in the process, but not large enough to drive the process beyond its normal operating range. A 5% valve change is a good starting point.
- Measure N and L on the process response curve.
The POI (Point of Inflection), the point of maximum rate of change, is determined on the PV curve, and a tangential line is drawn through it, down through the horizontal axis and on until it crosses the vertical axis (the time at which the step change was made) (see Figure 8.5).
From this, the constants PGU and TU can be calculated.
Controller settings are determined from Table 8.1:
Controller                                Gain KC    Integral Time TINT    Derivative Time TDER
Proportional only                         0.5 PGU    -                     -
Proportional Integral                     0.45 PGU   0.83 TU               -
Proportional Derivative (see note below)  0.71 PGU   -                     0.51 TU
Proportional Integral Derivative          0.6 PGU    0.5 TU                0.125 TU
The settings for a PD controller do NOT originate from the original Ziegler-Nichols paper.
It should be noted that a similar relationship of gain and integral / derivative times exists between this method and the previous one.
The gain KC in P mode = 0.5 PGU, in PI mode = 0.45 PGU, and in PID mode = 0.6 PGU; that is, the gain ratios relate as 1 : 0.9 : 1.2.
In PID Mode the ratio of TINT to TDER is again 4:1 (0.5 : 0.125).
Using this method, the Slope or Rate-of-Change is quite often much easier to evaluate from a recorded chart.
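As a sketch, applying the PID column of Table 8.1 (KC = 0.6 PGU, TINT = 0.5 TU, TDER = 0.125 TU) for some illustrative PGU and TU values:

```python
# PID settings from Table 8.1 (0.6 PGU, 0.5 TU, 0.125 TU).
def poi_pid_settings(pgu, tu):
    return {"KC": 0.6 * pgu, "TINT": 0.5 * tu, "TDER": 0.125 * tu}

# Illustrative (made-up) values: PGU = 2.0, TU = 4.0 minutes
s = poi_pid_settings(pgu=2.0, tu=4.0)
# KC = 1.2, TINT = 2.0 min, TDER = 0.5 min (the TINT:TDER ratio is 4:1)
```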
8.6 Loop time constant (LTC) method
This method of tuning, as in the previous two examples, makes use of the reaction curve and is applicable when the system has a first order lag response, as defined by a linear first order differential equation.
This equation is expressed as:
τ (dc/dt) + c = K r
where:
c = output
r = input
K = gain
τ = time constant
Inspection of a first order response curve will show that its slope is always falling off; i.e. the rate of response is at a maximum at the very beginning and decreases continuously from then onward. If the system continued to change at its maximum response rate, the rate that occurs at the origin, it would reach its final value (100%) in one time constant (τ).
Figure 8.6 illustrates a first order curve, derived from a step input. This curve gives numerical values to the change, and in the first period of time (in our case the Integral time constant set by TINT) the change equals 63.2%. In the second time period 63.2% of the remaining 36.8% will take place, and so on in every time interval. Theoretically the response never reaches 100%, but it does approach it asymptotically.
By measuring both the Loop Deadtime and the Loop Time Constant (the time from a noticeable change in the PV value to the time, in minutes, at which the PV has changed by 63.2%), as shown in Figure 8.7, the following can be determined:
PG (controller gain) = 1 / PG (open loop process gain)
IG (integral time) = LTC
DG (derivative time) = 0.25 x IG
8.7 Hysteresis problems that may be encountered in open loop tuning
In the real operational world it is good practice to perform open-loop tuning with as big a step as possible, over the normal operating range, and in both directions, i.e. after making say a 20% STEP UP and recording the systems response, return the output back to its original starting value and again record the systems reaction response.
In most systems the incremental and decremental responses will be different. If this difference is only a few percent (5% to 6% or less), take the average values of the two recordings and apply the results to the tuning algorithms being used.
If the differences are large, then tuning to either response can lead to instability or poor control when the process moves in the direction that was NOT used for tuning. Re-engineering of the process system itself, or the introduction of corrective algorithms, will be required in order to reduce the hysteresis to an acceptable level. An example of one method to correct this problem is illustrated in Chapter 11, where correcting the time difference between heating and cooling a boiler is discussed.
The PID controller itself cannot be set or tuned to alleviate this type of problem.
8.8 Continuous cycling method (Ziegler Nichols)
This method of tuning requires that we determine the critical value of controller Gain (KC) that will produce a continuous oscillation of a control loop. This will occur when the total loop gain (KLOOP) is equal to one. The controller gain value (KC) then becomes known as the Ultimate gain (KU).
Chapter 5, sections 5.5 and 5.5.1, describe the requirements needed for a system to be considered stable.
We have to remember here that the loop is made up of several component parts, all of which contribute to the total gain of the loop (KLOOP), and the only one that we can adjust is normally the controller’s gain (KC).
If we consider a basic liquid flow control loop utilizing:
- A venturi flow meter with a 4-20 mA output, feeding
- A PID controller, which in turn has a 4-20 mA output that feeds
- A valve actuator that in turn varies the flow rate of
- The process.
When the product of the gains of all four of these component parts equals one, the system will become unstable when a process disturbance (such as a set-point change) occurs. It will then oscillate continuously at its natural frequency, which is determined by the process lag and response time; the sustained oscillation is caused by the loop gain being one.
For example, if the system listed above, had the following gain characteristics:
Venturi Gain = 0.75
Control Valve Gain = 1.12
Process gain = 0.98
Then the PROCESS gain (as “seen” by the controller) is calculated as:
0.75 x 1.12 x 0.98 = 0.8232.
With KP equal to 0.8232, to make KLOOP equal 1 the value of KC has to be 1 / 0.8232 = 1.215,
giving KLOOP = 0.75 x 1.12 x 0.98 x 1.215 = 1
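The arithmetic of this worked example can be checked directly:

```python
# Gains of the loop components from the worked example above.
venturi_gain = 0.75
valve_gain = 1.12
process_gain = 0.98

# Process gain as "seen" by the controller:
kp = venturi_gain * valve_gain * process_gain  # 0.8232

# Controller gain needed to make the total loop gain equal to one:
kc = 1.0 / kp                                  # ~1.215

k_loop = kp * kc
print(round(kp, 4), round(kc, 3), round(k_loop, 6))  # 0.8232 1.215 1.0
```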
In order to observe the process dynamic characteristics only, no Integral or Derivative control must be used while determining the value of KC that gives a total loop gain, KLOOP, equal to one (as explained below); this ensures that no “corrupting” phase shift is introduced by the controller.
We can then measure the period of one cycle of the oscillation; this is the ultimate period PU.
In addition, we know that the final value of KC is the critical gain of the controller (KU). This gain value when multiplied with the unknown process Gain(s), will give a Loop Gain, KLOOP, of 1.
From there we can stabilize the loop by reducing the value of KC.
8.8.1 The stages of obtaining closed loop tuning (continuous cycling method)
1. Put Controller in P-Control Only
In order to avoid the controller influencing the assessment of the process dynamic, no Integral or Derivative control should be active. Make TINT = 999 and TDER = 0.
2. Select the P-Control to ERR = (SP - PV)
Make sure that P-Control is working with PV changes as well as with SP changes. This enables us to make changes to the ERR term, and hence the controller output, by changing the SP value.
3. Put the Controller into Automatic Mode
We need a closed loop situation to obtain continuous cycling at the critical gain setting.
4. Make a Step Change to the Setpoint
To observe how the PV settles after a disturbance, change the SP value to simulate one. Before making this step change to the SP make sure the process is steady with only minor dynamic fluctuations visible.
5. Actions based on the Observation
If any oscillations that occur settle down quickly (or indeed there are no oscillations at all), then increase the value of KC. The amount of increase to KC depends on the rate and magnitude of change of the PV as a result of the last SP change.
Then repeat 4 above, returning the Setpoint back to its original value.
When oscillations appear, and if they seem to be increasing in amplitude, terminate the exercise immediately and reduce the value of KC to enable the process to stabilize. The total loop gain was >1, hence it amplified the SP change value.
Repeat the exercise again, being more cautious with high values of KC.
6. Conclusion of Tuning Procedure
Once you obtain continuous cycling of the process, measure the cycle time and the value of KC that produced it. This time is the Ultimate Period (PU), and the value of KC is the Ultimate Gain (KU).
REDUCE THE VALUE OF KC BY 50% TO STOP THE OSCILLATIONS AND RETURN THE SP TO ITS ORIGINAL VALUE TO STABILISE THE PROCESS.
8.8.2 Calculation of tuning constants (continuous cycling method)
We will obtain different tuning constants with P, PI and PID control modes. However, your attention is drawn to the fact that the same relationships discovered in the Reaction Curve Method of tuning re-appear here.
Controller settings are determined from Table 8.2:
|Controller|Proportional only|Proportional + Integral|Proportional + Integral + Derivative|
|---|---|---|---|
|Gain KC|KC = 0.5 x KU|KC = 0.45 x KU|KC = 0.6 x KU|
|Integral Time TINT|n/a|TINT = PU / 1.2|TINT = PU / 2|
|Derivative Time TDER|n/a|n/a|TDER = PU / 8|
THERE ARE NO VALUES GIVEN FOR PD CONTROL WITH THIS METHOD, BUT THE RATIOS USED FOR OPEN-LOOP TUNING CAN BE APPLIED IF REQUIRED.
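The Table 8.2 relationships can be captured in a small helper function (a sketch; the KU and PU values used here are hypothetical measurements from a continuous-cycling test):

```python
def ziegler_nichols_closed_loop(ku, pu):
    """Ziegler-Nichols continuous-cycling settings from the ultimate
    gain KU and ultimate period PU (Table 8.2)."""
    return {
        "P":   {"Kc": 0.5 * ku},
        "PI":  {"Kc": 0.45 * ku, "Tint": pu / 1.2},
        "PID": {"Kc": 0.6 * ku, "Tint": pu / 2.0, "Tder": pu / 8.0},
    }

# Hypothetical measured values from a continuous-cycling test:
settings = ziegler_nichols_closed_loop(ku=2.0, pu=4.0)
print(settings["PID"])  # {'Kc': 1.2, 'Tint': 2.0, 'Tder': 0.5}
```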
8.9 Damped cycling tuning method
This method is a variation of the Continuous Cycling Method. It is used whenever continuous cycling imposes danger to the process, but a damped oscillation of some extent is acceptable.
The steps of closed loop tuning (Damped Cycling Method) are as follows:
8.9.1 Tuning method
1. Put the Controller into P-Control Only
In order to avoid the controller influencing the assessment of the process dynamics, no I-Control or D-Control must be active.
2. P-Control on ERR = (SP - PV)
Make sure that the P-Control is working with PV changes as well as with SP changes. This enables us to make changes to the ERR term by changing the SP value.
3. Put Controller in Automatic Mode
We need a closed loop situation to obtain damped cycling.
4. Step Change to the Setpoint
A step change to the SP causes a disturbance and we observe how the PV settles. Before making a step change to the SP, the process must be steady with only minor dynamic fluctuations visible.
5. Actions based on the Observation
If any oscillations that occur settle down quickly (or indeed there are no oscillations at all), then increase the value of KC.
The amount of increase to KC depends on the rate and magnitude of change of the PV as a result of the last SP change.
Then repeat 4 above, returning the Setpoint back to its original value.
When oscillations appear, and if they seem to be increasing in amplitude, terminate the exercise immediately and reduce the value of KC to enable the process to stabilize. The total loop gain was >1, hence it amplified the SP change value.
Repeat the exercise again being more cautious with high values of KC.
When a damped oscillation is obtained, as shown in Figure 8.8, note the value of KC; this is now denoted KD. Then terminate the test by reducing the value of KC. KD is used to determine the gain later in this exercise.
By measuring the amplitude of the first overshoot and dividing it by the amplitude of the second overshoot, the Decay Ratio P is found:
The time (in minutes) between these two measured points gives a value for Pd (Period of damping).
Then calculate the damping ratio δ from:
In most cases a damping factor δ of around 0.5 is acceptable for a damped oscillation.
We then need to evaluate Rd from where PU represents the Ultimate Period and P represents the Actual Period:
This leads to the following formula for TINT and TDER:
Next, we have to turn our attention to calculating the gain setting for the controller (KC). Some manuals inform us that KC is determined by good operator judgment; however, this would be very much a hit-and-miss approach.
As we have the value of controller gain (KD) that was used to obtain the damped cycle response from which the Integral and Derivative time constants were evaluated, we can use this to obtain a value for KU.
First we need to calculate the Overshoot Ratio, this is the result of
We then calculate Ku from:
Achieving a value for KU will let us use the Ziegler Nichols closed loop formulas. These being:
Figure 8.9 gives a graphical representation to obtain the damping ratio directly from the % Overshoot that occurred in the PV as a result of a step change made to the controller output.
8.9.3 Step responses
Figure 8.10 is used to determine the Ultimate Period (PU) from the Damped Cycle Period (P); (Courtesy of J.P.Stiekema).
8.10 Tuning for no overshoot on start up (Pessen)
This method is a variation of the Continuous Cycling Method and is used whenever no overshoot is permitted, even in the extreme case of starting up the process. By start up, we mean the transition from manual to automatic control.
An extreme start up situation exists if the Setpoint and PV are very different when changing from manual to automatic control. In contrast to a change of Setpoint, the change from manual to automatic control does not cause a step change in ERR. Therefore, the change does not directly affect P or D-Control.
An example for applying this tuning procedure, according to Pessen, is a closed tank that could burst or an open tank that could overflow.
The steps of closed loop tuning for No Overshoot are the same as those for the Continuous Cycling Method (Ref. 8.8.1).
The formulas developed for this case by Pessen are as follows:
|Kc|=|0.2 x Ku|
|Tint|=|0.5 x Pu|
|Tder|=|Pu / 3|
8.11 Tuning for some overshoot on start up (Pessen)
This method is a variation of the Continuous Cycling Method. It is used whenever no overshoot during normal modulating control is desired, but some overshoot at start up is acceptable.
The steps of closed loop tuning for Some Overshoot are the same as those for the Continuous Cycling Method (Ref. 8.8.1). The formulae developed for this case by Pessen are as follows:
8.11.1 The tuning constants
|Kc|=|0.33 x Ku|
|Tint|=|0.5 x Pu|
|Tder|=|Pu / 3|
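Both Pessen variants can be sketched together as follows. The Kc factors are those given above; the time constants Tint = 0.5 x Pu and Tder = Pu / 3, shared by the two cases, are the commonly published Pessen values and are an assumption here:

```python
def pessen_settings(ku, pu, overshoot="none"):
    """Pessen settings from the ultimate gain KU and ultimate period PU.
    Kc differs between the two cases; the time constants are shared
    (assumed common published values)."""
    kc = 0.2 * ku if overshoot == "none" else 0.33 * ku
    return {"Kc": kc, "Tint": 0.5 * pu, "Tder": pu / 3.0}

# Hypothetical KU and PU from a continuous-cycling test:
print(pessen_settings(2.0, 4.0, overshoot="none"))
print(pessen_settings(2.0, 4.0, overshoot="some"))
```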
8.12 Summary of important closed loop tuning algorithms
8.13 PID equations: dependent and independent gains
The General PID Equation as applicable to digital (PLC) systems is the sum of four terms:
OP = Proportional + Integral + Derivative + Bias (Manual) value.
This equation can be represented in two ways: the ISA (Instrument Society of America) form, with dependent gains, or the independent gains form.
In the independent gains equation, as the name suggests, all three PID terms operate independently. In the ISA equation a change in the proportional term also affects the Integral and Derivative terms (see Figure 8.11).
8.13.1 ISA equation
The ISA equation is interactive. That is, it contains dependent terms that mean if the controller gain KC is changed, the Integral and Derivative terms also change.
|Kc|=|Controller Gain Constant (unitless)|
|Tint|=|Integral Time Constant (minutes per repeat)|
|Tder|=|Derivative Time Constant (minutes)|
|dt|=|Time between samples (minutes)|
|Bias|=|Feedforward or Output bias|
|E|=|Error = PV - SP or SP - PV|
|PV(n-1)|=|PV value from last sample|
|E(n-1)|=|Error value from last sample|
8.13.2 Independent gains equation
This equation is non-interactive. As such P, I and D terms are adjusted independently.
|Kp|=|Proportional Gain Constant (unitless)|
|Ki|=|Integral Gain Constant (1/sec)|
|Kd|=|Derivative Gain Constant (seconds)|
|dt|=|Time between samples (seconds)|
|Bias|=|Feedforward or Output bias|
|E|=|Error = PV - SP or SP - PV|
|PV(n-1)|=|PV value from last sample|
|E(n-1)|=|Error value from last sample|
The ISA and Independent Gains constants can be compared as follows:
|ISA Constants|Independent Gains Constants|
|---|---|
|Controller Gain KC (unitless)|Proportional Gain Kp (unitless)|
|Reset Term TINT (minutes per repeat)|Integral Gain Ki (inverse seconds)|
|Rate Term TDER (minutes)|Derivative Gain Kd (seconds)|
To convert from ISA terms to independent gain terms:
Kp = Kc (unitless)
Ki = Kc / (Tint x 60) (inverse seconds)
Kd = Kc x Tder x 60 (seconds)
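These conversions can be expressed directly in code (a sketch; the Ki line follows from the units tabulated above, since a TINT in minutes per repeat corresponds to an integral gain in inverse seconds, and the example constants are arbitrary):

```python
def isa_to_independent(kc, tint_min, tder_min):
    """Convert ISA (dependent-gains) constants to independent-gains
    constants.  TINT is in minutes per repeat and TDER in minutes;
    the factor of 60 converts the time units to seconds."""
    kp = kc                      # unitless
    ki = kc / (tint_min * 60.0)  # inverse seconds
    kd = kc * tder_min * 60.0    # seconds
    return kp, ki, kd

# Arbitrary example constants:
kp, ki, kd = isa_to_independent(kc=2.0, tint_min=0.5, tder_min=0.1)
print(kp, round(ki, 4), kd)  # 2.0 0.0667 12.0
```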
Proportional Band Applications PB%
9 Controller Output Modes, Operating Equations and Cascade Control

9.1 Objectives

When you have completed this chapter you should be able to:
- Demonstrate a clear understanding of controllers with multiple and independent outputs;
- Clearly distinguish between Saturation and Non-Saturation Output Limits;
- Describe the concept and strategy of cascade control;
- Select, and apply correctly, the controller options of Initialization, PV-Tracking and Type of Control Equation;
- Describe the concept of cascade control with multiple secondaries;
- Demonstrate how to tune all controllers within a cascade control system.
9.2 Controller output
In order to enable controllers to be cascaded together, certain system design features have to be available. The most important one centers on the output section of a controller, in particular that of the PRIMARY controller. Figure 9.1 shows a typical output section, or block, of a PID controller, illustrating the control signals and the actions they perform upon the final output value. The functions of each of these will be discussed in this chapter.
Single or stand-alone controller output
The value of the final output of a single or stand-alone controller is affected by one of two possible signals:
The first one is derived from the MANUAL mode, where a set or static value can be placed in the output manually, this value being considered a “live zero”. The controller itself has no knowledge of what this value is, and it can be anywhere from 0 to 100% of the output range.
The second one is when the controller is in AUTO mode and the PID actions now start to increment or decrement the MANUAL value in each scan time of the system.
Multiple controller outputs
A close inspection of Figure 9.1 shows that this controller can have two or more output blocks, all identical, but totally isolated from each other.
If we consider a controller with two (or more) output blocks, resulting in final output signals OP1, OP2 to OPN, there exist many permutations of possible actions that this type of configuration can perform; the most important are listed below:
9.3 Multiple controller output configurations
As each output block is independent of all the others attached to a controller, their absolute output values can be, and usually are, different from each other. Although the PID controller's action is continually reacting to the SP and PV values on its input, the results of the PID calculations will only affect (+/0/-) a particular output if the mode of that output is set to auto or cascade.
Figure 9.2 illustrates the output control strategy of a single or multi-output controller.
Assume initially that OP1 is set to manual or initialize and its output is cascaded or connected to another controller's output (see later in this chapter), and that OP2 is in auto mode. OP2 will be responding to the PID change requirements, but OP1 will be static at its manual or initialization value. Only when OP1 is set to AUTO or CASCADE, by the MANUAL or INITIALISE control signal changing, will OP1 start to respond to the PID commands. The OP1 value will then “track” the value of OP2, although they may well have different absolute values. If the PID summing result says ‘increment by a value of PN’ in one scan time, then OP1 will increase its value from OP1N to OP1N + PN and OP2 will increase its value from OP2N to OP2N + PN; i.e. both outputs change by the same magnitude but maintain the differential value between them.
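The output-tracking behaviour described above can be sketched as follows (a simplified output-block object with hypothetical values, not any vendor's API):

```python
# Two independent output blocks tracking the same PID increment.
# Absolute values differ, but blocks in auto/cascade move by the same
# delta each scan, preserving the differential between them.

class OutputBlock:
    def __init__(self, value, mode="manual"):
        self.value = value
        self.mode = mode  # "manual", "auto" or "cascade"

    def apply(self, delta):
        # Only outputs in auto or cascade respond to the PID increment.
        if self.mode in ("auto", "cascade"):
            self.value += delta

op1 = OutputBlock(30.0, mode="manual")  # held at its manual value
op2 = OutputBlock(55.0, mode="auto")

for delta in (1.5, -0.5):               # hypothetical PID increments per scan
    op1.apply(delta)
    op2.apply(delta)
print(op1.value, op2.value)  # 30.0 56.0 - OP1 static, OP2 tracked the increments

op1.mode = "auto"                       # OP1 now tracks as well
for delta in (2.0,):
    op1.apply(delta)
    op2.apply(delta)
print(op2.value - op1.value)  # 26.0 - differential maintained from that moment on
```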
9.3.1 Limits of controller outputs
The controller itself has no knowledge of its final and absolute output value(s) from its output blocks. The result is that these outputs can be driven into saturation at 0% or 100% with the controller still trying to “drive” them further below zero or above full scale.
As we may well not be aware of this happening, nor of how, when, and with what accuracy the outputs recover back into their operational range, we must be able to “select” the type of output limit calculation we require, and understand why.
9.4 Saturation and non-saturation of output limits
There are two principal types of output limit calculations. The first, with output limits, allows saturation of the output based on P and D control. The second does not allow output saturation to occur under any circumstances.
9.4.1 Saturation of the output
If the output of a controller is allowed to saturate, it does so in the following manner:
- The controller calculates a VIRTUAL Output value independent of any Output-Limit. These may be values far above 100% or far below 0%.
- Only the real Output, which is the displayed Output value, is limited by pre-defined Output-Limits.
- The real Output then awaits the return of the virtual Output to within the defined Output-Limits.
- Then, within the range of the Output-Limits, the real Output follows the virtual Output value exactly.
- Controllers driving field Output normally use this kind of Output-Limit handling.
9.4.2 Non-saturation output limit calculation
Non saturation of the Output is achieved by ensuring that only the real Output values are used for the calculation.
If a single calculation results in an Output value attempting to go beyond the pre-set Output-Limits, the Output value will be set to the value of the Output-Limit it would have violated.
When the controller calculates the Output value next time (in the next SCAN), the REAL Output value (Output = Output Limit) is used. The previous calculation, beyond Output-Limits, has been totally forgotten.
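The two limit-handling strategies can be contrasted in a few lines (a sketch assuming 0-100% output limits and an arbitrary sequence of PID increments):

```python
LO, HI = 0.0, 100.0  # assumed output limits (%)

def run(deltas, saturating):
    """Apply a sequence of PID increments to an output starting at 90%."""
    real = virtual = 90.0
    for d in deltas:
        if saturating:
            virtual += d                       # virtual value may exceed the limits
            real = min(max(virtual, LO), HI)   # displayed (real) value is clamped
        else:
            real = min(max(real + d, LO), HI)  # next scan starts from the clamped value
    return real

deltas = [20.0, 20.0, -25.0, -25.0]  # arbitrary increments per scan
print(run(deltas, saturating=True))   # 80.0 - real output waits for the virtual one
print(run(deltas, saturating=False))  # 50.0 - responds as soon as increments reverse
```

The difference in the two final values illustrates why the saturating style is described as "awaiting the return of the virtual output": the real output stays pinned at the limit until the virtual value unwinds, whereas the non-saturating calculation forgets the excursion entirely.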
9.5 Cascade control
Using the example of our feed heater (see Figure 9.3), if we add another control loop just to control the fuel flow, the fuel flow will be kept constant despite fuel supply pressure changes.
If the OP of the temperature controller TC drives the SP of this newly added fuel flow controller, FC, then we have the situation that the OP of the temperature controller TC then drives the true flow and not just a valve position.
Fuel supply pressure would then have practically no effect on the outlet temperature. This concept is called CASCADE CONTROL. The principle is shown in Figure 9.4.
9.5.1 Cascade control terms
In order to help identify which controller is which within a cascaded system, the following terms apply:
The controller whose SP is driven by another controller's OP may be called a “Downstream Controller” (Slave); more often it is referred to as a “Secondary Controller”.
The controller whose OP drives the SP of a Secondary Controller is called an “Upstream Controller” or “Primary Controller” (Master).
Multiple cascaded configurations
If we have more than two controllers in a cascade system:
The highest upstream controller is called the Ultimate Primary.
The lowest downstream controller is called the Ultimate Secondary Controller.
If we examine the requirements needed to start up such a Feed Heater Cascade Control System manually, it will give us insight into the principles of operation. This, in turn, will also give us the background required to understand PV-Tracking, Initialization and the different Equation Types used in the related control algorithms.
Figure 9.4 illustrates the basic cascaded system, with the Temperature Controller (TC) as the Primary Controller and the Fuel Controller (FC) as the Secondary Controller.
9.5.2 The concept of process variable or PV-tracking
PV-Tracking is active if the secondary (FC) controller is IN MANUAL MODE. Controllers can be set up to make use of PV-Tracking or not.
It is the choice of the system designer.
The concept is that an operator sets the OP value of the Fuel Controller manually until they find an appropriate value for the process. We assume that this output value is the optimum value for the process; that is we have set the fuel flow rate to a manual value that is correct to maintain the output temperature, T2, at the required value.
This leads to the conclusion that no correction of the OP value is necessary at this time.
As no change of OP is the ideal requirement, no error (ERR) is permitted. To achieve this, the SP is kept equal to the PV while the operator manipulates the OP value manually.
Hence for PV-Tracking in MANUAL Mode only: SP = PV
This is called PV-Tracking.
The moment we change the Mode to AUTOMATIC, the SP stays at its last value and is the reference previously created by manual manipulation of OP (when the mode was set to MANUAL).
The output of the Flow Controller (FC) has an ABSOLUTE value as determined by what was set by the operator at the time of the transition from Manual to Automatic sufficient to hold the Fuel valve in its required position.
9.6 Initialization of a cascade system
Initialization is actually a kind of manual mode, in which the operator doesn’t drive the OP value of the primary controller. (Our Temperature Controller, TC, in this case.) Instead, our fuel controller, FC, supplies its Set Point (SP) value, back up the cascade chain, to the OP of the Controller that will be driving it (the FC’s SP) when the system is in automatic mode. If selected, PV-Tracking can take place in the Primary Controller as it would occur in normal manual mode.
9.6.1 Steps of initialization
Let us analyze how Initialization is useful by looking into our Feed Heater Example again in Figure 9.4. If the fuel flow controller FC is in manual mode and its OP value is driven by an operator until the desired outlet temperature (T2) has been reached, the PV of the fuel flow controller (FC) has the correct value in order to obtain the desired value of T2.
(Open loop conditions exist in this loop at this point.)
Via PV-Tracking of the flow controller, its setpoint value, as manipulated by the operator, is at the value required to give the FC output its correct value to maintain T2 by establishing the correct flow rate.
The SP value of the fuel flow controller FC has the exact value that the output of the temperature controller TC will require from it.
The fuel flow controller, while in manual mode, passes back its SP value to the OP block of the temperature controller TC. The temperature controller allows this value to be set into its output block on receiving a signal from the FC that it is in both “Cascade and Manual mode” (accepting this command in a similar manner to itself being placed in manual mode).
This is called Initialization. If the Primary Controller (TC) performs PV-Tracking, then the Temperature-SP follows the true temperature value, the PV of the Primary Controller.
When the operator has found the correct OP value of FC, we have, by default, obtained a SP value for the correct flow. SP = PV.
If we matched this SP with the Primary Controller’s OP value, then we have the correct SP for temperature as well.
All that remains is to put the Secondary Controller into CASCADE mode, and the associated Primary Controller's output block should switch automatically to AUTO mode.
We have thus achieved a smooth (bumpless) transfer from manual to automatic control.
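The initialization sequence above can be sketched as follows (simplified controller objects with hypothetical values, not any particular vendor's API):

```python
# Sketch of the cascade initialization steps described above.

class Controller:
    def __init__(self, name):
        self.name = name
        self.mode = "manual"
        self.sp = self.pv = self.op = 0.0

fc = Controller("FC")  # secondary (fuel flow)
tc = Controller("TC")  # primary (temperature)

# 1. Operator drives FC's output in manual until T2 is correct.
fc.op = 42.0   # hypothetical valve position (%)
fc.pv = 18.5   # resulting measured flow (hypothetical units)

# 2. PV-tracking: while FC is in manual, its SP follows its PV, so ERR = 0.
if fc.mode == "manual":
    fc.sp = fc.pv

# 3. Initialization: FC passes its SP back up to TC's output block.
tc.op = fc.sp

# 4. Switch FC to cascade; TC's output block goes to auto.  No ERR step
#    is created anywhere, so the transfer is bumpless.
fc.mode = "cascade"
tc.mode = "auto"
print(fc.sp - fc.pv, tc.op)  # 0.0 18.5
```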
9.7 Equations relating to controller configurations
In cascade control, outputs from controllers drive the set points of secondary controllers, and in essence all of these controllers consist of, or are capable of, independently calculating P, I and D algorithms based on the error value derived from their two inputs, the Setpoint value (SP) and the Process Variable (PV). There are occasions where only certain functions within a cascade chain are required, and it becomes necessary to “re-arrange” the way the P, I and D functions are driven from the PV and SP variables.
There are three ways this is done, and they are known as Controller Equations type A, B or C. Equations A and C are the more commonly used of the three, and are inter-related, so these will be considered together.
Figure 9.5 illustrates the interconnections of a controller that determines the type of equation it represents.
9.7.1 Equation type A
In Equation type A all control is based on the error term (ERR).
A controller using Equation Type A makes no distinction between a disturbance in the PV input and an operator action on the SP.
This is the standard way of calculating control actions of a PID-Controller and this has been the way in which we have considered all controllers so far in this book.
Where PV changes are fairly smooth, with minimal or no step changes, they will not cause dramatic or sudden changes to the controller's output. Additionally, the SP of the controller is normally seldom (if ever) moved, so it too causes no rapid changes of output. In contrast, an operator may drive a valve through its complete range with a single large step change of the SP.
In such situations, we could consider the operator to be the most dangerous disturbance in the system.
Hence, when we require a smooth transition, even if the operator changes the SP dramatically we need to “re-arrange” the construction of the controller to help us achieve this requirement. This leads to Equation Type C.
9.7.2 Equation type B
As can be seen from Figure 9.5, Equation B works as a PI-controller on error (ERR = PV - SP) and as a D-controller on the PV only.
9.7.3 Equation type C
Equation type C configures the controller so that we can eliminate the problem of step changes to the output caused by large and rapid changes being made to the Setpoint value by the operator. We must remember that in most systems the SP of a primary controller is seldom changed much, either in magnitude or frequency. However, when the SP IS changed we need to ensure the resultant output change is as soft and gentle as possible, particularly when it is driving the SPs of secondary controllers. A nice way to do this is to integrate the step change.
Referring to Figure 9.5 we see how equation C differs from the normal equation type A by:
The Proportional and Derivative functions operate, via the gain block KC, directly on the PV and NOT on the ERR term.
The ERR term is used exclusively by the Integral function, again derived from either PV - SP or SP - PV.
This means that a step change made to the SP becomes an Integrated (ramped) output from the controller.
This kind of calculation produces an identical PID-Control action to Equation Type A if the Setpoint is constant.
This maintains the same control against real disturbances and the same loop behavior.
The SP is the only variable treated differently. The details of equation type C follow on the next page.
Equation C and the P-control
When the SP = Constant - What is the difference ?
ΔOP = K * Δ(PV - SP) [Eq-Type A]
ΔOP = K * ΔPV [Eq-Type C]
Answer: No Difference, provided SP = Constant
Observe that the change of ERR, Δ(PV - SP), is identical to ΔPV (the change of PV) when the SP is constant.
There is identical P-Control action based on PV, but the SP is ignored totally.
The SP is not even part of the formula any more.
The operator can do what he/she wants with the SP, but this will have no influence on P-Control if Equation Type C is active.
Equation C and the I-control
The availability of an Integral control is the reason for the existence of these controller equations because:
- There are no differences in I-Control with different Equation Types.
- I-Control is available to the operator at all times for smooth bumpless changes of control from one SP to another.
- I-Control will never cause any bump if a SP change takes place.
Since the SP is an outside influence as far as the loop is concerned, the Integral on SP has no effect on loop stability.
Equation C and the D-Control
When the SP = Constant - What is the difference ?
ΔOP = K * TDER * Δ(Δ(PV-SP)) [Eq-Type A]
ΔOP = K * TDER * Δ (ΔPV) [Eq-Type C]
Answer: No Difference, provided SP = Constant
Observe that Δ(Δ(PV - SP)) is identical to Δ(ΔPV) if there is no change of SP.
There is identical D-Control action based on PV, but the SP is ignored totally.
The SP is not even part of the formula any more.
The operator can do what he/she wants with the SP, but this has no influence on D-Control if Equation Type C is active.
This is one more example of the use and benefit of incremental algorithms.
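The difference between the two equation types for P-Control can be sketched numerically (a minimal illustration; the gain is hypothetical and ERR = PV - SP as above):

```python
K = 2.0  # hypothetical controller gain

def delta_op_A(d_pv, d_sp):
    """Incremental P-action, Equation Type A: acts on the change of ERR."""
    return K * (d_pv - d_sp)   # Δ(PV - SP)

def delta_op_C(d_pv, d_sp):
    """Incremental P-action, Equation Type C: acts on the change of PV only."""
    return K * d_pv            # ΔPV; the SP does not appear at all

# Operator makes a 10-unit SP step while the PV is momentarily steady:
print(delta_op_A(0.0, 10.0))  # -20.0 - an immediate output bump
print(delta_op_C(0.0, 10.0))  #  0.0  - no bump; only I-Control ramps the OP
```

With the SP constant (d_sp = 0) the two functions return identical values, which is exactly the point made above: the loop behaviour against real disturbances is unchanged, and only the operator's SP moves are treated differently.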
9.8 Application notes on the use of equation types
We have to make a careful assessment of the process and the process strategy before we decide on a particular Equation Type. As a general rule we can use the different types as follows:
9.8.1 Application of equation type A
This is a general-purpose calculation to be used if no special reason exists to use another type.
Eq-Type A is a must for Secondary Controllers. If Eq-Type C were used in a Secondary Controller, I-Control would be the only control between the OP of the Primary Controller and the OP of the Secondary Controller. This would add an unnecessary phase lag of 90° to the Primary Controller's loop. The result could be an unnecessary destabilization of the primary loop of a cascade control system.
9.8.2 Application of equation type B
The principal considerations of how the control algorithms can work on PV only are the same as explained for Equation Type C.
Equation Type B works as a PI-Controller on error (ERR = PV - SP) and works as a D-Controller on PV only.
Since Eq-Type B sits between Eq-Type A and Eq-Type C, it is left to the discretion of the user to decide when to use it. If, for example, a Secondary Controller needs D-Control for stability of the secondary loop, and the OP of the Primary Controller contains all the control actions required for the primary loop (including D-Control), then the Secondary Controller may be best used with Eq-Type B. In such a case, the full control action of the Primary Controller is passed on to the OP of the Secondary Controller via the PI-Control of the Secondary.
9.8.3 Application of equation type C
Equation Type C works as I-Controller on error (ERR = PV - SP) and works as a PD-Controller on PV only.
It is mainly used as the Ultimate Primary Controller. An operator cannot cause any sudden control actions that would result in sudden extreme positions of valves and other control equipment. This can only be fully appreciated if one has heard the noise created when a large valve suddenly hits an extreme position; it can be felt through almost the whole plant as a big bang. This is most decidedly not good for maximizing the life of a valve (see Table 9.1).
|Equation Type|P Mode|I Mode|D Mode|Comments on Use|
|---|---|---|---|---|
|A|ERR (PV - SP or SP - PV)|ERR (PV - SP or SP - PV)|ERR (PV - SP or SP - PV)|General purpose; required for Secondary Controllers|
|B|ERR (PV - SP or SP - PV)|ERR (PV - SP or SP - PV)|PV only|Intermediate case; Secondary Controllers needing D-Control on PV only|
|C|PV only|ERR (PV - SP or SP - PV)|PV only|Mainly Ultimate Primary Controllers; SP changes cannot bump the output|
9.9 Tuning of a cascade control loop
The approach for tuning is fairly straightforward.
Firstly, tune the Ultimate Secondary Controller (the most downstream controller), which in our example is the FC controller. Then, considering that controller as part of the process, tune the next upstream controller (the one whose output drives the setpoint of the last-tuned controller).
Continue in this manner, remembering to consider the last-tuned controller as part of the process loop, finally tuning the Ultimate Primary Controller, in our example the TC controller.
Secondary controllers are mostly used as flow controllers. Flow loops are in most cases intrinsically stable. Therefore, no D-Control is required and most flow controllers are PI-Controllers. Tuning is done with due consideration given to a sufficiently good control response and minimum wear and tear of the valve. K should be smaller than 1 in order to pass on the full range of the Primary Controller’s OP to the OP of the Secondary Controller.
Primary controllers normally control a dynamically more complex loop and require careful stability considerations. Our example of a Feed Heater shows clearly that the temperature controller TC has to cope with most of the process lag. In most cases, Primary Controllers are therefore PID-Controllers.
9.10 Cascade control with multiple secondaries
A control strategy can include controllers with multiple output calculations. In most cases, controllers with multiple outputs are primary controllers in a cascade control system with more than one secondary controller.
9.10.1 Multiple output calculations
The result of the primary controller’s PID calculation is the controller’s dynamic output. In digital controllers, this is the calculated value CV, which is calculated for each scan time interval and is used to increment each output independently.
As every output of a controller may have a different absolute value at any given time, every output is incremented individually.
In fact, each output is calculated independently of the others, with independent initialization, limit and alarm handling. As the amount of data for multiple outputs is too much for one display, industrial control equipment will display only the first output in the main detail display and use subsequent displays for additional data such as the other outputs. From an engineering point of view, one has to be aware of this in order to define the most significant output as the first output of a controller. From an operator’s point of view, it is important to know that the most prominently displayed output value may not be the only output to be monitored; it may just be the most important one.
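The incremental output handling described above can be sketched in a few lines. This is an illustrative sketch only, not vendor code: a velocity-form (incremental) PID produces one increment per scan, and every output applies that increment to its own absolute value with its own limits. All names, tuning values and the 0-100% limits are assumptions for illustration.

```python
# Sketch (not vendor code): a velocity-form PID whose per-scan increment
# is applied independently to several outputs, each with its own limits.

def pid_increment(err, err_1, err_2, K=1.0, Ti=10.0, Td=0.0, dt=1.0):
    """Incremental PID: the change in CV for one scan interval."""
    return K * ((err - err_1)
                + (dt / Ti) * err
                + (Td / dt) * (err - 2 * err_1 + err_2))

class MultiOutputController:
    """Each output keeps its own absolute value; all receive the same increment."""
    def __init__(self, outputs):
        self.outputs = list(outputs)      # independent absolute output values
        self.e1 = self.e2 = 0.0           # error history for the velocity form

    def scan(self, sp, pv, **tuning):
        err = sp - pv
        dcv = pid_increment(err, self.e1, self.e2, **tuning)
        self.e2, self.e1 = self.e1, err
        # every output is incremented individually, with its own limit handling
        self.outputs = [min(100.0, max(0.0, op + dcv)) for op in self.outputs]
        return dcv
```

Note that the two outputs can sit at quite different absolute values (say 20% and 50%) while still receiving the same dynamic increment each scan.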
Concepts and Applications of Feedforward Control
- Describe the concept and strategy of Feed Forward Control;
- Develop and then clearly describe the tuning procedures for Feed Forward Control.
10.2 Application and definition of feedforward control
If large and random changes to either the PV or the lag time of the process occur within a feedback control system, the feedback action becomes very ineffective in trying to correct these excessive variances.
These variances usually drive the process well outside its area of operation, and the feedback controller has little chance of making an accurate or rapid correction back to the SP term.
The result of this is that the accuracy and standard of the process becomes unacceptable. Feedforward control is used to detect and correct these disturbances before they have a chance to enter and upset the closed or feedback loop characteristics.
It must be remembered that Feedforward Control does not take the Process Variable into account; it reacts to sensed or measured known or suspected process disturbances, acting as a compensating and matching control that makes the corrective action equal to the impact of the disturbance.
The difference between Feedforward and Feedback control can be considered as:
Feedforward is primarily designed and used to prevent errors (process disturbances) entering or disturbing a control loop within a process system.
Feedback is used to correct errors, caused by process disturbances, that are detected within a closed loop control system. Such errors can be foreseen and corrected by feedforward control before they upset the control loop parameters.
It is this second factor alone that makes feedforward a very attractive concept. Unfortunately, for it to operate safely and efficiently, a sound knowledge is required of both the process and the nature of all relevant disturbances.
10.3 Manual feedforward control
Feedforward is a totally different concept to feedback control. A manual example of feedforward control is illustrated in Figure 10.1. Here, as a disturbance enters the process, it is detected and measured by the process operator. Based on their knowledge of the process, the operator then changes the manipulated variable by an amount that will minimize the effect of the measured disturbance on the system.
This form of feedforward control relies heavily on the operator and their knowledge of the operation of the process. However, if the operator makes a mistake or is unable to anticipate a disturbance then the controlled variable will deviate from its desired value and, if feedforward is the only control, an uncorrected error will exist.
10.4 Automatic feedforward control
Figure 10.2 illustrates the concept of automatic feedforward control. Disturbances that are about to enter a process are detected and measured. Feedforward controllers then change the value of their manipulated variables (outputs) based on these measurements as compared with their individual set-point values.
Feedforward controllers must be capable of making a whole range of calculations, from simple on-off action to very sophisticated equations. These calculations have to take into account all the exact effects that the disturbances will have on controlled variables.
Feedforward control, although a very attractive concept, places a high requirement on both the system designer and the operator to mathematically analyze and understand the effect of disturbances on the process in question.
As a result, Feedforward control is usually reserved for the more important or critical loops within a plant.
Pure Feedforward control is rarely encountered; it is more common to find it embedded within a feedback loop, where it assists the feedback controller by minimizing the impact of excessive process disturbances.
In Chapter 11, we will be examining the concepts and applications of Feedforward control when combined with a cascaded Feedback system.
It is important to remember that Feedforward is primarily designed to reduce or eliminate the effect of changes in both Process reaction times and the magnitude of any measured process variable change.
10.5 Examples of feedforward controllers
As discussed above, feedforward controllers can be required to carry out anything from simple (on-off) control up to high-order mathematical calculations. Because the requirements for Feedforward controllers vary so widely, they can be considered as functional control blocks, ranging from simple on-off control through lead/lag (Derivative and Integral functions) and timing blocks. Their range of functionality is virtually unlimited, as most systems allow them to be “programmed” in as software-based math functions.
The four basic requirements for the composition of a feedforward controller are:
- A recognizable INPUT (derived from the measured disturbance).
- A SETPOINT or point of origin and control.
- A Math function operating on the INPUT and SETPOINT values.
- An OUTPUT which is the result of this math function.
In essence, the controller’s action can be described purely by the mathematical function it performs.
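The four basic requirements listed above can be illustrated with a minimal sketch. The proportional math function and the gain value here are assumptions chosen purely for illustration; a real feedforward block could equally be an on-off, lead/lag or any other programmed function.

```python
# Illustrative sketch of a feedforward block built from the four parts
# listed above: a measured INPUT, a SETPOINT, a math function, an OUTPUT.
# The proportional function and gain=2.0 are assumptions for illustration.

def feedforward_block(disturbance, setpoint, gain=2.0):
    """OUTPUT = math function of the measured INPUT and the SETPOINT."""
    return gain * (setpoint - disturbance)   # simple proportional compensation
```

For example, a measured disturbance of 45 units against a setpoint of 50 would produce a compensating output of 10 units with this particular gain.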
10.6 Time matching as feedforward control
Figure 10.3 shows an application of feedforward control where the time taken for a process to react in one direction (HEATING) is different to the time taken for the process to return back to its original state (COOLING).
If the reaction curve (dynamic behavior of reaction) of the process disturbance is not equal to the control action, it has to be made equal.
We normally use Lead/Lag compensators as tools to obtain equal dynamic behavior. They compensate for the different speeds of reaction.
The block diagram in Figure 10.4 shows this principle of compensation.
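The lead/lag compensation principle can be sketched numerically. The following is a minimal discrete implementation of a first-order lead/lag element, (T_lead·s + 1)/(T_lag·s + 1), discretised with a backward-Euler step; the time constants and step size are illustrative assumptions, not values from the Feed Heater example.

```python
# A minimal discrete lead/lag compensator: backward-Euler discretisation
# of (T_lead*s + 1)/(T_lag*s + 1). Parameter values are illustrative only.

class LeadLag:
    def __init__(self, t_lead, t_lag, dt):
        self.t_lead, self.t_lag, self.dt = t_lead, t_lag, dt
        self.y = 0.0          # last output
        self.u_prev = 0.0     # last input

    def update(self, u):
        self.y = (self.t_lag * self.y
                  + self.t_lead * (u - self.u_prev)
                  + self.dt * u) / (self.t_lag + self.dt)
        self.u_prev = u
        return self.y
```

With T_lead greater than T_lag, a step input produces an initial "kick" larger than the final value, which then decays back: exactly the shaping used to speed up (or, with the ratio reversed, slow down) the feedforward action so that it matches the process reaction curve.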
A problem of special importance is the drifting away of the PV. We can be as careful as we want with our evaluation of the disturbances, but we never reach the situation of absolute perfect compensation. There are always factors not accounted for. This causes a drifting of the PV which has to be corrected manually from time to time, or an additional Feedback Control has to be added.
Figure 10.5 shows a carefully designed example, taking into account all major and measurable variables possible.
The Feed Forward Control shown in Figure 10.5 uses mass flow calculation for the inlet flow and uses a fuel flow controller to avoid the influence of Fuel flow pressure changes.
Combined Feedback and Feedforward Control
- Indicate the concept and strategy of Combined Feedback and Feed Forward Control;
- Demonstrate how to develop tuning procedures for Combined Feedback and Feed Forward Control.
11.2 The feedforward concept
Chapter 10 illustrated the concepts of Feedforward control and showed that one problem it gives us is drifting of the PV from the systems SP value. This is caused solely because the PV is NOT taken into account in Feedforward control, if it was it would become a Feedback (closed loop) controlled system.
Examination of the Feedforward concept shows us that it is normally used to minimize the impact of disturbances on a process. This is achieved by detecting and measuring a process disturbance and changing a related manipulated variable before the disturbance has an adverse effect on the process itself.
It is important to remember that process disturbances constitute unexpected changes in either:
- the magnitude of pressure, flow, temperature or any other physical quantity associated with the process, or
- the timing of any of the process responses.
This latter variable, TIME, is very often overlooked as a quantity that may need correcting in a process environment. This was illustrated in Figure 10.3 where we used Feedforward to equalize the difference in heating and cooling times of a feed heater system.
This should make the process responses the same, in both magnitude and time, irrespective of the direction taken by the PV value. If this is achieved, tuning of the system is made easier, with the result that control is more stable and accurate.
11.3 The feedback concept
Chapter 5 explained the concepts of closed loop control and stability as related to Feedback systems. In general terms, it is accepted that a Feedback system operates more accurately and efficiently if both process disturbances and time delays (Lag times) are kept to a minimum. It then becomes apparent that Feedforward control can be used to meet this requirement of the Feedback control.
If we then combine an accurately configured Feedforward system with a well-tuned Feedback system, the result should be an almost optimally operating control system.
11.4 Combining feedback and feedforward control
Figure 11.1 illustrates a concept where we combine both control methods into our Feed Heater Control system.
Chapters 9 and 10 cover most of the aspects which have to be considered when using Feedback and Feedforward control.
Here we will concentrate on the impact of the Summer Block and some tuning aspects of the combined control system.
11.5 Feedback - Feedforward summer
Referring to Figure 11.1 we see that we have only one manipulated variable, the fuel flow, but two control concepts combined.
The mass flow Feedforward control and the cascaded Feedback control would appear to compete for the use of the one manipulated variable, the fuel flow.
However, if we remember the concept of compensation which governs Feedforward control, the output of the Feedforward control (from the lead/lag block) would usually be passed directly onto the SP of our Fuel Flow Controller.
To this value coming from the Feed Forward Control, we have to add the output value of the Primary Controller (Temperature Controller; TC).
It is important to remember that these are incremental and NOT absolute calculations.
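The summing action described above can be written down in one line. This is a sketch with illustrative names: the feedforward increment from the lead/lag block and the feedback increment from the Primary Controller (TC) are added, incrementally, to the previous setpoint of the Fuel Flow Controller.

```python
# Sketch of the Feedback-Feedforward summer: the SP of the secondary
# (fuel flow) controller is moved incrementally, NOT set absolutely.
# Function and argument names are illustrative assumptions.

def fuel_flow_sp(prev_sp, d_feedforward, d_feedback):
    """New secondary SP = previous SP + FF increment + FB (TC) increment."""
    return prev_sp + d_feedforward + d_feedback
```

So a feedforward increment of +1.5 and a feedback increment of -0.5 would move a setpoint of 40 to 41, rather than replacing it with some absolute value.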
11.6 Initialization of a combined feedback and feedforward control system
As we have a combination of Feedforward, Feedback and Cascade control the method of initialization is important.
The value for the INITIALIZATION OF THE OP OF THE PRIMARY CONTROLLER (TC) is calculated from the SUM of:
- The value of the FEEDFORWARD SIGNAL from the Lead/Lag block and (+)
- The SP VALUE of the Secondary Controller (FC).
As fluctuations occur in the inlet temperature and the inlet flow rate (the F1 and T1 variables), and depending on the direction (up or down) in which they occur, the output of the lead/lag block will vary in accordance with the functional algorithm and the lead/lag time constants which comprise the Feedforward control.
The magnitude and Rate-of-change of this signal has to be compatible with any of the F1 and T1 variances so that they have minimal or no influence on the Outlet temperature T2.
The summing block should be considered as part of the output block of the Primary Controller (TC).
11.7 Tuning aspects
Since strong Feedback control has a tendency towards instability, it should be avoided if possible. Therefore, if the Feed Forward Control is already doing the major part of the required control and Feedback Control is just there to eliminate drift of PV, proceed as follows:
- Tune the Secondary Controller used by Feedback and Feed Forward Control;
- Tune the Feed Forward Control;
- Tune Feedback Control using the formulas developed by Ziegler and Nichols;
- Then evaluate the speed of drift of PV.
If the drift of the process variable is insignificant, reduce K of the Primary Controller using process knowledge and personal judgment.
Remember that in this case, the Feedback Control is just a supplement to Feed Forward Control and must not introduce any form of oscillations or instability.
Long Process Dead-time in Closed Loop Control and the Smith Predictor
- Demonstrate the correct use of a process simulation for process variable prediction;
- Show how control loops with long dead-times are dealt with correctly;
- List the procedures for tuning of control loops with long dead-times.
12.2 Process deadtime
Overcoming the dead time in a feedback control loop can present one of the most difficult problems to the designer of a control system. This is especially true if the dead time is more than 20% of the total time taken for the PV to settle to its new value after a change to the SP value of a system.
We have seen that little or no dead time in a control system presents us with a simple and easy set of algorithms that when applied correctly give us extremely stable loop characteristics.
Unfortunately, if the time between a change in the manipulated variable (Controller Output) and a detected change in the PV is excessive, any attempt to manipulate the process variable before the dead time has elapsed will inevitably cause unstable operation of the control loop.
Figure 12.1 illustrates various dead times and their relationship to the PV reaction time.
12.3 An example of process deadtime
Process Deadtime occurs in virtually all types of process, as a result of the PV measurement being some distance away, both physically and in time, from the actuator that is driven by the manipulated variable.
An example of this is in the overland transportation of material from a loading hopper to a final process, this being some distance away. The critical part of the operation is to detect the amount of material arriving at the end of its journey, the end of the conveyor belt, and from this performing two functions:
To “tell” the ongoing process how much material is arriving and;
To adjust the hopper feed rate at the other end of the belt.
Figure 12.2 illustrates this problem: the controller is measuring the weight of arriving material which, during its journey from the supply hopper, has encountered some loss due to spillage from the conveyor. The amount of material deposited on the belt has also varied, owing to variability in the amount, or head, of material in the hopper.
The Deadtime can be calculated very simply by dividing the distance between the input hopper, where the action of the manipulated variable (Controller Output) occurs, and the point where the belt weigher is located, by the belt speed.
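The arithmetic is simply transport time: distance from hopper to weigher divided by belt speed. The specific distance and speed below are illustrative assumptions chosen to reproduce a deadtime of the order of the 10 minutes used in this example.

```python
# Deadtime on the conveyor is the transport time from hopper to weigher.
# The 500 m / 50 m-per-minute figures are illustrative assumptions only.

def conveyor_deadtime(distance_m, belt_speed_m_per_min):
    """Deadtime in minutes = distance (m) / belt speed (m/min)."""
    return distance_m / belt_speed_m_per_min
```

For instance, 500 m of belt travelling at 50 m/min gives a deadtime of 10 minutes.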
In this example, the controller measures the weight/meter/minute of the arriving material, compares this with the SP and generates an output, but now it must wait for the Deadtime period, which in this example is about 10 minutes, before seeing a result of this change in the value of the MV. If the controller expects a result before the Deadtime has elapsed, and none occurs, it will assume that its last change had no effect and it will continue to increase its output until such time as the PV senses a change has occurred. By this time it will be too late, the controller will have overcompensated, either by now supplying too much or too little material.
The magnitude of this resultant error will depend on the sensitivity of the system and the difference between the assumed and actual Deadtime. That is, if the system is highly sensitive (high gains and fast responses tuned into it) it will effect large movements of the inlet hopper for small PV changes. Also if the assumed Deadtime is much shorter than the actual Deadtime it will spend longer changing its output (MV) before sensing a change in the PV.
12.3.1 Overcoming process deadtime
Curing these problems depends, to a great extent, on the operating requirements of the process. The easiest solution is to “de-tune” the controller to a slower response rate. The controller will then not overcompensate unless the dead time is excessively long.
The integrator (I mode) of the controller is very sensitive to Deadtime, as during this period of inactivity of the PV (while an ERR term is present) the integrator is busy “ramping” the output value.
Ziegler and Nichols determined that the best way to “de-tune” a controller to handle a Deadtime of D minutes is to reduce the Proportional gain by a factor of D and the integral action by a factor of D².
The derivative time constant TDER is unaffected by dead time as it only occurs after the PV starts to move.
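This de-tuning rule can be sketched as follows. Note an editorial assumption: the “integral constant” reduced by D² is read here as the integral gain (proportional gain divided by integral time), so that integral action is genuinely weakened for long deadtimes; the derivative constant is left untouched, as stated above.

```python
# Sketch of the Ziegler-Nichols deadtime de-tuning rule quoted above.
# Assumption (editorial): the "integral constant" is the integral gain
# Ki = Kp / Ti, so reducing it by D*D weakens, not strengthens, the I mode.

def detune_for_deadtime(Kp, Ki, D):
    """De-tune P and I gains for a deadtime of D minutes (D >= 1).

    Proportional gain is reduced by a factor of D,
    integral gain by a factor of D squared.
    """
    return Kp / D, Ki / D**2
```

So a controller tuned with Kp = 2 and Ki = 1 for a negligible deadtime would be de-tuned to Kp = 1 and Ki = 0.25 for a 2-minute deadtime.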
If, however, we could inform the controller of the Deadtime period, and give it the patience to wait and be content until the Deadtime has passed then detuning and making the whole process very sluggish would not be required.
This is what the Smith Predictor attempts to perform.
12.4 The Smith Predictor model
In 1957, O.J.M. Smith of the University of California at Berkeley proposed the predictor control strategy explained below. Figure 12.3 illustrates the mathematical model of the predictor, which consists of:
- An ordinary feedback loop;
- A second, or inner, loop that introduces two extra terms into the feedback path.
12.4.1 First term explanation (disturbance free PV)
The first term is an ESTIMATE of what the PV would be like in the absence of any process disturbances. It is produced by running the controller output through a model that is designed to accurately represent the behavior of the process without taking any load disturbances into account. This model consists of two elements connected in series.
- The first represents all of the process behavior not attributable to Deadtime. This is usually calculated as an ordinary differential or difference equation that includes estimates of all the process gains and time constants.
- The second represents nothing but the Deadtime and consists simply of a time delay, what goes in, comes out later, unchanged.
12.4.2 Second term explanation (predicted PV)
The second term introduced into the feedback path is an estimate of what the PV would look like in the absence of both disturbances and Deadtime.
It is generated by running the controller output through the first element of the model (Gains and TCs) but NOT through the time delay element.
It thus predicts what the disturbance-free PV will be like once the Deadtime has elapsed.
12.5 The Smith Predictor in theoretical use
Figure 12.4 shows the Smith Predictor in a practical configuration, or as it is really used.
It shows an estimate of the PV (with both disturbances and dead time) generated by adding the estimated disturbances back into the disturbance-free PV. The result is a feedback control system with the deadtime outside the loop.
The Smith Predictor essentially works to control the modified feedback variable (the predicted PV with disturbances included) rather than the actual PV.
If it is successful in doing so and the process model accurately emulates the process itself, then the controller will simultaneously drive the actual PV toward the SP value, irrespective of SP changes or load disturbances.
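The arrangement described in sections 12.4.1 and 12.4.2 can be sketched in simulation. This is a minimal illustrative sketch, not a production implementation: the process is assumed to be a first-order lag plus a pure time delay, the internal model is assumed to match the process exactly, and all gains, time constants and the PI tuning are invented values for demonstration.

```python
# Minimal Smith Predictor sketch for a first-order-plus-deadtime process.
# All parameter values are illustrative assumptions; the model is assumed
# to match the process exactly (the ideal case discussed in the text).

from collections import deque

class FirstOrder:
    """First-order lag dy/dt = (K*u - y)/tau, Euler-integrated."""
    def __init__(self, K, tau, dt):
        self.K, self.tau, self.dt, self.y = K, tau, dt, 0.0

    def step(self, u):
        self.y += (self.K * u - self.y) * self.dt / self.tau
        return self.y

class SmithPredictor:
    """PI controller acting on the deadtime-free model prediction,
    corrected by the mismatch between the real PV and the delayed model."""
    def __init__(self, Kc, Ti, model, delay_steps, dt):
        self.Kc, self.Ti, self.dt = Kc, Ti, dt
        self.model = model                                  # deadtime-free model
        self.buf = deque([0.0] * delay_steps, maxlen=delay_steps)  # model deadtime
        self.integral = 0.0
        self.op = 0.0

    def scan(self, sp, pv):
        y_model = self.model.step(self.op)   # predicted deadtime-free PV
        y_delayed = self.buf[0]              # model PV including deadtime
        self.buf.append(y_model)
        feedback = y_model + (pv - y_delayed)  # prediction + estimated disturbance
        err = sp - feedback
        self.integral += err * self.dt
        self.op = self.Kc * (err + self.integral / self.Ti)
        return self.op
```

Driving a simulated process (same first-order lag followed by a 20-scan delay) with this controller, the PV settles at the setpoint without the oscillation that an aggressively tuned plain PI loop would show, because the controller effectively regulates the deadtime-free prediction.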
12.6 The Smith Predictor in reality
In reality there is plenty of room for errors to creep into this “Predictive Ideal”. The slightest mismatch between the process dynamics and the model can cause the controller to generate an output that successfully manipulates the modified feedback variable but drives the actual PV away, never to return.
There are many variations on the Smith predictor principle, but dead-times, especially long ones, remain a particularly difficult control problem to solve.
12.7 An exercise in deadtime compensation
We have seen that if a long dead-time is part of the process behavior, the quality of control becomes unacceptably low. The main problem lies in the fact that the reaction to a MV change is not seen by the PV until the Deadtime has expired. During this time, neither a human operator nor an automatic controller knows how the MV change has affected the process.
Exercise 15 illustrates the concepts of Deadtime compensation, based on the arrangement shown in Figure 12.5.
As there is no means of separating process dead-time from process dynamic in order to find out how the process would behave without dead-time, we make use of the values provided by a process simulation.
The process simulation is split into two parts as seen in Figure 12.5; these two parts are described in sections 12.4.1 and 12.4.2.
Basic Principles of Fuzzy Logic and Neural Networks
This chapter serves to review the basic principles and descriptions of Neural Networks and Fuzzy Logic.
As a result of studying this chapter, the student should be able to:
- Describe the basic principles of Fuzzy Logic;
- Describe the acronyms and basic terminology as used in Neural Networking and Fuzzy Logic applications.
13.2 Introduction to fuzzy logic
In the real world there are many vague and imprecise conditions that defy a simple “True” or “False” statement as a description of their state. The computer and its binary logic are incapable of adequately representing these vague, yet understandable, states and conditions. Fuzzy logic is a branch of machine intelligence that helps computers handle the variations that occur in the uncertain and very vague world in which we exist.
Fuzzy logic “manipulates” vague concepts such as “warm” or “going fast” in a manner that helps us to design things like air-conditioners and speed control systems that move or switch from one set of control criteria to another, even when the reason for doing so is “it is too warm”, “not warm enough”, “go faster” or “slow down a bit”. All of these “instructions” make sense to us, but they are far removed from the digital world of binary 1s and 0s.
TRUE and FALSE statements are absolute in their meaning: they come from a defined starting location and are designed to terminate at a known destination.
No known mathematical model can describe the action of, say, a ship coming from some undefined point at sea into a dock area and finally coming to rest at a precise position on a wharf.
Humans, and fuzzy logic, can perform this action very accurately: if the wind blows a bit harder, or another ship hampers a particular docking maneuver, this is sensed and an effective corrective action is taken. The action taken, though, is different each and every time, as the disturbance is also different every time. (Similar but different events occur every time a ship tries to perform this procedure.)
When mathematicians lack specific algorithms that dictate how a system should respond to inputs, Fuzzy logic can be used to control or describe the system by using commonsense rules that refer to indefinite quantities.
Applications for Fuzzy logic extend far beyond control systems; in principle they extend to any continuous system, be it in physics or biology. It may well be that Fuzzy models are more useful and accurate than standard mathematical ones.
13.3 What is fuzzy logic?
In standard set theory an object either does or does not belong to a set; there is “no middle ground”. This principle is an ancient Greek law propounded by Aristotle: the law of the excluded middle.
The number FIVE belongs fully to the odd number set and not at all to the set of even numbers. In such bivalent sets an object cannot belong to both a set and its complement set or indeed, to neither of the sets. This principle preserves the structure of logic and prevents an object being “is” and “is not” at the same time.
Sets that are “Fuzzy” or multivalent break this “no middle ground” law to some extent. Items belong only partially to a Fuzzy set, they may also belong to more than one set. The boundaries of standard sets are exact, those of fuzzy sets curve or taper off, and it is this fact that creates partial contradictions. The temperature of ambient air can be 20% cool and 80% not cool at the same time.
13.4 What does fuzzy logic do?
Fuzzy degrees are NOT the same as Probability Percentages.
Probability measures whether something will occur or not. Fuzziness measures the degree to which something occurs or some condition exists.
13.5 The rules of fuzzy logic
The only real constraint in the use of Fuzzy logic is that, for the object in question, its membership in complementary groups must sum to unity. If something is 30% cool it must also be 70% NOT cool. This enables fuzzy logic to avoid the bivalent contradiction that something is 100% cool and 100% not cool; that would destroy formal logic (see Figure 13.1).
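The complement rule above (memberships in complementary sets sum to unity) is easy to demonstrate. The triangular membership function and its break-points below are assumptions chosen purely for illustration.

```python
# Illustration of the complement rule: membership in "cool" and "NOT cool"
# must sum to one. The triangular shape and break-points (10, 15, 20 deg C)
# are assumptions for illustration, not values from this manual.

def cool(temp_c, lo=10.0, peak=15.0, hi=20.0):
    """Triangular membership of a temperature in the fuzzy set 'cool'."""
    if temp_c <= lo or temp_c >= hi:
        return 0.0
    if temp_c <= peak:
        return (temp_c - lo) / (peak - lo)
    return (hi - temp_c) / (hi - peak)

def not_cool(temp_c):
    """Complement set: by the rule above, cool + not_cool = 1."""
    return 1.0 - cool(temp_c)
```

With these break-points, 11.5°C is 30% cool, and therefore 70% NOT cool, matching the kind of partial membership described in the text.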
13.5.1 Fuzzy logic: a conundrum
(Thanks to Bertrand Russell)
This section serves to illustrate the Fuzzy Logic world. Read it with your full attention, as it illustrates the difference between half empty and half full! It is a Greek paradox at the center of modern set theory and logic.
A Cretan asserts that all Cretans lie.
So, is he lying?
If he lies, then he tells the truth and does not lie.
If he does not lie then he tells the truth and so he lies.
Both cases lead to a contradiction because the statement is both true and false.
The same paradox exists in set theory.
The set of all sets is a set, so it is a member of itself.
Yet the set of all apples is not a member of itself because its members are apples and not sets.
The underlying contradiction is then:
Is the set of all sets that are not members of themselves a member of itself?
If it is, it isn’t; if it isn’t, then it is.
Classic logic surrenders here, but Fuzzy logic says the answer is half true and half false: a 50-50 divide. Fifty percent of the Cretan’s statements are true and fifty percent are false; he lies half the time and tells the truth the other half. When membership is less than total, a bivalent system might simplify this by rounding down to 0% or up to 100%. BUT 50 percent neither rounds up nor down.
13.5.2 An example to illustrate fuzzy rules
Fuzzy logic is based on rules of the form “If ... Then” that convert inputs into outputs - one fuzzy set into another.
For example, the controller of a car’s air-conditioner might include rules such as:
If the temperature is cool, then set the motor on slow.
If the temperature is just right, then set the speed to medium.
The temperatures (cool and just right) and the motor speeds (slow and medium) name fuzzy sets rather than specific values.
13.5.3 Plotting and creating a fuzzy patch
If we now plot the inputs (temperature) along one axis of a graph and the outputs (motor speed) along a second axis, the product of these fuzzy sets forms a fuzzy patch: an area that represents the set of all associations that the rule forms between those inputs and outputs. The size of this patch reflects the magnitude of the rule’s vagueness or uncertainty.
However, if “cool” is precisely 21.5°C, the fuzzy set collapses to a “spike”. If both the “slow” and “cool” sets are spikes, the rule patch is a point. This would be the case if 21.5°C required a speed of exactly 650 rpm from the motor - a logical result to this problem.
13.5.4 The use of fuzzy patches
A fuzzy system must have a set of overlapping patches that relate the full range of inputs to outputs. It can be seen that enough small fuzzy patches can cover a graph of any function or input/output relationship. It is also possible to pick in advance the maximum error of the approximation and be sure there is a finite number of fuzzy rules that achieves it.
A fuzzy system “reasons, or infers”, based on its rule patches.
Two or more rules convert any incoming number into some result because the patches overlap. When data activates the rules, overlapping patches react in parallel - but only to some degree.
13.6 Fuzzy logic example using five rules and patches
As an example of fuzzy logic, we will look at an air conditioner that relies on five rules, and therefore five patches, to relate temperature to motor speed. Figure 13.2 illustrates this application.
13.6.1 Defining the rules
The temperature sets are COLD, COOL, JUST RIGHT, WARM and HOT; these cover all the possible fuzzy inputs.
The motor speed sets VERY SLOW, SLOW, MEDIUM, FAST and MAXIMUM describe all the fuzzy outputs.
A temperature of say 68°F, might be represented by these fuzzy sets and rules:
- 20% cool and therefore 80% NOT cool and
- 70% just right and 30% NOT just right.
At the same time the air is 0% cold, warm and hot.
The “if cool” and “if just right” rules would fire and invoke both the SLOW and MEDIUM motor speeds.
13.6.2 Acting on the rules
The two rules contribute proportionally to the final motor speed. Because the temperature was 20% cool, the curve describing the slow motor must shrink to 20% of its height. The medium curve must shrink to 70% for the same reason. Summing these two curves results in the final curve for the fuzzy set.
However, in its fuzzy form the result cannot be used by a binary system, so the final step in the process is defuzzification.
This is where the resultant fuzzy output curve is converted to a single numeric value. The normal way of achieving this is by computing the center of mass, or centroid, of the area under the curve. In our example this corresponds to a speed of 47 rpm.
Thus, beginning with a quantitative temperature input, the electronic controller can reason from fuzzy temperature and motor speed sets and arrive at an appropriate and precise speed output.
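The rule-scaling and centroid steps just described can be sketched numerically. This is an illustrative sketch only: the triangular SLOW and MEDIUM sets and their break-points are assumptions, so the crisp speed it produces is not the 47 rpm of the worked example; only the method (scale by rule strength, sum, take the centroid) is the same.

```python
# Sketch of defuzzification by centroid: the SLOW and MEDIUM speed sets
# are scaled by their rule strengths (20% and 70%), summed, and the
# centroid of the resulting curve gives one crisp motor speed.
# The triangular shapes and break-points are assumptions for illustration.

def triangle(x, lo, peak, hi):
    """Triangular membership function."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def defuzzify(rules, x_min=0.0, x_max=100.0, n=1000):
    """Centroid (center of mass) of the sum of scaled membership curves."""
    num = den = 0.0
    for i in range(n + 1):
        x = x_min + (x_max - x_min) * i / n
        y = sum(strength * triangle(x, *shape) for strength, shape in rules)
        num += x * y
        den += y
    return num / den

rules = [(0.2, (0.0, 20.0, 40.0)),    # "slow" set scaled to 20% height
         (0.7, (30.0, 50.0, 70.0))]   # "medium" set scaled to 70% height
speed = defuzzify(rules)              # a single crisp motor speed
```

The result lands between the SLOW and MEDIUM peaks, pulled toward MEDIUM because that rule fired more strongly - exactly the proportional contribution described above.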
13.7 The Achilles heel of fuzzy logic
The weakness of fuzzy logic is its rules. These are, in the majority of fuzzy applications, set by engineers who are experts in the related application. This leads to a lengthy process of “tuning” these rules and the fuzzy sets. To automate this process, many companies are building and using adaptive fuzzy systems that use neural networks or other statistical tools to refine or even form those initial rules.
13.8 Neural networks
Neural networks are collections of “neurons” and “synapses” that change their values in response to inputs from surrounding neurons and synapses. The neural net acts like a computer in that it maps inputs to outputs. The neurons and synapses may be silicon components or software equations that simulate their behavior. A neuron sums all incoming signals from other neurons and then emits its own response in the form of a number. Signals travel across the synapses, which have numerical values that weight the flow of neuronic values.
When new input data “fires” a network of neurons, the synaptic values can change slightly. A neural net “learns” when it changes the value of its synapses.
13.8.1 The learning process
Figure 13.3 illustrates a typical neural network and its synaptic connections.
Each neuron, as illustrated, receives a number of inputs Xi, which are assigned weights Wi by the interconnecting synapses. From the weighted total input, the processing element computes a single output value Y.
Figure 13.4 shows what occurs inside each neuron when it is activated or processed.
Various signals (the neuron inputs Xi) are received from other neurons via synapses.
Neuron input calculation
A Weighted sum of these input values is calculated.
Neuron internal function transform
The sum of the input calculation is transformed by an internal function, which is normally, but not always, fixed for a neuron at the time the network is constructed.
The transformed results (outputs) are sent individually on to other neurons via the interconnecting synapses.
13.8.2 Neuron actions in “learning”
Learning implies that the neuron changes its input-to-output behavior in response to the environment in which it exists. However, the neuron’s transform function is usually fixed, so the only way the input-to-output transform can be changed is by changing the weights applied to the INPUTS.
So “learning” is achieved by changing the weights on the inputs, and the internal model of the network is embodied in the set of all these weights.
How are these “weights” changed?
One of the most widely used methods is called “Back-Propagation Networking”, which is common in chemical engineering applications.
13.9 Neural back-propagation networking
These networks always consist of three neuron layers: an input, a middle and an output layer. The construction is such that each neuron in one layer is connected to every neuron in the next layer (Figure 13.3). The number of middle-layer neurons varies, but has to be selected with care; too many result in unmanageable patterns, and too few will require an excessive number of iterations before an acceptable output is obtained.
13.9.1 Forward output flow (neuron initialization)
The initial pattern of neuron weights is randomized and presented to the input layer, which in turn passes it on to the middle layer. Each neuron computes its output signal as follows.
The weighted sum is determined by multiplying each input signal by the random weight value on the synaptic interconnection:
This weighted sum is transformed by a function f(X), called the Activation Function of the neuron, which determines the activity generated in the neuron as a result of an input signal of a particular size.
13.9.2 Neural sigmoidal functions
For back-propagation networks, and for most chemical engineering applications, the function described in Section 13.9 is a sigmoidal function. This function, as shown in Figure 13.5, is:
- S-shaped
- Monotonically increasing
- Asymptotically approaching fixed values as the input approaches ±∞
Generally, the upper limit of a sigmoid is set to +1 and the lower limit to 0 or -1.
The steepness of the curve and even the exact function used to compute it is generally less important than the general “S” shape.
The following sigmoidal curve, expressed as a function of the weighted input to the neuron, is widely used.
Where T is a simple threshold and X is the input. This transformed input signal becomes the total activation of the middle-layer neurons and is used as their output, which in turn becomes the input to the output neuron layer. There, a similar action takes place, using the sigmoidal function to produce the final output value from the neuron.
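As an illustrative sketch only (the function names, input values and weights below are hypothetical, not taken from the text), the weighted sum and sigmoid activation of a single neuron, f(X) = 1 / (1 + e^-(X - T)), can be coded as:

```python
import math

def sigmoid(x, threshold=0.0):
    # S-shaped activation: tends to 0 as x -> -infinity and 1 as x -> +infinity
    return 1.0 / (1.0 + math.exp(-(x - threshold)))

def neuron_output(inputs, weights, threshold=0.0):
    # Weighted sum of the inputs Xi, each scaled by its synapse weight Wi,
    # transformed by the sigmoid activation function
    total = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(total, threshold)

# Hypothetical inputs and randomized synapse weights
y = neuron_output([0.5, 0.2, 0.8], [0.4, -0.1, 0.7])
```

As noted above, the exact steepness and threshold matter less than the general “S” shape.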
13.9.3 Backward error propagation (the delta rule)
The result of the output is compared with the desired output. The difference (or Error) becomes a bias value by which we modify the weights in the neuron connections. It usually takes several iterations to match the target output required.
The delta method is one of the most common methods used in backward propagation.
The delta rule iteratively minimizes the average squared error between the outputs of each neuron and the target value. The error gradient is then determined for the hidden (middle) layers by calculating the weighted error of that layer. Thus:
- The errors are propagated back one layer;
- The same procedure is applied recursively until the input layer is reached;
- This is backward error flow.
The calculated error gradients are then used to update the network weights. A momentum term can be introduced into this procedure so that previous weight changes influence the present weight change in weight space. This usually helps to improve convergence.
Thus back propagation is a gradient descent algorithm that tries to minimize the average squared error of the network by moving down the gradient of the error curve. In a simple system the error curve is bowl shaped (paraboloid) and the network eventually gets to the bottom of the bowl.
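A minimal sketch of this procedure is shown below, assuming a tiny two-input, two-hidden-neuron, one-output network; the class and parameter names are invented for illustration, and the learning rate and momentum values are arbitrary:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """Illustrative 2-input, 2-hidden, 1-output back-propagation network."""
    def __init__(self, n_in=2, n_hid=2):
        # Initial weights are randomized, as described in Section 13.9.1
        self.w_hid = [[random.uniform(-1, 1) for _ in range(n_in)]
                      for _ in range(n_hid)]
        self.w_out = [random.uniform(-1, 1) for _ in range(n_hid)]
        # Previous weight changes, kept for the momentum term
        self.prev_out = [0.0] * n_hid
        self.prev_hid = [[0.0] * n_in for _ in range(n_hid)]

    def forward(self, x):
        self.hid = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                    for row in self.w_hid]
        self.out = sigmoid(sum(w * h for w, h in zip(self.w_out, self.hid)))
        return self.out

    def train_step(self, x, target, lr=0.2, momentum=0.5):
        y = self.forward(x)
        # Delta rule: error gradient at the output neuron
        d_out = (target - y) * y * (1 - y)
        # Propagate the error back one layer (weighted error of that layer)
        d_hid = [d_out * w * h * (1 - h)
                 for w, h in zip(self.w_out, self.hid)]
        # Update weights; the momentum term adds part of the previous change
        for j, h in enumerate(self.hid):
            dw = lr * d_out * h + momentum * self.prev_out[j]
            self.w_out[j] += dw
            self.prev_out[j] = dw
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x):
                dw = lr * dj * xi + momentum * self.prev_hid[j][i]
                self.w_hid[j][i] += dw
                self.prev_hid[j][i] = dw
        return (target - y) ** 2

# Repeated gradient-descent iterations move the net down the error curve
net = TinyNet()
errors = [net.train_step([1.0, 0.0], 0.8) for _ in range(300)]
```

The squared error shrinks from step to step, which is the “moving down the gradient of the bowl-shaped error curve” described above.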
13.10 Training a neural network
There are, in principle, seven standard techniques used to “train” a network toward a zero error value (resting at the bottom of the bowl-shaped error curve).
13.10.1 Re-initialize the weights
If difficulty is found in trying to find a global minimum, the process can have a new set of random weights applied to its input, and the learning process repeated.
13.10.2 Add step changes to the existing weights
It is possible for the network to “oscillate” around an error value because the calculated weight changes no longer improve (decrease) the error term. All that is normally needed is a “slight push” to the weight values. This can be achieved by randomly moving the weights to new positions, not too far from their current values.
13.10.3 Avoiding overparameterization
If there are too many neurons in the middle, or hidden, layer then overparameterization occurs, which in turn gives poor predictions. Reducing the number of neurons in this layer cures the problem. There is no rule of thumb here, and the number of neurons needed is best determined experimentally.
13.10.4 Changing the momentum term
If the system is a software network this is the easiest adjustment to make. The momentum term α is implemented by adding a part of the last weight change to the new one; changing this value, again best done experimentally, can assist with a cure.
13.10.5 Noise and repeated data
Avoid repeated or noise-free data. Repeated or noise-free inputs make the network remember the patterns rather than generalize their features. If the network never sees the same input values twice, it is prevented from memorizing the pattern. Introducing noise can assist in preventing this from occurring.
13.10.6 Changing the learning tolerance
The training of a network ceases once the error value for all cases is equal to or less than the learning tolerance. If this tolerance is too small, the learning process never ceases. Experiment with the tolerance level until a satisfactory point is reached where the weights cease changing their values in any significant way.
13.10.7 Increasing the middle (hidden) layer value
This is the inverse of the problem described in 13.10.3, “Avoiding overparameterization”, and is used if all else fails. In other words, we have too few neurons in the middle layer, whereas in 13.10.3 we had too many. In general, an increase of more than 10% shows improvement.
13.11 Conclusions, and then the next step
Fuzzy logic has been used since the early 1980s and has been very successful in many applications. Hitachi, for example, used it to control subway trains, and it proved so accurate and reliable that its performance exceeds what a trained (no pun intended) driver can achieve. When its principles are used with neural, or self-learning, networks we have a very formidable set of tools at our disposal. Applied to process control systems, this promises a foreseeable future of control systems that are error free and can cope with all the variances, including the operator, to such an extent that we could finally have a “perfect” system.
Meanwhile, we have two more issues to look at that may well be considered as the intermediate step from the past and current technology to the ultimate, self-learning and totally accurate control system.
These two issues, discussed and described in Chapter 14, conclude this manual on Practical Process Control. They are Statistical Process Control (SPC) and Self-Tuning Controllers.
SPC is used to see where the process is in error, a problem that can be addressed by neural networks; self-tuning controllers present a problem that can be solved by fuzzy logic.
Self-Tuning Intelligent Control and Statistical Process Control
14.1 Objectives
As a result of studying this chapter, the student should be able to:
- Describe the theory and operation of a Self-Tuning Controller;
- Describe the concept of Statistical Process Control (SPC) and its use in analyzing and indicating the standards of performance in control systems.
This chapter introduces the basic concepts of self-tuning or adaptive controllers, intelligent controllers, and provides an overview of statistical process control (SPC).
14.2 Self-tuning controllers
Self- or auto-tuning controllers are capable of automatically readjusting the process controller's tuning parameters. They first appeared on the market in the early 1970s and have evolved from early optimum-regulating and control types through to current types that, with the advent of high-speed processors, rely on adaptive control algorithms.
The main elements of a self-tuning system are illustrated in Figure 14.1, these being:
A System Identifier - This model estimates the parameters of the process.
A Controller Synthesizer - This model has to synthesize, or calculate the controller parameters specified by the control object functions.
A Controller Implementation Block - This is the controller whose parameters (gain KC, TINT, TDER, etc.) are changed and modified at periodic intervals by the controller synthesizer.
14.2.1 The system identifier
The system identifier determines the response of the system by comparing the PV action resulting from the MV change, using algorithms based on recursive estimation. This is commonly achieved by the use of fuzzy logic that extracts key dynamic response features from the transient excursions in the system dynamics. These excursions may be deliberately invoked by the controller, but are more usually the normal ones that occur at startup or are caused by process disturbances.
14.2.2 The controller synthesizer
The desired values for the PI and D Algorithms used by the controller are determined by this block. The calculations can vary from simple to highly complex, depending on the rules used.
14.2.3 Self tuning operation
This technique requires a starting point derived from knowledge of known and operational plants of a similar nature. This method is affected by the relationship between plant and controller parameters. Since the plant parameters are unknown they are obtained by the use of recursive parameter identification algorithms. The control parameters are then obtained from estimation of the plant parameters.
Referring to Figure 14.1, the controller is called “self-tuning” since it has the ability to tune its own parameters. It consists of two loops: an inner one that is the conventional control loop, albeit with varying parameters, and an outer one consisting of an identifier and controller synthesizer which adjust the process controller's parameters.
14.3 Gain scheduling controller
Gain scheduling control relies on the fact that auxiliary or alternate process variables can be found that correlate well with the main process variable. By using these auxiliary variables it is possible to compensate for process variations by changing the parameter settings of the controller as functions of the auxiliary variables. Figure 14.2 illustrates this concept.
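A minimal sketch of the table-lookup idea behind gain scheduling follows; the auxiliary-variable break points and PID settings below are entirely hypothetical:

```python
def scheduled_gain(aux_value, schedule):
    # Pick controller parameters from a table keyed on ranges of an
    # auxiliary process variable (an illustrative sketch only)
    for upper_bound, params in schedule:
        if aux_value <= upper_bound:
            return params
    return schedule[-1][1]  # above the last break point: use the last entry

# Hypothetical schedule: (upper bound of aux variable in %, (Kc, Ti, Td))
schedule = [
    (25.0, (2.0, 10.0, 0.0)),
    (75.0, (1.2, 8.0, 0.5)),
    (100.0, (0.8, 6.0, 1.0)),
]
```

Because the lookup is just a table, the parameters can be switched very quickly as plant dynamics change, which is the main advantage noted below.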
14.3.1 Gain scheduling advantages
The main advantage is that the parameters can be changed quite quickly in response to changes in plant dynamics. It is convenient if the plant dynamics are simple and well known.
14.3.2 Gain scheduling disadvantages
The disadvantage is that gain scheduling is an OPEN LOOP adaptation and has no real learning or intelligence. The design effort can also be very large. Selection of the auxiliary point of measurement has to be done with a great deal of knowledge and thought regarding the process operation.
14.4 Implementation requirements for self tuning controllers
Self-tuning controllers that deliberately introduce known disturbances into a system, in order to measure the effect from a known cause, are not popular. Preference is given to self-tuning controllers that sit in the background, measuring and evaluating what the controller is doing. By comparing this with the effect it is having on the process, and making decisions based on these measured parameters, the controller's operating variables are updated.
To achieve this, the up-dating algorithms are usually kept dormant until the error term generated by the system controller becomes unacceptably high (> 1%) at which point the correcting algorithms can be unleashed on the control system, and corrective action can commence.
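The dormant-until-needed behavior can be sketched as follows; the supervisor function and its callback are hypothetical illustrations of the idea, not vendor or PC-ControLAB code:

```python
def self_tune_supervisor(errors_percent, retune, threshold=1.0):
    # The updating algorithm stays dormant until the controller error
    # exceeds the threshold (1% here); only then is re-tuning triggered.
    # `retune` is a hypothetical callback that would adjust Kc, Tint, Tder.
    triggered_at = []
    for step, error in enumerate(errors_percent):
        if abs(error) > threshold:
            triggered_at.append(step)
            retune()
    return triggered_at

# Hypothetical error trace (% of span): only the third sample exceeds 1%
events = self_tune_supervisor([0.2, -0.5, 1.5, 0.3], retune=lambda: None)
```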
Once the error has developed, the self-tuning algorithm can check the response of the controller in terms of the period of oscillation, damping and overshoot values. Figure 14.3 illustrates these parameters.
14.5 Statistical process control
The ultimate objective of a process control system is to keep the product, the final result produced by the process, always within the pre-defined limits set by the product's description.
There is an almost infinite number of methods and systematic approaches available in the real engineering world to help achieve this. However, although all these tools exist, it is necessary to have procedures that analyze the process's performance, compare this with the quality of the product, and produce results that are understandable by all personnel involved in the management of the process, and of course also accurate and meaningful. A few terms and concepts need to be understood to enable a basic and useable concept of control quality to be managed. Once these have been understood, the world of Statistical Process Control, or SPC, becomes apparent, meaningful and useable as a powerful tool in keeping a process control system under economical, operationally practical and acceptable control.
14.5.1 Uniform product
Only by understanding the process with all of its variations and quirks, product disturbances and hiccups, and getting to know its individual “personality”, can we hope to achieve a state of virtually uniform product. No two “identical” plants or systems will ever produce identical product: similar, yes, but never identical. This is where SPC helps, in identifying the differences between “identical” systems.
Dr. Shewhart, working at the Bell Laboratories in the early 1920s, compared variations in nature with those in items produced by process systems. He found inconsistencies, and formulated the following statement:
While every process displays variation, some processes display Controlled Variation, others display Uncontrolled Variation.
Controlled Variation is characterized by a stable and consistent pattern of variation over time, attributable to “chance” causes. Consider a product with a measurable dimension or characteristic (mechanical or chemical), with samples taken in the course of a production run. The results of inspection of these products show variations caused by machines, materials, operators and methods all interacting. These variations are consistent over time because they are caused by many contributing factors. These chance causes produce “Controlled Variation”.
Uncontrolled Variation is caused by assignable causes that produce an inconsistent pattern of variation over time. In addition to the changes made by chance causes, there exist special factors that can have a large impact on product measurement; these can be caused by maladjusted machines, differences in materials, a change in methods and even changes in the environment. These assignable factors can be large enough to create marked changes in known and understood patterns of variation.
14.6 Two ways to improve a production process
The two methods described here to improve a process are fundamentally different: one requires a change to a consistent process, the other the removal of the causes of inconsistency.
14.6.1 Controlled variations problem
When a process displays Controlled Variation it should be considered stable and consistent. The variations are caused by factors inherent in the actual process. To reduce these variations it will be necessary to Change the Process.
14.6.2 Uncontrolled variations problem
This means the process is varying from time-to-time. It is both inconsistent and unstable. The solution is to identify and remove the cause(s) of the problem(s).
14.7 Obtaining the information required for SPC
There is fundamentally only one way to record the real process's performance, and that is by a STRIP CHART showing the process variable signal, and possibly the controller output signals; that is, the commands sent into the process, and the process's reply to these commands, in both magnitude and time.
14.7.1 Statistical inference
The average of 2, 4, 6 and 8 is 5, this being the balance point for this sample of data values. The sample range for this data is 6, that is, how far apart 2 and 8 are (the maximum and minimum). However, statistical inference relies on the fact that a conceptual population exists, this being needed to enable us to rationalize any attempt at prediction, and that all samples taken were from this population, this being needed to believe in the estimates based on the sample statistics.
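These two sample statistics can be computed directly; the short sketch below simply restates the arithmetic of this example:

```python
def average(values):
    # Balance point of the sample
    return sum(values) / len(values)

def sample_range(values):
    # Distance between the maximum and minimum values
    return max(values) - min(values)

data = [2, 4, 6, 8]
# average(data) gives the balance point; sample_range(data) the spread
```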
For the sake of simplicity and clarity we will consider that all samples are objective and represent one conceptual population.
If this is not true then the results may well be inconsistent and the statistics will be erratic. In fact, if this happens, the process can be considered schizophrenic; the process is displaying uncontrolled variation. The resultant statistics simply could not be generalized.
14.7.2 Using subgroups to monitor the process
Each sample collected at a single point in time is a subgroup, each one being treated as a separate sample. Figure 14.4 shows four subgroups selected from a stable process, one subgroup per hour. The bell-shaped profiles represent the total output of the process each hour, the dots representing the measurements taken in each group.
14.7.3 Recording averages and ranges for a stable process
The next step is to record the average and ranges onto a time-scaled strip chart, this being shown below in Figure 14.5. As long as these plots move around within the defined upper and lower limits displayed also on the chart, we can consider that all subgroups were derived from the same conceptual population.
Now consider the same example, but with the process itself changing from hour to hour. We let the bell-shaped curves in Figure 14.6 represent the process's output each hour.
14.7.4 Recording averages and ranges for an unstable process
At 09:00 the process average increased, moving the subgroup average above the upper limit (see Figure 14.6).
At 10:00 the process average dropped dramatically, and the subgroup moved below the lower limit.
During these first three hours, 08:00 to 11:00, the process dispersion did NOT change and the subgroup ranges all remained within the control limits ... BUT at 11:00, the process dispersion increased and the process average moved back to its initial value. The subgroup obtained during this hour has a RANGE that falls above the upper control limit, and an AVERAGE that falls within the control limits.
It can be seen that with the use of periodic subgroups, two additional variables have been introduced: the subgroup average and the subgroup range (see Figure 14.7).
These are the two variables used to monitor the process.
The following example illustrates the behavior of these two variables and how they relate to the measurements when the process is stable. Refer to Table 14.1 and Figure 14.8.
Ref: Donald J. Wheeler and David S. Chambers, Understanding Statistical Process Control, SPC Press Inc., 1990, p. 68.
14.7.5 Example of a stable process
The histograms in Figure 14.8 are all to scale on both axes. However all three represent totally different profiles and dispersions. It is therefore essential to distinguish between these variables.
14.7.6 Distributions of measurements, averages and ranges
While the measurements, averages and ranges have different distributions, these are related in certain ways when they are derived from a stable process. Figure 14.9 shows these relationships more clearly.
Notations related to Figure 14.9:
Let AVER(X) denote the average of the distribution of the X Values.
Let SD(X) denote the standard deviation of the distribution of X.
In a similar manner, AVER(X̄) and SD(X̄) denote the average and standard deviation of the distribution of the subgroup averages, while AVER(R) and SD(R) denote those of the subgroup ranges.
With this notation the relationships between the averages and standard deviations can be expressed as:
AVER(X̄) = AVER(X)
SD(X̄) = SD(X) / √n
AVER(R) = d2 × SD(X)
SD(R) = d3 × SD(X)
Constants d2 and d3 are scaling factors that depend on the subgroup size n. These factors are shown in Table 14.2 and are based on the normality of X.
14.8 Calculating control limits
From the four relationships that have been shown above it is possible to obtain control limits for the subgroup averages and ranges.
There are two principal methods of calculating control limits: one is the structural approach and the other is by formulae. Both methods are illustrated below.
When first obtaining control limits it is customary to collect twenty to thirty sub-groups before calculating the limits. By using many subgroups the impact of an extreme value is minimized.
Using the example of subgroup averages and range values from a controlled process illustrated in Section 14.7.5, the next two sections serve to illustrate the structural and formulated approaches to calculating the process control limits.
Using the data shown in Table 14.3 the control limits as shown in Figure 14.9 will be found using both structural and formulated approaches.
14.8.1 The structural approach
Since the subgroup ranges are non-negative, a negative lower limit has no meaning. In this case the lower control limit = 0.
14.8.2 The formulated approach
The Grand Average is 4.763.
The Average Range is 4.05.
The Sub-Group Size is 4 (see Figure 14.10).
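Using the Grand Average, Average Range and subgroup size above, the formulated approach reduces to a few lines. The chart factors A2, D3 and D4 for n = 4 are taken from standard Shewhart control chart tables (assumed here, since Table 14.2 is not reproduced):

```python
# Shewhart control chart factors for subgroup size n = 4 (standard tables)
A2, D3, D4 = 0.729, 0.0, 2.282

grand_average = 4.763
average_range = 4.05

# Control limits for the subgroup averages (X-bar chart)
ucl_x = grand_average + A2 * average_range
lcl_x = grand_average - A2 * average_range

# Control limits for the subgroup ranges (R chart)
ucl_r = D4 * average_range
lcl_r = D3 * average_range  # zero, since ranges cannot be negative
```

With these data the X-bar chart limits come out near 7.72 and 1.81, and the R chart upper limit near 9.24, which can be checked against Figure 14.9.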
14.9 The logic behind control charts
In conclusion, Figure 14.11 illustrates the logic behind control charts.
Some Laplace Transform Pairs
Laplace transforms make it easy to represent difficult dynamic systems. A mathematical expression in the frequency domain may represent either a transfer function F(s) or the transform f(s) of a time function f(t). A transfer function represents the properties (or the behavior) of a mathematical block (or calculation). A time function represents a value (or signal) over time.
Figure A.1 and Table A.1 show some Laplace transform pairs useful for control systems analysis. The output signal f(s)output of a block is calculated as follows:
An explanation of Laplace transform theorems is beyond the scope of this publication and not intended.1 Two examples are given in Figures A.2 and A.3.
1For further studies read “Feedback and Control Systems” by DiStefano III, Stubberud and Williams, published by McGraw-Hill Book Company as part of Schaum’s Outline Series.
The Integral Block and its input, a Step Function, is a good example to show that the same function 1/s in the frequency domain may represent an Integral Calculation (Block or transfer function) or a Step Function (input signal).
A second order system is a close representation of the behavior of many industrial processes.
Block Diagram Transformation Theorems
Complicated block diagrams can be broken into several easily recognized blocks. The summary of transformation theorems is a useful tool for this. W, X, Y, Z represent signals f(s) in the frequency domain. P, P1 and P2 represent transfer functions F(s) (see Figure B.1-B.5).
Getting started with PC-ControLAB
To get started on the simulation exercises you will first need to install PC-ControLAB on your PC and ensure that you are familiar with the tools and controls it provides. The software will be provided by your instructor on an installation CD supplied by IDC Technologies for your temporary use for the duration of the workshop and for a limited follow up period thereafter by means of a short term license key.
All the procedures required to get started are set out in the PC-ControLAB Quick Start Guide, which is provided overleaf. This guide is also available as a PDF file in the CD package. The package includes a tool called “Builder” which is used to configure and modify simulations. This is very useful for inspecting the models provided and modifying them if you wish.
The software is supplied with a suite of data files for training exercises. These comprise tutorial and training exercises from Wade Associates and a separate set of files for the IDC Practical Exercises as listed in Appendix C.
C.2 Installing IDC exercise data files.
The IDC files are all contained in the folder to be supplied by IDC called “IDC PC Models”. To install these files for use in the exercises, proceed as follows:
- Start PC-ControLAB and select the command “PROCESS” then “SELECT MODEL”.
- The window opens to show a list of files.
- Copy the folder “IDC PC Models” and paste it into the files folder. The models will then be available by selecting the subfolder.
- In some exercises the instructions will call for a control strategy file (.stg); these files are included in the above folder.
C.3 Tutorial
We recommend that as soon as you have installed PC-ControLAB you work through the following four sheets of the “Quick-start Guide”, and then go to the “HELP” command and work through the sequence of tutorial exercises provided there before proceeding with the practical exercises in Appendix D. The Tutorial and Help files provide very useful descriptions of all the most commonly used configuration blocks for control systems.
Practical Exercises for IDC Process Control Workshop
Our practical exercises are based on the use of the process control training and simulation package PC-ControLAB supplied to IDC Technologies by Wade Associates Inc.
IDC Technologies acknowledges with thanks the major contribution provided to this training course by the facilities available in this package.
The following is a list of the practical sessions we will select dependent on the profile of the class.
|1||Exercise 1 - Flow control loop - basic example for open loop control|
|4||Exercise 2 - Flow control loop - basic example for closed loop control|
|5||Exercise 3 - Proportional (P) Control - Flow Control|
|5||Exercise 4 - Integral (I) Control - Flow Control|
|5||Exercise 5 - Proportional and Integral (PI) Control - Flow Control|
|5||Exercise 6 - Introduction to Derivative (D) Control|
|5 & 7||Exercise 7 - Practical Introduction into Stability Aspects|
|1 & 5||Exercise 8 - Further exercises in identifying process responses|
|8||Exercise 9 - Open Loop Method -Tuning Exercise|
|8||Exercise 10 - Closed Loop Method -Tuning Exercise|
|8||Exercise 11 - Exercise in improving ‘as found’ tuning (damped cycle)|
|9||Exercise 12 - Cascade Control|
|9||Exercise 13 – Ratio control|
|11||Exercise 14 - Combined Feedback and Feedforward Control|
|11||Exercise 15 - FF and FB Applied to boiler drum level control|
|12||Exercise 16 - Dead time Compensation in Feedback Control|
|14||Exercise 17 - Gain scheduling for control of non-linear process|
Each of the above practical exercises is described in the following pages in this appendix.
Before you start
To get started with the exercises you will first need to install PC-ControLAB and then load the exercise data files, all as described in Appendix C.
The following is a reference list of the data files required for the exercises.
|Chapter||Exercise No||PC Pracs Folder||File Name|
|1||1||EX-1||IDC Flow 1.mdl|
|4||2||EX-2||IDC Flow 1.mdl|
|5||3||EX-3||IDC Flow 2.mdl|
|5||4||EX-4||IDC Flow 4.mdl|
|5||5||EX-5||IDC Flow 4.mdl|
|5||6||EX-6||IDC Feed Htr.mdl|
|5 & 7||7||EX-7||IDC Feed Htr.mdl|
|1 & 5||8||EX-8||Wade: Folpt.mdl. etc|
|8||9||EX-9||IDC Feed Htr Ex-9.mdl|
|9||12||EX-12||IDC Feed Htr casc.mdl|
|11||14||EX-14||IDC FEEDFWD HTR.mdl|
|12||16||EX-16||IDC Ex 16.mdl|
The data files provide the process model for each exercise and in some cases strategy files are used to preload controller settings. Details of how to start up each exercise are contained in the instruction steps given in the practical exercise sheets starting overleaf.
Flow Control Loop - Basic Example for Open Loop Control
This exercise will familiarize the student with the basic concepts of open loop and closed loop control. It provides an opportunity to get a first feel for the open loop response of the process being controlled. A flow control loop as shown in Figure D1.1 below will serve as a practical example for this exercise. A flow control loop is generally not difficult to operate and illustrates the basic principles effectively.
Since this is a relatively simple exercise, it can be used for familiarization with the principles of operation of the simulation software, PC-ControLAB. Firstly, you must ensure that you have installed the software by following the instructions provided in the introduction. You should also have completed some of the first parts of the tutorial (under HELP) to ensure you are familiar with the basic controls and operations. Then proceed with the following steps.
Step 1: Start the program in MS Windows by running PC-ControLab
Step 2: Open the project file for IDC Flow 1.mdl which is a pre-configured flow control loop with first order lag and dead time.
- Click on Process | Select Model.
- Select the file location “PC-Pracs” and then subfolder “Ex-1” Highlight “IDC Flow 1.mdl” and press Open.
- Set the horizontal grid scale to “Seconds” (under View). NB: Most exercises in this workshop use the SECONDS scale. This has to be selected at the start of each exercise.
- Set the LOAD value to 65% on screen by pressing STEP INCR
- Set the controller mode to Manual
- Use the OUT key to set the OUT value = 30%
You should now have a display on screen that simply indicates the output value to the control valve is steady at 30% and the PV is steady at 105 m3/hr as per Figure D1.2 below
This display gives a general idea of the process and displays the major variables PV (process variable scaled 0 to 300 m3/hr) and OP (controller output signal to flow control valve scaled in range 0 to 100%). The SP (set point) is shown as an arrow on the right of the trend screen and as a red bar on the controller faceplate.
Note that the operating data with PV, SP and OP appears in a box on the left side whenever you press the “OUT” button. PV data appears in a similar box whenever you press the “PAUSE” button. The trend record screen must be set to a scale in seconds to suit the process model we are going to test.
Firstly, we will observe the general behavior of the process as will be seen on the trend display when the RUN control is pressed. In order to observe the process reaction, as a result of changes in the position of the control valve, keep the control mode in MANUAL with OP=30% and when PV is steady press PAUSE. Then, press OUT and change the value of the output to 60% and press OK. Now press the RUN button to see the response develop on the trend display.
Observe the Process Variable PV as it changes in value due to the change in OP. There is a basic first order lag response between the output to the valve and the change in flow measured as the PV. This is an idealized response that has been configured into the simulation using the “Builder” tool supplied with the simulation package. This lag represents the response lag of the valve opening as the actuator and positioner respond to follow the controller output signal as well as the lag in the flow sensing stage.
Two very important process parameters can already be extracted from this response, which is the basis of an “open loop test” of the process. These are:
1) The open loop gain of the process
2) The time constant of the first order lag
Find the open loop gain Kp by measuring:
Because there is a control valve involved you cannot be sure that the process gain will be the same at all flow rates through the system. Therefore the value of Kp should be tested at different flow rates.
Carry out the following gain test in 20% increments starting at OP=10%
1) Record PV(10) when OP = 10%
2) Increase OP to 30% and allow the PV to stabilize
3) Record PV (30) when OP = 30%
4) Calculate Kp from the above equation.
5) Repeat steps 1 to 4 for (30% to 50%) and (50% to 70%).
Remember to change the value of PV from the engineering scale of 300 m3/hr to the percentage of full-scale value.
You should find that the gain Kp is reasonably constant = 1.5 over the range of operation. Very often this is not the case of course, particularly at the low and high ends of the control valve opening but for the moment we have set up an approximately linear relationship between 0 and 300m3/hr.
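The gain calculation can be sketched as below. The step from OP = 30% to OP = 60% uses the starting PV of 105 m³/hr shown earlier in this exercise; the final PV of 240 m³/hr is an assumed illustrative reading consistent with Kp = 1.5:

```python
def process_gain(pv1, pv2, op1, op2, pv_full_scale=300.0):
    # Convert the PV change from engineering units (m3/hr) to % of full
    # scale, then divide by the OP change (already in %)
    delta_pv_percent = (pv2 - pv1) / pv_full_scale * 100.0
    return delta_pv_percent / (op2 - op1)

# OP stepped from 30% to 60%; PV settles from 105 to an assumed 240 m3/hr
kp = process_gain(pv1=105.0, pv2=240.0, op1=30.0, op2=60.0)
```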
Now find the time constant of the first order lag in the process (process includes the valve). You can do this by repeating the step response from 30% to 60% OP in Manual and observing the time trace.
The lag is found by measuring the time from the initial movement of the PV until the PV reaches 63.2% of the change to the new steady state value. Once you have most of the curve on screen press the PAUSE button and use the cursor on the trend display to read off the PV and time values. Record the original PV as PV1 and the new steady state value of PV as PV2. Then calculate the target PV point for the time constant as:
PVtc = PV1 + 63.2% x (PV2 – PV1)
In the standard example you should be able to show that the time constant is 5.0 seconds as shown in Figure D1.4. Note that the response curve reaches approximately 98% of its final value after 4 time constants.
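The 63.2% rule comes directly from the first order lag equation PV(t) = PV1 + (PV2 – PV1)(1 – e^(–t/tau)). A short sketch confirming the rule, using tau = 5 s as found in the exercise (PV1 and PV2 are hypothetical values):

```python
import math

# First-order lag step response: PV(t) = PV1 + (PV2 - PV1)*(1 - exp(-t/tau)).
# Confirms the 63.2% rule and the value reached after 4 time constants.
# tau = 5 s as in the exercise; PV1 and PV2 are hypothetical readings.

def pv_first_order(t, pv1, pv2, tau):
    return pv1 + (pv2 - pv1) * (1.0 - math.exp(-t / tau))

tau, pv1, pv2 = 5.0, 45.0, 90.0

frac1 = (pv_first_order(tau, pv1, pv2, tau) - pv1) / (pv2 - pv1)
frac4 = (pv_first_order(4 * tau, pv1, pv2, tau) - pv1) / (pv2 - pv1)
print(round(100 * frac1, 1))  # -> 63.2 (% of the change after one tau)
print(round(100 * frac4, 1))  # -> 98.2 (% of the change after 4 tau)
```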
The PC-Control Lab software allows you to inspect and change most parameters of the process being simulated. You should now experiment with changing and testing the gain and time constant values in the model by clicking on the command “Process—Change Parameters”. Repeat the above tests to verify your chosen values.
Inserting dead time.
Caution: Dead time parameters in PC-Lab are stated in minutes. For example, 0.1 minutes = 6 seconds. Do not raise the dead time above 1.0 minutes when operating with the display grid set to SECONDS, as this may cause corruption of the simulation. If this occurs, simply reload PC-Lab.
Most processes exhibit some degree of dead time or “transport delay” in which the change in output to the process (more correctly the manipulated variable MV) has no effect at all on the PV until the change has had time to travel through a time period. (Opening the floodgates on a dam will not affect downstream residents until the surge has traveled to their neighborhood.) In this simulation you may insert and observe the effect of dead time on a step change by increasing the parameter value from 0.01 to 1.0 or even higher. Figure D1.5 shows an example when dead time has been set to 6 seconds (0.1 minutes). Dead time makes any control loop much more difficult to stabilize, so it should be restored to the minimum value of 0.01 before proceeding with the next stage of this practical.
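A minimal simulation of a dead time in series with a first order lag illustrates the point. The values (gain 1.5, time constant 5 s, dead time 6 s) follow the example above; the Euler integration scheme and step size are assumptions:

```python
# Euler simulation of a first-order lag (gain 1.5, tau = 5 s) behind a
# 6 s dead time, for a unit OP step at t = 0. Integration scheme and step
# size are assumptions; the PV does not move until the dead time passes.

def simulate(kp=1.5, tau=5.0, dead_time=6.0, dt=0.1, t_end=30.0):
    delay_steps = int(dead_time / dt)
    op_history = [0.0] * delay_steps  # OP before the step was 0
    pv, trace = 0.0, []
    for i in range(int(t_end / dt)):
        op_history.append(1.0)           # unit OP step held from t = 0
        delayed_op = op_history[i]       # OP as it was dead_time ago
        pv += dt / tau * (kp * delayed_op - pv)   # first-order lag
        trace.append((i * dt, pv))
    return trace

trace = simulate()
print(all(pv == 0.0 for t, pv in trace if t < 5.9))  # -> True (no movement yet)
print(round(trace[-1][1], 2))  # final PV approaching the 1.5 steady state
```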
D1.4 Open loop control
You can observe that without the advantage of automatic feedback from the PV your ability to drive the PV to a target value depends on knowing the gain of the process and zero offset or knowing what loading is placed on it.
Loading is the general term used in the simulation for an external disturbance on the process or for a measurement offset. The load on this particular process is the system pressure drop across the valve and flow meter. The normal pressure drop applies when the load parameter, displayed on the screen as a grey line, is at 50%. To observe the effect of a reduction in system pressure:
1) Set the OP to 30% and allow the PV to stabilize, then just press the StepDecr button.
2) Step the pressure down again until the grey line is on 40%. Observe that the flow falls each time, as you might expect. The change in system pressure drop has changed the gain of the process.
Now adjust the OP value to deliver PV = 180 m3/hr. Now increase the pressure drop by pressing the StepIncr button and observe the change in PV. Clearly, unless you know the exact amount of the disturbance you cannot easily restore the PV to 180 m3/hr unless you use feedback from the PV.
If the disturbance is caused by random noise in the process (in this case it could be a supply pressure deviation or a downstream pressure deviation) or in the measurement you will also not be able to maintain the PV at the target value by open loop control. For example: Insert noise by pressing the AutoLoad button and observe the random movement of the PV.
Conclusion for open loop control
Open loop control can only be used when the process characteristics are unchanging and undisturbed by external factors or if these factors can be accurately compensated out. In all other cases predicting the required output to achieve the desired PV will give poor results.
Flow Control Loop - Basic Example for Closed Loop Control
This exercise will familiarize the student with the basic concepts of closed loop control. It provides an opportunity to get a first feel for closed loop control and the open loop response of the process being controlled. A flow control loop as shown in Figure D2.1 will serve as a practical example for this exercise. A flow control loop is generally not difficult to operate and illustrates the basic principles effectively.
This exercise follows directly from Exercise 1 in which you will have seen how to install and operate the PC ControLab simulation tool. Also in Exercise 1 you will have loaded the model IDC Flow1.mdl and tested its open loop response for gain and time constant. Make sure that you have completed these steps before proceeding with this exercise.
D2.3 Operation in Auto mode
Prepare the model IDC Flow 1 for closed loop control by loading it from file folder Ex-2. Select the file location “PC-Pracs” and then subfolder “Ex-2”. Highlight “IDC Flow 1.mdl” and press Open. Set the grid scale to seconds.
For some initial practice with the automatic control mode some suitable loop tuning values for controller gain and integral (reset) action must be loaded. Press the “Control” key on the toolbar and select the dropdown for “Retrieve strategy and tuning”. Select the file location Ex-2, select “IDC Flow Auto 1.stg” and open it. Now press the TUNE key and verify that you have Gain = 1.5 and Reset = 0.2 minutes per repeat.
The normal practice in setting a control loop to automatic is to first make sure that the set point (SP) is made equal to the present PV to avoid any disturbance as the automatic control action is started. Set point tracking has already been enabled in this strategy model so if you set the output to a starting value the set point will self-adjust to match the PV. Execute the following steps.
Step 1: Keep the controller in MAN and the display in RUN. Set OP = 20%.
Step 2: Note that the PV and SP settle at 45 m3/hr.
Step 3: Change the controller from MANUAL to AUTOMATIC mode.
Step 4: Change the set point to 210 m3/hr by using the SP button and typing in the new value.
Step 5: As soon as you press OK you will see the response to this step change in the set point. You should obtain the response shown in Figure D2.2.
As soon as the set point is changed the controller responds by driving the OUTPUT to reduce the error between SP and PV and the PV is steadily driven to the new Set Point. The controller tuning has been set up for proportional plus integral action control, hence you can see in Figure D2.2 that the proportional action brings the PV towards the SP but does not meet it until the integral action has increased the output over time until the error is zero.
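The behaviour described above can be sketched with a simple discrete simulation, assuming the idealized first order process (gain 1.5, time constant 5 s) and the tuning loaded earlier (gain 1.5, reset 0.2 min/repeat). The starting PV and the size of the SP step are hypothetical:

```python
# Discrete sketch of the PI loop behaviour described above: first-order
# process (gain 1.5, tau = 5 s) under PI control with controller gain 1.5
# and reset time 0.2 min/repeat (12 s). Starting PV and SP step are
# hypothetical; the integral action removes the error completely.

def run_pi_loop(sp=70.0, pv=30.0, kc=1.5, reset_s=12.0,
                kp=1.5, tau=5.0, dt=0.05, t_end=200.0):
    bias = pv / kp  # output that holds the initial PV (bumpless start)
    for _ in range(int(t_end / dt)):
        err = sp - pv
        bias += kc * err / reset_s * dt   # integral (reset) action
        op = max(0.0, min(100.0, kc * err + bias))
        pv += dt / tau * (kp * op - pv)   # first-order process response
    return pv

print(round(run_pi_loop(), 1))  # -> 70.0 (PV driven exactly to SP)
```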
You should now experiment on your own with changing the tuning parameters and repeating step changes to see the effect on the responses. The tuning parameters we are concerned with here are simply the controller gain and the integral action. Press TUNE on the display to access these parameters. If you want to get back to the settings used for the test run in Figure D2.2, simply reload the strategy file using the command CONTROL – RETRIEVE STRATEGY, MODEL AND TUNING.
Since a flow control loop has no intrinsic stability problems, most effects can be observed clearly. You may also try introducing some dead time into the model as discussed in the dead time section above. Find how much dead time can be tolerated before the response becomes unstable.
This completes Exercises 1 and 2. By this stage you should be quite familiar with the basic features of the PC-Control Lab 3 facility.
Proportional (P) control
This exercise will introduce the main control action of controllers - Proportional Control. Special emphasis is placed on the fact that there is a remaining offset condition, if proportional control is used solely. We shall see that a steady state offset between Set point and Process variable exists whenever proportional only control is used in a non-integrating process such as flow control. We shall see that this offset reduces as the control loop gain is increased but it will also be seen that instability will eventually occur as gain values are increased.
D3.2 Process model
This exercise will use the same basic example for closed loop flow control that we used in Exercise 1. However, a slightly more realistic model is to be used for this and subsequent related exercises. The process is shown in block diagram form in Figure D3.1. The output of the controller is applied to a first order dynamic lag stage representing the control valve, which also has a stage gain of 1.5. There is a dead time element representing small transport delays in the valve and in the process and then there is another dynamic lag stage representing the flow sensor with a certain amount of noise filtering. Effectively the overall process behaves as two dynamic lags in series plus a very small amount of dead time. This is very typical of a practical flow control loop and it means that the process behaves as a very well damped 2nd order system until its loop gain becomes quite high.
The initial parameters used for the model in this exercise were:
- Valve lag time constant: 0.2 minutes
- Dead time: 0.001 minutes
- Sensor lag time constant: 0.05 minutes
To install this model in the PC-ControLab simulation, start the simulator programme and then select: Process/Select Model/PC-Pracs/Ex-3/IDC flow 2.mdl.
Note that this model can display PV in the range 0 to 100% or it can display PVE in the range 0 to 300 m3/hr. For this exercise we have used PV in 0 to 100% for simplicity. Therefore set the display range as follows: View/Display Range/Percent of span
D3.3.1 Operations to demonstrate and calculate offset in proportional only control
Once you have opened IDCFlow2.mdl you should install the control strategy file for this exercise and proceed as follows:
- Step 1: Select Control/Retrieve strategy and tuning/ IDC Flow prop2.stg. Set the scroll display rate to “seconds”.
- Step 2: Select Control options and confirm that the Control Algorithm has been set to Proportional Only.
Control action in Auto will now be defined by the algorithm OP = K1 (SP-PV)
Where K1 = the controller proportional gain. (The overall loop gain is K1 x process gain.)
- Step 3: Select TUNE and ensure that Gain = 1 and Man Reset = 0.00. (Man reset is used optionally to provide a fixed bias in the output when in manual mode.)
- Step 4: Set the LOAD to 75% by using the STEPINCR button. (This offsets an internal bias of 75% within the model used for other tests.)
- Step 5: Check the relationship between OP and flow PV by setting the mode to MAN. Then check.
Op = 0% , PV = 0%
OP = 20 %, PV = 30%
OP = 50 %, PV = 75%
These figures will confirm the process gain (due to the valve) is 1.5. What you have now is a simple control valve arrangement where flow is zero when the valve is shut and where full range flow is achieved well before the valve is fully open. What valve opening is needed to achieve 100% flow? Note also in this model the valve gain is constant over the operating range and hence process gain does not change. This gain would be expected to change in most installations.
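A quick check of the linear valve relationship, including the answer to the question above:

```python
# Quick check of the linear valve relationship used in this model:
# flow (% of span) = 1.5 x valve opening (%), saturating at full scale.

GAIN = 1.5  # constant process gain set in the model

def flow_pct(op_pct):
    """Flow in % of span for a given valve opening in %."""
    return min(100.0, GAIN * op_pct)

print(flow_pct(20))             # -> 30.0
print(flow_pct(50))             # -> 75.0
print(round(100.0 / GAIN, 1))   # -> 66.7 (valve opening for 100% flow)
```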
- Step 6: Still in Man set OP = 20% and SP = 50%. Allow the simulation to run so that the PV value is steady at 30%.
- Step 7: Switch the mode to AUTO. Note that no disturbance should occur, because the deviation between SP and PV is 20% and the controller gain (K1) is 1, so the output is unchanged:
OP = K1 (SP-PV)
= 1 x (50 – 30)% = 20%
- Step 8: Change SP to 80% and observe the response to a step change in the SP.
Figure D3.2 shows the response you should expect on the simulation
Notice that the PV value settles down before reaching the value of SP. The remaining difference between SP and PV is called Proportional Offset. It can also be seen that the control action (OP) moves almost instantaneously (proportional control) in accordance with changes to PV or SP (Equation Type A). The step change of SP has caused the value of OP to make a step change accordingly. Immediately after this, one can observe the output to exactly follow the value of PV, but in reverse direction to the PV as the error (SP-PV) decreases.
Therefore, the trend display shows both the proportional change of output based on change of SP (Step change), followed by proportional change of output based on change of PV (Exponential approach to a steady value).
You can predict the new PV and OP values by using the above equation and by knowing the process relationship PV = Gp x OP. where Gp is the process gain
OP = PV/Gp = K1(SP-PV)
Re-arranged this states: PV = SP. (Gp.K1/(1+Gp.K1))
So if K1 = 1, Gp = 1.5 and SP = 80% , PV = 48%
Therefore a steady state offset of 32% occurs in this example.
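The offset prediction above can be tabulated for the range of controller gains used in the next step (Gp = 1.5, SP = 80% as above):

```python
# Steady-state PV under proportional-only control:
#   PV = SP * Gp*K1 / (1 + Gp*K1)
# Evaluated for the controller gains used in the exercise (Gp = 1.5, SP = 80%).

def steady_state_pv(sp, k1, gp=1.5):
    return sp * (gp * k1) / (1.0 + gp * k1)

for k1 in (1, 2, 5, 10):
    pv = steady_state_pv(80.0, k1)
    print(k1, round(pv, 1), round(80.0 - pv, 1))  # gain, PV, offset
# -> 1 48.0 32.0   (the case worked above)
#    2 60.0 20.0
#    5 70.6 9.4
#    10 75.0 5.0
```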
- Step 9: This step change exercise should now be repeated as you increase the controller gain K1 using the TUNE controls.
- Exercise : Calculate the PV and offset between PV and SP for values of controller gain of 2, 5 and 10. For each gain setting create the same step response in AUTO from SP = 30% to SP = 80%. Record the PV values you obtain and compare with your predicted values.
- Observation: Note how the response becomes increasingly unstable as the gain is increased. This is due to the 2nd order lag characteristic in the process model and places a limitation on the amount of gain you can safely apply. Hence the offset problem cannot be eliminated by proportional control only.
D3.3.2 Operation to demonstrate the effect of longer lag times on response
Repeat the above exercise with slightly longer lag times in the process model and using a controller gain of 5. Increase the lag time in the sensor by using the Process/change parameter command on the simulator.
Now is a good time to experiment with the Zoom facility in the simulator. We suggest you set the minimum PV display to 30% and leave the maximum at 100%.
For this test use the “StepIncr” and “StepDecr” buttons to impose 5% load changes on the process. The damped overshoot responses can be seen in Figure D3.4 and Figure D3.5 where the lag time of the sensor has first been set to 0.05 minutes ( 3 seconds) and then changed to 0.2 minutes ( 12 seconds).
Observe that the overshoot responses do not increase in amplitude but the frequency is substantially reduced by the slower process characteristic. The stability is not affected because the proportional only control action does not introduce any phase shift in the response. This will not be the case when we move on to P+I control.
D3.3.3 Operation to demonstrate the effect of process noise in high gain loops
In this exercise we can try out the facility in the simulator to inject random disturbances into the process by using the “Auto load” button. This will demonstrate the effect of noise in the automatic control loop.
- Step 1: Set the controller gain (TUNE button) to 1.0 and restore the process lags to 0.2 for the time constant and 0.05 for the sensor.
- Step 2: Make SP = 80% adjust the load offset to approximately 75% as for the first exercise. Place the controller into Auto
- Step 3: Once the PV is stable press the Auto Load button and see the degree of disturbance to the PV whilst the OP is changing moderately to try to correct for the errors.
- Step 4: Increase the controller gain to 5.0 and see the increase OP disturbances whilst PV disturbance is not reduced very much.
This exercise has demonstrated a number of basic features of feedback control using the proportional-only algorithm.
- With increasing values of Gain, we obtain smaller values of OFFSET. This shows that it may be desirable to use high values of Gain to minimize the offset. However, the offset can never be eliminated by proportional control alone.
- As gain increases, overshoots occur but in proportional-only control quite high gains may be used before complete instability occurs. The frequency of overshoot decreases with increased lag times as you would expect since the natural frequency of the process becomes slower.
- If a control loop has a tendency to be unstable, stability problems put limits on the increase of Gain. Even if no stability problem exists, the value of Gain should be kept as low as possible. This avoids unnecessary amplification of noise.
Integral (I) Control - Flow Control
This exercise will introduce the Integral Control action of controllers. Special emphasis is given to the task of eliminating the remaining offset term of proportional control. It will also show the slower control action of Integral control, compared with Proportional control.
D4.2 Process and control models
This exercise will use the same process model for closed loop flow control that we used in Exercise 3. Recall that this model has two first order lags representing the control valve and the process as well as a very small and usually negligible dead time element.
Note that this model can display PV in the range 0 to 100% or it can display PVE in the range 0 to 300 m3/hr. For this exercise we have used PV in 0 to 100% for best clarity.
The controller action for integral only control provides an output that is the time integral of the error between Sp and PV. Hence the output will continue to change as long as error exists and the controller will normally drive the PV to be exactly equal to the SP.
The control algorithm is described in the following box:
Using the above information we can now set up the simulation using the Flow2 model to examine the integral action values and effects.
Test No 1: Open loop integral action.
- Step 1: Start the simulator programme and then select: Process / Select Model/ PC-Pracs/Ex- 4/IDC flow4.mdl.
- Step 2: Select Control/Retrieve strategy and tuning/ PC-Pracs/Ex-4/IDC Flow4 Int Only.stg. Set the scroll display rate to “seconds” and set mode = MAN. Check that Control/Options/ Integral-only has been set. Press the StepIncr control to raise the Load value to 75% as observed on the trend screen.
- Step 3: Check that SP Tracking is ON by going to Control/Control Options/SP Tracking.
- Step 4: TUNE: Integral gain = 2.0. Manual Reset = 0.0
- Step 5: Place the loop into AUTO and set SP = 30%. Allow the response to stabilize and then PAUSE the simulation.
- Step 6: Create a temporary open loop situation by increasing the dead time in the process to 1.0 minutes. (Change Parameter/Dead time).
- Step 7: Change SP = 60% to create a 30% control error. Then RUN and observe the integral action ramp. Calculate the expected ramp rate and check that this is confirmed by the ramp rate response on the display. Check your result with Figure D4.1 overleaf.
Test No 2: Closed loop step response
- Step 1: Change the dead time back to 0.01 to minimize its effect.
- Step 2: Place the loop into AUTO and SP =30%. Allow the response to stabilize and then PAUSE the simulation.
- Step 3: Change the SP =60% again and repeat the step change. Observe the exponential approach of PV to the SP and note the slow speed of this response. Confirm with Figure D4.2.
This test shows the great value of Integral (I) action in eliminating offsets. Change the SP to any value in the range and see that PV is brought to exactly the same value.
Test No 3: Closed loop step response with high integral gain
- Step 1: Repeat all the steps of test 2 but change the integral action gain in stages upwards to see the increasing degrees of overshoot. Note the phase lag between PV and OP that cause the oscillatory action. Once the OP correction at 180 degrees phase lag is greater than the error at that point (i.e. loop gain > 1 at 180 degrees) the control system will sustain oscillation and expand its responses to the limits of PV and OP. The control is now totally unstable. See Figure D4.3 for a marginally stable response when integral action is 10.0 repeats/minute.
D4.4 Example responses
Figure D4.1 shows the basic integration rate if no PV feedback occurs. The rate of change of output = Integral gain (repeats per minute) x Error (SP-PV). In this example gain = 2 and error =30%. Hence the ramp rate is 1% per second.
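The ramp rate arithmetic can be checked directly:

```python
# Open-loop integral ramp check: dOP/dt = reset (repeats/min) x error (%).
# Values from the example above: reset = 2 repeats/min, error = 30%.

reset_per_min = 2.0   # integral gain, repeats per minute
error_pct = 30.0      # fixed SP - PV error

ramp_pct_per_min = reset_per_min * error_pct
print(ramp_pct_per_min)          # -> 60.0 (% per minute)
print(ramp_pct_per_min / 60.0)   # -> 1.0  (% per second)
```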
Figure D4.2 shows the typical lag response when the loop is closed and the PV approaches the SP and the error is diminished to zero. Minimal overshoot occurs in this case when integral gain = 2 repeats per minute.
Figure D4.3 shows the response when integral action gain is too high for the process, leading to overshoots and oscillation . Note the phase shift in which the OP correction lags behind the PV.
Integral control will control the PV towards the SP precisely, without any OFFSET. The trade-off for this, when compared with Proportional control, is a lagging behavior, which results in slower control. It will be shown later, that Integral control decreases the stability of the loop, if an intrinsic stability problem exists within the control loop.
Proportional and Integral (PI) Control - Flow Control
This exercise will use the same process flow model that we used in exercise 4 but it will introduce a combined Proportional and Integral Control action of controllers. Special emphasis is given to the elimination of the remaining offset of proportional control without loss of control speed. It will be shown that the combination of Proportional and Integral control maintains the speed of control as it exists with Proportional control only, but without the disadvantage of an OFFSET term.
- Step 1: Start the simulator programme and then select: Control/Retrieve strategy and tuning/PC-Pracs/Ex-5/IDC Flow5 P+I.stg.
Set the scroll display rate to “seconds” and set mode = MAN. Set View/Display = Percent of span. Check the Load value is 60% as observed on the trend screen.
- Step 2: Check that SP Tracking is ON by going to Control/Control Options/SP Tracking.
- Step 3: Keep the control mode in MAN and check that the control option has been set to “PID Non-interacting”. This algorithm keeps the proportional, integral and derivative gain terms separate.
OP = K1 x (SP-PV) + Kint x Integral(SP-PV)dt + Kder x d(SP-PV)/dt
- Step 4: Check under TUNE that: Gain = 1.0 and Reset = 2.0 repeats/min
- Step 5: Set OP = 30% and SP = 30% and switch to AUTO. Now test the step response to a change in SP from 30% to 60%. See Figure D5.1 for an example of what you should see.
Note how the proportional action response is typically as we have seen in exercise 3 with a substantial offset between PV and SP. The offset is then gradually reduced by the integral action term driving the OP slowly upwards until the error is zero.
The slow speed of response may not be acceptable for many applications so the tuning strategy now comes into play. The proportional gain may be increased substantially to speed up response and reduce offset as we have seen in Exercise 4. However we have already seen that noise and instability problems will limit the gain.
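The non-interacting algorithm quoted in Step 3 can be sketched in discrete form. The sampling interval, the 0 to 100% output clamp and the unit conversions are assumptions, not details of the PC-ControLab implementation:

```python
# Discrete sketch of the non-interacting (parallel) PID algorithm:
#   OP = K1*e + Kint*integral(e dt) + Kder*de/dt,  with e = SP - PV.
# Reset is entered in repeats/minute and derivative in minutes, as on the
# TUNE display; dt and the 0-100% clamp are assumptions.

class NonInteractingPID:
    def __init__(self, k1, kint_rpm=0.0, kder_min=0.0, dt=0.1):
        self.k1 = k1
        self.kint = kint_rpm / 60.0   # repeats/min -> 1/second
        self.kder = kder_min * 60.0   # minutes -> seconds
        self.dt = dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, sp, pv):
        err = sp - pv
        self.integral += err * self.dt
        d_err = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        op = self.k1 * err + self.kint * self.integral + self.kder * d_err
        return max(0.0, min(100.0, op))  # clamp to valve range

# With gain 1 and no reset/derivative, a 20% error gives a 20% output,
# as in the proportional-only exercise.
pid = NonInteractingPID(k1=1.0)
print(pid.update(50.0, 30.0))  # -> 20.0
```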
D5.4 Tuning Operations
The normal strategy is to increase gain to its sensible limit and then apply a small amount of integral action to reset the error to zero. You may now experiment with this approach on the simulator. For these tests it is convenient to use the STEPINCR and STEPDECR buttons to drive 5% step changes into the process load.
You should start by reducing the reset action to a minimum and testing the response in proportional control only. Write down your suggested maximum gain for a response that has some degree of overshoot but seems to be reasonably stable. Then begin to apply a small amount of reset gain until the responses arrive at the set point fairly quickly but without excessive overshoots. Compare your results with the typical example in Figure D5.2.
Combined Proportional and Integral control turns out to be the best choice of control, if no stability problem exists within the control loop. As a general rule, (with few exceptions) flow control loops have no stability problems. This is the reason why the majority of flow control loops use PI-Control.
Introduction to Derivative (D) Control
This exercise will introduce the Derivative action in feedback controllers. Derivative action is required only in control loops with stability problems. Derivative control counteracts the instability (oscillation) of control loops. As such it is an additional control action. Derivative Control is always combined to form a PD or PID-Control loop system.
As flow loops generally do not present severe stability problems, a more typical general process model is used to demonstrate D-Control. Figure D6.1 shows a single loop to control temperature in a feed heater process. This is an example of a control loop where a stability problem can be expected due to more complex dynamic lags.
The process model to be used is held in the file IDC Feed Htr.mdl. The model represents the dynamic lags of the gas flow, heat transfer stage and feed temperature sensing linked in series. A small dead time element is also included to represent the time for the heated material to travel to the output point of the heater where the temperature sensing is located.
An interesting feature of this process is that the gain of the open loop stage from fuel flow input to product temperature at the exit changes according to the feed flow rate. For our first exercises, therefore, we shall avoid non-linearity by fixing the feed rate at 60%. This type of non-linearity is commonly encountered and it makes loop tuning difficult until the characteristics are recognized. We shall conduct the next series of tests with the model operating at 60% feed.
Open PC-ControLAB and load the process model IDC Feed Htr.mdl. Load the control strategy file IDC Feed Htr PID.stg.
This model and the control strategy have been tuned for a tight response to the model of a feed heater as shown in Figure D6.1, but for this exercise we are going to simply demonstrate the effects of derivative control, so we shall temporarily reduce the P and I tuning components.
Test 1: Will show the effect on the controller output of applying the derivative term to the error between SP and PV. This occurs if you are using the option to apply derivative to error, as in the type A equation in the manual.
- Step 1: In MAN mode use Control options to select:
Deriv on Error. Reset action off.
Interactive PID control option.
Set the scroll rate to Minutes using the View command.
Set the load to 0% using the StepDecr button. (In this model Load is applied only to the temperature measuring stage to simulate a bias or to inject process disturbances whilst leaving the feed steady at 60%.)
- Step 2: TUNE and set Gain = 0.5 and Deriv = 2 minutes. Set the derivative gain term to 10.0 using the TUNE/OPTIONS tab. This introduces a substantial low pass filter in the PID equation, making the response easier to observe and less sensitive to higher frequencies.
- Step 3: Set OP = 36% and see that PV settles at approximately 500 degC. If not, adjust with STEPDECR to achieve this balance. Set the SP equal to the PV.
- Step 4: Switch to AUTO. There should be no disturbance at this point since the error is zero.
Now we have a situation where any rapid change in error will drive the output in response to the rate of change of error. If the derivative time = 2 minutes, we can calculate the amount of output as follows: OP = (% error change per minute) x derivative time (minutes).
For example: a 10%/minute increase in error will create OP = 10% x 2 = 20%.
- Step 5: In order to observe D-Control, step change SPE from 500 degC (50% of range) to 600 degC (60% of range). Figure D6.2 shows the response you should obtain.
It is observed that Derivative control calculates a controller OP value based on the rate of change of the error SPE-PVE. As is explained in greater detail in the chapter “Digital Control Principles”, D-Control of an “Interactive PID Controller” is in actual fact a PD-Control action with a low-pass filter. Therefore, the initial OP change as a result of the step change of SPE is not a unit impulse function (a needle-shaped pulse of very large magnitude); it is in fact a limited step of OP, followed by an exponential decay. As a P component always exists within this control action, PVE settles down with an OFFSET.
The time constant of the low pass filter is described as the Derivative gain and you can adjust this with the TUNE/Options control. (Normally a value of 10 is used in these simulations.) To see this more clearly you can temporarily disconnect the PVE signal and substitute a fixed value using the commands: Control/Measurement Options. Now the response of OP is purely due to the derivative action. The result of doing this can be seen in Figure D6.2.
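The filtered derivative response described above can be sketched as follows. The transfer function form D(s) = Td·s / (1 + (Td/N)·s), with N the derivative gain, is an assumption consistent with the behaviour described; the 10% error step is hypothetical:

```python
import math

# Filtered derivative term: D(s) = Td*s / (1 + (Td/N)*s), where N is the
# "derivative gain" (10 in these simulations). A step of E% in the error
# produces a limited jump of N*E, decaying with time constant Td/N,
# rather than a needle-shaped impulse. Assumed form; 10% step hypothetical.

def filtered_deriv_step(step_pct, t, td_s, n=10.0):
    """Derivative-term output t seconds after an error step of step_pct."""
    return n * step_pct * math.exp(-t * n / td_s)

td = 120.0  # Deriv = 2 minutes, expressed in seconds
print(filtered_deriv_step(10.0, 0.0, td))                  # -> 100.0 (limited initial jump)
print(round(filtered_deriv_step(10.0, td / 10.0, td), 1))  # -> 36.8 (one filter TC later)
```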
Repeat the above exercise with different values of TDER and K. Observe the clipping of OP at the OPHI and OPLO limits if large values of K are used. You may set the OP limits to 5% (low) and 95% (high) to see the effects more clearly.
Derivative Control is based on the rate of change of PV-SP and is not designed to bring the value of PV to the value of SP. The sole purpose of D-Control is to stabilize an intrinsically unstable control loop by introducing a phase lead into the output response.
Practical Introduction into Stability Aspects
The objective of this exercise is to demonstrate the direct relationship between process lag and stability, and to make students aware of the great difference between noise and stability in practice. We shall use the same process model that we used for Exercise 6. Please note the comments made on the characteristics of the feed heater process.
Open PC-ControLAB and load the process model IDC Feed Htr.mdl. Load the control strategy file IDC Feed Htr PID.stg. In order to see the stability problem of closed loops in practice and without disturbances ensure that the Auto Load switch is off.
- Step 1: Call up the TUNE display and set K = 5.5 and Deriv = 0. Go to the TUNE/OPTIONS tab to turn off the integral action (i.e. Reset Action Off).
- Step 2: Set the control to MAN and set OP =36%. Set Load to 0 % and note that PVE stabilizes at approximately 500 degC.
- Step 3: Move the SPE to 500degC (or keep SP tracking ON).
The value of proportional gain you have set makes it possible to create a continuous oscillation, purely based on the lag of the process.
- Step 4: Switch to AUTO and then operate the StepIncr switch once to insert a 5% step load change on the process to create the disturbance.
See Figure D7.2 for the expected response.
Observe that the process is driven into a state of continuous oscillation with the OP signal being 180 degrees out of phase from the PVE. With these tuning values, no integral and no derivative control action takes place and the controller will neither increase nor decrease the phase shift within the control loop. Therefore, only the process has caused the existing phase shift of 180°.
In this situation, the controller performs P-Control only. As shown in the chapter “Stability and Control Modes of Closed Loops”, continuous oscillation exists if Loop Gain is 1 and Loop Phase Shift is 180°. Since our controller Gain K is 5.5 we can calculate the process Gain at the resonant frequency: Kp = 1/5.5 = 0.182. (Gain is attenuated through the process lags as frequency rises.)
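The resonance condition can be illustrated numerically for a chain of first order lags. The three time constants below are purely illustrative, not the actual feed heater model values:

```python
import math

# Sustained oscillation requires a loop phase shift of 180 degrees with
# loop gain 1 at that frequency. Sketch for three first-order lags in
# series; the time constants are illustrative only, not the model values.

taus = [30.0, 10.0, 3.0]  # seconds (assumed)

def process_gain_phase(w, taus, kp=1.0):
    """Gain and phase (degrees) of kp / product(1 + j*w*tau) at frequency w."""
    gain, phase = kp, 0.0
    for tau in taus:
        gain /= math.sqrt(1.0 + (w * tau) ** 2)
        phase -= math.degrees(math.atan(w * tau))
    return gain, phase

# Scan upwards in frequency until the phase crosses -180 degrees
w = 1e-3
while process_gain_phase(w, taus)[1] > -180.0:
    w *= 1.001

gain_at_180, _ = process_gain_phase(w, taus)
# Controller gain that would give loop gain 1 here (marginal stability):
print(round(1.0 / gain_at_180, 2))
```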
In the following operation, we will add derivative control to the existing situation of continuous oscillation, in order to find out what effect it will have on stability. Change Deriv in the TUNE Display to 0.5 minutes. This should result in a Trend Display as shown in Figure D7.3.
The left side of the Trend Display (Figure D7.3) shows continuous oscillation with P Control only. The right side of the Trend Display shows the effect of added D-Control. The trend shows obviously faster control combined with a suppression of the oscillation. This means that D-Control has a stabilizing effect.
It is vital for every Process Control Engineer to have experience of the effects of process noise on D-Control. (The noise level is different in different processes.) Now repeat the same operation as before (K = 5.5, TDER = 0, Reset Action OFF) to create continuous oscillations, and then press AutoLoad so that the oscillation is superimposed with noise. Then add D-Control (TDER = 0.5). Figure D7.4 shows the Trend Display resulting from this exercise.
It should be clear that D-Control still has some stabilizing effect but it also multiplies noise as can be seen in the OP trace.
Now repeat the exercise as above with the derivative term turned back to 0.0, but this time change to PI-Control (not PD-Control as before) in the continuously oscillating situation created by P-Control.
- Step 1: In MAN, first set the load to approximately 50%. Set OP = 36 and check that PV settles at about 550 °C.
- Step 2: Keep reset action switched off and then set to AUTO and induce the oscillation.
- Step 3: Switch on AutoLoad to induce noise
- Step 4: Set TINT = 5.0 minutes/repeat (reset = 0.2 repeats/minute).
- Step 5: Use TUNE/Options to switch on reset action and then observe the response, which should look like Figure D7.5.
Then start again with continuous oscillation, created by P-Control only and change to mainly I-Control only by taking the following steps:
- Step 1: Start the simulation in MAN with reset switched off, change K from 5.5 to 1.0 and set TINT = 1.0.
- Step 2: Switch to AUTO and switch on the Auto Load. You will see that the control is stable despite the noise because the P term is on low gain. The PV may not be very close to SP because there is no reset action.
- Step 3: Switch on the reset action and observe how the instability quickly builds up as shown in Figure D7.6.
Figures D7.5 and D7.6 show that integral action has a destabilizing effect, whether in combined PI-Control or in I-Control alone. In both cases we observe an increase in the magnitude of oscillation. We also observe that I-Control ignores noise: Figure D7.6 shows that the change from P-Control to PI-Control has no effect on noise, whereas noise is almost totally suppressed when I-Control alone is used.
This exercise brings us to the following conclusions:
D-Control has a stabilizing effect, whereas I-Control has a destabilizing effect. The only purpose of I-Control is to eliminate the offset of P-only Control.
The trade-off is the destabilizing effect. This may be compensated by adding D-Control. This will result in the use of PID-Control for control loops with stability problems.
D-Control has to be treated very carefully from a noise point of view. It is important to point out that D-Control should never be used without prior filtering of the PVE. Such a filter has to be chosen to reduce noise without adding a significant Lag to the loop. You may access the filter time constant under CONTROL/Measurement Options.
Note: Never use D Control without PV filtering (TD) if any process noise exists.
Further Exercises in Identification of Process Characteristics
D8.1 Introduction and Objectives
The designers of the PC ControLab simulation facility have kindly made available to IDC Technologies a comprehensive suite of training exercises designed to help the student to develop a good understanding of many of the basic principles of process control.
At the stage of the IDC workshop when you have completed Chapter 2 we would suggest that you work through the following exercise provided by Wade Associates which is designed to build up your knowledge of typical process characteristic responses. This series of exercises covers the recognition of the following features.
- First Order Lag plus Dead time process. ( As already seen in Exercise 1)
- Negative process gain
- Integrating process
- A simple frequency response test
Try the exercises before you look at the answers that are available at the end of this Appendix.
LABORATORY EXERCISE 1A
PROCESS DYNAMIC CHARACTERISTICS
OBJECTIVE: To become familiar with various forms of process dynamic characteristics, and to learn a method of constructing a simple process model from step test data. Optional: To become familiar with obtaining data from frequency response tests.
PREREQUISITE: Completion of PC-ControLAB tutorial (under Help | Tutorial ) or an equivalent amount of familiarity with the program operation.
BACKGROUND: All processes have both steady state and dynamic characteristics. From a process control standpoint, the most important characteristic is the process gain: how much the process variable (PV) changes for a change in controller output. If both the PV and the controller output are expressed as normalized variables (i.e., 0 - 100%), then the process gain is a dimensionless number.
The two most important dynamic characteristics of a process are the amount of dead time in the process and its time constant. Real processes rarely exhibit a response of a pure first order lag (time constant) and dead time, but can often be approximated as a first order lag and dead time.
This exercise tests for the process gain, dead time and time constant for both a “pure” process (can be exactly represented as first order lag plus dead time) and for a more realistic process which can only be approximated as a first order lag plus dead time.
1. RUNNING THE PROGRAM
Run PC-ControLAB.
2. FIRST ORDER LAG PLUS DEAD TIME PROCESS
Click on Process | Select Model.
Highlight “Folpdt.mdl” (First Order Lag Plus Dead Time) and press Open.
Press Zoom and change the PV scale range to 50-75. (Note that the PV scale has already been converted to 0 – 100% of measurement span.)
With the controller in MANUAL, press Out. Note the initial values:
After the PV has stabilized at a new value, press PAUSE.
Changing process parameters
Select Process | Initialize.
Select Process | Change Parameters.
Select “Dead Time” and change its value to 2.0 (minutes).
Select “Process Gain” and change its value to 1.0.
Select “Time Constant” and change its value to 3.0 (minutes).
You have just observed the response of a pure first order lag and dead time process. Very few, if any, processes are this “clean.” We will now look at a process with unknown dynamics, but we will attempt to approximate it with a first order lag plus dead time model.
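The pure response just observed can be reproduced with a minimal numerical sketch, assuming the parameter values set above (gain 1.0, dead time 2.0 minutes, time constant 3.0 minutes). The function name and the 10% step size are illustrative, not PC-ControLAB internals:

```python
import math

def folpdt_step(Kp=1.0, dead_time=2.0, tau=3.0, step=10.0, t_end=20.0, dt=0.1):
    """PV response of a first order lag plus dead time process to an OP step at t=0."""
    times, pv = [], []
    t = 0.0
    while t <= t_end + 1e-9:
        if t < dead_time:
            pv.append(0.0)  # nothing happens during the dead time
        else:
            pv.append(Kp * step * (1.0 - math.exp(-(t - dead_time) / tau)))
        times.append(t)
        t += dt
    return times, pv

times, pv = folpdt_step()
print(pv[-1])  # the PV change approaches Kp * step = 10.0
```

Note that at one time constant past the dead time (t = 5.0 min here) the PV has covered 63.2% of its total change, which is the basis of the estimation method used in the next section.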
3. UNKNOWN PROCESS
Click on Process | Select Model. Highlight “Generic” and press Open.
Notice that the PV scale is now in Engineering Units, rather than in percent.
(If not, then select View | Display Range | Engineering Units)
To estimate the dead time, draw (or visualize) a tangent to the PV curve, drawn at the point of steepest rise. From the time of controller output change to the intersection of this tangent with the initial steady state value is the apparent dead time.
Different observers might estimate anywhere between 1½ and 2 minutes. For the purpose of calculating controller tuning parameters, it is better to take the longer value where there is any uncertainty, since that will produce more conservative controller tuning.
The apparent time constant is the time from the end of the dead time to 63.2% of the process rise (or approximately the time from the end of the dead time to 2/3 of the process rise.)
Rather than estimating the time to 2/3 rise point, you can calculate the value of the PV at 63.2% of the ultimate PV change, then use the scroll bar to determine the time precisely between the (estimated) end of the dead time and the 63.2% rise point. This gives a more accurate estimate of the time constant, but takes longer.
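The 63.2% procedure just described can be automated on recorded step-test samples. The sketch below assumes evenly sampled data and an apparent dead time already read off the trend; the function name and the synthetic test data are illustrative:

```python
import math

def fit_folpdt(times, pv, op_step, dead_time):
    """Estimate process gain and time constant from step-test data,
    given an apparent dead time read off the trend (63.2% rise method)."""
    pv0, pv_final = pv[0], pv[-1]
    gain = (pv_final - pv0) / op_step
    target = pv0 + 0.632 * (pv_final - pv0)       # 63.2% rise point
    t632 = next(t for t, v in zip(times, pv) if v >= target)
    tau = t632 - dead_time                        # time constant estimate
    return gain, tau

# Synthetic data: true gain 2.0, dead time 1.5 min, time constant 4.0 min
ts = [i * 0.05 for i in range(601)]
data = [0.0 if t < 1.5 else 20.0 * (1 - math.exp(-(t - 1.5) / 4.0)) for t in ts]
g, tau = fit_folpdt(ts, data, op_step=10.0, dead_time=1.5)
print(round(g, 2), round(tau, 2))  # 2.0 4.0
```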
NOTE: One of the uses that can be made of the estimates of process gain, dead time and time constant is to calculate controller tuning parameters. (See Laboratory Exercise 9, PID Tuning from Open Loop Tests.) Since dead time is more difficult to control than a first order lag, then if you estimate dead time too short, you are estimating that the process is easier to control than it really is. This will result in controller tuning parameters that cause the loop to be overly aggressive.
Similarly, if you estimate the time constant too long, you are estimating that the process is easier to control than it really is, and again the resulting controller tuning parameters will cause the loop to be overly aggressive. On the premise that if one is to make an error, it is better to err in the conservative direction than in the aggressive direction, the following pragmatic guidelines can be given:
If there is any uncertainty in estimating the process parameters,
estimate the dead time on the long side, and
estimate the time constant on the short side.
4.0 OTHER FORMS OF STEP RESPONSE
This section will explore other forms of step response.
4.1 Negative Process Gain
Select Process | Select Model. Highlight “generic3.mdl” and press Open.
Record the initial values:
Increase the controller output by 10%. When the PV reaches equilibrium, record the following:
4.2 Integrating Process
An integrating process is one which will not achieve a natural equilibrium. The control of liquid level, where either the vessel inflow or vessel outflow is fixed by an external set point, is a common example of an integrating process.
Select Process | Select Model. Highlight “level2.mdl” and press Open.
This process model simulates a level control application in which the tank outflow is an independent process disturbance (load), and the controller controls a valve in the inflow line.
This process model has some elements of “realism” that are not needed in this exercise; these features will be used in subsequent laboratory exercises. For the present, we will eliminate these “realism” elements.
Select Process | Change Parameters.
Highlight the parameter labeled “Valve Pos: 0 = No; 1 = Yes.” Change this parameter value to 1.0.
Select the parameter labeled “Levl Noise: 0 = No; 1 = Yes”. (Use the scroll bar at the left hand side of the parameter list if this parameter is not visible initially.) Change this parameter value to 0.0.
Change the controller output from 35.0 to 38.0. This simulates an increase in inflow to the tank.
Before the PV reaches a limit (0 or 100), press the button StepIncr above the controller faceplate. This simulates an increase in outflow from the tank. Note the increase in the trace labeled 'LOAD'.
4.3 Inverse Process Response
Occasionally the process response to a step change in controller output, or to a load change, is initially in the opposite direction to that expected from “first principles.” This is normally due to some underlying, second order effect. Once the second order effect disappears, then the process responds in the expected manner. A response such as this is called an “inverse response.”
An example of inverse response is the “shrink-and-swell” effect of steam boiler drum level. If the steam draw-off and feedwater rates are in equilibrium, then the drum level will be constant. On an increase in steam draw without a corresponding increase in feedwater rate, a decrease in drum level would be expected. Initially, however, due to the reduction in pressure and consequent flashing of water into steam, the level rises; if the excess steam draw is maintained, the level eventually begins to drop. This is qualitatively demonstrated in this exercise.
Select Process | Select Model and recall the Level2 process.
Select Process | Change Parameters.
Select the parameter labeled “Levl Noise: 0 = No; 1 = Yes”. Change this parameter value to 0.0.
Select the parameter labeled “BLK 57 LEAD TIME”. Change the value of this parameter from 1.5 to -3.0.
Press StepIncr. This simulates an increase in steam flow from the boiler drum.
(This is the drum level “swell” effect. If the steam flow had been decreased, rather than increased, we would have seen the opposite effect, or the “shrink”.)
Suppose a feedback level controller had been in Automatic and controlling the feedwater rate (drum input). Upon sensing the initial change in level, would the level controller increase or decrease the feedwater rate?
Would this change be in the proper direction for long-term correction?
Laboratory Exercise 23, (See IDC Practical No 15), Drum Level Control, will demonstrate a control technique widely used in steam generation applications for overcoming this problem.
5.0 FREQUENCY RESPONSE (Optional)
One means of characterizing a process is by its response to a sinusoidal (sine wave) input at various frequencies. At each frequency, the relevant data are:
- the ratio of the amplitudes of the input and output sine waves;
- the phase shift between input and output signals.
After a number of data points are taken, a Bode plot of the data can be constructed.
This portion of this laboratory exercise will determine a few data points for a Bode plot.
Through the Process | Select Model menu, read in “Generic1” process model. Be sure you are using the FEEDBACK control strategy (read the right hand side of the display title bar).
Check to see that the controller is in Manual.
Select Control | Control Options.
Scroll down until you see “Enable Sinusoidal Output.” Click on “YES.”
Set the following parameter values:
Press Clear then press On on the controller to initiate sinusoidal testing of the process.
Record the following:
Repeat this test for periods of 30, 15, 7.5 and 3.75 minutes. (You may have to increase the amplitude of the input signal at the shorter periods. This is alright, since it is only the ratio of the output and input amplitudes that you are seeking.)
Plot these values on the graphs on the following page.
(Note: On many Bode plots, decibels, rather than amplitude ratio, are plotted on a linear scale. The conversion equation from amplitude ratio to decibels is: dB = 20 log10(amplitude ratio).
Also, frequency is usually plotted on a logarithmic scale reading in radians per time unit, rather than cycles per time unit. A complete Bode plot is beyond the scope of this laboratory exercise, however.)
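The two conversions mentioned in the note above (amplitude ratio to decibels, and period to angular frequency) can be sketched as follows; the function names are illustrative:

```python
import math

def amplitude_ratio_to_db(ar):
    """Standard conversion: dB = 20 * log10(amplitude ratio)."""
    return 20.0 * math.log10(ar)

def period_to_rad_per_min(period_minutes):
    """Convert a period in minutes to angular frequency in radians/minute."""
    return 2.0 * math.pi / period_minutes

print(round(amplitude_ratio_to_db(0.5), 1))   # -6.0
print(round(period_to_rad_per_min(30.0), 3))  # 0.209
```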
For answers to this exercise please turn to the end of this Appendix.
Open Loop Method - Tuning Exercise
This exercise will give some practical experience in the Reaction Curve Method of tuning. Figure D9.1 shows the diagram of a typical control loop which is the feed heater we have used in the previous exercise. The steps for tuning are explained in Chapter 5.
Open PC-ControLAB and load the process model IDC Feed Htr EX-9.mdl. Load the control strategy file IDC Feed Htr Ex-9.stg.
The test should be done with a realistic level of noise in the process, and the measurement noise filter should be included in the tuning process. Therefore you should set up the Load signal on the simulator; we suggest the following settings be made under LOAD/Auto Load Change Random:
Max size: 0.3
This produces a noise signal of typically 3% PV with a random walk effect. It is not possible to eliminate the random walk effect in this simulation.
Make a good judgment about the noise observed on the PV and set the digital noise filter (TD) accordingly using the command CONTROL/Measurement Options. A good value of TD will result in a still noticeable but very small magnitude of noise on PV. The noise filter has to be set up before tuning in order to have it included in the result of tuning. From a practical point of view, the noise filter becomes part of the process as far as loop tuning is concerned. In this exercise, a good value for TD will be between 0.05 and 0.1.
The open loop tuning method requires that we first generate a good step response to an OP step. In this case choose the practical range of PVE that you can achieve by starting with OP at about 20% and stepping it to about 70 %. Use the StepIncr/Decr controls to bias the load signal into a suitable region for this test.
To obtain a Process Reaction Curve as shown in Figure D9.2, set MODE to MANUAL, then OP to 20 and wait until the PVE is steady at approximately 400 °C . Then make a change of the OP from 20% to 70%.
Observe the reaction curve and tune the controller as explained in Chapter 8. The summary of the major tuning formulas is shown in Figure D9.3.
The values obtained from the reaction curve are:
dOP = 50% (from 20% to 70%)
N = 10.5 %/minute (PVE range from 0 to 1000 °C)
Using the formulas as described in Chapter 8 paragraph 8.4.3 we obtain the following tuning constants for PID-Control:
After setting the tuning constants to their calculated values, the tuning panel on the simulator should look as shown in Figure D9.4.
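The reaction-curve calculation can be sketched as below, assuming the standard Ziegler-Nichols open-loop formulas (Kc = 1.2·dOP/(N·L), TI = 2L, TD = 0.5L). The dead time L used here is an illustrative value, not the one read from Figure D9.2:

```python
def zn_open_loop_pid(d_op, slope_n, dead_time):
    """Ziegler-Nichols reaction-curve PID settings.
    d_op: size of the OP step (%); slope_n: maximum PV slope N (%/min);
    dead_time: apparent dead time L (min)."""
    kc = 1.2 * d_op / (slope_n * dead_time)
    ti = 2.0 * dead_time   # minutes per repeat
    td = 0.5 * dead_time   # minutes
    return kc, ti, td

# dOP = 50 %, N = 10.5 %/min from the reaction curve; the dead time below
# is illustrative only.
kc, ti, td = zn_open_loop_pid(50.0, 10.5, dead_time=1.0)
print(round(kc, 2), ti, td)  # 5.71 2.0 0.5
```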
D9.4 Confirmation of proper tuning
Change Mode to Automatic and make a change of the SP. Observe the process settling down (Observe PV). Use the trend display as shown in Figure D9.5 for observation. Most parameters can be operated from the trend display as well. As a rule of thumb, the settling down should take place with quarter damping (1/4 decay). That means two successive maximum values of a damped oscillation should have a magnitude ratio of 1 to 4.
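The quarter-damping criterion can be checked numerically from two successive overshoot peaks read off the trend (a sketch; the numbers are illustrative):

```python
def decay_ratio(peak1, peak2, settled_value):
    """Ratio of successive overshoot magnitudes above the settled value.
    Quarter damping (1/4 decay) means this ratio is about 0.25."""
    return (peak2 - settled_value) / (peak1 - settled_value)

# e.g. SP = 500, first overshoot peak 540, second peak 510
r = decay_ratio(540.0, 510.0, 500.0)
print(r)  # 0.25
```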
D9.5 Fine tuning
Based on process knowledge (Noise on PV etc), experience and precise knowledge of the control actions of each control mode (P-Control, I-Control and D-Control) fine tuning of the process should be performed now. Only minor variations to the tuning constants should be made.
Observe the noise of the control action (OP). Use your judgment as to how much to increase or decrease TD. The judgment has to be based mainly on the effect noise in the OP signal has on the manipulated variable (Wear and tear of a valve for instance).
An increase of TD may require an increase of T2 (Derivative Time Constant) to compensate for the lagging phase shift of the noise filter.
Since K is the Gain for all control modes (PID), reducing gain reduces the effect of all control actions equally and vice versa. A reduction of K reduces speed of control and increases stability and vice versa.
After tuning, (using the Ziegler Nichols formulas), the damped oscillation of the process after a change of SP takes place equally around the new SP. Increasing the Derivative Time Constant (T2) will bend the baseline towards the old value of SP and therefore reduce the overshoot. In addition stability in general will be increased. It has to be noted, that the area of error increases as well with increased stability. The reason for this is the slower approach of PV towards the SP.
In most cases Integral tuning should be corrected only in relatively stable loops like flow loops. T1-Fine Tuning should be done with close observation using the trend display. Decreasing T1 increases instability and vice versa.
The example given in Figure D9.6 shows the quarter decay response obtained by increasing proportional gain above the value calculated from the Ziegler-Nichols formula.
Closed Loop Method - Tuning Exercise
This exercise will give some practical experience in the Closed Loop Tuning Method. It will show you how to recognize the characteristic responses shown by a closed loop control system and how to use these features to guide you to find suitable PID tuning parameters.
We are going to use the exercise kindly provided by Wade Associates which involves an interactive step by step exercise in which you will record your answers on the question sheets.
Background: Tuning by closed loop process testing involves putting the controller in AUTOMATIC, removing all Reset and Derivative, and setting the Gain just high enough to cause a sustained process oscillation. From this test, the relevant parameters are the period of oscillation in minutes, and the Gain which ultimately caused the sustained oscillation. These are called the “ultimate Period” and “ultimate Gain”, respectively. From this data, the tuning parameters can be calculated.
Note: The open-loop method (the subject of IDC Exercise 9) and the closed-loop method (the subject of this exercise) may or may not produce similar tuning values, even when using the same process model. The “Generic” process model is used in these exercises; there is some difference in the results. For demonstration purposes, if you wish to obtain similar results from the two methods, use the “Generic2” model.
1. RUNNING THE PROGRAM
2. TUNING BY CLOSED LOOP PROCESS TESTS.
Confirm the following:
Process: GENERIC (see the top line, left hand side)
Select: Control Options/ Control Strategy: FEEDBACK (see the top line, right hand side)
Select Control | Select Strategy | Feedback
Select Process | Select Model/ PC Pracs/EX-10/Generic.mdl
Select Process | Initialize to initialize the process model.
If you are more familiar with using Proportional Band, rather than Gain, for tuning controllers, or if you are more familiar with tuning the reset (integral) mode in Repeats per Minute, rather than Minutes per Repeat, then:
Press Tune then select the Options tab to set up the program to match the system you use:
Press Tune then select the Options tab. Select Reset Action OFF.
2.2 Process Testing
Put the controller in Auto.
Make a set point change of 5% of full scale. (Press StepIncr once.)
If there is no oscillation, or if the oscillation dies out, increase the Gain (or decrease the Proportional Band) and repeat the set point change. (The Gain can initially be changed by approximately 50% of its present value, or the PB can be changed to one-half of its present value. As the response gets closer to sustained oscillation, smaller changes should be made.) You should not have to observe the response for more than three cycles to determine whether or not the oscillation is decaying.
When sustained oscillation is ultimately achieved, record the following:
Use the table for the closed-loop Ziegler-Nichols method (Table 1 at the back of this exercise) to calculate tuning parameters for a P, PI and PID controller. Enter these in the table below:
(First calculate Gain (KC), Integral time (TI) and Derivative (TD) from the equations. Then, if your system uses PB rather than Gain, or Reset Rate rather than Reset Time, calculate those values.)
| |P|PI|PID|
|Prop Band (PB)| | | |
|Integ Time (TI) (min/rpt)|___|5.8|3.5|
|Reset Rate (rpt/min)|___|___| |
|Deriv Time (TD)|___|___|0.88|
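The Table 1 calculations can be sketched as follows, assuming the standard Ziegler-Nichols closed-loop correlations. The Kcu and Pu values shown are illustrative, not the ones you recorded:

```python
def zn_closed_loop(kcu, pu):
    """Ziegler-Nichols settings from ultimate gain Kcu and ultimate period Pu (min)."""
    return {
        "P":   {"Kc": 0.5 * kcu},
        "PI":  {"Kc": 0.45 * kcu, "TI": pu / 1.2},
        "PID": {"Kc": 0.6 * kcu, "TI": pu / 2.0, "TD": pu / 8.0},
    }

# Illustrative values; substitute the Kcu and Pu recorded from your test.
settings = zn_closed_loop(kcu=4.0, pu=7.0)
print(settings["PID"])
```

If Reset Rate or Proportional Band is used instead, take reciprocals (PB = 100/Kc, reset rate = 1/TI), as noted in the exercise text.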
Before testing for the closed loop response, go to Tune | Options tab and set Reset Action ON.
For each type of controller, enter the parameters, put the controller in Auto and test the loop for a 10% (of full scale) set point change.
Calculate or measure the decay ratio, period and (for PI controller only) the period-to-integral time ratio. (This will be used in a subsequent exercise.)
Also, for each type of controller, make a 5% load change. (Press StepIncr or StepDecr.) Mark which controller type has the best, and the worst, response to a load change.
|Period / Integral Time|___|___|
|Load change, best and worst response|Worst|Best|
Ziegler-Nichols Tuning Parameter Correlation For
Closed Loop Process Data
| |P|PI|PID|
|KC|0.5 Kcu|0.45 Kcu|0.6 Kcu|
|TI - Mins/Repeat| |Pu/1.2|Pu/2|
|TD - Minutes| | |Pu/8|
Kcu = Controller Gain that causes sustained oscillation
Pu = Period (in minutes) of sustained oscillation
= Time between any two successive peaks
Continuation from Exercise 9
Now that you have seen the closed loop tuning method, it should be possible for you to return to the Feed Heater model in Exercise 9 and apply the closed loop tuning method in similar fashion to the open loop method.
Exercise in Improving ‘As Found’ Tuning
This exercise has been kindly supplied to IDC Technologies by Wade Associates
To provide practice in improving “as found” tuning for proportional plus integral (PI) control of a self-regulating process.
Quite often a control systems engineer or instrumentation technician is called upon to improve the behavior of a loop that is currently in operation, but without resorting to either the open loop or closed loop testing methods. Assuming that the loop is not behaving acceptably at present, and that process and equipment problems (e.g. sticking valve) have been eliminated, then most persons resort to “trial and error” tuning. For novice tuners, this is often simply an exploratory procedure; “How about changing this knob in this direction and see what happens”.
This laboratory exercise presents a method for directed trial and error tuning, where each tuning parameter change is made for a deliberate reason. The objective is to go from the current unacceptable behavior to acceptable behavior as efficiently as possible; i.e., in the fewest number of tuning parameter changes.
This method is based upon the premise that if a PI controller, controlling a self-regulating process, is well tuned (that is, exhibiting a slightly underdamped oscillation with a quarter wave decay), then there will be a predictable relationship between the period of oscillation (P) and the integral time (TI). This relationship (stated in three different ways) is: P ≈ 1.5 TI; TI ≈ P/1.5; P/TI ≈ 1.5.
This premise leads to the following rule-based procedure:
- If the loop is not oscillating, increase the gain, say by 25 to 50%.
- If the loop is oscillating then:
- 2.1 If the Period is between 1.5 and 2.0 times the integral time (or the period-to-reset ratio is between 1.5 and 2.0), then either increase or decrease the gain as required to obtain the desired decay ratio (such as quarter wave damping)
- 2.2 If the Period is greater than 2.0 times the integral time (or the period-to-reset ratio is greater than 2.0), then choose a new integral time according to the criterion:
- 2.3 If the Period is less than 1.5 times the integral time (or the period-to-reset ratio is less than 1.5), then
- 2.3.1 If the decay ratio is greater than 1/4, then decrease the gain, say by 25 to 50%, depending upon how much the decay ratio exceeds 1/4.
- 2.3.2 If the decay ratio is less than, or approximately equal to, 1/4, then choose a new integral time, using the criterion given in 2.2.
The essence of the rule-based procedure listed above is shown in flow chart form in a figure at the back of this laboratory exercise.
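The rule-based procedure can also be sketched as a single advisory function. This is an illustrative sketch only, assuming TI ≈ P/1.5 as the new-integral-time criterion, since the exact criterion referenced in rule 2.2 is not reproduced here:

```python
def tuning_advice(oscillating, period, ti, decay_ratio=None):
    """Directed trial-and-error rule for a PI loop on a self-regulating process.
    Returns a suggested next tuning move (a sketch of the flow chart)."""
    if not oscillating:
        return "increase gain by 25-50%"
    ratio = period / ti  # period-to-reset ratio
    if 1.5 <= ratio <= 2.0:
        return "adjust gain for quarter-wave decay"
    if ratio > 2.0:
        return "set TI = period / 1.5"   # assumed integral-time criterion
    # ratio < 1.5:
    if decay_ratio is not None and decay_ratio > 0.25:
        return "decrease gain by 25-50%"
    return "set TI = period / 1.5"

print(tuning_advice(True, period=6.0, ti=2.0))  # set TI = period / 1.5
```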
1 Running the program
2 Loop Tuning
Select Control | Retrieve Strategy, Model and Tuning/ PC Pracs/EX-11/. Highlight “Feedbck1.stg” (not “Feedback”) and press Open.
Observe from the top row that this opens the normal Feedback control strategy, as well as the Generic process model which you have worked with in previous laboratory exercises. The thing that is different here is that the loop has already been tuned – for better or for worse. Press Tune and note the existing tuning parameters.
If the PV scale is not in engineering units, select View | Display Range | Engineering Units.
Put the loop in AUTO.
Change the set point to 300 DegF.
NOTE: The following two procedures will NOT produce the same response:
Make the set point change first, then put the loop into AUTO.
Put the loop in AUTO first, then make the set point change.
For the purpose of the procedure described in this laboratory exercise, it is important for you to see the set point response with the loop already in AUTO. Therefore, the correct procedure is to put the loop into AUTO then make the set point change.
Observe the response. Suppose that this is the behavior of the loop when you are asked to make tuning parameter changes. In other words, this is your “as found” condition.
Does the loop need to be retuned? _______
If so, list the “as found” conditions in the top line of the table below. Then use the procedure listed in “BACKGROUND”, or use the flowchart, to make tuning parameter changes. Keep track below of each tuning change you make. (Suggestion: Use set point values of 275 °F and 300 °F.)
|Gain(See Note 1)||TI(See Note 2)||Period||Decay Ratio|
NOTE 1: If you are working in Proportional Band, enter the value of Proportional Band, rather than Gain.
NOTE 2: If you are working in Reset Rate (repeats/min), rather than Reset Time (minutes/repeat), take the reciprocal of the reset rate to obtain Reset Time (TI).
3. A comparison
Many (novice) loop tuners, faced with the “as found” condition, would simply reduce the controller Gain until acceptable damping (e.g., quarter-wave decay) was achieved. We will demonstrate why that may not be a good idea.
3.1 Re enter the original tuning parameters.
With the loop in Automatic, set the set point at 275 DegF and let the loop come to equilibrium.
Working between set point values of 275 and 325 DegF, adjust the Gain until quarter-wave damping is achieved. Do not change the reset.
Suppose the product specifications require that the PV be within a certain tolerance above and below SP to be “within specs” then one of two criteria can be used to evaluate the response of the loop to a disturbance:
(1) Maximum deviation from set point, for a step disturbance of specified size;
(2) The time required for the loop to regain “on spec” production; that is, to get within the tolerance band with no further deviation outside the band.
Press [Zoom] and select a display range from 275 to 300.
Change the set point to 287.5. (This is midway of the chosen display range.) When the loop comes to equilibrium, press StepIncr to make a 5% disturbance (load change).
(At this Zoom range, the horizontal green lines on the display are 2.5 Deg apart. So the tolerance band is within one green line above and below SP.)
3.2 Enter your final tuning parameters from part 2.0. (These should have produced a quarter wave decay response following a set point change, and a period-to-reset ratio which meets the criterion stated in “BACKGROUND”.)
With the loop in Automatic, set the set point at 287.5 DegF and let the loop come to equilibrium.
Make a 5% disturbance (load change) by pressing StepIncr.
3.3 Which tuning combination (Section 3.1 or 3.2) produced the best response to a load change?
The tuning used in Section 3.1 might represent the results of attempting to achieve acceptable damping by adjusting gain only, followed by an attempt to return to set point faster by making the reset faster. (Not wise choices!) The tuning in Section 3.2 is the result of “intelligent trial and error tuning.” The comparison shows less deviation from set point, as well as a faster return to “on spec” production.
Cascade control operations
The objective of this exercise is to gain some experience with the setting up and operation of cascade control. For this we will take the example of the feed heater we have been working with for single loop control and change it into a cascade control loop where the primary or outer loop is for the product temperature control with OP connected to SP of the secondary or inner loop controlling the flow of fuel gas to the furnace. This configuration is described in Chapter 9 and the arrangement within Exercise 12 can be seen in Figure D12.1. If you have studied chapter 9 the principles of cascade control should already be clear and all that remains is to set up the simulation and carry out some trial operations.
D 12.2 Configuration of PC-ControLAB
Firstly it is important to be clear on how ControLAB implements and presents cascade control. This can be seen by using the help facility to display cascade control. Figure D12.2 shows the diagram that explains the controls.
From Figure D12.2 you can see that the primary process will be the temperature control of the feed stream from the heater, and this will remain as PV-1 as per the single loop version. The secondary process will be the flow control loop for fuel to the heater, and PV-2 represents the flow. PV-1 has the range 0 to 1000 °C; PV-2 has the range 0 to 200 kg/hr of fuel flow.
Step 1: Open PC-ControLAB and load the process model PC Pracs/EX-12/IDC Feed Htr casc.mdl. Load the control strategy file PC Pracs/EX-12/ IDC Feed Htr PID Cascade.stg. Observe that this produces a cascade control display with two faceplates for PV-1 and PV-2.
Step 2: Keep the AutoLoad switch off at this stage to avoid any noise in the process. This model incorporates a load variable (LOAD 2) to allow flowmeter noise to be simulated. The secondary load trace (green trace) shows the load value and you should adjust this to approximately 50%.
Step 3: Begin by checking the open loop response of the flow loop and by checking the range of flow available is approximately 0 to 160 kg/hr. Now adjust its tuning to give you a tightly controlled flow using PD control. Remember to set up the PV noise filter under CONTROL/Measurement Options. Set the value to 1.00.
Note 1: You should avoid using integral action in the secondary loop since this will introduce unwanted lag time or phase shift into the secondary loop process characteristics. Essentially the secondary loop becomes part of the process under control as far as the outer loop is concerned. Therefore switch off reset action under Control Options.
Note 2: The control equation should be selected so that the D action operates only on the PV (the measured value) and not on the error; otherwise any noise and fast changes applied to the secondary SP from the outer loop will be amplified by the D term acting on the error SP − PV. Under Control Options select PID non-interacting and “Derivative on measurement only”.
Step 4: Carry out a step change to the SP in AUTO to confirm you have established tight flow control. Switch on the noise load to confirm the stability of control. The result should look something like Figure D12.3.
Step 5: Now check the range of temperature in PV-1 that you can achieve by adjusting the flow set point over its range. Our model has its load set at 60% feed flow, but if you change LOAD 2 you will be able to see the effect of product load changes. In cascade control applications you will not want the secondary loop set point SP-2 to be driven to unrealistic extremes, so it is necessary now to set limits on the output of the primary controller, which is going to drive the SP of the fuel flow control. Set the limits for OP-1 at 10% and 80% by selecting the primary control (Exit Temperature) and using Control Options to set the limits on the output.
Step 6: Now, keeping both controllers in MAN, set the secondary controller OP = 20% and notice that the primary controller OP is tracking the PV of the secondary controller. This PV tracking action ensures that when you switch the secondary to cascade mode, the set point now coming from the primary controller OP will match the secondary. This is known as “bumpless transfer”.
Step 7: Now you may try changing the secondary controller to CASCADE and you will be able to drive the fuel flow SP up and down by operating the primary controller’s output in MAN.
Step 8: Now it’s time to place the primary loop into AUTO. The tuning constants should first be set to the default values of Kp = 1.0 and Tint = 100 minutes. (This will give a very slow and mild response.) Remember to match the SP to the PV. (Warning: do not use SP tracking in this example; check it is turned off under Control/Options.) Try some simple step responses to see that the controller is commanding the secondary flow control SP.
Step 9: Try tuning the primary loop to achieve a good response by the methods we have already used. Increase the proportional gain until you see overshoots, then reduce Tint to eliminate offset in a reasonable time. Finally add a small amount of D time to dampen the slow overshoots. Your results should look something like Figure D12.5, even with noise added. The tuning in this example was: Prop Gain = 1.5, Reset = 2.00 minutes/repeat, Deriv = 0.05 minutes.
Note how the SP of the secondary loop is driven as a slave to the primary loop. The secondary loop responds strongly to small changes and can often become noise prone due to its high gain.
In cascade control the secondary loop helps to provide a stable and fast response for the manipulated variable under all conditions. It is tuned for fast response and can greatly reduce non-linearity (effectively gain changes) due to the characteristics of the valve and its action on the process. This allows for easier manipulation of the slower primary loop which is tuned for accurate and stable response. The absence of integral action in the secondary loop avoids phase shifting and allows scope in the primary loop for strong integral action.
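The structure described above — a slow outer temperature loop whose output becomes the set point of a fast inner flow loop — can be sketched in a few lines of code. This is an illustrative minimal sketch, not the PC-ControLAB implementation; the class, gains and tuning values are hypothetical, and note that (as recommended in Note 1) the secondary controller here has no integral action.

```python
# Minimal cascade sketch: primary (temperature) PI output drives the
# secondary (flow) P-only controller's set point. All names and numbers
# are illustrative, not taken from the PC-ControLAB model.

class PI:
    def __init__(self, kp, ti=None):
        self.kp, self.ti, self.integral = kp, ti, 0.0

    def update(self, sp, pv, dt):
        error = sp - pv
        if self.ti:                       # integral action only if Ti given
            self.integral += error * dt / self.ti
        return self.kp * (error + self.integral)

primary = PI(kp=1.5, ti=2.0)              # temperature loop: P + I
secondary = PI(kp=4.0)                    # flow loop: P only (no reset)

def cascade_step(temp_sp, temp_pv, flow_pv, dt=0.1):
    flow_sp = primary.update(temp_sp, temp_pv, dt)    # OP-1 becomes SP-2
    fuel_valve = secondary.update(flow_sp, flow_pv, dt)
    return flow_sp, fuel_valve
```

Because the secondary carries no reset term, it adds no integral phase lag to the process seen by the primary, which is the point made in Note 1 above.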
D12.5 Initialization in cascade control
Here we consider the effects of PV tracking, initialization and mode changes in cascade control. This is easiest to learn simply by experimenting with the simulated feed heater cascade controls you have already set up and commissioned in this exercise.
Start with the simulation running with the temperature loop (Primary) in AUTO and the flow loop (Secondary) in CASCADE. Check or adjust the secondary load (green trace) to 50%. Set the load random noise signal strength for both loops under LOAD/AUTO LOAD to Max Size 0.3% and Correlation 0.3.
Switch on the AutoLoad for both the temperature and the flow variables. Make the SPE = 500 degC and allow the process to settle down. Now change the flow loop mode to MANUAL and OP2 to 50% of range.
You will see that this causes the temperature to rise to around 700 degrees as you increase fuel flow. The temperature set point remains at 500 degrees, but the OP of the primary control is following the SP of the flow control, and the Auto action of the temperature controller has been suspended because it has no valid destination for its output. This prevents the reset action from “winding up”. The display indicates “PRI-AUTO/Initialize”. See Figure D12.5.
Now change the primary to MAN and note that the display reads PRI-MAN/Initialize but the OP continues to track the SP of the flow control.
The flow controller has set point tracking in force so its SP is moving with the flow PV. The result of this action benefits the moment when you want to switch back to Cascade mode because the SP of the flow control does not have to “bump” to a new remote SP value. Now place the flow control into CASCADE and see that this is so.
Now, because there is no PV tracking on the temperature loop, the SP is far away from the present PV. If you change to AUTO on this loop the process will be sharply disturbed as the controller action kicks in. PV tracking is optional for this loop. Try setting it on and repeat the exercise. Observe that the temperature SP now tracks the PV as you ramp up the fuel flow in MAN mode.
When we change from CASC control to AUTO or MANUAL, some variables are unpredictable, as discussed earlier. When changing back to CASC control, these unpredictable variables may cause a bump within our control system when they have to change back to real and defined values.
Based on the observation that unpredictable values of OP1, SPE1 and SPE2 cause a bump in control when we change to CASC control, the following requirements have to be met to avoid this.
The secondary controller’s SPE2 has to follow PVE2 as long as the controller is in MANUAL mode (PV Tracking). The same requirement for PV Tracking should also be made for the primary controller although this is less critical. In addition, we cannot permit OP1 to assume any unwanted value when the secondary controller is either in MANUAL or AUTO mode. For this reason OP1 has to assume the same value in % as SPE2 has, as long as the secondary controller is not in CASC mode. This is called Initialization. In our example, the flow controller (Secondary controller) is the initializing controller and has to be configured as such. The temperature controller is the initialized controller.
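The tracking and initialization rules just described can be summarized in a short sketch: while the secondary is in MANUAL its SP tracks its PV, and while the secondary is not in CASC the primary's output tracks the secondary SP so that the return to CASC is bumpless. The function and its signature are hypothetical, purely to illustrate the logic.

```python
# Sketch of the PV tracking and initialization rules (illustrative only).

def update_tracking(sec_mode, sec_sp, sec_pv, pri_op):
    """Return (new_sec_sp, new_pri_op) after applying the tracking rules."""
    if sec_mode == "MANUAL":
        sec_sp = sec_pv          # SP tracking in the secondary controller
    if sec_mode != "CASC":
        pri_op = sec_sp          # initialization: OP-1 follows SP-2
    return sec_sp, pri_op
```

When the secondary finally returns to CASC, OP-1 already equals SP-2, so the hand-over produces no bump.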
We come to the conclusion that it is advisable to make use of PV tracking and initialization whenever possible. Special care is necessary when deciding on the use of PV tracking in a primary controller; its incorrect use may cause operational problems.
Exercise in Ratio Control
This exercise has been kindly supplied to IDC Technologies by Wade Associates
Objective: To demonstrate the behavior of ratio control.
Prerequisite: Completion of PC-ControLAB tutorial (under Help | Tutorial ) or an equivalent amount of familiarity with the program operation.
Background: Ratio control is used when it is necessary to maintain a certain ratio between the flow rate of two streams. One stream, called the “wild” flow, is measured only. It is the pacing stream. The other stream, the controlled flow, is controlled so as to maintain a specified ratio between the two.
There are two general types of ratio control. In one type, illustrated in this laboratory exercise, the ratio is manually set. In the other type, the ratio is automatically set by the output of a Primary feedback controller.
A typical configuration of ratio control is shown in the following figure. For illustrative purpose, this laboratory exercise assumes that the controlled flow is Steam Flow, with a measured range of 0–1000 P/hr. The “wild” flow is a process stream, with a measured range of 0–400 gpm.
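The ratio station's job is simply to compute the controlled-flow set point from the measured wild flow and the operator-entered ratio. A minimal sketch, working in percent of each flow's span (the clamping to 100% and the function name are assumptions, not taken from PC-ControLAB):

```python
# Ratio control sketch: controlled-flow SP derived from the wild flow.
# Works in % of span; clamping at 100% is an illustrative assumption.

def ratio_setpoint_pct(wild_pct, set_ratio):
    """SP of the controlled flow (% of its span) = wild flow (%) x ratio."""
    return min(100.0, wild_pct * set_ratio)
```

With the wild flow at 60% of span and a set ratio of 0.5, the controlled flow set point would be 30% of its own span.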
1. RUNNING THE PROGRAM
Start Windows. Run PC-ControLAB.
Click on Control / Select Strategy / Ratio.
Click on Process / Select Model/ PC Pracs/EX-13/ highlight “RatioFlo.mdl” and press Open
2. CONTROLLER SET UP
Press Tune and enter the following tuning parameters (representative of a flow loop).
Reset: 0.15 min/rpt
Deriv: 0.00 min
Put the controller in Auto.
In concept, the set ratio could be back-calculated from the present ratio of measured variables; this would provide bumpless transfer from the AUTO mode to the RATIO mode. This would probably not be wise, however, since the required ratio is normally set by process conditions or product specifications. The operator, not the system, should be the one to enter the required ratio. Therefore the set ratio is not back-calculated in this program or in most commercial systems.
When the ratio is set by the output of a feedback controller (for instance, in the Multiplicative Feedforward control strategy – see Exercise 17), the ratio is back calculated, thus producing the output for the Primary controller. This provides for bumpless transfer from Manual to Auto for the Primary.
If you change the controller to the Ratio mode now, where do you think the controller set point will go? ________
Try it! Put the controller in Ratio. What is the SP? ________
You probably observed that this made quite a “jolt” to the flow loop. A wise operator probably would not change from Auto to Ratio when there was that much difference between the existing ratio and the ratio setting. Instead, he or she would probably set the ratio setting to whatever the application or product specifications required, then adjust the flow set point until the actual ratio met the required ratio. Only then would the switch to Ratio mode be made.
Press R and enter a set ratio of 0.5.
After the loop stabilizes, observe:
Combined feedback and feed forward control
This exercise explores combined Feedback and Feed Forward control. The intention of this section is to introduce this control strategy. Although it is not possible to go into all aspects of the topic in this exercise, the control of the training applications has not been simplified and is realistic. The control example provided here is “Feedheater Control” (Figure 14.1). Later in this exercise we will look at a similar design for boiler drum level control using the exercise provided by Wade Associates.
D14.2 Process and control model
In exercise 12 we used a feed heater example such as this to practice with cascade control. However in that model changes to feed flow were not measured directly and we had to wait for their effect on the exit temperature before the control system could make any corrections. In the feed forward configuration as shown in Figure D14.2 the heat load presented by the incoming feed and the required temperature rise is calculated as a fuel flow requirement and added directly to the output of the primary controller. When the secondary loop is in Cascade mode the primary loop output then creates a more accurate and responsive set point for the heating fuel flow.
In this example the feedforward controller uses the changes in feed flow (and feed temperature) as a heating load change to predict the corrections to the fuel flow set point.
A few points to remember:
The main control is the Feed Forward control whenever all major disturbances are used to calculate the corrective action. In most cases Feedback control serves as a long-term correction; in essence this means that it should act against deviations of the PV from the set point which feed forward cannot be expected to correct. Therefore tuning should be done in the following order.
- Firstly, the flow controller which is common for Feedback and Feed Forward control has to be tuned.
- Secondly, the Feed Forward control and finally the Feedback control have to be tuned. The tuning of the Feedback control should aim for minimum feedback control action; just enough to eliminate the process drift. Anything more adds to the wear and tear of equipment without significantly improving control results.
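In the additive scheme described above, the manipulated-flow set point is simply the feedback controller's output plus a feedforward term computed from the measured load. A minimal sketch; the function name and the bias parameter are illustrative, though the default gain of 0.36 matches the feedforward gain quoted later in this exercise:

```python
# Additive feedforward sketch: SP = feedback trim + Kf * measured load.
# Function name and bias term are illustrative assumptions.

def fuel_flow_setpoint(fb_output, feed_flow, ff_gain=0.36, bias=0.0):
    """Fuel flow set point from feedback trim plus feedforward action."""
    return fb_output + ff_gain * feed_flow + bias
```

The feedback term only has to remove the residual drift the feedforward prediction misses, which is why it can be tuned gently.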
D14.3 Tuning Method
For this exercise we shall use the heater process model made available by Wade Associates with the ControLAB simulation package. The following exercise steps are based closely on the details kindly supplied to IDC Technologies.
Description of the feed forward configuration
After starting the ControLAB program select Control/Retrieve Strategy Model and Tuning and then open the file “IDC FEEDFWD HTR.stg”. This action incorporates the process model file “IDC FEEDFWD HTR.mdl”. The configuration of this cascade + feedforward system is shown in Figure D14.4.
The strategy for setting up the feed forward controller is to measure the open loop response from the feed flow to the exit temperature to find the process transfer function A(s), and to find the transfer function B(s) from the fuel flow to temperature.
From this the gain, lead and lag time constants of the feedforward controller (i.e. the transfer function FF(s)) can be estimated using the relationship shown in Figure D14.4.
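A gain-plus-lead-lag element of this kind is easy to realize digitally. The discrete first-order lead-lag below is a sketch only; its parameter names and the sample-time handling are assumptions, not PC-ControLAB's internals, and the numerical values in the usage note are the tuning results quoted later in this exercise.

```python
# Discrete lead-lag filter sketch for a feedforward element
#   FF(s) ~ gain * (1 + t_lead*s) / (1 + t_lag*s)
# (dead time would be handled by a separate delay stage).

class LeadLag:
    def __init__(self, gain, t_lead, t_lag, dt):
        self.gain, self.t_lead, self.t_lag, self.dt = gain, t_lead, t_lag, dt
        self.state = 0.0                  # first-order lag state

    def update(self, u):
        # lag: first-order filter of the load input
        a = self.dt / (self.t_lag + self.dt)
        prev = self.state
        self.state += a * (u - self.state)
        # lead: add a scaled derivative of the filtered signal
        deriv = (self.state - prev) / self.dt
        return self.gain * (self.state + self.t_lead * deriv)
```

At steady state the derivative term vanishes and the output settles at gain times the input, so a unit load change with gain 0.36 eventually contributes 0.36 to the flow set point.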
The following boxes summarize the procedure to be followed to arrive at the settings for the feedforward controller. You should be familiar with obtaining the parameters from Exercise 2 and you may use those notes to revise the method.
Once we have found the transfer function parameters for FF(s), these are to be loaded into the FF tuning page on the controller display, after which response tests can proceed. What we are trying to achieve is a response from the Ffwd stage that will very closely compensate the flow set point for changes in the load.
Step 1: Start PC-ControLAB and select the Control Strategy PC Pracs/Ex-14/IDC FEEDFWD HTR.stg
Step 2: Confirm that you have the process model “IDC FEEDFWD HTR” (see the PC-ControLAB title bar). Note the process temperature range is 0–500°F and the fuel flow range is 0–100 GPM.
Step 3: Set up the secondary controller tuning with Gain = 0.5 and Reset = 0.15 minutes/repeat. Now test the step response in AUTO (press ON when Fuel Flow Control is selected) and verify that it is fast and stable. Leave the loop in AUTO (ON) with SPE-2 = 30.
Step 4: Switch the display to the primary controller and make sure Ffwd is OFF and the mode is MAN. (indicates yellow). Changing feed load will now only affect PV-1. Set the PRI LOAD to 60%. The temperature should be settled around 360°F.
Step 5: Make step changes of 5% in the load by using the Step buttons and zoom the display so that you can measure the step response of the temperature PV-1. From this curve, and using the methods described in Exercise 1 or 8, obtain the parameters for the transfer function A(s) as described in Figure D14.4 above.
Lag time constant Tlga.........................
Dead time Tda......................................
Step 6: Obtain the B(s) transfer function by testing the step response of the secondary controller to a step change in SPE-2, placing the primary controller in MAN and the secondary controller in CASC. Make a 10% downward step change in OP-1 (30% to 20%) to create the step response and obtain the parameters for the transfer function B(s) as described in Figure D14.5 above.
Lag time constant Tlgb...............................................
Dead time Tdb...........................................................
Step 7: Now arrange the two transfer functions as shown in Figure D14.6 above to extract the parameters for FF(s) and load these into the feedforward controller settings under TUNE, under the tab “Feedfwd”.
Step 8: You should now be able to test the response of the feedforward controller. Keep the secondary controller in CASC and leave the primary control in MAN so that there is no corrective action from the temperature feedback controller. Press the Ffwd button to connect the feedforward signal to the set point of the fuel flow control. The temperature (PV-1) should remain steady, and if you now step the load upwards by 5% you should see that the fuel flow steps upwards approximately 2 minutes later, as the dead time allowance in the Ffwd controller expires.
If the tuning is good PV-1 should show only a small disturbance and ideally it should not move more than a few degrees. Some adjustment of gain may be needed to improve the matching. See Figure D14.7 for a typical result.
Step 9: Now that you have a good predictive control function, the temperature feedback control can be introduced to provide the slower acting correction to fuel feed, bringing the exit temperature into alignment with the required exit temperature set point. Using the step response data from the B(s) test you may use the Ziegler–Nichols formula to arrive at settings for gain and reset actions, or you may keep the gain low and use moderate integral and derivative action to give a stable corrective action. Now you will see that the process operates close to set point even when the AutoLoad noise is switched on. See Figure D14.8. You can also switch the feedforward in and out to see how the FF signal helps to reduce the variability in PV in the presence of disturbances and noise in the load.
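The Ziegler–Nichols open loop (reaction curve) rule mentioned in Step 9 converts the step-test parameters — process gain K, dead time L and time constant T — directly into controller settings. A sketch of the classic PI form (the function name is ours; the formulas are the standard published ones):

```python
# Ziegler-Nichols open-loop (reaction curve) PI settings, as a sketch.
# K = process gain, L = dead time, T = time constant from the step test.

def zn_open_loop_pi(K, L, T):
    """Classic Z-N PI tuning: Kp = 0.9*T/(K*L), Ti = L/0.3."""
    kp = 0.9 * T / (K * L)
    ti = L / 0.3                 # integral (reset) time, same units as L
    return kp, ti
```

As noted in the step, these values are a starting point; for this loop you may prefer to keep the gain lower than the formula suggests.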
We have seen how feedforward signals can be added to feedback signals to compensate for the effect of disturbances in the loading process. It should be noted that this method depends on the relationship between the loading and the corrective action being constant in gain and in dynamic lags. In the heater model the gain is not constant over the full temperature range; it is highest at low loads and high temperatures. However, the differences are relatively small and the controller remains stable over the whole range. Where this is not the case, the sensitivity of the feedforward controller may have to be changed as the operating point changes. This is fairly easy to do, but it is important to be aware of the likelihood of this condition in practical applications.
The success of feedforward control also depends on the ability to measure the disturbances affecting the process. If we cannot detect the disturbances with a sensor we cannot build in the compensation.
Feedforward control with feedback can be a very effective control strategy, especially where the process is subject to large disturbances and where the load parameters can be measured. Processes with long dead times and long lags will benefit particularly.
Non-linearity in the process will make the matching of feed forward signals change with the conditions and this may lead to instability problems unless variable gains are employed.
The arrangement of controllers in cascade with a Feedforward switch allows users to operate easily with or without feedforward as required, and this in turn assists the tuning and setting up procedures.
D14.7 References and Tuning Answers
This exercise is based on the exercise produced by Wade Associates which can be accessed in the IDC Workshop as filename: 16_AddFf.pdf. The tuning results obtained in the Wade exercise are presented in the file 16_AddFf.pdf with some additional notes. The tuning values found from tests and used in this version of the exercise are as follows:
Feedforward Gain Kf = 0.36
Lead Time (mins) = 8.00
Lag Time (mins) = 10.00
Dead Time (mins) = 2.00
D14.8 Further applications
Two very widely seen applications of feedforward additive control are found in typical large industrial boilers. These are:
Balanced furnace draft control: Furnaces in boilers and other large combustion systems often have to be controlled at very slightly negative pressure relative to atmospheric pressure (i.e. a small negative gauge pressure). Feedforward control senses the air flow to the furnace and adds a value to the output of the furnace draft pressure controller to change the suction fan (ID fan) speed or dampers in anticipation of a pressure change.
Boiler drum level control uses the steam flow measurement to drive the feedwater inlet flow to balance the boiler drum water inventory, whilst the level controller is used to trim the level in the presence of large transients. This example is the subject of Exercise 15 overleaf.
Application of feedforward to control of boiler drum level
This exercise has been kindly provided to IDC Technologies by Wade Associates for application using their PC-ControLAB simulation package.
OBJECTIVE: To illustrate a liquid level control phenomenon, known as “shrink and swell”, that is very common with steam boilers.
PREREQUISITE: Completion of the following exercises:
Tuning Liquid Level Control Loops
Characteristics of Cascade Control
Characteristics of Additive Feedforward Control
BACKGROUND: This section will demonstrate the control of a boiler drum level in the presence of the phenomenon known as “shrink and swell”. The first type of control will be standard level control, cascaded to feedwater flow control. The second type will be three-element drum level control, in which the steam outflow rate will be measured and will adjust the feedwater flow controller by feedforward control. The level controller output will be used as feedback trim. See Figure D15.3.
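The three-element computation itself is a one-liner: the measured steam outflow sets the bulk of the feedwater demand (mass balance), and the level controller output trims it. The sketch below is illustrative; the function name is ours, though the default feedforward gain of 1.0 matches the value entered later in this exercise.

```python
# Three-element drum level control sketch: feedwater SP from steam flow
# feedforward plus level-controller trim. Names are illustrative.

def feedwater_setpoint(steam_flow, level_trim, ff_gain=1.0):
    """Feedwater SP = Kf * measured steam outflow + level feedback trim."""
    return ff_gain * steam_flow + level_trim
```

Because the feedwater demand follows the steam flow immediately, the level controller no longer has to chase the shrink-and-swell transient with large corrective moves.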
RUNNING THE PROGRAM
1. DRUM LEVEL CONTROL
1.1 Control with Normal Cascade
Select and Open: Control / Select Strategy / PC PRACS/ EX-15/ DRUM LEVEL FFwd.stg
Check that the Process Model has opened as “level2” Process.
Select Process / Change Parameters. Set or confirm the following:
Enter the following tuning values:
Put the Secondary controller in Cascade. Put the Primary controller in Automatic. For the moment, leave Ffwd OFF.
Change the Primary controller set point by 10%. The response should be essentially the same as in Section 3; that is, quarter cycle decay with a period of about 30 minutes. (In an actual application, the drum level response would be much faster than this.) When the loop has come to equilibrium, put the Primary controller in Manual. With the Primary controller selected, press StepIncr to simulate an increase in outflow. After the Primary PV has decreased about 15%, press StepDecr.
Put the Primary controller in Automatic. When the PV comes to equilibrium at the set point, then put the controller back in Manual.
So far we have not added the simulation feature that creates the drum level “shrink and swell” effect. We will add this feature now and repeat the above test. Through the Process | Change Parameters table, select the parameter labeled “BLK 57 LEAD TIME”. (Be sure NOT to select “BLK 57 LAG TIME”.) Change the parameter value from +1.5 to −3.0. Press [OK].
With the Primary controller selected and in Manual, press StepDecr to simulate a decrease in outflow. When the Primary PV has increased about 15%, press StepIncr.
You should have observed that when StepDecr was pressed, the level initially decreased, then began to increase. This is known as drum level “shrink”. Also, when StepIncr was pressed, the drum level increased further before settling to an equilibrium value. This is known as drum level “swell”. With straight level-to-flow cascade control, when the steam flow rate is decreased, the Primary (level) controller will, due to the “shrink” effect, initially sense a decrease in level and consequently increase the set point of the Secondary (feedwater flow) controller. The correct response would have been to decrease the feedwater flow rate.
Thus, due to the shrink and swell phenomenon, pure cascade control for this application will produce poor or unacceptable performance, particularly if the same tuning parameters are maintained.
Adjust the Primary controller SP to match the current PV, then put the Primary controller in Automatic. (Leave Feedforward Off.) Change the set point to 5% above the current PV.
If the answer is NO, then we will significantly loosen the Primary controller reset tuning parameter. Enter the following for the Primary:
Your answer should be “YES”, but you should also observe that the loop is very sluggish, taking an extremely long time to stabilize at set point.
When the loop is at (approximate) equilibrium, press AutoLoad to activate automatic, random load changes. Observe the loop for at least 60 (simulated) minutes.
We will now implement the three element drum level control scheme, using the steam flow measurement as a feedforward signal.
2.2 Three Element Drum Level Control
Select the Primary controller, press Tune, then select the Feedfwd tab. Enter 1.0 for Feedforward Gain. Leave the other parameters at 0.0. Press Clear.
At the Primary controller, put Feedforward control ON (Press FFWD.) Observe the loop for at least 60 (simulated) minutes, or longer.
Once near the set point, does the level remain near to the set point?
See overleaf for answers to the above points:
IDC Note: The feed forward technique applied in the exercise greatly improves the response of the level to disturbances in steam load flow, as intended for boiler level control. Note however that the tuning of the level control primary loop remains sluggish in response to level changes. This is normally acceptable for boilers, since there is usually no requirement to change the set point of the drum level.
Answers to the exercise in Drum Level Control as supplied by Wade Associates
Dead-time compensation in feedback control
This exercise will give some practical experience of the closed loop control strategy necessary when a long dead time is part of the process to be controlled. Figure D16.1 shows the block diagram of a control loop with an added process simulation as a means of overcoming process dead time.
Note concerning terminology
Within this exercise we have a conflict of terminology. As we have no physical industrial plant to control, we have to simulate the behavior of such a plant. However, one of the tools for controlling an industrial plant itself requires a process simulation, so we would not know which simulation is meant unless we make a clear distinction between the process simulation acting as a stand-in for the real process and the simulation acting as a tool of control.
Therefore, in the description of this exercise, the term simulation is never used if referring to the real process; the terms for the real industrial process will be used here instead. In this exercise, the term simulation is used only for a process simulation which exists in addition to the industrial process and would exist in a real industrial plant for control purposes as well.
Step 1: Start the PC-ControLAB program and open the process model “IDC Ex 16.mdl”. The tuning parameters for the process simulation are in the PROCESS/“Change Process Parameters” tool bar. They are K-SIM, Simlag 1, Simlag 2 and SIM Dead Time. The values have been initialized to the values required to match the real process as closely as possible. The process has two lags in series of 0.2 and 0.1 minutes and a dead time of 0.2 minutes.
Step 2: Set the Horizontal Grid Scale (under View) to “seconds” and set the variables display to show PV-1 and PV-2. PV-2 is PV-Real, but the control loop is connected to PV-1, which is the predicted PV-PRED. Thus the feedback control operates on the predicted PV value, but in the steady state this value becomes the same as PV-Real.
Step 3: Then, on the Controller Display set MODE to MANUAL. Set the OP to 10% and wait for the process to settle down. Make a change of OP from 10% to 40% of range. See Figure D16.2.
Observe the different reaction curves of the process variable coming from the real process (PVE-REAL) and the simulated process variable (PVE-PRED). It can be seen that the reaction of the predicted process variable is similar, but compared with the real process variable there is no dead time. This is a simple form of model predictive control in which the predicted PV may be used as the feedback signal for a PID control loop instead of the actual PV. Since both forms of PV reach the same steady state value, the predicted PV allows some level of conventional tuning to be applied whilst reducing the effects of the long dead time on stability.
The principle of the dead-time compensator model is that the dynamic response of the model is split into the part without the dead time and a separate part including the dead time. The pure lag signal may then be used in feedback for a period of time until the real PV signal change develops. A small proportion of the real PV is added to the predicted PV to produce a combined PV that resembles a first order lag response. If there are errors between the dead times of the process and the model, these are calculated in the dead-time stage of the model and added to the output as a correction signal that will fall away as the disturbance decays.
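This split — a fast model without dead time, plus a delayed copy compared against the real PV — is the classic Smith predictor structure, and it can be sketched compactly. The class below is an illustrative sketch with a single first-order lag model (the ControLAB model uses two lags); names and parameters are assumptions.

```python
# Dead-time compensator (Smith predictor) sketch. The fast model output
# is the predicted PV; the same model delayed by the dead time is
# compared with the real PV and the mismatch is added as a correction.
from collections import deque

class SmithPredictor:
    def __init__(self, k, tau, dead_steps, dt):
        self.k, self.tau, self.dt = k, tau, dt
        self.model_pv = 0.0                        # model without dead time
        self.delay = deque([0.0] * dead_steps)     # dead-time buffer

    def update(self, op, real_pv):
        # first-order lag model of the process, no dead time
        a = self.dt / (self.tau + self.dt)
        self.model_pv += a * (self.k * op - self.model_pv)
        # delayed model output, aligned in time with the real PV
        self.delay.append(self.model_pv)
        delayed = self.delay.popleft()
        # predicted PV = fast model + (real PV - delayed model) correction
        return self.model_pv + (real_pv - delayed)
```

Note that at steady state the delayed model equals the fast model, so the predicted PV converges to the real PV, exactly as observed with PVE-PRED and PVE-REAL in this exercise.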
In order to explore the purpose and impact of the model parameters, do the following for each parameter (for K-SIM, Simlag 1, Simlag 2 and Sim Dead time):
- Make sure, the controller is in MANUAL mode.
- Change the tuning constant.
- Make a step change to the OP.
- Carefully observe the reaction curves of PVE-REAL and PVE-PRED.
- Return the tuning constant to its correct value.
- Repeat the above operations with different values for each tuning constant. Avoid experimenting with more than one tuning constant at any one time in order not to confuse the results.
The reaction curve of PVE-PRED changes as follows when the tuning constants are changed:
- Changing K-SIM changes the magnitude of the simulated response.
- Changing Simlag 1 and Simlag 2 changes the dynamics of the simulation (the form of the reaction curve).
- Changing the SIM Dead Time causes significant deformations of the reaction curve of PVE-PRED. The character of these deformations indicates whether the dead-time parameter is too long or too short. Compare the reaction curves you find with the ones shown in Figures D16.3, D16.4 and D16.5.
All of the responses shown in the figures were obtained with the simulation gain K-SIM set to 1.6. If you increase or decrease the gain value you will see that overshoots and undershoots easily occur. The difficulty with this type of model is that the effects are highly interactive, and some time and experience are needed to separate the variables and find the best fit.
K-SIM, Simlag 1 and Simlag 2 shape the simulated reaction curve of PVE-PRED. Simdead time serves to match the simulated dead-time with the process dead-time. Interactions between the effects make it tricky to find the best fit but usually the dead-time is easy to measure so this parameter can be set first.
D16.7 Operation of controller tuning
Once you have established a reasonably smooth first or second order lag response from the predicted PV, this variable can be used to attempt the tuning of a basic PID feedback control loop. In practice this procedure may use the conventional Ziegler–Nichols approach, since you now have a typical step response. However, the loop gain values will need to be kept low to ensure stability, so start with a very low loop gain and a very slow reset time.
The values you obtain will be sufficient to provide good loop stability, and you will find that the integral action can be raised in small steps until the onset of oscillation. The results of your tuning efforts should be similar to the results shown in Figure D16.6 below.
Note of caution: The tuned response shown in Figure D16.6 was obtained when running the PC-ControLAB program with the scroll rate set to Seconds. Please note that due to the short time constants involved in the model, the simulation will not be stable in the “Minutes” scroll mode.
The basic idea of dead-time compensation is straightforward. Nevertheless, it is important to realize that it is a strategy of intermeshed loops. The control loop using the predicted PVE-PRED is intermeshed with remnants of the real PVE-REAL via the error calculation. In addition, one has to realize that only the reaction to control actions is predicted; no prediction whatsoever can be made about the impact of disturbances. Derivative control is most effective in counteracting integral and lagging behavior, but has its limitations when applied to dead-time problems. Therefore, one should take care when adding derivative action that noise does not present added problems.
Exercise in gain scheduling for control of non-linear processes
This exercise has been kindly supplied to IDC Technologies by Wade Associates
OBJECTIVE: To demonstrate the benefits of scheduled tuning for non-linear processes.
PREREQUISITES: Completion of the following exercises:
- Exercise 9: PID Tuning from Open Loop Tests
- Exercise 11: Improving “As Found” Tuning
BACKGROUND: If a process is highly non-linear and is operated over a wide operating region, then either the tuning parameters must be changed for different operating conditions, or else the controller will be tuned for the “worst case” condition, resulting in less than optimum tuning at other conditions. One means of coping with this is to issue instructions to the operator to change tuning parameters for different operating conditions. Another possibility is to develop a set of tuning parameters for each operating condition and employ some mechanism which applies the correct tuning parameter set for each condition. This is known as “scheduled tuning” (also sometimes called “adaptive gain”). A basic premise is that the operating condition can be indexed by a single variable, such as the measured process variable, the set point, controller output, or a disturbance variable to the loop. This variable can then become the key to finite regions of the operating zone, each with its own set of tuning parameters.
A slight variation of the above is to assign nominal parameters, then apply a unique multiplying factor to each of the parameters for each defined region. This is the approach used by PC-ControLAB.
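The region-multiplier mechanism can be sketched as a simple lookup. The region boundaries and multiplier values below are placeholders for illustration, not PC-ControLAB's internals:

```python
def scheduled_tuning(key, nominal_gain=2.0, nominal_reset=3.0,
                     schedule=((65.0, 0.6, 1.0),            # Region 1: key < 65
                               (85.0, 1.0, 1.0),            # Region 2: base region
                               (float("inf"), 1.5, 1.2))):  # Region 3: key >= 85
    """Return (gain, reset) for the region containing `key` by applying
    that region's multipliers to the nominal tuning parameters."""
    for upper_bound, gain_mult, reset_mult in schedule:
        if key < upper_bound:
            return nominal_gain * gain_mult, nominal_reset * reset_mult
```

In the base region the multipliers are 1.0, so the nominal parameters are returned unchanged; in the other regions the nominal values are scaled up or down.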
1. RUNNING THE PROGRAM
Start Windows and Start PC-ControLAB
Confirm that you are running the Feedback control strategy.
Select Process | Select Model, browse to PC Pracs/EX-17, highlight “Temp.mdl” and press Open.
2.2 Tuning for Base Conditions
Notice that the PV scale is 0 to 100 DegC, that the set point is at 60 DegC, and that the load variable (process flow rate) is 0 to 100 m3/hr. For this exercise, we will assume that the base conditions (normal operating point) are close to the current set point value of 60 DegC, with a load variable of 75% of full scale. The load variable can vary, however, between 50 and 100% of its full range.
We’re going to be making fairly small set point changes, so press Zoom and change the vertical scale of the PV from 0 – 100 to 55 – 65 DegC. (Why small SP changes? To avoid having the controller output hit a limit when we are working at high loads.)
With the controller in Manual, change the controller output from 48% to 53% (a 5% change). Use the open loop testing method (Laboratory Exercise 9) to determine an initial set of tuning parameters for a PI controller.
Enter these parameters. Put the controller in AUTO. Fine tune these parameters if necessary, using the “Improving As Found Tuning” method (Laboratory Exercise 11). Make your set point changes between 60 and 62 DegC.
When you have determined satisfactory tuning parameters for this condition (load variable = 75 m3/hr), record them below:
Process Flow: 75 m3/hr. Gain: ________ Reset: ________ min/rpt.
Now make load changes of 5% (Press StepDecr once. After the PV settles out, press StepIncr once.) and observe the response.
Return the Set Point to 60 DegC.
4. Other Operating Points
Change the process load to 95% of full scale. (Press StepIncr 4 times).
Change the SP to 62 DegC, then to 60 DegC. Observe the responses.
Also make load changes of 5% (press StepDecr once. After the PV settles out, press StepIncr once.) Observe the responses.
How do these responses (to SP and load changes) compare with what you observed when the load was at 75%?
Change the process load to 55% of full scale. (Press StepDecr 8 times).
Change the SP to 62 DegC, then to 60 DegC. Observe the responses.
Also make load changes of 5% (Press StepDecr once. After the PV settles out, press StepIncr once.) Observe the responses.
How do these responses (to SP and load changes) compare with what you observed when the load was at 75%?
What you should have observed so far: Starting with the controller tuned acceptably when the load is 75%, a significant load increase with the same tuning parameters causes the control loop to be slightly more sluggish. A significant load decrease with the same tuning parameters causes the loop to be too aggressive.
Tuning for Other Operating Points
While we are at a load of 55%, adjust either the Gain or the Reset, or both, until you get approximately the same type of response for a SP change from 60 to 62 as you did when the load was 75%. Record your tuning parameters.
Process Flow: 55 m3/hr. Gain: ________ Reset: ________ min/rpt.
Now go to 95% load and adjust the tuning parameters to get approximately the same type of response as you did when the load was 75%.
Process Flow: 95 m3/hr. Gain: ________ Reset: ________ min/rpt.
5. Calculation of Tuning Schedule
Suppose we divide the operating zone into 3 regions, based upon the load value.
- Region 1: Load < 65%. (In practice, we never use a load less than 50%.)
- Region 2: Load between 65% and 85%.
- Region 3: Load ≥ 85%.
Since the base case (see Section 2.2) had a load of 75%, Region 2 is our base operating region. We will enter the tuning parameters for that region as our nominal parameters, and adjust the parameters for the other regions by multiplicative factors. (The multiplicative factor for Region 2 will be 1.00.)
Calculate the Gain multiplier and Reset (Min/Rpt) multiplier for Region 1 and Region 3.
Enter these values into the table below. (Note that the Deriv multipliers can be left at the default value of 1.0, since the base Deriv value is 0.0. Also note that Region 2 represents our base case, so those multipliers are 1.0.)
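As a worked example of the arithmetic, with hypothetical tuned values in place of the blanks (substitute the numbers you actually recorded):

```python
# Hypothetical recorded tuning values; substitute your own results.
base_gain, base_reset = 2.0, 3.0   # Region 2, load = 75 m3/hr (nominal set)
r1_gain, r1_reset = 1.2, 3.0       # tuned at load = 55 m3/hr
r3_gain, r3_reset = 3.0, 3.6       # tuned at load = 95 m3/hr

# Each multiplier is simply the region's tuned value / nominal value.
r1_gain_mult = r1_gain / base_gain      # 0.6
r1_reset_mult = r1_reset / base_reset   # 1.0
r3_gain_mult = r3_gain / base_gain      # 1.5
r3_reset_mult = r3_reset / base_reset   # 1.2
```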
Press Tune | Schedule tab.
Enter, or confirm, that the Number of Regions is set at 3.
Select PV-4 for the Key variable. (This process model has PV-4 connected directly to Load-1, which represents the process flow disturbance.)
Fix up the table to resemble columns 2 - 6 listed above.
Select Scheduled Tuning ON.
Return to the Tuning tab. Re-enter the tuning values you determined for the base case (Section 2.2).
Press Clear to remove the Tuning dialog box.
6. Testing of Scheduled Tuning
Change the load to 55%.
For each load value between 55% and 95%, make set point changes to 62, then to 60. Observe the response. Then increment (press StepIncr) the load to the next level and repeat.
Ideally, you should see a fairly satisfactory response at each load level. If not, consider what changes you might make to the schedule. Consider changing one or more of the following:
- Number of regions
- Break points between regions
- Multiplying factors.
Record your final tuning schedule in the table below:
Number of Regions: _________
Answers to exercise W13
Answers to practical No 8
Intelligent Trial and Error
Trial and error: An organized procedure
Intech May 2005
By Harold Wade
Copyright Harold Wade. Reproduced by kind permission of Dr Harold Wade.
Seek the fewest number of parameter changes and experimental upsets to the process.
Ziegler-Nichols, Cohen-Coon, and related techniques for controller tuning have been around for over 50 years.
More recently, a number of commercial software tuning aids have become available. In spite of this, it is still commonplace to tune control loops manually using a trial-and-error technique.
An inexperienced tuner normally does not follow any organized procedure and consequently spends too much time, causes unnecessary upsets to the process, or gives up with less than satisfactory results.
With an organized procedure, it should be possible to obtain acceptable performance as rapidly as possible; that is, with the fewest number of parameter changes and with minimal experimental upsets to the process.
Here is an organized procedure for tuning that will achieve that goal. This procedure is only applicable to a certain class of control loops, specifically loops for self-regulating processes using a proportional-plus-integral (PI) controller.
Although control loops for non self-regulating processes (such as liquid level) are not in this domain, nor are loops using derivative control, the applicability still covers a significant proportion of all loops.
Many consider trial-and-error controller tuning an art: a skill requiring the exercise of intuitive faculties, which one cannot learn through study.
A science, on the other hand, is a methodological activity. To convert an art into a science, one must devise methods that cover normal circumstances plus most of the anomalous circumstances that may arise.
Investigate several angles
A typical tuning session begins when a plant operator requests help from an instrumentation technician or control systems engineer. Usually, the request results because the loop is oscillating unacceptably or it is sluggish in returning to the set point after a load upset.
The cause for the unsatisfactory performance may not lie with the controller tuning. Therefore, before starting to tune, it is important to investigate several process, equipment, and operating conditions. For instance, is it a sticky valve? Are there external disturbances affecting the loop?
For the purpose of this discussion, however, let us suppose we've ruled out other causes and it is truly a controller-tuning problem. Here is where an organized procedure for tuning the loop will be helpful.
In the absence of other disturbances, the existing tuning parameters give rise to the present control loop behavior. Collectively, the tuning parameters and the characteristics of the loop behavior are the as-found data.
Two parameters, controller gain and integral time, represent the as-found tuning. If the loop is oscillating, two more parameters characterize the process behavior: the decay ratio and the period of oscillation.
There exists an alternative definition of decay ratio, which is needed only in an anomalous situation, such as when one of the first two peaks does not cross the set point. The period of oscillation and integral time must be in the same units, for example minutes for the period and minutes (per repeat) for the integral time.
The set of as-found data represents a piece of knowledge about the control loop, which can be a starting point for improving the loop performance.
If a Ziegler-Nichols type test were to run on the loop, either in the open or closed loop, we would in essence be starting anew and ignoring the knowledge we now have about the loop behavior.
Desirable set point response
The basic premise for this procedure is when we have a well-tuned loop, there’s a predictable relation between the period, P, of oscillation and the integral time, TI. A reasonable relationship is:
1½ < P/TI < 2 (1)
Equivalent expressions, which will be useful later, are:
1½ TI < P < 2 TI (2)
1/2 P < TI < 2/3 P (3)
Setting the period-to-integral-time ratio between 1½ and 2 is equivalent to setting the phase shift through the PI controller to lie in the range of approximately 13° to 18°.
Experienced tuners have often qualitatively estimated the phase lag through a PI controller and, if excessive, have increased the integral time. The criterion stated here merely makes this a quantitative procedure. The target range, 1½ to 2, is an empirical ratio rather than one that has come about through analysis. Experience shows this is valid for most loops where the dead time is a relatively small fraction of the process time constant. For longer dead-time-to-time-constant ratios, the target range can move upward.
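The 13° to 18° figures follow directly from the PI controller's frequency response: at the oscillation frequency ω = 2π/P, the phase lag of a PI controller is arctan(1/(ωTI)) = arctan((P/TI)/2π). A quick check:

```python
import math

def pi_phase_lag_deg(period_over_ti):
    # Phase lag of a PI controller, in degrees, at the frequency whose
    # period is (period_over_ti) times the integral time TI.
    return math.degrees(math.atan(period_over_ti / (2.0 * math.pi)))

print(round(pi_phase_lag_deg(1.5), 1))  # about 13.4
print(round(pi_phase_lag_deg(2.0), 1))  # about 17.7
```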
Another premise is that a decay ratio of approximately 1/4 (quarter-amplitude decay) represents acceptable tuning. This well-known and accepted criterion is a compromise between the most desirable set point response (no overshoot) and good response to a load upset (return to set point as rapidly as possible).
For some applications, it is preferable to favor the set point response at the expense of less favorable response to a disturbance. This procedure will provide a couple of fire escapes to accommodate this preference.
The intelligent trial-and-error tuning procedure is as follows:
If the loop is not oscillating, increase the controller gain enough to cause a damped oscillation to occur. This will allow observance of the period of oscillation.
If the loop is oscillating but the period of oscillation is not within the range of 1½ to 2 times the integral time, readjust the integral time to lie within the range of 1/2 to 2/3 of the present period.
If the period of oscillation does meet the criterion (it lies between 1½ and 2 times the integral time), adjust the controller gain until 1/4 decay, or another desired damping characteristic, is achieved.
Each time a single tuning parameter is changed, the effect of this new parameter should be determined by making a small set point change. Only one tuning parameter should change at a time.
A convenient method of logging the progress of a tuning session uses this form. The first line is for recording the as-found data, and each successive line is for recording the parameter changes plus resulting behavior of new trials. Acceptable performance is usually realizable in three-to-four trials.
One should manipulate only one tuning parameter per trial. Also, if the units of reset action are repeats per minute rather than minutes per repeat, invert the actual tuning parameter, reset rate, to get the integral time, TI. When a new value for TI is determined, invert it to obtain the actual entry parameter.
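The decision logic of the procedure can be sketched as a small function. The quarter-decay tolerance of ±0.05 is an assumption for illustration; in practice the flowchart and log form remain the working tools.

```python
def next_tuning_step(oscillating, period, integral_time, decay_ratio):
    """Advise the single parameter change for the next trial of a PI loop
    on a self-regulating process (one change per trial, then observe a
    small set point change)."""
    if not oscillating:
        return "increase gain until a damped oscillation appears"
    ratio = period / integral_time
    if not (1.5 <= ratio <= 2.0):
        # criterion not met: move TI into 1/2 to 2/3 of the present period
        lo, hi = period / 2.0, 2.0 * period / 3.0
        return "set integral time between %.1f and %.1f" % (lo, hi)
    if abs(decay_ratio - 0.25) <= 0.05:
        return "tuning acceptable (close to quarter-amplitude decay)"
    if decay_ratio < 0.25:
        return "increase gain toward quarter-amplitude decay"
    return "decrease gain toward quarter-amplitude decay"
```

For as-found data consistent with the worked session that follows (a 15-minute period against a 5-minute integral time), the first pass recommends an integral time between 7.5 and 10 minutes.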
Behind the byline
How to improve the as-found state
Here is how a typical tuning session will play out.
With the as-found behavior, the period is greater than 2 times TI, so exit the right end of decision box 3 to a box that instructs us to set TI to somewhere between 1/2 and 2/3 of the present period, or between 7.5 and 10 minutes.
Suppose we choose 8.0 minutes per repeat. Enter this, and make a small set point change. Now the response appears as in the second figure. Record the results on trial line 1 of the log.
In an actual plant situation, we would probably stop at this point. However, for instructional purposes, suppose we try to get closer to 1/4 decay. The benefit would be a slightly improved response to a disturbance.
The period-reset time criterion is now met, so exit the bottom of decision block three, the left end of block five, then exit the bottom of decision block six. (If we did not actually want quarter-amplitude decay, we would exit the right end of block six and be finished.)
The instructions are to increase the gain. At a decay ratio of 0.13, the figure below instructs us to multiply the gain by a factor of about 1.15. Multiplying the current gain by this factor gives a new value of 2.3. Enter this, and make another small set point change.
This yields the behavior shown by the figure below-behavior after second tuning. Record the results on trial line two of the log.
Again, we are tempted to say, “Good enough.” But suppose we make one more attempt.
Following the same path as before, and again using the flowchart, we get a gain adjustment factor of 1.1, which results in a new gain of 2.53 (a gain entry of 2.5 would be sufficiently precise). This produces the behavior shown in the “loop responses” figure-behavior after third tuning. We log our results on trial line three.
This time, we are sufficiently close to quarter-amplitude decay. (Don’t be a slave to numbers!) We accept this as our final tuning. Thus, we have improved the as-found tuning with at most three tuning changes.
Instructions: When the flowchart recommendation is to Change Gain or Change PB, enter the graph on the horizontal axis at the present decay ratio and read the related factor on the vertical axis. Multiply the present Gain by this factor, or divide the present PB by it.
Loop responses after successive tuning parameter changes