
Establish Pathophysiology Function

Principle: Disease disrupts normal control systems.

Objective: To define how the disease changes physiological function over time.

Method: Control system Theory

Keywords: control system theory, function, matrix, fuzzy logic

| Biological | Physics | Mathematics |
| --- | --- | --- |
| Etiology | Input | Domain element (x) |
| Condition | State | Function (f(x)) |
| Diagnosis | Output | Range value (f(x)) |
| Mechanism | State representation | Linear algebra (system equations) |

Introduction

Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop models or algorithms that govern system inputs to drive processes toward desired states while minimizing delays, overshoots, and steady-state errors, ensuring stability and often optimizing performance.

Remarkably, these engineering principles find direct parallels in biological systems. Physiological homeostasis represents nature's perfected control system, where feedback mechanisms maintain equilibrium in vital parameters. When these biological control systems fail—whether through sensor dysfunction, controller degradation, or effector impairment—the resulting pathophysiology mirrors the instability seen in engineered system failures. This systems perspective provides a powerful framework for understanding disease mechanisms and developing targeted interventions.

A control system is a system designed to regulate or manage the behavior of other systems to achieve desired outcomes. It uses feedback and input/output mechanisms to control the system's state and make adjustments as needed.

The Components of a Control System:

  1. Input: The signal or data that is fed into the system for processing.

  2. Controller: The component that processes the input and decides the necessary action. It compares the input with the desired output and generates a control signal.

  3. Actuator: The mechanism that carries out the action or adjustment in the system as directed by the controller.

  4. Feedback: The information returned from the output of the system to the controller, showing how close the current output is to the desired output.

  5. Output: The system's final response or action that is being controlled.

Example:

Consider a Thermostat in a room:

  1. Input: Desired room temperature (set on the thermostat).

  2. Controller: The thermostat's control mechanism, which compares the current temperature to the set temperature.

  3. Actuator: The heating or cooling system (e.g., air conditioner or heater).

  4. Feedback: The current room temperature (sensed by a temperature sensor).

  5. Output: The adjusted room temperature, which the system attempts to maintain at the desired level.
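Since the thermostat is the canonical closed-loop example, a small simulation sketch may help tie the five components together. This is a minimal, assumed model (a simple on/off controller, a single heat-loss coefficient), not a description of any real thermostat.

```python
# Minimal sketch of a thermostat as a closed-loop control system.
# Set point, gains, and the room model are illustrative assumptions.

def simulate_thermostat(set_point=22.0, outside=10.0, hours=5.0, dt=0.01):
    temp = outside            # initial room temperature (the controlled output)
    k_loss = 0.5              # assumed heat-loss rate to the outside (per hour)
    heater_power = 8.0        # assumed heating rate when the heater is on (deg/hour)
    history = []
    t = 0.0
    while t < hours:
        error = set_point - temp            # feedback: compare output to input
        heater_on = error > 0.0             # controller: simple on/off decision
        heating = heater_power if heater_on else 0.0   # actuator action
        # room dynamics: heating minus loss to the colder outside
        temp += (heating - k_loss * (temp - outside)) * dt
        history.append((round(t, 2), round(temp, 2), heater_on))
        t += dt
    return history

if __name__ == "__main__":
    for t, temp, on in simulate_thermostat()[::50]:
        print(f"t={t:5.2f} h  temp={temp:5.2f} C  heater={'on' if on else 'off'}")
```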

Homeostasis:

Homeostasis refers to the body's ability to maintain a stable internal environment despite external changes. In biological systems, it functions like a control system where the body regulates factors like temperature, pH, and blood sugar levels.

Example in Biology:

The human body regulates temperature through homeostasis. When the body temperature rises (e.g., due to exercise), the hypothalamus (controller) detects the change and activates cooling mechanisms (like sweating or blood vessel dilation). When the temperature drops, it triggers warming mechanisms (like shivering or blood vessel constriction).

Common Input Signal Types in System Modeling

| Input Type | Description |
| --- | --- |
| Impulse | A very short, high-magnitude signal — used to simulate a sudden force or event. |
| Step | Sudden shift from zero to a constant value — simulates sudden, sustained change. |
| Ramp | A linearly increasing input over time — models gradual increases (e.g. acidity). |
| Sinusoidal | A smooth, periodic input — represents cyclic behavior like chewing or grinding. |
| Square Wave | Alternating on/off states — used for repeated mechanical actions or switching. |
| Triangle Wave | A linearly rising and falling waveform — less common, shows symmetrical loading. |
| Sawtooth Wave | A sharp rise followed by a gradual fall (or vice versa) — models unbalanced loads. |
| Exponential | Input grows or decays exponentially — useful in chemical processes or reactions. |
| Noise (Random) | Random fluctuations — simulates unpredictable phenomena like bacterial activity. |
| Pulsed Input | Short, repeated bursts — like tapping or brushing at intervals. |
| Modulated Signal | A signal whose amplitude/frequency changes over time — models stress concentration. |
| Custom/Arbitrary | Any real-world input pattern measured or designed for a specific simulation. |
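The sketch below generates a few of the input types from the table with NumPy; the amplitudes, frequencies, and switching times are arbitrary illustrative choices.

```python
# Generating a few standard test inputs numerically (illustrative values only).
import numpy as np

t = np.linspace(0, 10, 1001)                      # time axis, 0 to 10 s

step        = np.where(t >= 1.0, 1.0, 0.0)        # step at t = 1 s
ramp        = np.clip(t - 1.0, 0.0, None)         # ramp starting at t = 1 s
sinusoid    = np.sin(2 * np.pi * 0.5 * t)         # 0.5 Hz sinusoid
square      = np.sign(np.sin(2 * np.pi * 0.5 * t))  # square wave
exponential = np.exp(-0.5 * t)                    # exponential decay
noise       = np.random.default_rng(0).normal(0, 0.1, t.shape)  # random noise

# An impulse is approximated numerically by one tall, narrow sample.
impulse = np.zeros_like(t)
impulse[100] = 1.0 / (t[1] - t[0])
```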

State-Space Representation:

State-space representation is a mathematical model used in control systems to describe the system's behavior using a set of first-order differential equations. It defines a system's state at any given time and helps predict the system's future states based on the current state.

A state-space model consists of two equations:

1. State Equation (System Dynamics)

The state equation models how the internal state of a system changes over time:

\[ \dot{x}(t) = A x(t) + B u(t) \]

where:

- \(\dot{x}(t)\) is the derivative of the state vector \(x(t)\) with respect to time (i.e., how the state evolves).
- \(x(t)\) is the state vector (describes the system's internal condition).
- \(A\) is the system matrix (describes how the current state influences its own rate of change).
- \(B\) is the input matrix (describes how the external input \(u(t)\) influences the rate of change).
- \(u(t)\) is the input vector (external control signals applied to the system).


2. Output Equation (System Outputs)

The output equation models how the internal state and input affect the output:

\[ y(t) = C x(t) + D u(t) \]

where:

- \(y(t)\) is the output vector (the quantities we can measure or observe).
- \(C\) is the output matrix (describes how the internal state affects the output).
- \(D\) is the feedthrough matrix (describes how the input directly affects the output without going through the system's dynamics).


Together, these two equations form a state-space representation of a system — a fundamental framework used in control systems, robotics, economics, and more.
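As a concrete illustration of the two equations, the sketch below simulates a small state-space model with forward Euler integration. The matrices are placeholder values chosen only to show the structure; any stable A, B, C, D could be substituted.

```python
# Minimal sketch: simulate x_dot = A x + B u, y = C x + D u with forward Euler.
# The matrices are arbitrary illustrative values, not from any specific system.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # system matrix
B = np.array([[0.0], [1.0]])          # input matrix
C = np.array([[1.0, 0.0]])            # output matrix
D = np.array([[0.0]])                 # feedthrough matrix

def simulate(x0, u_of_t, t_end=10.0, dt=0.001):
    x = np.array(x0, dtype=float).reshape(-1, 1)
    ts, ys = [], []
    t = 0.0
    while t <= t_end:
        u = np.array([[u_of_t(t)]])
        y = C @ x + D @ u                 # output equation
        x = x + (A @ x + B @ u) * dt      # state equation (Euler step)
        ts.append(t)
        ys.append(float(y[0, 0]))
        t += dt
    return np.array(ts), np.array(ys)

ts, ys = simulate(x0=[0.0, 0.0], u_of_t=lambda t: 1.0)   # unit step input
print("steady-state output ≈", round(ys[-1], 3))         # expect ≈ 0.5 here
```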

Expressed in state-space representation:

In control theory, state representation is: $$ \dot{x}(t) = Ax(t) + Bu(t) $$ $$ y(t) = Cx(t) + Du(t) $$

Where:

- \(x(t)\) = internal state (e.g., health of tissue)
- \(u(t)\) = input (stimuli/disturbance like acid, force)
- \(\dot{x}(t)\) = change in state (progression of damage or healing)
- \(y(t)\) = output (clinical signs, diagnosis)
- \(A\), \(B\), \(C\), \(D\) = matrices describing how the system behaves

Applying this to dental tissues:

Generalized Dental State Representation

| Symbol | Meaning (Dental) |
| --- | --- |
| u(t) | External stimulus (acid attack, mechanical force, bacterial toxins) |
| x(t) | Tissue health state (e.g., intact enamel, healthy pulp) |
| \(\dot{x}(t)\) | Damage progression (demineralization, inflammation) |
| y(t) | Clinical signs (pain, caries, crack, mobility) |


When to Use Control Systems

Use Control Theory when:

You are modeling continuous, dynamic processes over time.

You need to track how much, how fast, and how it changes.

Example: How enamel demineralization progresses due to acid → dx/dt = a·u(t) - b·x(t)

Control systems show quantitative change and feedback loops.

When to Use Logic Circuits

Use Logic Circuits when:

You're modeling discrete decisions, triggers, or mechanisms.

You need if-then rules or combinations of factors.

Example: Acid + Low saliva + Time → triggers Erosion diagnosis

Logic gates show qualitative conditions and cause-effect relations.

How They Work Together

They complement each other:

Control system: tracks enamel loss over time (quantitative)

Logic circuit: checks when conditions are met to declare Erosion, Attrition, etc.


Conclusion:

You need both.

Use control systems to model disease progression over time.

Use logic circuits to define when disease manifests or to map its mechanism.

Application

  1. Logic Circuit Defines Conditions:

The logic circuit checks various conditions that affect enamel erosion. These are binary inputs (either true or false), such as:

A = Acid exposure (1 if soda intake, 0 if not)

B = Low saliva (1 if low, 0 if normal)

C = No fluoride (1 if no fluoride, 0 if fluoride is used)

D = Poor brushing (1 if yes, 0 if no)

The logic circuit combines these conditions to determine the vulnerability of enamel to acid exposure. This is done using logic gates:

Enamel Vulnerability (EV) = A AND (B OR C OR D)

If acid exposure is present AND any of the other conditions (low saliva, no fluoride, poor brushing) are true, the enamel is considered vulnerable.


  2. Control System Models Enamel Demineralization:

The control system models the degree of enamel demineralization (loss of minerals) over time. The state equation for enamel loss is:

dx/dt = a·EV - b·x(t)

Where:

x(t) is the degree of enamel loss (from 0 to 1).

EV is the vulnerability calculated from the logic circuit.

a is the rate of demineralization (how fast enamel weakens due to acid).

b is the rate of remineralization (natural enamel recovery).

The value of EV (from the logic circuit) directly affects the rate of demineralization.


  3. Erosion Trigger:

As the control system tracks enamel loss over time, it calculates the cumulative demineralization x(t). When x(t) reaches a certain threshold (e.g., x(t) > 0.6), visible erosion occurs.

This means:

If x(t) exceeds the threshold due to acid exposure, poor oral habits, etc., erosion is clinically visible.


How They Work Together:

The logic circuit defines the conditions (acid, saliva, fluoride, brushing) that affect enamel vulnerability.

The control system models how the enamel demineralizes over time based on those conditions.

When the enamel loss reaches a certain point (threshold), erosion is triggered.


Example in Action:

Acid exposure (from soda) + low saliva + poor brushing → Logic circuit calculates Enamel Vulnerability (EV).

EV is fed into the control system as an input.

Control system calculates enamel loss over time (dx/dt = a·EV - b·x(t)).

When x(t) exceeds 0.6, erosion is triggered.


This combined approach gives you both the progressive dynamics (from the control system) and the condition checks (from the logic circuit) for modeling enamel erosion.
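A minimal sketch of this combined approach is given below: the logic circuit computes EV from the binary conditions, and the control system integrates dx/dt = a·EV - b·x(t) until the erosion threshold is crossed. The rate constants and the 0.6 threshold are the illustrative values used in the text, not clinically validated parameters.

```python
# Logic circuit (condition check) feeding a control system (progression model).
# Rates a, b and the 0.6 threshold are illustrative, not clinical values.

def enamel_vulnerability(acid, low_saliva, no_fluoride, poor_brushing):
    """Logic circuit: EV = A AND (B OR C OR D), returned as 1 or 0."""
    return int(acid and (low_saliva or no_fluoride or poor_brushing))

def simulate_erosion(ev, a=0.05, b=0.01, threshold=0.6, days=365, dt=1.0):
    """Control system: integrate dx/dt = a*EV - b*x with forward Euler."""
    x = 0.0                       # degree of enamel loss, 0 (intact) to 1
    for day in range(int(days / dt)):
        dxdt = a * ev - b * x     # demineralization minus remineralization
        x = min(max(x + dxdt * dt, 0.0), 1.0)
        if x > threshold:
            return day + 1, x     # erosion becomes clinically visible
    return None, x                # threshold not reached in this window

ev = enamel_vulnerability(acid=1, low_saliva=1, no_fluoride=0, poor_brushing=1)
day, loss = simulate_erosion(ev)
print(f"EV={ev}, erosion visible on day {day}, enamel loss={loss:.2f}")
```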

In the case of enamel erosion, logic circuits should come first to define the conditions that affect enamel vulnerability. These conditions are then fed into the control system, which models the dynamic progression of enamel loss over time.

Step-by-Step Process:

  1. Logic Circuits (First):

The logic circuit determines the conditions that influence the erosion process (e.g., acid exposure, low saliva, poor brushing).

It outputs a value like Enamel Vulnerability (EV) based on the combination of these conditions. This is a discrete binary check (1 = true, 0 = false).

  2. Control System (Second):

The control system uses the Enamel Vulnerability (EV) value (calculated by the logic circuit) as an input to model the continuous process of enamel demineralization over time.

The control system uses this input to simulate how quickly enamel is lost (based on the demineralization rate) and when it will reach a threshold (e.g., when enamel loss reaches 60%, visible erosion occurs).

Why this order?

Logic circuits define the conditions under which the enamel is vulnerable to erosion.

The control system takes those conditions and models the continuous process of enamel loss over time, determining the actual progression of erosion.

In short, logic circuits first to define the conditions, then control system to simulate the dynamic progression based on those conditions.

Using the control system to represent physiology and the logic circuit to represent pathology is a valid and insightful way to structure the model.

New Approach:

Control System (Physiology): The control system can model the normal function (physiology) of enamel, how it responds to healthy conditions (e.g., normal remineralization and protection). It models the continuous process of enamel demineralization and remineralization over time.

For example, it tracks how enamel is naturally healed (remineralization) and how acid exposure leads to enamel loss:

dx/dt = a·u(t) - b·x(t)

where:

u(t) is the natural intake of acid, if any (e.g., through diet or other factors).

b is the natural remineralization rate of enamel.

x(t) is the degree of enamel loss over time.


Logic Circuit (Pathology): The logic circuit can now be used to model the pathological conditions that affect the enamel's ability to remineralize and protect itself. The logic circuit will check for factors like:

Acid exposure (e.g., soda intake, frequency of acid attack)

Low saliva flow

No fluoride protection

Poor oral hygiene (e.g., poor brushing habits)

The output of the logic circuit will be a value that represents the Enamel Vulnerability (EV) or pathological severity, which will influence the rate of enamel loss (a·EV) in the control system.


How It Works Together:

  1. Logic Circuit (Pathology):

It defines pathological conditions such as high acid exposure, poor oral care, etc., that increase enamel vulnerability.

It computes Enamel Vulnerability (EV) based on those conditions.

  2. Control System (Physiology):

It uses Enamel Vulnerability (EV) as an input to model the progression of enamel demineralization over time.

The control system simulates how the enamel is affected by both natural factors (e.g., remineralization) and pathological conditions (e.g., increased erosion due to acid).


Summary of the New Structure:

Control System = Physiology → Models natural enamel processes (demineralization and remineralization).

Logic Circuit = Pathology → Defines pathological factors that increase vulnerability (e.g., acid, low saliva).

This structure is consistent with the earlier one and sharpens the picture of how healthy processes (physiology) and pathological conditions (pathology) interact in the development of enamel erosion.


Narrative Summary

The control system maintains enamel under normal physiology. Logic circuits continuously check for pathological triggers. If certain conditions (such as high acid plus low saliva) occur, the logic gates activate. This alters parameters in the control system, leading to pathological signs such as erosion or attrition over time. These outputs then feed back as new inputs, driving further progression.

Use "road" for anatomical pathology—anything structural or congenital.

Use "traffic" for physiological pathology—anything related to flow, movement, or dynamic changes.

Abnormality or excess instability gives rise to symptoms; these symptoms can then be mapped using graph theory.

A Control System is an interconnected system of various components designed to control and regulate the behavior of a large system or process to produce a desired output. Therefore, the primary objective of a control system is to adjust the input of a process so that we can get a desired output.

This comprehensive tutorial on control systems is designed to provide an overview of the essential concepts of control systems. It is written in a beginner-friendly style to build a solid foundation in control system engineering.

Control System Terminology

The following are some very common and important terms related to control systems −

1) Feedback

Feedback is an important component of a closed loop control system that connects output to the input for stability and performance optimization.

2) Mathematical Models

Mathematical models are the abstract descriptions of a control system developed using mathematical concepts and language. They are important for designing and analyzing control systems.

3) Block Diagrams and Signal Flow Graphs

Block diagrams are schematics that graphically visualize the interconnections of the different components of a control system. Signal flow graphs are graphical representations of the algebraic equations of control systems.

4) Time Response Analysis

Time response analysis is used for analyzing and understanding the behavior of control system with respect to time.

5) Stability Analysis

Stability analysis is another fundamental concept in control system engineering. It is used for determining a system's stability under different operating conditions.

6) Root Locus and Frequency Response Analysis

Root locus is a graphical tool for determining how the roots of a closed loop control system change with variations in certain system parameters like gain in feedback loop. The frequency response analysis is a tool used for analyzing the performance of control systems in the frequency domain.

7) Compensators and Controllers

Compensators are vital components of a control system that are used for improving the response of the system to the inputs. Controllers are devices that regulate the behavior of the control system depending on the applied inputs and the feedback signal.

8) State Space Model and Analysis

State space model and analysis is one of the advanced techniques used for designing and analyzing control systems. State space model is a mathematical representation of a control system and it consists of inputs, outputs, state variables, and differential equations.

Classification of Control Systems

| Open Loop Control Systems | Closed Loop Control Systems |
| --- | --- |
| Control action is independent of the desired output. | Control action is dependent on the desired output. |
| Feedback path is not present. | Feedback path is present. |
| Also called non-feedback control systems. | Also called feedback control systems. |
| Easy to design. | Difficult to design. |
| Economical. | Costlier. |
| Inaccurate. | Accurate. |

Control Systems - Feedback

If either the output or some part of the output is returned to the input side and utilized as part of the system input, then it is known as feedback.

Feedback plays an important role in order to improve the performance of the control systems.

Types of Feedback There are two types of feedback −

Positive feedback Negative feedback

Positive Feedback

Positive feedback adds the reference input R(s) and the feedback output. The following figure shows the block diagram of a positive feedback control system.

\[ T=\frac{G}{1-GH} \]

Where,

T is the transfer function or overall gain of positive feedback control system.

G is the open loop gain, which is function of frequency.

H is the gain of feedback path, which is function of frequency

Negative Feedback

Negative feedback reduces the error between the reference input R(s) and the system output. The following figure shows the block diagram of the negative feedback control system.

\[ T=\frac{G}{1+GH} \]

Where,

T is the transfer function or overall gain of negative feedback control system.

G is the open loop gain, which is function of frequency.

H is the gain of feedback path, which is function of frequency.

Effects of Feedback Let us now understand the effects of feedback.

Effect of Feedback on Overall Gain

From the negative feedback expression above, the overall gain of a negative feedback closed loop control system is the ratio of G and (1+GH). So, the overall gain may increase or decrease depending on the value of (1+GH).

If the value of (1+GH) is less than 1, then the overall gain increases. In this case, 'GH' value is negative because the gain of the feedback path is negative.

If the value of (1+GH) is greater than 1, then the overall gain decreases. In this case, the 'GH' value is positive because the gain of the feedback path is positive.

Feedback similarly affects the sensitivity, stability, and noise behavior of a control system.

Control Systems - Mathematical Models

Transfer Function Model

The transfer function model is an s-domain mathematical model of control systems. The transfer function of a Linear Time Invariant (LTI) system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, assuming all initial conditions are zero.

Mass

Mass is the property of a body which stores kinetic energy. If a force is applied on a body having mass M, then it is opposed by an opposing force due to mass. This opposing force is proportional to the acceleration of the body. Assume elasticity and friction are negligible.

F_m \propto a

\Rightarrow F_m=Ma=M\frac{\text{d}^2x}{\text{d}t^2}

Where,

F is the applied force

Fm is the opposing force due to mass

M is mass

a is acceleration

x is displacement

Spring

Spring is an element which stores potential energy. If a force is applied on a spring K, then it is opposed by an opposing force due to the elasticity of the spring. This opposing force is proportional to the displacement of the spring. Assume mass and friction are negligible.

F_k \propto x \Rightarrow F_k=Kx

F is the applied force

Fk is the opposing force due to elasticity of spring

K is spring constant

x is displacement

Moment of Inertia

In a translational mechanical system, mass stores kinetic energy. Similarly, in a rotational mechanical system, moment of inertia stores kinetic energy.

If a torque is applied on a body having moment of inertia J, then it is opposed by an opposing torque due to the moment of inertia. This opposing torque is proportional to angular acceleration of the body. Assume elasticity and friction are negligible

T_j \propto \alpha

\Rightarrow T_j=J\alpha=J\frac{\text{d}^2\theta}{\text{d}t^2}

T is the applied torque

Tj is the opposing torque due to moment of inertia

J is moment of inertia

α is angular acceleration

θ is angular displacement

Electrical Analogies of Mechanical Systems

Two systems are said to be analogous to each other if the following two conditions are satisfied.

The two systems are physically different Differential equation modelling of these two systems are same

F=F_m+F_b+F_k

\Rightarrow F=M\frac{\text{d}^2x}{\text{d}t^2}+B\frac{\text{d}x}{\text{d}t}+Kx

V=R\frac{\text{d}q}{\text{d}t}+L\frac{\text{d}^2q}{\text{d}t^2}+\frac{q}{C}
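The analogy can be checked numerically: with L = M, R = B and 1/C = K, the charge q(t) in the series RLC circuit traces the same curve as the displacement x(t) of the mass-spring-damper. The sketch below uses SciPy's ODE solver with arbitrary parameter values.

```python
# Force-voltage analogy: M x'' + B x' + K x = F and L q'' + R q' + q/C = V
# have the same form, so their solutions coincide when L=M, R=B, 1/C=K.
import numpy as np
from scipy.integrate import solve_ivp

M, B, K = 1.0, 0.5, 4.0          # assumed mechanical parameters
L, R, Cinv = M, B, K             # analogous electrical parameters (1/C = K)
F = V = 1.0                      # constant applied force / voltage

def mechanical(t, y):            # y = [x, x_dot]
    x, v = y
    return [v, (F - B * v - K * x) / M]

def electrical(t, y):            # y = [q, q_dot]
    q, i = y
    return [i, (V - R * i - Cinv * q) / L]

t_eval = np.linspace(0, 10, 200)
sol_m = solve_ivp(mechanical, (0, 10), [0, 0], t_eval=t_eval)
sol_e = solve_ivp(electrical, (0, 10), [0, 0], t_eval=t_eval)
print("max difference between x(t) and q(t):",
      np.max(np.abs(sol_m.y[0] - sol_e.y[0])))   # ≈ 0
```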

Control Systems - Block Diagrams

Block diagrams consist of a single block or a combination of blocks. These are used to represent the control systems in pictorial form.

The basic elements of a block diagram are a block, the summing point and the take-off point.

A basic block diagram consists of two blocks having transfer functions G(s) and H(s), one summing point, and one take-off point. Arrows indicate the direction of the flow of signals. Let us now discuss these elements one by one.

The transfer function of a component is represented by a block. Block has single input and single output.

The following figure shows a block having input X(s), output Y(s) and the transfer function G(s).

G(s)=\frac{Y(s)}{X(s)}

Summing Point

The summing point is represented by a circle having a cross (X) inside it. It has two or more inputs and a single output. It produces the algebraic sum of the inputs, i.e., it performs summation or subtraction or a combination of both based on the polarity of the inputs. Let us see these operations one by one.

The following figure shows the summing point with two inputs (A, B) and one output (Y). Here, the inputs A and B have a positive sign. So, the summing point produces the output, Y as sum of A and B.

i.e.,Y = A + B.

The following figure shows the summing point with two inputs (A, B) and one output (Y). Here, the inputs A and B are having opposite signs, i.e., A is having positive sign and B is having negative sign. So, the summing point produces the output Y as the difference of A and B.

Y = A + (-B) = A - B.

Take-off Point

The take-off point is a point from which the same input signal can be passed through more than one branch. That means with the help of the take-off point, we can apply the same input to one or more blocks or summing points.

Control Systems - Block Diagram Algebra

Block diagram algebra is nothing but the algebra involved with the basic elements of the block diagram. This algebra deals with the pictorial representation of algebraic equations.

Basic Connections for Blocks There are three basic types of connections between two blocks.

Series Connection (x)

Series connection is also called cascade connection. In the following figure, two blocks having transfer functions G1(s) and G2(s) are connected in series.

For this combination, the intermediate signal is Z(s)=G1(s)X(s), so the output is

Y(s)=G2(s)Z(s)=G1(s)G2(s)X(s)

Parallel Connection (+)

The blocks which are connected in parallel have the same input. In the following figure, two blocks having transfer functions G1(s) and G2(s) are connected in parallel. The outputs of these two blocks are connected to the summing point.

Y(s)=Y_1(s)+Y_2(s)=(G1(s)+G2(s))X(s)

Feedback Connection

As we discussed in previous chapters, there are two types of feedback: positive feedback and negative feedback. The following figure shows a negative feedback control system. Here, two blocks having transfer functions G(s) and H(s) form a closed loop, and the overall transfer function is

\[ \frac{Y(s)}{X(s)}=\frac{G(s)}{1+G(s)H(s)} \]

The standard block diagram manipulations are: shifting a summing point after the block, shifting a summing point before the block, shifting a take-off point after the block, and shifting a take-off point before the block.

Control Systems - Block Diagram Reduction

Rule 1 − Check for the blocks connected in series and simplify.

Rule 2 − Check for the blocks connected in parallel and simplify.

Rule 3 − Check for the blocks connected in feedback loop and simplify.

Rule 4 − If there is difficulty with take-off point while simplifying, shift it towards right.

Rule 5 − If there is difficulty with summing point while simplifying, shift it towards left.

Rule 6 − Repeat the above steps till you get the simplified form, i.e., single block
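The three basic connections and the reduction rules can be exercised symbolically. The sketch below uses SymPy with arbitrary example blocks G1, G2 and H to compute the series, parallel and negative-feedback equivalents.

```python
# Block diagram algebra done symbolically (illustrative blocks only).
import sympy as sp

s = sp.symbols('s')
G1 = 1 / (s + 1)
G2 = 10 / (s + 5)
H = 2

series   = sp.simplify(G1 * G2)                         # cascade: G1(s)G2(s)
parallel = sp.simplify(G1 + G2)                         # parallel: G1(s)+G2(s)
feedback = sp.cancel(G1 * G2 / (1 + G1 * G2 * H))       # negative feedback loop

print("series:  ", series)
print("parallel:", parallel)
print("feedback:", feedback)
```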

Control Systems - Signal Flow Graphs

Signal flow graph is a graphical representation of algebraic equations.

Basic Elements of Signal Flow Graph

Nodes and branches are the basic elements of a signal flow graph.

Node

A node is a point which represents either a variable or a signal. There are three types of nodes: input node, output node and mixed node.

Input Node − It is a node, which has only outgoing branches.

Output Node − It is a node, which has only incoming branches.

Mixed Node − It is a node, which has both incoming and outgoing branches

Mason's Gain Formula gives the overall transfer function (gain) of a signal flow graph in terms of its forward paths and loops.

Control Systems - Time Response Analysis

We can analyze the response of the control systems in both the time domain and the frequency domain.

If the output of control system for an input varies with respect to time, then it is called the time response of the control system

The time response consists of two parts.

Transient response Steady state response

Mathematically, we can write the time response c(t) as

c(t)=c_{tr}(t)+c_{ss}(t)

where c_{tr}(t) is the transient response and c_{ss}(t) is the steady state response.

Unit Step Signal

A unit step signal, u(t), is defined as

u(t)=1; t≥0

=0; t<0

The following figure shows the unit step signal.

So, the unit step signal exists for all positive values of t including zero, and its value is one during this interval. The value of the unit step signal is zero for all negative values of t.

Unit Ramp Signal

A unit ramp signal, r(t), is defined as

r(t)=t; t≥0

=0; t<0

We can write the unit ramp signal, r(t), in terms of the unit step signal, u(t), as

r(t)=tu(t)

The following figure shows the unit ramp signal.

So, the unit ramp signal exists for all positive values of t including zero, and its value increases linearly with respect to t during this interval. The value of the unit ramp signal is zero for all negative values of t.

Unit Parabolic Signal

A unit parabolic signal, p(t), is defined as

p(t)=\frac{t^2}{2}; t≥0

=0; t<0

We can write the unit parabolic signal, p(t), in terms of the unit step signal, u(t), as

p(t)=\frac{t^2}{2}u(t)

The following figure shows the unit parabolic signal.

So, the unit parabolic signal exists for all positive values of t including zero, and its value increases non-linearly with respect to t during this interval. The value of the unit parabolic signal is zero for all negative values of t.

Response of the First Order System

Response of the Second Order System

Time Domain Specifications

Delay Time

It is the time required for the response to reach half of its final value from the zero instant. It is denoted by t_d.

Consider the step response of the second order system for t ≥ 0, when δ lies between zero and one.

The other time domain specifications are rise time, peak time, settling time, and peak overshoot.

Control Systems - Steady State Errors

The deviation of the output of control system from desired response during steady state is known as steady state error

Consider the following block diagram of closed loop control system, which is having unity negative feedback.

Where,

R(s) is the Laplace transform of the reference input signal r(t)

C(s) is the Laplace transform of the output signal c(t)

The following table shows the steady state errors and the error constants for standard input signals like the unit step, unit ramp and unit parabolic signals (for a unity negative feedback system with open loop transfer function G(s)):

| Input signal | Steady state error e_ss | Error constant |
| --- | --- | --- |
| Unit step | \frac{1}{1+K_p} | K_p=\lim_{s\to 0}G(s) |
| Unit ramp | \frac{1}{K_v} | K_v=\lim_{s\to 0}sG(s) |
| Unit parabolic | \frac{1}{K_a} | K_a=\lim_{s\to 0}s^2G(s) |

Where, Kp , Kv and Ka are position error constant, velocity error constant and acceleration error constant respectively.

Note − If any of the above input signals has the amplitude other than unity, then multiply corresponding steady state error with that amplitude.

Note − We can't define the steady state error for the unit impulse signal because it exists only at the origin, so the impulse response cannot be compared with the unit impulse input as t tends to infinity.

Note − It is meaningless to find the steady state errors for unstable closed loop systems, so we calculate the steady state errors only for closed loop stable systems. This means we need to check whether the control system is stable or not before finding the steady state errors. In the next chapter, we will discuss the concepts related to stability.
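These steady state errors can also be computed directly with the final value theorem, e_ss = lim_{s→0} s·E(s), where E(s) = R(s)/(1 + G(s)) for unity negative feedback. The sketch below applies it with SymPy to an assumed type-1 open loop transfer function G(s) = 5/(s(s+1)).

```python
# Steady state error via the final value theorem (assumed example G(s)).
import sympy as sp

s = sp.symbols('s')
G = 5 / (s * (s + 1))                   # assumed type-1 open loop transfer fn

def steady_state_error(R):
    E = R / (1 + G)                     # error signal for unity feedback
    return sp.limit(s * E, s, 0)        # final value theorem

print("unit step     R(s)=1/s   ->", steady_state_error(1 / s))      # 0
print("unit ramp     R(s)=1/s^2 ->", steady_state_error(1 / s**2))   # 1/Kv = 1/5
print("unit parabola R(s)=1/s^3 ->", steady_state_error(1 / s**3))   # infinite
```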

Control Systems - Stability

A system is said to be stable, if its output is under control. Otherwise, it is said to be unstable. A stable system produces a bounded output for a given bounded input.

The following figure shows the response of a stable system.

This is the response of first order control system for unit step input. This response has the values between 0 and 1. So, it is bounded output. We know that the unit step signal has the value of one for all positive values of t including zero. So, it is bounded input. Therefore, the first order control system is stable since both the input and the output are bounded.

We can classify the systems based on stability as follows.

Absolutely stable system

Conditionally stable system

Marginally stable system

Absolutely Stable System

If the system is stable for the entire range of system component values, then it is known as an absolutely stable system. The open loop control system is absolutely stable if all the poles of the open loop transfer function are present in the left half of the s-plane. Similarly, the closed loop control system is absolutely stable if all the poles of the closed loop transfer function are present in the left half of the s-plane.

Conditionally Stable System

If the system is stable for a certain range of system component values, then it is known as a conditionally stable system.

Marginally Stable System

If the system produces an output signal with constant amplitude and constant frequency of oscillations for a bounded input, then it is known as a marginally stable system. The open loop control system is marginally stable if any two poles of the open loop transfer function are present on the imaginary axis. Similarly, the closed loop control system is marginally stable if any two poles of the closed loop transfer function are present on the imaginary axis.

Sufficient Condition for Routh-Hurwitz Stability The sufficient condition is that all the elements of the first column of the Routh array should have the same sign. This means that all the elements of the first column of the Routh array should be either positive or negative.

Routh Array Method
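The Routh array is normally tabulated by hand; as a quick numeric cross-check of the same question, the roots of the characteristic polynomial can be computed directly and their real parts inspected. The polynomial below is an arbitrary example.

```python
# Numeric cross-check of stability: inspect the roots of the characteristic
# polynomial (example polynomial s^3 + 6s^2 + 5s + 10, chosen arbitrarily).
import numpy as np

coeffs = [1, 6, 5, 10]               # characteristic polynomial coefficients
poles = np.roots(coeffs)

print("poles:", np.round(poles, 4))
if np.all(poles.real < 0):
    print("All poles in the left half of the s-plane -> absolutely stable.")
elif np.any(poles.real > 0):
    print("At least one pole in the right half plane -> unstable.")
else:
    print("Poles on the imaginary axis -> marginally stable at best.")
```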

The root locus is a graphical representation in s-domain and it is symmetrical about the real axis. Because the open loop poles and zeros exist in the s-domain having the values either as real or as complex conjugate pairs. In this chapter, let us discuss how to construct (draw) the root locus.

Rules for Construction of Root Locus

Follow these rules for constructing a root locus.

Rule 1 − Locate the open loop poles and zeros in the s plane.

Rule 2 − Find the number of root locus branches.

We know that the root locus branches start at the open loop poles and end at open loop zeros. So, the number of root locus branches N is equal to the number of finite open loop poles P or the number of finite open loop zeros Z, whichever is greater.

Mathematically, we can write the number of root locus branches N as

$$ N=P \quad \text{if } P\geq Z $$

$$ N=Z \quad \text{if } P<Z $$

Rule 3 − Identify and draw the real axis root locus branches.

If the angle of the open loop transfer function at a point is an odd multiple of 180°, then that point is on the root locus. If an odd number of open loop poles and zeros exists to the left of a point on the real axis, then that point lies on a root locus branch. Therefore, the set of points satisfying this condition forms the real axis portion of the root locus.

Rule 4 − Find the centroid and the angle of asymptotes.

If P = Z, then all the root locus branches start at finite open loop poles and end at finite open loop zeros.

If P > Z, then Z root locus branches start at finite open loop poles and end at finite open loop zeros, and P − Z root locus branches start at finite open loop poles and end at infinite open loop zeros.

If P < Z, then P root locus branches start at finite open loop poles and end at finite open loop zeros, and Z − P root locus branches start at infinite open loop poles and end at finite open loop zeros.

So, some of the root locus branches approach infinity, when P \neq Z. Asymptotes give the direction of these root locus branches. The intersection point of asymptotes on the real axis is known as centroid.

We can calculate the centroid α by using this formula,

\alpha = \frac{\sum \text{Real part of finite open loop poles}-\sum \text{Real part of finite open loop zeros}}{P-Z}

The formula for the angle of asymptotes θ is

\theta=\frac{(2q+1)180^\circ}{P-Z}

Where,

q=0,1,2,....,(P-Z)-1

Rule 5 − Find the intersection points of root locus branches with an imaginary axis.

We can calculate the point at which the root locus branch intersects the imaginary axis and the value of K at that point by using the Routh array method and special case (ii).

If all elements of any row of the Routh array are zero, then the root locus branch intersects the imaginary axis and vice-versa.

Identify the row in such a way that if we make the first element as zero, then the elements of the entire row are zero. Find the value of K for this combination.

Substitute this K value in the auxiliary equation. You will get the intersection point of the root locus branch with an imaginary axis.

Rule 6 − Find Break-away and Break-in points.

If there exists a real axis root locus branch between two open loop poles, then there will be a break-away point in between these two open loop poles.

If there exists a real axis root locus branch between two open loop zeros, then there will be a break-in point in between these two open loop zeros.

Note − Break-away and break-in points exist only on the real axis root locus branches.

Follow these steps to find break-away and break-in points.

Write K in terms of s from the characteristic equation 1 + G(s)H(s) = 0.

Differentiate K with respect to s and make it equal to zero. Substitute these values of s in the above equation.

The values of s for which the K value is positive are the break points.

Rule 7 − Find the angle of departure and the angle of arrival.

The Angle of departure and the angle of arrival can be calculated at complex conjugate open loop poles and complex conjugate open loop zeros respectively.

The formula for the angle of departure \phi_d is

\phi_d=180^\circ-\phi

The formula for the angle of arrival \phi_a is

\phi_a=180^\circ+\phi

Where,

\phi=\sum \phi_P-\sum \phi_Z

Example

Let us now draw the root locus of the control system having open loop transfer function, G(s)H(s)=\frac{K}{s(s+1)(s+5)}

Step 1 − The given open loop transfer function has three poles at s = 0, s = −1 and s = −5. It doesn't have any zeros. Therefore, the number of root locus branches is equal to the number of poles of the open loop transfer function.

N=P=3

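The imaginary-axis crossing given by the Routh array for this example (K = 30, s = ±j√5) can be confirmed numerically by sweeping K and computing the closed-loop roots, as in the sketch below.

```python
# Numeric sketch of the root locus for G(s)H(s) = K/(s(s+1)(s+5)).
# Closed-loop characteristic equation: s^3 + 6s^2 + 5s + K = 0.
import numpy as np

for K in np.arange(0.5, 60.0, 0.5):
    roots = np.roots([1, 6, 5, K])
    # a branch reaches the imaginary axis when the largest real part hits zero
    if np.max(roots.real) >= -1e-6:
        crossing = roots[np.argmax(roots.real)]
        print("branch reaches the imaginary axis near K =", K)
        print("crossing point ≈", np.round(crossing, 3),
              "(Routh array predicts ±j·sqrt(5) ≈ ±2.236j at K = 30)")
        break
```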

The Bode plot or the Bode diagram consists of two plots:

Magnitude plot

Phase plot

In both plots, the x-axis represents the angular frequency (logarithmic scale), whereas the y-axis represents the magnitude (linear scale) of the open loop transfer function in the magnitude plot and the phase angle (linear scale) of the open loop transfer function in the phase plot.

Related frequency-domain tools include Bode plots, polar plots, and Nyquist plots.

Compensators

There are three types of compensators: lag, lead and lag-lead compensators. These are the most commonly used.

Control Systems - Controllers

The various types of controllers are used to improve the performance of control systems. In this chapter, we will discuss the basic controllers such as the proportional, the derivative and the integral controllers.

Proportional Controller

The proportional controller produces an output which is proportional to the error signal.

u(t) \propto e(t)

\Rightarrow u(t)=K_P e(t)

Apply Laplace transform on both the sides -

U(s)=K_P E(s)

\frac{U(s)}{E(s)}=K_P

Therefore, the transfer function of the proportional controller is K_P.

Where,

U(s) is the Laplace transform of the actuating signal u(t)

E(s) is the Laplace transform of the error signal e(t)

KP is the proportionality constant

The block diagram of the unity negative feedback closed loop control system along with the proportional controller is shown in the following figure.

The proportional controller is used to change the transient response as per the requirement.

Derivative Controller

The derivative controller produces an output which is the derivative of the error signal.

u(t)=K_D \frac{\text{d}e(t)}{\text{d}t}

Apply Laplace transform on both sides.

U(s)=K_D sE(s)

\frac{U(s)}{E(s)}=K_D s

Therefore, the transfer function of the derivative controller is K_D s.

Where, K_D is the derivative constant.

The block diagram of the unity negative feedback closed loop control system along with the derivative controller is shown in the following figure.

The derivative controller is used to make an unstable control system stable.

Integral Controller

The integral controller produces an output which is the integral of the error signal.

u(t)=K_I \int e(t) dt

Apply Laplace transform on both the sides -

U(s)=\frac{K_I E(s)}{s}

\frac{U(s)}{E(s)}=\frac{K_I}{s}

Therefore, the transfer function of the integral controller is \frac{K_I}{s}.

Where, K_I is the integral constant.

The block diagram of the unity negative feedback closed loop control system along with the integral controller is shown in the following figure.

The integral controller is used to decrease the steady state error.

Let us now discuss about the combination of basic controllers

Proportional Derivative (PD) Controller

The proportional derivative controller produces an output, which is the combination of the outputs of proportional and derivative controllers.

u(t)=K_P e(t)+K_D \frac{\text{d}e(t)}{\text{d}t}

Apply Laplace transform on both sides -

U(s)=(K_P+K_D s)E(s)

\frac{U(s)}{E(s)}=K_P+K_D s

Therefore, the transfer function of the proportional derivative controller is K_P + K_D s.

The block diagram of the unity negative feedback closed loop control system along with the proportional derivative controller is shown in the following figure

The proportional derivative controller is used to improve the stability of control system without affecting the steady state error.

Proportional Integral (PI) Controller

The proportional integral controller produces an output, which is the combination of outputs of the proportional and integral controllers.

u(t)=K_P e(t)+K_I \int e(t) dt

Apply Laplace transform on both sides -

U(s)=\left(K_P+\frac{K_I}{s} \right )E(s)

\frac{U(s)}{E(s)}=K_P+\frac{K_I}{s}

Therefore, the transfer function of proportional integral controller is K_P + \frac{K_I} {s}.

The block diagram of the unity negative feedback closed loop control system along with the proportional integral controller is shown in the following figure.

The proportional integral controller is used to decrease the steady state error without affecting the stability of the control system.

Proportional Integral Derivative (PID) Controller

The proportional integral derivative controller produces an output, which is the combination of the outputs of proportional, integral and derivative controllers.

u(t)=K_P e(t)+K_I \int e(t) dt+K_D \frac{\text{d}e(t)}{\text{d}t}

Apply Laplace transform on both sides -

U(s)=\left(K_P+\frac{K_I}{s}+K_D s \right )E(s)

\frac{U(s)}{E(s)}=K_P+\frac{K_I}{s}+K_D s

Therefore, the transfer function of the proportional integral derivative controller is K_P + \frac{K_I} {s} + K_D s.

The block diagram of the unity negative feedback closed loop control system along with the proportional integral derivative controller is shown in the following figure.
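To see the three actions working together, the sketch below applies a discrete PID law to an assumed first-order plant dy/dt = (−y + u)/τ using forward Euler steps. The gains and time constant are illustrative values, not tuned for any real system.

```python
# Minimal PID controller acting on an assumed first-order plant.

def simulate_pid(Kp=2.0, Ki=1.0, Kd=0.1, setpoint=1.0, tau=1.0,
                 dt=0.01, t_end=10.0):
    y = 0.0                 # plant output
    integral = 0.0
    prev_error = setpoint - y
    history = []
    t = 0.0
    while t < t_end:
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = Kp * error + Ki * integral + Kd * derivative   # PID control law
        y += (-y + u) / tau * dt                           # first-order plant
        prev_error = error
        history.append((t, y))
        t += dt
    return history

if __name__ == "__main__":
    final = simulate_pid()[-1][1]
    print(f"output after 10 s: {final:.3f} (setpoint 1.0)")
```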

The state space model of Linear Time-Invariant (LTI) system can be represented as,

\dot{X}=AX+BU

Y=CX+DU

The first and the second equations are known as state equation and output equation respectively.

Where,

X and \dot{X} are the state vector and the differential state vector respectively.

U and Y are input vector and output vector respectively.

A is the system matrix.

B and C are the input and the output matrices.

D is the feed-forward matrix.

Basic Concepts of State Space Model

The following basic terminology is involved in this chapter.

State: It is a group of variables which summarizes the history of the system in order to predict the future values (outputs).

State Variable: The number of state variables required is equal to the number of storage elements present in the system.

Examples − current flowing through an inductor, voltage across a capacitor.

State Vector: It is a vector which contains the state variables as elements.

In the earlier chapters, we have discussed two mathematical models of the control systems. Those are the differential equation model and the transfer function model. The state space model can be obtained from any one of these two mathematical models. Let us now discuss these two methods one by one.

State Space Model from Differential Equation

Consider the following series RLC circuit. It has an input voltage v_i(t), and the current flowing through the circuit is i(t).

There are two storage elements (inductor and capacitor) in this circuit, so the number of state variables is two. These state variables are the current flowing through the inductor, i(t), and the voltage across the capacitor, v_c(t).

From the circuit, the output voltage, v_0(t) is equal to the voltage across capacitor, v_c(t).

v_0(t)=v_c(t)

Apply KVL around the loop.

v_i(t)=Ri(t)+L\frac{\text{d}i(t)}{\text{d}t}+v_c(t)

\Rightarrow \frac{\text{d}i(t)}{\text{d}t}=-\frac{Ri(t)}{L}-\frac{v_c(t)}{L}+\frac{v_i(t)}{L}

The voltage across the capacitor is -

v_c(t)=\frac{1}{C} \int i(t) dt

Differentiate the above equation with respect to time.

\frac{\text{d}v_c(t)}{\text{d}t}=\frac{i(t)}{C}

State vector, X=\begin{bmatrix}i(t) \\ v_c(t) \end{bmatrix}

Differential state vector, \dot{X}=\begin{bmatrix}\frac{\text{d}i(t)}{\text{d}t} \\ \frac{\text{d}v_c(t)}{\text{d}t} \end{bmatrix}

We can arrange the differential equations and output equation into the standard form of state space model as,

\dot{X}=\begin{bmatrix}\frac{\text{d}i(t)}{\text{d}t} \\ \frac{\text{d}v_c(t)}{\text{d}t} \end{bmatrix}=\begin{bmatrix}-\frac{R}{L} & -\frac{1}{L} \\ \frac{1}{C} & 0 \end{bmatrix}\begin{bmatrix}i(t) \\ v_c(t) \end{bmatrix}+\begin{bmatrix}\frac{1}{L} \\ 0 \end{bmatrix}\begin{bmatrix}v_i(t) \end{bmatrix}

Y=\begin{bmatrix}0 & 1 \end{bmatrix}\begin{bmatrix}i(t) \\ v_c(t) \end{bmatrix}

Where,

A=\begin{bmatrix}-\frac{R}{L} & -\frac{1}{L} \\ \frac{1}{C} & 0 \end{bmatrix}, \quad B=\begin{bmatrix}\frac{1}{L} \\ 0 \end{bmatrix}, \quad C=\begin{bmatrix}0 & 1 \end{bmatrix} \quad \text{and} \quad D=\begin{bmatrix}0 \end{bmatrix}
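The RLC state space model above can be simulated directly. The sketch below builds it with scipy.signal.StateSpace for assumed component values and computes the unit step response of the capacitor voltage.

```python
# Step response of the RLC state space model (assumed R, L, C values).
import numpy as np
from scipy import signal

R, L, C = 1.0, 0.5, 0.1          # ohms, henries, farads (assumed values)

A = [[-R / L, -1 / L],
     [1 / C,      0]]
B = [[1 / L],
     [0]]
Cmat = [[0, 1]]                  # output is the capacitor voltage v_c(t)
D = [[0]]

sys = signal.StateSpace(A, B, Cmat, D)
t, y = signal.step(sys, T=np.linspace(0, 5, 500))
print("final capacitor voltage ≈", round(float(np.squeeze(y)[-1]), 3))  # ≈ 1.0
```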

State Space Model from Transfer Function

Consider the two types of transfer functions based on the type of terms present in the numerator.

Transfer function having a constant term in the numerator.

Transfer function having a polynomial function of s in the numerator.

Transfer function having constant term in Numerator

Consider the following transfer function of a system

\frac{Y(s)}{U(s)}=\frac{b_0}{s^n+a_{n-1}s^{n-1}+...+a_1s+a_0}

Rearrange, the above equation as

(s^n+a_{n-1}s^{n-1}+...+a_1s+a_0)Y(s)=b_0 U(s)

Apply inverse Laplace transform on both sides.

\frac{\text{d}^ny(t)}{\text{d}t^n}+a_{n-1}\frac{\text{d}^{n-1}y(t)}{\text{d}t^{n-1}}+...+a_1\frac{\text{d}y(t)}{\text{d}t}+a_0y(t)=b_0 u(t)

Let

y(t)=x_1

\frac{\text{d}y(t)}{\text{d}t}=x_2=\dot{x}_1

\frac{\text{d}^2y(t)}{\text{d}t^2}=x_3=\dot{x}_2

\vdots

\frac{\text{d}^{n-1}y(t)}{\text{d}t^{n-1}}=x_n=\dot{x}_{n-1}

\frac{\text{d}^ny(t)}{\text{d}t^n}=\dot{x}_n

and u(t)=u

Then,

\dot{x}_n+a_{n-1}x_n+...+a_1x_2+a_0x_1=b_0 u

From the above equation, we can write the following state equation.

\dot{x}_n=-a_0x_1-a_1x_2-...-a_{n-1}x_n+b_0 u

The output equation is -

y(t)=y=x_1

The state space model is -

\dot{X}=\begin{bmatrix}\dot{x}_1 \\ \dot{x}_2 \\ \vdots \\ \dot{x}_{n-1} \\ \dot{x}_n \end{bmatrix}=\begin{bmatrix}0 & 1 & 0 & \dotso & 0 & 0 \\ 0 & 0 & 1 & \dotso & 0 & 0 \\ \vdots & \vdots & \vdots & \dotso & \vdots & \vdots \\ 0 & 0 & 0 & \dotso & 0 & 1 \\ -a_0 & -a_1 & -a_2 & \dotso & -a_{n-2} & -a_{n-1} \end{bmatrix} \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}+\begin{bmatrix}0 \\ 0 \\ \vdots \\ 0 \\ b_0 \end{bmatrix}\begin{bmatrix}u \end{bmatrix}

Y=\begin{bmatrix}1 & 0 & \dotso & 0 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{bmatrix}

Here, D=\left [ 0 \right ].

Example Find the state space model for the system having transfer function.

\frac{Y(s)}{U(s)}=\frac{1}{s^2+s+1}

Rearrange, the above equation as,

(s^2+s+1)Y(s)=U(s)

Apply inverse Laplace transform on both the sides.

\frac{\text{d}^2y(t)}{\text{d}t^2}+\frac{\text{d}y(t)}{\text{d}t}+y(t)=u(t)

Let

y(t)=x_1

\frac{\text{d}y(t)}{\text{d}t}=x_2=\dot{x}_1

and u(t)=u

Then, the state equation is

\dot{x}_2=-x_1-x_2+u

The output equation is

y(t)=y=x_1

The state space model is

\dot{X}=\begin{bmatrix}\dot{x}_1 \\ \dot{x}_2 \end{bmatrix}=\begin{bmatrix}0 & 1 \\ -1 & -1 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}+\begin{bmatrix}0 \\ 1 \end{bmatrix}\left [u \right ]

Y=\begin{bmatrix}1 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}
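As a cross-check, scipy.signal.tf2ss converts the same transfer function to a state space realization. Note that SciPy returns its own canonical form, so the matrices may be arranged differently from the ones derived by hand above while representing the same system.

```python
# Cross-check: convert Y(s)/U(s) = 1/(s^2 + s + 1) to state space with SciPy.
from scipy import signal

num = [1]            # numerator of Y(s)/U(s)
den = [1, 1, 1]      # denominator s^2 + s + 1

A, B, C, D = signal.tf2ss(num, den)
print("A =\n", A)
print("B =\n", B)
print("C =", C)
print("D =", D)
```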

Transfer function having polynomial function of s in Numerator

When the numerator is a polynomial in s, the same state variables are chosen from the denominator dynamics, and the numerator coefficients then appear in the output matrix C.

In the earlier sections, we learnt how to obtain the state space model from the differential equation and from the transfer function. Let us now discuss how to obtain the transfer function from the state space model.

Transfer Function from State Space Model We know the state space model of a Linear Time-Invariant (LTI) system is -

\dot{X}=AX+BU

Y=CX+DU

Apply Laplace Transform on both sides of the state equation.

sX(s)=AX(s)+BU(s)

\Rightarrow (sI-A)X(s)=BU(s)

\Rightarrow X(s)=(sI-A)^{-1}BU(s)

Apply Laplace Transform on both sides of the output equation.

Y(s)=CX(s)+DU(s)

Substitute, X(s) value in the above equation.

\Rightarrow Y(s)=C(sI-A)^{-1}BU(s)+DU(s)

\Rightarrow Y(s)=[C(sI-A)^{-1}B+D]U(s)

\Rightarrow \frac{Y(s)}{U(s)}=C(sI-A)^{-1}B+D

The above equation represents the transfer function of the system. So, we can calculate the transfer function of the system by using this formula for the system represented in the state space model.

Note − When D = [0], the transfer function will be

\frac{Y(s)}{U(s)}=C(sI-A)^{-1}B

Example

Example

Let us calculate the transfer function of the system represented in the state space model as,

\dot{X}=\begin{bmatrix}\dot{x}_1 \\ \dot{x}_2 \end{bmatrix}=\begin{bmatrix}-1 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}+\begin{bmatrix}1 \\ 0 \end{bmatrix}[u]

Y=\begin{bmatrix}0 & 1 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}

Here,

A=\begin{bmatrix}-1 & -1 \\ 1 & 0 \end{bmatrix}, \quad B=\begin{bmatrix}1 \\ 0 \end{bmatrix}, \quad C=\begin{bmatrix}0 & 1 \end{bmatrix} \quad \text{and} \quad D=[0]

The formula for the transfer function when D = [0] is -

\frac{Y(s)}{U(s)}=C(sI-A)^{-1}B

Substitute, A, B & C matrices in the above equation.

\frac{Y(s)}{U(s)}=\begin{bmatrix}0 & 1 \end{bmatrix}\begin{bmatrix}s+1 & 1 \\ -1 & s \end{bmatrix}^{-1}\begin{bmatrix}1 \\ 0 \end{bmatrix}

\Rightarrow \frac{Y(s)}{U(s)}=\begin{bmatrix}0 & 1 \end{bmatrix} \frac{\begin{bmatrix}s & -1 \\ 1 & s+1 \end{bmatrix}}{(s+1)s-1(-1)}\begin{bmatrix}1 \\ 0 \end{bmatrix}

\Rightarrow \frac{Y(s)}{U(s)}=\frac{\begin{bmatrix}0 & 1 \end{bmatrix}\begin{bmatrix}s \\ 1 \end{bmatrix}}{s^2+s+1}=\frac{1}{s^2+s+1}

Therefore, the transfer function of the system for the given state space model is

\frac{Y(s)}{U(s)}=\frac{1}{s^2+s+1}

State Transition Matrix and its Properties

If the system has initial conditions, then it will produce an output. Since this output is present even in the absence of input, it is called the zero input response x_{ZIR}(t). Mathematically, we can write it as

x_{ZIR}(t)=e^{At}X(0)=\mathcal{L}^{-1}\left\{\left[sI-A\right]^{-1}\right\}X(0)

From the above relation, we can write the state transition matrix \phi(t) as

\phi(t)=e^{At}=\mathcal{L}^{-1}\left\{[sI-A]^{-1}\right\}

So, the zero input response can be obtained by multiplying the state transition matrix \phi(t) with the initial conditions matrix.

Following are the properties of the state transition matrix.

If t = 0, then state transition matrix will be equal to an Identity matrix.

\phi(0) = I

The inverse of the state transition matrix is the same as the state transition matrix with t replaced by −t.

\phi^{-1}(t) = \phi(-t)

If t = t_1 + t_2 , then the corresponding state transition matrix is equal to the multiplication of the two state transition matrices at t = t_1 and t = t_2.

\phi(t_1 + t_2) = \phi(t_1) \phi(t_2)
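The state transition matrix and its properties can be verified numerically with the matrix exponential. The sketch below uses scipy.linalg.expm and the A matrix from the controllability example that follows.

```python
# Numeric check of phi(t) = e^{At} and its listed properties.
import numpy as np
from scipy.linalg import expm

A = np.array([[-1.0, -1.0],
              [ 1.0,  0.0]])

def phi(t):
    return expm(A * t)            # state transition matrix e^{At}

print("phi(0) =\n", np.round(phi(0.0), 6))                 # identity matrix
t1, t2 = 0.7, 1.3
print("phi(t1+t2) == phi(t1) @ phi(t2):",
      np.allclose(phi(t1 + t2), phi(t1) @ phi(t2)))        # True
print("phi(t)^-1 == phi(-t):",
      np.allclose(np.linalg.inv(phi(t1)), phi(-t1)))       # True
```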

Controllability and Observability Let us now discuss controllability and observability of control system one by one.

Controllability A control system is said to be controllable if the initial states of the control system are transferred (changed) to some other desired states by a controlled input in finite duration of time.

We can check the controllability of a control system by using Kalman's test.

Write the matrix Q_c in the following form.

Q_c=\left [ B \quad AB \quad A^2B \quad ...\quad A^{n-1}B \right ]

Find the determinant of matrix Q_c and if it is not equal to zero, then the control system is controllable.

Observability A control system is said to be observable if it is able to determine the initial states of the control system by observing the outputs in finite duration of time.

We can check the observability of a control system by using Kalman's test.

Write the matrix Q_o in following form.

Q_o=\left [ C^T \quad A^TC^T \quad (A^T)^2C^T \quad ...\quad (A^T)^{n-1}C^T \right ]

Find the determinant of matrix Q_o and if it is not equal to zero, then the control system is observable.

Example

Let us verify the controllability and observability of a control system which is represented in the state space model as,

\dot{X}=\begin{bmatrix}\dot{x}_1 \\ \dot{x}_2 \end{bmatrix}=\begin{bmatrix}-1 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}+\begin{bmatrix}1 \\ 0 \end{bmatrix} [u]

Y=\begin{bmatrix}0 & 1 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix}

Here,

A=\begin{bmatrix}-1 & -1 \\ 1 & 0 \end{bmatrix}, \quad B=\begin{bmatrix}1 \\ 0 \end{bmatrix}, \quad C=\begin{bmatrix}0 & 1 \end{bmatrix}, \quad D=[0] \quad \text{and} \quad n=2

For n = 2, the matrix Q_c will be

Q_c=\left [B \quad AB \right ]

We will get the product of matrices A and B as,

AB=\begin{bmatrix}-1 \\ 1 \end{bmatrix}

\Rightarrow Q_c =\begin{bmatrix}1 & -1 \\ 0 & 1 \end{bmatrix}

|Q_c|=1 \neq 0

Since the determinant of matrix Q_c is not equal to zero, the given control system is controllable.

For n = 2, the matrix Q_o will be -

Q_o=\left [C^T \quad A^TC^T \right ]

Here,

A^T=\begin{bmatrix}-1 & 1 \\ -1 & 0 \end{bmatrix} \quad \text{and} \quad C^T=\begin{bmatrix}0 \\ 1 \end{bmatrix}

We will get the product of matrices A^T and C^T as

A^TC^T=\begin{bmatrix}1 \\ 0 \end{bmatrix}

\Rightarrow Q_o=\begin{bmatrix}0 & 1 \\ 1 & 0 \end{bmatrix}

\Rightarrow |Q_o|=-1 \quad \neq 0

Since, the determinant of matrix Q_o is not equal to zero, the given control system is observable.

Therefore, the given control system is both controllable and observable.
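A numeric version of both Kalman tests is sketched below. Q_c is built column-wise as [B, AB, ...] and Q_o row-wise as [C; CA; ...] (the transpose of the form used above, which has the same rank), and the ranks are checked with NumPy.

```python
# Numeric Kalman tests for the example system above.
import numpy as np

A = np.array([[-1.0, -1.0],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 1.0]])
n = A.shape[0]

Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
Qo = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("Qc =\n", Qc, "\nrank =", np.linalg.matrix_rank(Qc))   # 2 -> controllable
print("Qo =\n", Qo, "\nrank =", np.linalg.matrix_rank(Qo))   # 2 -> observable
```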