In this chapter, the motivations for using metaheuristic methods in the
Automatic Control field are presented. Compared with traditional
approaches, these methods do not require the initial control problem to be
reformulated and allow the control laws themselves to be optimized. A brief
description of the book contents is also provided.
1.1. Introduction: automatic control and optimization
Links between automatic control and optimization are very strong,
as optimization methods often lie at the core of automatic control
methodologies. Indeed, optimization has traditionally provided efficient
methods to identify system models, to compute control laws, to
analyze system stability and robustness, etc.
Because the corresponding optimization problems must remain
tractable, traditional approaches are usually based on the definition
of a simplified model of the plant to be controlled. This simplified
model may rely, for instance, on a linearization of the plant about an
equilibrium point, with some dynamics neglected. With these
simplifications, the model is expressed as a linear, low-order
system. Such a simplified model can be used for the computation of
the control law using a particular mathematical framework. In parallel,
an optimization problem expressing the desired performance and
constraints is defined. Special attention is paid to the structures of the
model and the optimization problem so as to be able to solve it with
exact and deterministic solvers. Several examples can be given for
such an approach.
This is the case for the linear quadratic method, belonging to the
class of optimal control methods [KWA 72]: a linear model of the
system is used, and the optimization of a quadratic cost is performed.
The quadratic cost is the weighted sum of two terms, one penalizing
the reference-tracking error and the other the energy consumption. For this
particular problem, the optimal solution can be analytically found with
the help of Riccati equations.
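To give a concrete (and deliberately simple) illustration, the sketch below computes the linear quadratic gain for a double-integrator plant; the plant, weights and variable names are our own illustrative choices, not an example from later chapters. The Riccati equation is solved numerically with SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: double integrator, dx/dt = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic cost J = integral of (x'Qx + u'Ru):
# Q weights the tracking error, R weights the energy consumption
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation for P;
# the optimal state feedback is then u = -K x with K = R^{-1} B' P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # for this plant, K = [1, sqrt(3)]

# The resulting closed loop A - B K is stable by construction
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```

Changing the weights Q and R trades tracking accuracy against control effort, which is precisely the kind of tuning problem discussed throughout this book.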
More recently, H2/H∞ synthesis methods have been developed
[ZHO 96]. The approach is also based on a linear model of the plant,
and the problem is expressed as the minimization of the H∞ norm of
the closed-loop system. Reformulations are used so as to express the
problem in a linear matrix inequality (LMI) framework, for which
efficient solvers are available. Another trend is the use of the Youla–
Kucera parameterization [FRA 87], which describes the set of all
controllers stabilizing a given plant. Using this
property, it is possible to find an “optimal” controller by solving a
convex optimization problem.
Finally, we can also mention predictive control [MAC 02]. Once
again using linear models and quadratic costs, and knowing the desired
output in advance, it is possible to compute an optimal discrete controller,
once again weighting the reference tracking and the energy consumption.
In the general case (nonlinear model of the system), the method requires
an optimization problem to be solved online. Using a linear
model and a quadratic cost allows the solution to be expressed in
closed form, so that most of the computation can be carried out offline.
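The following sketch (an illustrative unconstrained regulation problem of our own, not an example from the book) shows why this is so: with a linear model and a quadratic cost, the optimal input sequence over the horizon is a linear function of the current state, so the corresponding gain matrix can be computed once, offline:

```python
import numpy as np

# Discrete-time linear model x_{k+1} = A x_k + B u_k (illustrative double integrator)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
n, m = B.shape
N = 20                                   # prediction horizon

# Stack predictions: X = F x0 + G U, with X = [x_1; ...; x_N], U = [u_0; ...; u_{N-1}]
F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

# Quadratic cost J = X'Qbar X + U'Rbar U (regulation to the origin)
Qbar = np.kron(np.eye(N), np.eye(n))
Rbar = 0.1 * np.eye(N * m)

# Offline: the minimizer U* = -(G'QG + R)^{-1} G'QF x0 is linear in x0,
# so the whole gain matrix can be precomputed once
H = G.T @ Qbar @ G + Rbar
K_mpc = np.linalg.solve(H, G.T @ Qbar @ F)   # U* = -K_mpc @ x0

# Online: only a matrix-vector product remains
x0 = np.array([[1.0], [0.0]])
U_opt = -K_mpc @ x0
u0 = U_opt[:m]                               # receding horizon: apply first input
```

Online, the receding-horizon controller applies u0 and repeats at the next sample; the expensive matrix factorizations never happen inside the control loop.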
As noted earlier, all these traditional approaches rely on a linear
model of the plant and require a particular structure for the
mathematical formulation of costs and constraints. However, for
real-life problems with high-level specifications, this reformulation
step is not straightforward, and some specifications cannot be directly
taken into account. Such specifications are then left out of the design
procedure and have to be checked a posteriori during an analysis
phase. This may lead to iterations between the synthesis and
analysis phases, which is time consuming and may require a high
level of expertise to adapt the tuning parameters. This aspect has often
limited the adoption of advanced control methods, such as H∞
synthesis, in industry.
Nowadays, three main points have to be considered. First, systems
to be controlled are more and more complex, and it is not always
possible to define linear models that cover all the aspects of their
behavior. Further, interconnections between subsystems have to be
dealt with, which increases the order of the model. Second,
specifications are increasingly numerous and precise, and it appears
crucial to take them into account as early as possible in the
synthesis phase. Finally, industry wants not only a controller
that satisfies the desired specifications, but one that is optimal with
respect to them. Even if a problem is easy to solve for given
values of the tuning parameters (given values for the weighting
matrices of a linear quadratic regulator, for instance), finding the best
tuning parameters is a hard task. Indeed, the corresponding
optimization problems are non-convex and non-differentiable, with
numerous local optima. Some attempts have been made with
subgradient methods [LAS 05] and non-smooth optimization [BUR 06],
achieving interesting results. However, these methods
remain local search methods and, as a result, are strongly dependent
on the initial point.
1.2. Motivations to use metaheuristic algorithms
Considering the elements given in section 1.1, two main
motivations for the use of stochastic methods and metaheuristic
optimization algorithms in automatic control have emerged.
First, we want to avoid the reformulation step of the constraints
into a particular mathematical framework. This aspect could bring
several advantages:
– The reformulation of costs and constraints is time consuming.
– The reformulation requires expertise.
– As some constraints cannot be reformulated into a given
mathematical framework, iterations between synthesis and a posteriori
analysis phases have to be carried out.
Second, we want not only to tune a controller that satisfies some
specifications, but also to optimize the behavior of the system (for
instance, we want to find the controller for which the response time is
as short as possible while constraints on robustness and energy
consumption are still satisfied). Very often, the problem of controller
design with fixed tuning parameters can be solved exactly. This is the
case for the H∞ methodology, where the synthesis of the controller is a
convex problem for given weighting filters. We will see in Chapter 4
that, in this case, the traditional “trial and error” procedure is a
way to optimize the tuning. Optimizing the tuning parameters is a
natural extension, but it is far from straightforward, as good
mathematical properties, such as convexity, are lost.
To achieve these goals, the use of metaheuristic optimization
seems to be an interesting alternative to traditional approaches.
Indeed, such algorithms can, in principle, optimize any cost and
constraints, whatever their mathematical structure. In particular, they
do not require gradient information to be available. The only
requirement is the possibility of evaluating the costs and the
constraints for a given choice of the optimization variables.
Reformulation is no longer necessary, and the optimization of the
tuning of traditional methods becomes possible.
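As a minimal sketch of this “evaluation only” principle (the toy cost function and the algorithm below are illustrations of our own), a basic (1+1) evolution strategy can make progress on a non-smooth, non-convex cost using nothing but cost values:

```python
import numpy as np

def cost(x):
    # Toy non-smooth, non-convex cost: absolute values plus an oscillation
    return abs(x[0] - 1.0) + abs(x[1] + 2.0) + 0.3 * np.sin(5.0 * x[0]) ** 2

rng = np.random.default_rng(0)

# (1+1) evolution strategy: keep a single candidate, perturb it randomly,
# and accept the perturbation only if the cost improves
x = rng.normal(size=2)          # random initial point
fx = cost(x)
step = 0.5
for _ in range(2000):
    y = x + step * rng.normal(size=2)
    fy = cost(y)
    if fy < fx:                 # the only information used: cost values
        x, fx = y, fy
```

No gradient, convexity or smoothness assumption is used anywhere; the cost function is treated as a black box, which is exactly the property exploited throughout this book.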
Of course, the main drawback of these methods is the fact that no
guarantee can be given regarding the actual optimality of the solution.
Several arguments can be made to answer this point:
– When designing a controller, the overall optimality is not so
important: having an algorithm that is able to find a controller, which
satisfies all the constraints, with no need for reformulation, and which
is better than a controller tuned by hand, is already a great challenge
for the industry community and so a great opportunity for the
academic world.
– Using a metaheuristic algorithm allows us to approximately solve
the initial problem. In the traditional approach, the exact solution is
found, but for a reformulated problem, which is often not equivalent
to the initial problem. In the end, optimality with respect to the initial
problem is therefore not guaranteed even with the traditional approach.
– The metaheuristic algorithm can sometimes be used as a first
step in the optimization procedure: it computes an initial point for a
deterministic optimization algorithm that guarantees local optimality
but whose solution quality strongly depends on that initial point.
1.3. Organization of the book
The book is divided into four main chapters describing various
aspects of the use of metaheuristic algorithms in the automatic control
field.
Chapter 2 deals with one of the first steps in the design of
automatic control laws, which is the identification of the system
model. The classical approach is concerned with the identification of
the model parameters, the structure of the model being an a priori
choice based on physical or field-knowledge considerations. In this
chapter, we consider the case where this model structure is not known.
Hence, the goal is to identify both model structure and parameters,
which is referred to in the literature as symbolic regression. An ant
colony algorithm is developed for this purpose. The solution uses the
tree representation of functions.
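To preview what “tree representation” means here, the following sketch (a minimal encoding of our own, not the chapter's implementation) stores an expression as nested tuples and evaluates it recursively; a symbolic-regression search, such as the ant colony algorithm, then explores the space of such trees:

```python
import math
import operator

# Binary operators available at internal nodes
BINARY_OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(node, x):
    """Recursively evaluate an expression tree at the point x.

    A node is the variable 'x', a numeric constant, a unary tuple
    ('sin', child), or a binary tuple (op, left, right).
    """
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return float(node)
    if node[0] == 'sin':
        return math.sin(evaluate(node[1], x))
    op, left, right = node
    return BINARY_OPS[op](evaluate(left, x), evaluate(right, x))

# Tree encoding of the function  x*x + sin(3*x)
tree = ('+', ('*', 'x', 'x'), ('sin', ('*', 3.0, 'x')))
value = evaluate(tree, 2.0)    # 4 + sin(6)
```

Identifying both structure and parameters then amounts to searching over tree shapes, node labels and constants simultaneously, which is what makes the problem hard for classical parametric identification.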
Chapter 3 discusses the tuning of proportional integral derivative
(PID) controllers. Such controllers are the most common in industry
because of their ability to achieve satisfactory trade-offs between
stability, rapidity and precision of the closed loop.
Numerous methods exist to tune such controllers. However, the
optimization of the closed-loop behavior remains an open problem,
especially if numerous constraints have to be taken into account. In
this chapter, a particle swarm optimization algorithm is used to obtain
satisfactory results. An extension to multi-objective optimization is
also presented.
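As a foretaste, the sketch below shows the overall shape of such a tuning loop on a deliberately simple setup of our own devising: a first-order plant simulated by Euler integration, an integral-of-squared-error cost, and a bare-bones particle swarm. It illustrates the idea, not the chapter's algorithm:

```python
import numpy as np

def step_cost(gains, T=5.0, dt=0.01):
    """Integral of squared error for a unit step on the plant dy/dt = -y + u,
    closed by a PID controller with gains (Kp, Ki, Kd)."""
    Kp, Ki, Kd = gains
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                        # unit step reference
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)                 # Euler step of the plant
        cost += e * e * dt
        if abs(y) > 1e6:                   # unstable tuning: heavy penalty
            return 1e6
    return cost

rng = np.random.default_rng(1)
n_particles, n_iter = 20, 60
pos = rng.uniform(0.0, 5.0, size=(n_particles, 3))     # candidate (Kp, Ki, Kd)
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([step_cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    # Standard velocity update: inertia + pull toward personal and global bests
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)                # keep gains in a box
    f = np.array([step_cost(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

best_cost = pbest_f.min()                  # cost of the best gains found
```

Note that the cost is, again, a black box: the swarm only ever evaluates it, so additional constraints (overshoot, control effort) can be folded in as penalties without any reformulation.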
In Chapter 4, an advanced control method is considered, namely
the H∞ methodology. This m