Formally, let f : Rn → R be the fitness function which must be minimized.
The function takes a candidate solution as an argument in the form of a vector of real numbers and produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution m for which f(m) ≤ f(p) for all p in the search-space, which means that m is the global minimum.
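For illustration, a minimal Python example of such a fitness function (the sphere function, chosen here purely as an example; DE does not prescribe any particular fitness function):

```python
# Example fitness function (illustrative only): the sphere function
# f(x) = x1^2 + ... + xn^2, whose global minimum is 0 at the origin.
def sphere(x):
    return sum(xi * xi for xi in x)
```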
Let x ∈ Rn designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows (a code sketch is given after the list):
- Choose the parameters NP, CR and F:
  - NP is the population size, i.e. the number of candidate agents or “parents”; a typical setting is 10n.
  - The parameter CR ∈ [0,1] is called the crossover probability and the parameter F ∈ [0,2] is called the differential weight. Typical settings are F = 0.8 and CR = 0.9.
  - Optimization performance may be greatly impacted by these choices; see below.
- Initialize all agents x with random positions in the search-space.
- Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
  - For each agent x in the population do:
    - Pick three agents a, b and c from the population at random; they must be distinct from each other as well as from agent x. (a is called the “base” vector.)
    - Pick a random index R ∈ {1, …, n}, where n is the dimensionality of the problem being optimized.
    - Compute the agent’s potentially new position y = [y1, …, yn] as follows:
      - For each i ∈ {1, …, n}, pick a uniformly distributed random number ri ∼ U(0,1).
      - If ri < CR or i = R, then set yi = ai + F × (bi − ci); otherwise set yi = xi. (Index position R is replaced for certain.)
    - If f(y) ≤ f(x), then replace the agent x in the population with the improved or equal candidate solution y.
- Pick the agent from the population that has the best fitness and return it as the best found candidate solution.
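For concreteness, the steps above can be transcribed into a short Python sketch. The function name differential_evolution, the bounds argument and the default values NP = 40 and max_iters = 1000 are illustrative assumptions, not part of the formal description; F and CR default to the typical settings mentioned above.

```python
import random

def differential_evolution(f, bounds, NP=40, F=0.8, CR=0.9, max_iters=1000):
    """Minimize f over per-dimension (low, high) bounds; a sketch of the steps above."""
    n = len(bounds)
    # Initialize all agents with random positions in the search-space.
    pop = [[random.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(NP)]
    # Until the termination criterion (here: a fixed iteration budget) is met:
    for _ in range(max_iters):
        for j, x in enumerate(pop):
            # Pick three agents a, b and c, distinct from each other and from x.
            ia, ib, ic = random.sample([k for k in range(NP) if k != j], 3)
            a, b, c = pop[ia], pop[ib], pop[ic]
            # Pick a random index R that is always taken from the mutant vector.
            R = random.randrange(n)
            # Compute the potentially new position y (mutation and crossover).
            y = [a[i] + F * (b[i] - c[i]) if (random.random() < CR or i == R) else x[i]
                 for i in range(n)]
            # Greedy selection: keep y if it is at least as fit as x.
            if f(y) <= f(x):
                pop[j] = y
    # Return the agent with the best fitness found.
    return min(pop, key=f)
```

For example, differential_evolution(sphere, [(-5.0, 5.0)] * 10) would search for the minimum of the sphere function defined earlier in ten dimensions.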