Mathematical programming techniques are commonly used to extremize nonlinear functions of single and multiple (n) design variables without constraints. Although most structural optimization problems involve constraints that limit the design space, the study of unconstrained optimization methods is important for several reasons. First, if the design is at a stage where no constraints are active, an unconstrained function minimization algorithm is used to determine the search direction and travel distance for minimizing the objective function; during the move in design space, one must still monitor continually for constraint violations. Second, a constrained optimization problem can be recast as an unconstrained minimization problem. The penalty function and multiplier methods discussed in Chapter 5 are examples of such indirect methods, which transform constrained minimization problems into equivalent unconstrained ones. Unconstrained minimization strategies are also becoming increasingly popular for solving linear and nonlinear structural analysis problems (see Kamat and Hayduk [1]). Such systems can be solved by minimizing the potential energy, or by minimizing the residuals of the governing equations in a least-squares sense.
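As a rough illustration of the penalty-function idea (the test problem, the quadratic exterior penalty form, and the use of a simplex solver are our own choices here, not necessarily the formulation of Chapter 5), a constrained problem can be replaced by a sequence of unconstrained ones whose minima approach the constrained optimum as the penalty parameter grows:

import numpy as np
from scipy.optimize import minimize

# Hypothetical example: minimize f(x) = (x1 - 3)^2 + (x2 - 2)^2
# subject to g(x) = x1 + x2 - 4 <= 0, via a quadratic exterior penalty.
def f(x):
    return (x[0] - 3.0)**2 + (x[1] - 2.0)**2

def g(x):
    return x[0] + x[1] - 4.0

def penalized(x, r):
    # Unconstrained surrogate: f(x) + r * max(0, g(x))^2.
    return f(x) + r * max(0.0, g(x))**2

# Increasing the penalty parameter r drives the unconstrained minima
# toward the constrained optimum (about (2.5, 1.5) for this problem).
x = np.array([0.0, 0.0])
for r in (1.0, 10.0, 100.0):
    x = minimize(penalized, x, args=(r,), method='Nelder-Mead').x
    print(r, x, f(x), g(x))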
Minimization of Functions of One Variable
Although the objective of most structural design problems is to minimize a function of many design variables, the minimization of functions of a single design variable is still important to study. Several theoretical and numerical aspects of minimizing functions of n variables are best illustrated, especially graphically, in a one-dimensional space. Moreover, most methods for the unconstrained minimization of a function f(x) of n variables rely on sequential one-dimensional minimizations along prescribed directions, sk, in the multidimensional design space Rn. Given a point x0 and a specified search direction s0, any point along that direction can be expressed in terms of a single variable α as x = x0 + αs0.
In this case, α is usually referred to as the step length. The function f(x) to be minimized can therefore be expressed as f(x) = f(x0 + αs0) = f(α).
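A minimal numerical sketch of this reduction, using an illustrative quadratic function and an arbitrarily chosen point and direction (none of which appear in the text):

import numpy as np

# Illustrative objective of two variables.
def f(x):
    return x[0]**2 + 4.0 * x[1]**2

x0 = np.array([2.0, 1.0])      # current design point
s0 = np.array([-1.0, -0.5])    # prescribed search direction

# Restriction of f to the line x = x0 + alpha * s0:
def f_alpha(alpha):
    return f(x0 + alpha * s0)

# f_alpha is now an ordinary function of the single variable alpha,
# and any one-dimensional minimizer can be applied to it.
print(f_alpha(0.0), f_alpha(1.0))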
As a result, the minimization problem reduces to finding the value of α that minimizes the function f(α). In fact, one of the simplest ways to minimize a function of n variables is to change only one variable at a time, holding all the others fixed, and to perform a one-dimensional minimization along each coordinate direction of the n-dimensional design space; this procedure is known as the univariate search technique. Minimization algorithms for both one-dimensional and multidimensional problems are generally classified into three categories: zeroth-, first-, and second-order methods. Zeroth-order methods use only the value of the function during the minimization; first-order methods use the values of the function and its first derivatives; second-order methods use the values of the function together with its first and second derivatives. The following discussion of single-variable minimization assumes f = f(α), so the methods apply both to genuinely one-variable problems and to the minimization of multivariable functions along a predetermined direction.
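A possible sketch of the univariate search technique, assuming a generic one-dimensional minimizer is available (here scipy's minimize_scalar; the test function and iteration counts are only illustrations):

import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative convex objective of two variables.
def f(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 0.5)**2 + x[0] * x[1]

def univariate_search(f, x0, n_cycles=10):
    # Minimize along each coordinate direction in turn, holding the
    # remaining variables fixed; each step is a one-dimensional problem.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_cycles):
        for i in range(x.size):
            def f_alpha(alpha, i=i):
                xt = x.copy()
                xt[i] = alpha
                return f(xt)
            x[i] = minimize_scalar(f_alpha).x
    return x

print(univariate_search(f, [5.0, 5.0]))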
Efficient and robust techniques are required for the numerical optimization of nonlinear multivariable objective functions. These problems must be solved iteratively, and trial and error becomes impractical for more than three or four variables. Nonlinear functions can behave unpredictably: they may have relative maxima or minima, saddle points, convex or concave regions, and so on. Depending on the region, an optimization algorithm may progress very slowly toward the optimum and require excessive computer time. The various approaches proposed for optimizing unconstrained functions can be evaluated on the basis of extensive experience in testing nonlinear programming algorithms.
When a nonlinear objective function contains constructs such as abs, min, max, or if-then-else statements, the function or its derivatives can be discontinuous at some points, making the objective nonsmooth. Nonsmooth NLP problems are often solved with unconstrained optimization methods that do not use derivatives, where derivative-based methods fail: derivative-based methods can get “stuck” at discontinuities, whereas methods that use only function values are less susceptible. For smooth functions, however, derivative-based methods are both faster and more accurate, and their advantage grows as the number of decision variables increases. We now turn our attention to unconstrained optimization methods that use only the first partial derivatives of the objective function.
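A small illustration of this behavior (the nonsmooth test function and the particular solvers are our own choices): a derivative-free simplex search copes with the kinks introduced by abs and max, while a quasi-Newton method relying on finite-difference gradients may stall near the derivative discontinuities.

import numpy as np
from scipy.optimize import minimize

# Nonsmooth objective: abs and max make the first derivatives
# discontinuous along x[0] = 1 and x[1] = 0, which is where the
# minimum happens to lie.
def f(x):
    return abs(x[0] - 1.0) + max(x[1], 0.0) + x[1]**2

x_start = np.array([3.0, 2.0])

# Derivative-free simplex (Nelder-Mead) search uses function values only.
res_nm = minimize(f, x_start, method='Nelder-Mead')

# Quasi-Newton BFGS estimates gradients by finite differences and may
# stall or terminate early near the kinks.
res_bfgs = minimize(f, x_start, method='BFGS')

print(res_nm.x, res_bfgs.x)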
The convergence properties of a non-linear problem depend on how well each linear approximation approximates the original problem. Any linearisation of a smooth function is good in a neighborhood of the approximation point; the size of that neighborhood, however, varies. If the problem is highly non-linear, the updates calculated by solving equation (3) or (5) may fall outside this neighborhood, which may cause slow convergence.
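The point can be illustrated with a one-dimensional sketch (the residual function below is hypothetical and unrelated to equations (3) and (5)): the linear model matches the function well close to the expansion point and increasingly poorly away from it, which is why a full update computed from the linearised problem can overshoot.

import numpy as np

# Hypothetical scalar residual.
def r(x):
    return np.exp(x) - 2.0

x0 = 0.0

def r_lin(x):
    # First-order Taylor (linear) approximation of r about x0.
    return r(x0) + np.exp(x0) * (x - x0)

# The approximation error grows with the distance from x0, so a large
# update computed from the linearised model may leave the region where
# that model is trustworthy.
for dx in (0.1, 0.5, 1.0, 2.0):
    x = x0 + dx
    print(dx, r(x), r_lin(x), abs(r(x) - r_lin(x)))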
The convergence properties of the bundle adjustment algorithm were compared for two parameterisations of the rotation matrix. Since the rotation matrix parameterisation is less non-linear than the Euler angle parameterisation, the linear approximations used by, for example, bundle adjustment should be better and the convergence properties improved. A line search technique was optionally applied on top of either parameterisation. Line search reduces the risk of applying an update that is too large. There are various line search techniques, but they all add only a fraction of the update, chosen so that the new parameter approximations are better than the current ones. The hypothesis that the re-parameterisation improves convergence holds true, but the difference is probably too small to justify its use, except to avoid singularities. According to the results, line search reduces the number of convergence failures on average by 54% at an extra cost of 0.15 iterations. We therefore conclude that any standard implementation of bundle adjustment should include a technique such as line search.
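A minimal sketch of the kind of line search described above, assuming an update delta has already been computed from the linearised problem and that a scalar cost function is available (all names here are hypothetical):

import numpy as np

def backtracking_line_search(cost, x, delta, shrink=0.5, max_halvings=10):
    # Add only a fraction of the computed update: start with the full
    # step and halve it until the new parameters are better than the
    # current ones (or give up).
    c0 = cost(x)
    alpha = 1.0
    for _ in range(max_halvings):
        x_new = x + alpha * delta
        if cost(x_new) < c0:
            return x_new, alpha
        alpha *= shrink
    return x, 0.0   # no improving fraction found; keep current estimate

# Toy usage with a quadratic cost and a deliberately overlong update.
cost = lambda x: float(np.sum(x**2))
x = np.array([1.0, -2.0])
delta = np.array([-4.0, 8.0])   # overshoots the minimum at the origin
print(backtracking_line_search(cost, x, delta))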