Introduction

Simulated annealing (SA) is a probabilistic method for estimating the global optimum of a function; in particular, it is a metaheuristic for approximating the global optimum in a large search space. Typical applications include the travelling salesman problem, Boolean satisfiability, protein structure prediction, and job-shop scheduling. Simulated annealing may be preferable to gradient descent or branch-and-bound algorithms when finding an approximate global optimum matters more than finding a precise local optimum.

Simulated Annealing mimics the physical annealing process, but instead of cooling a metal it optimizes the parameters of a model. The process is especially useful in situations with many local minima, where methods such as gradient descent tend to get trapped.

In practice, SA is applied to an objective function of many variables, subject to several constraints. The constraints can be incorporated into the objective function as penalty terms.
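For example, an inequality constraint can be folded into the objective as a penalty that grows with the size of the violation. The sketch below is purely illustrative (the functions, names, and the weight mu are made up for this example, not part of any library):

import kotlin.math.max

// a made-up constrained problem: minimize objective(x) subject to constraint(x) <= 0
fun objective(x: DoubleArray): Double = x[0] * x[0] + x[1] * x[1]
fun constraint(x: DoubleArray): Double = 1.0 - x[0] - x[1] // feasible when x[0] + x[1] >= 1

// penalized objective: constraint violations raise the "energy",
// steering the annealer back toward the feasible region
fun penalizedObjective(x: DoubleArray, mu: Double = 1e3): Double {
    val violation = max(0.0, constraint(x))
    return objective(x) + mu * violation * violation
}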

Gradient descent is one common optimization technique, but it has the potential to become stuck in a local minimum, in which case the model never reaches the solution we expect. Simulated annealing was introduced to overcome this problem.

Although it usually yields only an approximate solution to the global minimum, simulated annealing is helpful in a wide variety of practical applications where exact algorithms fail.

It all starts with an initial solution, a random point chosen to fit the requirements of the problem. The next step is to set up a reduction function, i.e. a cooling schedule controlled by a parameter alpha, which lowers the temperature over time. Each schedule reduces the temperature at a different rate, and each is better suited to a different type of model.
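The standard reduction rules are easy to write down. Here is a minimal sketch of three common cooling schedules (the names and the default alpha values are illustrative, not taken from any library):

// t is the current temperature; alpha (or beta) controls how fast it falls
fun linearCooling(t: Double, alpha: Double = 0.1): Double = t - alpha            // t -> t - alpha
fun geometricCooling(t: Double, alpha: Double = 0.95): Double = t * alpha        // t -> alpha * t
fun slowCooling(t: Double, beta: Double = 0.001): Double = t / (1.0 + beta * t)  // t -> t / (1 + beta*t)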

Then we start from the initial temperature and loop through n iterations (5000 in this case), decreasing the temperature according to the alpha value mentioned earlier; the loop stops only once the termination condition is reached.

At each iteration, the simulated annealing heuristic considers some neighboring state s* of the current state s, and probabilistically decides between moving the system to state s* or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.
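The whole loop fits in a few lines. Below is a toy, one-dimensional sketch of this acceptance rule (this is not the library's implementation; the neighbor move, the initial temperature, and the cooling rate are all illustrative):

import kotlin.math.exp
import kotlin.random.Random

// minimal SA loop: minimize energy(s) over the reals
fun anneal(energy: (Double) -> Double, s0: Double, iterations: Int = 5000): Double {
    var s = s0
    var t = 1.0 // initial temperature
    repeat(iterations) {
        val sNew = s + Random.nextDouble(-0.1, 0.1) // neighboring state s*
        val dE = energy(sNew) - energy(s)
        // always accept improvements; accept worse moves with probability exp(-dE/t)
        if (dE < 0 || Random.nextDouble() < exp(-dE / t)) {
            s = sNew
        }
        t *= 0.999 // geometric cooling
    }
    return s
}

// usage: anneal({ x -> (x - 2.0) * (x - 2.0) }, 10.0) should return a value near 2.0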



Problems that can be optimized by SA

  • Travelling Salesman Problem (TSP)
  • Scheduling problems
  • Task allocation
  • Graph colouring and partitioning
  • Non-linear function optimization

Advantages and Disadvantages

Advantages:

It is easy to learn and use.

As the list above shows, it provides solutions to a wide range of problems.

Disadvantages:

If the annealing schedule is long, it will take a long time to run.

It has many tunable parameters, such as the initial temperature, the cooling schedule, and the neighbor function.

Let's start the implementation

First, we declare the objective function to minimize.

import kotlin.math.pow // for the Double.pow extension

// the objective function to minimize
val f: RealScalarFunction = object : AbstractRealScalarFunction(2) {

    override fun evaluate(x: Vector): Double {
        val x1: Double = x.get(1)
        val x2: Double = x.get(2)
        // (4 - 2.1*x(1)^2 + x(1)^4/3)*x(1)^2
        val term1: Double = (4.0 - 2.1 * x1.pow(2.0) + x1.pow(4.0) / 3.0) * x1.pow(2.0)
        // x(1)*x(2)
        val term2: Double = x1 * x2
        // (-4 + 4*x(2)^2)*x(2)^2
        val term3: Double = (-4.0 + 4.0 * x2.pow(2.0)) * x2.pow(2.0)
        return term1 + term2 + term3
    }
}
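This is the six-hump camelback function, a standard optimization test problem. Its two global minima sit at approximately (0.0898, -0.7126) and (-0.0898, 0.7126), with f ≈ -1.0316, which gives us a quick sanity check on the definition:

// evaluate f at a known global minimizer; should print roughly -1.0316
println(f.evaluate(DenseVector(0.0898, -0.7126)))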

In this section we construct an optimization problem so that we can solve it with SA.

// construct an optimization problem
val problem = object : OptimProblem {

    override fun f(): RealScalarFunction {
        return f
    }

    override fun dimension(): Int {
        return 2
    }
}

Next we set the stop condition, in this case 5000 iterations, and initialize the SA solver with the dimension of the objective function.

// stop after 5000 iterations
val stopCondition: StopCondition = AfterIterations(5000)
// an instance of a simulated annealing solver
val solver: IterativeMinimizer = GeneralizedSimulatedAnnealingMinimizer(
    2, // dimension of the objective function
    stopCondition
)

In the final block of code we bring everything together: we solve the problem starting from an initial guess and print the minimum found.

val soln: IterativeSolution = solver.solve(problem)
val x0: Vector = DenseVector(0.5, 0.5) // the initial guess
val xmin: Vector = soln.search(x0)
val fxmin: Double = f.evaluate(xmin) // the minimum
println("Simulated annealing")
println(String.format("f(%s) = %f", xmin, fxmin))
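For this function the solver should land near one of the two global minima noted above, so the printed value of f should be close to -1.0316.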

Conclusion

Simulated Annealing is a popular algorithm for optimizing multi-parameter models, and it can be implemented relatively quickly. When tasked with many iterations it can be extremely computationally intensive, but it is capable of finding a global optimum.