## Introduction

A second-order cone programming problem (SOCP) is a convex optimization problem in which a linear function is minimized over the intersection of an affine linear manifold with a Cartesian product of second-order (Lorentz) cones. SOCP problems include linear programs, convex quadratic programs, and quadratically constrained convex quadratic programs, as well as many other problems that do not fit into these categories. These problems model applications from a wide range of fields, including engineering, control, and finance, as well as robust optimization and combinatorial optimization.

An SOCP has a linear objective function and linear constraints, together with a set of second-order cone constraints.
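In a common standard form (one convention among several; the solver below uses an equivalent formulation), an SOCP with $r$ conic constraints reads

$$
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f^\top x \\
\text{subject to} \quad & \lVert A_i x + b_i \rVert_2 \le c_i^\top x + d_i, \qquad i = 1, \dots, r,
\end{aligned}
$$

where each constraint requires the vector $\left(c_i^\top x + d_i,\; A_i x + b_i\right)$ to lie in a second-order (Lorentz) cone.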

Although SOCP problems can be solved as SDP problems, doing so is not advisable, for both numerical and computational-complexity reasons. Vandenberghe and Boyd [VB96] present many SDP problems that can in fact be formulated as SOCPs and should be solved as SOCPs. SOCP formulations are provided below for several of these examples: the convex quadratically constrained quadratic programming (QCQP) problem, problems involving fractional quadratic functions, structural optimization, logarithmic Tchebychev approximation, and the problem of finding the smallest ball containing a given set of ellipsoids. Because of its broad applicability and computational tractability, SOCP deserves to be studied in its own right.
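As a sketch of one such reduction (the standard one for convex QCQP, stated from first principles rather than taken from [VB96]): a convex quadratic constraint folds into a single second-order cone inequality via the identity $\lVert u \rVert^2 \le t \iff \lVert (2u,\, 1-t) \rVert \le 1+t$, giving

$$
x^\top A^\top A x + b^\top x + c \le 0
\quad\Longleftrightarrow\quad
\left\lVert \begin{pmatrix} 2Ax \\ 1 + b^\top x + c \end{pmatrix} \right\rVert_2 \le 1 - b^\top x - c .
$$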

Nesterov and Nemirovski showed that their general results on self-concordant barriers apply to SOCP problems, yielding interior point algorithms for SOCP with an iteration complexity of $O(\sqrt{r})$ for problems with r second-order cone inequalities (see below for definitions). Nemirovski and Scheinberg showed that primal or dual interior point methods developed for linear programming can be extended word-for-word to SOCP; specifically, they showed this for Karmarkar's original method.

Several software packages can now handle SOCPs or mixed SOCP, LP, and SDP problems, including the SDPpack package mentioned above and Sturm's SeDuMi [Stu98], in which Sturm presents a theoretical basis for his computational work based on the Nesterov-Todd method.

In addition, we demonstrate how to transform many kinds of constraints into second-order cone inequalities. We describe how robust least squares and robust linear programming problems can be formulated as SOCPs in §3. In §4, we describe the algebraic foundation of second-order cones: the analysis of interior point algorithms for SOCP is based on a particular Euclidean Jordan algebra, and understanding this algebra also clarifies the relationship between SDP and SOCP.

Duality and complementary slackness for SOCP are covered in §5, while primal and dual non-degeneracy and strict complementarity are covered in §6. The logarithmic barrier function and the equations defining the central path for SOCPs are discussed in §7, along with primal-dual path-following interior point methods for SOCP. A numerically stable and efficient implementation of interior point methods for SOCP is dealt with in §8.
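For reference, the Euclidean Jordan algebra associated with a single second-order cone in $\mathbb{R}^{n+1}$ is the spin factor algebra. Writing $x = (x_0; \bar{x})$, its product and the corresponding logarithmic barrier are (standard definitions, not tied to any particular solver):

$$
x \circ y = \begin{pmatrix} x^\top y \\ x_0 \bar{y} + y_0 \bar{x} \end{pmatrix},
\qquad
f(x) = -\log\!\left(x_0^2 - \lVert \bar{x} \rVert^2\right),
$$

so the cone's interior is exactly the set where $x_0 > \lVert \bar{x} \rVert$, i.e. where the barrier is finite.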

## Implementation

The implementation below solves an SOCP problem using an interior point method. First, import the required packages.

```kotlin
%use s2
import dev.nm.solver.IterativeSolution
```



We construct an SOCP problem to solve: minimize a linear objective function subject to three different conic constraints.

```kotlin
// construct an SOCP problem to solve
fun problem(): SOCPGeneralProblem {
    // min f'x
    val f: Vector = DenseVector(1.0, 0.0, 0.0, 0.0, 0.0)

    // The A's in the conic constraints.
    val A1t: Matrix = DenseMatrix(arrayOf(
        doubleArrayOf(0.0, -1.0, 0.0, 1.0, 0.0),
        doubleArrayOf(0.0, 0.0, 1.0, 0.0, -1.0)))
    val A2t: Matrix = DenseMatrix(arrayOf(
        doubleArrayOf(0.0, 0.5, 0.0, 0.0, 0.0),
        doubleArrayOf(0.0, 0.0, 1.0, 0.0, 0.0)))
    val A3t: Matrix = DenseMatrix(arrayOf(
        doubleArrayOf(0.0, 0.0, 0.0, -0.7071, -0.7071),
        doubleArrayOf(0.0, 0.0, 0.0, -0.3536, 0.3536)))

    // The b's in the conic constraints.
    val b1: Vector = f
    val b2: Vector = f.ZERO()
    val b3: Vector = f.ZERO()

    // The c's in the conic constraints.
    val c1: Vector = DenseVector(2) // zero vector
    val c2: Vector = DenseVector(-0.5, 0.0)
    val c3: Vector = DenseVector(4.2426, -0.7071)

    // The d's in the conic constraints.
    val d = doubleArrayOf(0.0, 1.0, 1.0)

    val constraints: List<SOCPGeneralConstraint> = listOf(
        SOCPGeneralConstraint(A1t.t(), c1, b1, d[0]),
        SOCPGeneralConstraint(A2t.t(), c2, b2, d[1]),
        SOCPGeneralConstraint(A3t.t(), c3, b3, d[2])
    )

    // The SOCP problem to be solved: ||Ax + b|| <= c'x + d
    return SOCPGeneralProblem(f, constraints)
}
```
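As a sanity check independent of the library, a second-order cone constraint of the form ||Ax + b|| <= c'x + d can be verified numerically in plain Kotlin. The helper `socSatisfied` below is hypothetical (not part of the s2 API) and is shown only to make the constraint concrete:

```kotlin
import kotlin.math.sqrt

// Numerically checks one second-order cone constraint ||A x + b||_2 <= c'x + d.
// A is given as an array of rows; dimensions must be consistent.
// NOTE: this is an illustrative helper, not part of the s2 library.
fun socSatisfied(
    A: Array<DoubleArray>, b: DoubleArray,
    c: DoubleArray, d: Double,
    x: DoubleArray, tol: Double = 1e-9
): Boolean {
    // lhs = ||A x + b||_2
    var sq = 0.0
    for (i in A.indices) {
        var ri = b[i]
        for (j in A[i].indices) ri += A[i][j] * x[j]
        sq += ri * ri
    }
    val lhs = sqrt(sq)
    // rhs = c'x + d
    var rhs = d
    for (j in c.indices) rhs += c[j] * x[j]
    return lhs <= rhs + tol
}

fun main() {
    // Toy example: the unit-disk constraint ||x|| <= 1 (A = I, b = 0, c = 0, d = 1).
    val A = arrayOf(doubleArrayOf(1.0, 0.0), doubleArrayOf(0.0, 1.0))
    val zero = doubleArrayOf(0.0, 0.0)
    println(socSatisfied(A, zero, zero, 1.0, doubleArrayOf(0.6, 0.8))) // on the boundary: true
    println(socSatisfied(A, zero, zero, 1.0, doubleArrayOf(1.0, 1.0))) // outside: false
}
```

A check like this can be run on any candidate point, e.g. the minimizer returned by the solver, to confirm feasibility up to a tolerance.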



We then use the interior point method to solve the above problem, starting from a given point as the initial guess.

```kotlin
// the starting point for the interior point method
val x0: Vector = DenseVector(1.0, 0.0, 0.0, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0)
val s0: Vector = DenseVector(3.7, 1.0, -3.5, 1.0, 0.25, 0.5, 1.0, -0.35355, -0.1767)
val y0: Vector = DenseVector(-3.7, -1.5, -0.5, -2.5, -4.0)
// an initial guess
val soln0 = PrimalDualSolution(x0, s0, y0)
```


```kotlin
// construct an SOCP solver
val solver = PrimalDualInteriorPointMinimizer(
    1e-8,
    20) // max number of iterations
```


```kotlin
// solve the SOCP problem
val p = problem()
val soln: IterativeSolution<PrimalDualSolution> = solver.solve(p)
soln.search(soln0)
```



```kotlin
// print the solution
val y = soln.minimizer().y
val fx = p.f().evaluate(y)
println("minimizer: $y, fx = $fx")
```



## Conclusion

SOCP is a class of problems that lies between linear (or quadratic) programming and semidefinite programming. As with LP and SDP, SOCPs can be solved very efficiently by primal-dual interior-point methods, and in particular far more efficiently than by treating the SOCP as an SDP. Furthermore, a wide range of engineering problems can be formulated as second-order cone problems.