An Introduction to Root-Finding
A root is a value of the variable that satisfies a given equation. Root-finding is used wherever data is involved – engineering, finance, statistics, biomedical science – any real-world scenario where an approximation of a solution is needed.
The bisection method is a root-finding method that applies to any continuous function for which two points with function values of opposite signs are known. It rests on the intermediate value theorem: if a continuous function changes sign over an interval, it must cross zero somewhere inside that interval. The method repeatedly halves the interval containing the root, keeping the half over which the function still changes sign, until the interval is small enough that its midpoint approximates the root. Simple and robust but slow, it goes by many names, including interval halving, binary search method, and dichotomy method.
The bisection method is applicable for solving the equation \(f(x) = 0 \) for a real variable \(x \). At each step, the interval is divided into two halves by computing the midpoint, \(c=(a+b)/2 \), and the value of \(f(c) \) at that point.
Bisection fails when the root is a double root, since the function keeps the same sign on both sides, touching zero only at the root itself. Then \(f(a) \) and \(f(b) \) have the same sign, so at each stage it is unclear which half of the interval to take. In this case, it is preferable to search for the function’s minimum or maximum instead. The problem isn’t always obvious when it’s first presented, of course.
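This failure mode can be seen with a quick sketch (the function name `g` is illustrative):

```kotlin
// g has a double root at x = 1: it touches zero but never changes sign
fun g(x: Double) = (x - 1) * (x - 1)

fun main() {
    // The bracket [0, 2] contains the root, yet the sign test that
    // bisection relies on fails: g(0) and g(2) are both positive.
    println(g(0.0) * g(2.0) > 0) // true -> no half-interval can be chosen
}
```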
Now that you understand how polynomials are reduced, we can move on to the more intricate details of how the roots are found in the first place.
If you are familiar with the basics of data structures and algorithms, you have probably come across a search algorithm known as binary search. The bisection method follows similar principles:
- Having a pre-defined interval consisting of an upper and lower bound.
- Splitting the interval into two halves.
- Comparing the midpoint against the target value (or root in this case) to decide which half contains it.
One notable difference is that the bisection method works over continuous numerical values, whereas binary search assumes discrete, indexed items. The algorithm therefore has to be terminated either once a certain tolerance has been reached or after a set number of iterations.
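The two stopping rules can be sketched as a single predicate (the names here are illustrative, not from the implementation below):

```kotlin
// Stop once the bracket is narrower than the tolerance,
// or once the iteration budget is exhausted.
fun shouldStop(lower: Double, upper: Double, tolerance: Double,
               iteration: Int, maxIterations: Int): Boolean =
    (upper - lower) < tolerance || iteration >= maxIterations

fun main() {
    println(shouldStop(3.366, 3.3665, 0.001, 5, 99)) // true: bracket narrow enough
    println(shouldStop(2.5, 4.5, 0.001, 99, 99))     // true: iteration cap reached
}
```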
Explanation

Above is the graph of \(f(x)=\sin(x)+3/(4x) \). Given that there is no known analytical method of finding its roots, a numerical method will have to be used instead.
Part 1 – Determining the existence of a root
There are three conditions to determine the existence of a root numerically:
- The function has to be continuous over the domain in question. Analytically, we know that \(f \) is a continuous function for the domain \(2.5 \leq x \leq 4.5 \), so this holds.
- The function itself has to be known, so that it can be evaluated at the upper and lower bounds. This condition is also fulfilled, as we know that \(f(x)=\sin(x)+3/(4x) \).
- The upper and lower bounds need to be of opposite signs. As \(f(2.5)=0.898 \) and \(f(4.5)=-0.811 \), the final condition is satisfied.
Therefore, we can conclude that a root exists in the domain \(D_f=[2.5,4.5] \) over the function \(f \).
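The sign check in the third condition takes only a few lines of Kotlin (a sketch; the values in the comments are rounded):

```kotlin
import kotlin.math.sin

fun f(x: Double) = sin(x) + 3 / (4 * x)

fun main() {
    val fl = f(2.5) // ≈ 0.898
    val fu = f(4.5) // ≈ -0.811
    // Opposite signs at the endpoints imply a root inside [2.5, 4.5]
    println(fl * fu < 0) // true
}
```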
Part 2 – Calculating the root
This is the iterative part of the process. Given the lower and upper bounds \(l=2.5 \) and \(u=4.5 \), the bisection method can be outlined as:
1. Select values for the upper and lower bounds, \(l = 2.5, u = 4.5 \).
2. Calculate the midpoint \(p=(l+u)/2 \).
3. If \(f(p) = 0 \), then the root is found, \(x=p \). Stop.
4. Else \(f(p) \neq 0 \):
   a. If \(f(p) \cdot f(l) > 0 \), set \(l = p \) and keep \(u \) unchanged.
   b. If \(f(p) \cdot f(u) > 0 \), set \(u = p \) and keep \(l \) unchanged.
5. Repeat steps 2 to 4 until a pre-specified tolerance is satisfied or the maximum number of iterations is reached.

From Part 1, we know there is a root at some point between \(x=2.5 \) and \(x=4.5 \). We can label the lower bound \(l=2.5 \) and upper bound \(u=4.5 \) as shown in Figure 3.2 above.

Next, we calculate the midpoint of \(l \) and \(u \), \(p=(l+u)/2=3.5 \).

As \(f(p) \cdot f(u)>0 \), the root lies in the lower half of the interval, so we set \(u=p=3.5 \).

The process then repeats with \(p=(l+u)/2=3 \), generating the following set of values with each iteration:
| Iteration | l | u | p |
| --- | --- | --- | --- |
| 1 | 2.5 | 4.5 | 3.5 |
| 2 | 2.5 | 3.5 | 3 |
| 3 | 3 | 3.5 | 3.25 |
| 4 | 3.25 | 3.5 | 3.375 |
| 5 | 3.25 | 3.375 | 3.3125 |
| 6 | 3.3125 | 3.375 | 3.34375 |
| 7 | 3.34375 | 3.375 | 3.359375 |
| 8 | 3.359375 | 3.375 | 3.367188 |
| 9 | 3.359375 | 3.367188 | 3.363281 |
| 10 | 3.363281 | 3.367188 | 3.365234 |
| 11 | 3.365234 | 3.367188 | 3.366211 |
Efficiency
Notice that it took 11 iterations for the method to get an estimate that is correct to 3 decimal places, 3.366.
Although the Bisection Method is very simple and robust, it is not very efficient: the absolute error is only halved with each iteration. As such, the method converges linearly, which makes it slower than other methods (like the Newton-Raphson Method).
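Because the bracket halves each iteration, the iteration count can be predicted in advance: the interval of width \(b-a \) must be halved until it is below the tolerance. A small sketch (the helper name is illustrative):

```kotlin
import kotlin.math.ceil
import kotlin.math.ln

// Smallest n such that (b - a) / 2^n < tol, i.e. n > log2((b - a) / tol)
fun iterationsNeeded(a: Double, b: Double, tol: Double): Int =
    ceil(ln((b - a) / tol) / ln(2.0)).toInt()

fun main() {
    // Matches the 11 iterations observed above for [2.5, 4.5] at tol = 0.001
    println(iterationsNeeded(2.5, 4.5, 0.001)) // 11
}
```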
Below is the iterative part of the bisection method, implemented from first principles in Kotlin:
import kotlin.math.sin

fun testFunction(x: Double): Double {
    return sin(x) + 3 / (4 * x)
}

fun bisectionMethod(myFunction: (x: Double) -> Double, lowerBound: Double, upperBound: Double, maxMargin: Double, maxIterations: Int): Double {
    var lower: Double = lowerBound
    var upper: Double = upperBound
    if (myFunction(lower) * myFunction(upper) > 0.0) {
        return 0.0 // lower and upper bounds have the same sign
    }
    var mid: Double = (lower + upper) / 2
    var iterations = 0
    while (upper - lower >= maxMargin && iterations < maxIterations) {
        print("$mid ") // trace each midpoint estimate
        if (myFunction(mid) * myFunction(upper) > 0) {
            // f(mid) and f(upper) share a sign: the root is in the lower half
            upper = mid
        } else {
            lower = mid
        }
        mid = (lower + upper) / 2
        iterations += 1
    }
    return mid
}

bisectionMethod(::testFunction, 2.5, 4.5, 0.001, 99)
Output:
3.5 3.0 3.25 3.375 3.3125 3.34375 3.359375 3.3671875 3.36328125 3.365234375 3.3662109375
In S2, the bisection method can be used as follows:
UnivariateRealFunction f = new AbstractUnivariateRealFunction() {
    @Override
    public double evaluate(double x) {
        return x * Math.sin(x) - 3; // solve x * sin(x) - 3 = 0
    }
};
BisectionRoot solver = new BisectionRoot(1e-8, 30);
double root = solver.solve(f, 12., 14.);
double fx = f.evaluate(root);
System.out.println(String.format("f(%f) = %f", root, fx));