The Jenkins-Traub Algorithm is an iterative polynomial root-finding method published in 1970 by Michael A. Jenkins and Joseph F. Traub. After each root is computed, the linear factor is removed from the polynomial by the method called “deflation”, which guarantees that each root is computed only once.

Of course, the actual derivation of the algorithm is much more involved than this, so this topic will give a general idea of how deflation works in relation to root-finding.

A variant that is commonly known as the “CPOLY” algorithm works for polynomials with complex coefficients, whereas the “RPOLY” algorithm works for polynomials with real coefficients.

Unlike the complex variant, the real variant computes two roots at a time: either two real roots or a pair of complex-conjugate roots. The real variant is faster (by a factor of about 4) than the complex variant because it avoids complex arithmetic.
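This works because a pair of complex-conjugate roots always combines into a real quadratic factor. For roots \(a \pm bi \):

\((x-(a+bi))(x-(a-bi))=x^2-2ax+(a^2+b^2) \)

so the real variant can divide out the pair in purely real arithmetic.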

A polynomial with complex coefficients can be analyzed using the Jenkins-Traub algorithm. The algorithm first checks the polynomial for very large or very small roots; if any are present, the variable is rescaled, which rescales the coefficients accordingly.
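As a sketch of the rescaling idea: substituting \(x = s y \) multiplies the coefficient \(a_k \) by \(s^k \), so a suitable \(s \) balances the coefficient magnitudes. The class and method names here are illustrative, not part of any published implementation:

```java
import java.util.Arrays;

// Illustrative sketch: substituting x = s*y turns P(x) into a polynomial
// in y whose k-th coefficient is a_k * s^k. A root r of P corresponds to
// the root r/s of the rescaled polynomial.
public class Rescale {
    public static double[] rescale(double[] a, double s) {
        double[] b = a.clone();   // coefficients, lowest degree first
        double pow = 1.0;         // holds s^k, built up incrementally
        for (int k = 0; k < b.length; k++) {
            b[k] *= pow;
            pow *= s;
        }
        return b;
    }

    public static void main(String[] args) {
        // 1 + x + x^2 rescaled with s = 2 becomes 1 + 2y + 4y^2
        System.out.println(Arrays.toString(rescale(new double[]{1, 1, 1}, 2.0)));
    }
}
```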


The Jenkins-Traub Algorithm can be implemented iteratively, operating on arrays of polynomial coefficients.

Given a polynomial \(P(x) \),

\(P(x)=a_0+a_1 x+a_2 x^2+ \cdots + a_n x^n \)

where \(a_{n} \neq 0 \),

evaluating \(P(x) \) directly in this standard form takes about \(\frac{1}{2}n^2 \) floating point operations, since computing each power \(x^k \) on its own costs roughly \(k \) multiplications.

By rewriting \(P(x) \) into the following form:

\(P(x)=a_0+x(a_1+x( \cdots (a_{n-1}+x(a_n)) \cdots )) \)

we can compute \(P(x) \) in \(2n \) floating point operations (\(n \) multiplications and \(n \) additions), which is much faster than the \(\frac{1}{2}n^2 \) using the standard form.
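A sketch of evaluating the nested form in Java, with coefficients stored lowest degree first (the class and method names are illustrative):

```java
// Evaluates P(x) = a_0 + a_1*x + ... + a_n*x^n using the nested (Horner) form.
public class Horner {
    public static double horner(double[] a, double x) {
        double result = a[a.length - 1];          // start with a_n
        for (int i = a.length - 2; i >= 0; i--) {
            result = result * x + a[i];           // one multiply, one add per step
        }
        return result;
    }

    public static void main(String[] args) {
        double[] a = {24, -50, 35, -10, 1};  // 24 - 50x + 35x^2 - 10x^3 + x^4
        System.out.println(horner(a, 2.0));  // x = 2 is a root, so this prints 0.0
    }
}
```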


For each root \(r \) that is found, the polynomial is divided by the corresponding linear factor. Letting the resulting polynomial be \(Q(x)=b_0+x(b_1+x( \cdots (b_{n-2}+x(b_{n-1})) \cdots )) \), the deflation step can be written as:

\(P(x)=(x-r)Q(x) \)

where \(r \) is the root found.

Expanding both \(P(x) \) and \(Q(x) \) reveals some insight:

\(P(x)=xQ(x)-rQ(x) \)

\(a_0+x(a_1+x(\cdots (a_{n-1}+x(a_n))\cdots)) \)

\(=x(b_0+x(b_1+x(\cdots(b_{n-2}+x(b_{n-1}))\cdots))) \)

\(-r(b_0+x(b_1+x(\cdots(b_{n-2}+x(b_{n-1}))\cdots))) \)

From the above, it is clear that:

  • \(a_0=-r(b_0) \)
  • \(a_1=b_0-r(b_1) \)
  • \(a_2=b_1-r(b_2) \)
  • \(\vdots \)
  • \(a_{n-1}=b_{n-2}-r(b_{n-1}) \)
  • \(a_n=b_{n-1} \)

By solving for each \(b_k \) in turn (valid when \(r \neq 0 \)),

  • \(b_0=\frac{a_0}{-r} \)
  • \(b_1=\frac{a_1-b_0}{-r} \)
  • \(\vdots \)
  • \(b_{n-2}=\frac{a_{n-2}-b_{n-3}}{-r} \)
  • \(b_{n-1}=a_n \)

Thus, we can repeatedly compute the coefficients of the next polynomial using the coefficients of the previous one.
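As a self-contained sketch of this recurrence (the class and method names are illustrative; it assumes \(r \) is a nonzero exact root):

```java
import java.util.Arrays;

// Divides P(x) by (x - r) using the recurrence derived above:
// b_0 = a_0 / (-r), b_k = (a_k - b_{k-1}) / (-r).
// Coefficients are stored lowest degree first; r must be a nonzero root.
public class Deflate {
    public static double[] deflate(double[] a, double r) {
        int n = a.length - 1;        // degree of P
        double[] b = new double[n];  // Q has degree n - 1
        b[0] = a[0] / (-r);
        for (int k = 1; k < n; k++) {
            b[k] = (a[k] - b[k - 1]) / (-r);
        }
        return b;
    }

    public static void main(String[] args) {
        // P(x) = 2 - 3x + x^2 = (x - 1)(x - 2); deflating by the root r = 1
        System.out.println(Arrays.toString(deflate(new double[]{2, -3, 1}, 1.0)));
        // prints [-2.0, 1.0], i.e. Q(x) = -2 + x
    }
}
```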

Implementing it in code form:

Polynomial p = new Polynomial(1, -10, 35, -50, 24); // x^4 - 10x^3 + 35x^2 - 50x + 24
PolyRoot solver = new PolyRoot();
List roots = solver.solve(p);

For a polynomial with complex roots:

Polynomial p = new Polynomial(1, 0, 1); // x^2 + 1 = 0
PolyRoot solver = new PolyRoot();
List roots0 = solver.solve(p);
List roots1 = PolyRoot.getComplexRoots(roots0);