Application Center - Maplesoft

Integer root extraction and perfect-power detection via p-adic Newton-Hensel lifting

Author: Carl Devore <>

8 January 2003

If you use this worksheet, please email me to discuss it, or just to say "hi".  If you find this worksheet useful, please consider supporting the work of a poor graduate student who spends thousands of dollars per year on books and computer equipment by sending me a small item from my wishlist.  You just have to select the items, and they are sent directly to me.

Keywords: Newton's method, Hensel's lemma, p-adic, perfect-power detection, root extraction


This worksheet can serve as an introduction to p-adic methods, as a review of Newton's method, or as a discussion of the number-theoretic problem of perfect-power detection.

The problems:  

The computation problem (root extraction):  Given positive integers n and r, we want to find, if possible, a positive integer x such that n = x^r.  If no such x exists, we do not care about getting an approximate solution.

The decision problem (perfect-power detection):  Given a positive integer n, we wish to detect whether there exist integers x and r with r > 1 such that n = x^r, without necessarily finding a suitable x or r.  In the code that follows, however, we will find x and r.

Newton's method:

Recall that if f(x) is a differentiable function, then the Newton iteration for finding an approximate solution to f(x) = 0 is x := x - f(x)*f'(x)^(-1).

As an example, consider the simple problem of computing a reciprocal.  While it may seem trivial to use Newton's method for this, the example is of fundamental importance and is used extensively in what follows, because this particular Newton iteration can be expressed in a form that does not use division.

>    restart;

>    f:= x-> 1/x - a;

f := proc (x) options operator, arrow; 1/x-a end proc

Solving f(x) = 0 is thus equivalent to computing 1/a.  The Newton iteration is

>    G:= unapply(x - f(x)/diff(f(x),x), x);

G := proc (x) options operator, arrow; x+(1/x-a)*x^2 end proc

But watch what happens when we expand that.

>    expand(G(x));

2*x-x^2*a
There is no division -- just multiplication and subtraction.  That idea is used in some computers to implement division, and it has many other uses.  (See Knuth, 1998).  In the general Newton's method, we need to divide by the derivative.  We can avoid the division by performing a preliminary Newton step for reciprocating the derivative.  While that is not very useful for floating-point computation, it is useful in modular arithmetic, where the concept of division is slightly different.
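The division-free form of the iteration is easy to try outside Maple.  The following Python sketch (the function name and starting guess are my choices for illustration) iterates x := x*(2 - a*x), which is the expanded form 2*x - a*x^2 above, using only multiplication and subtraction:

```python
def reciprocal(a, x0, steps=6):
    """Approximate 1/a by the division-free Newton iteration
    x <- x*(2 - a*x).  Illustrative helper: x0 must be close
    enough to 1/a that |1 - a*x0| < 1, or the iteration diverges."""
    x = x0
    for _ in range(steps):
        x = x * (2 - a * x)   # same as expand(G(x)) = 2*x - a*x^2
    return x
```

Starting from x0 = 0.1 for a = 7, six steps already reach full double precision, because the error 1 - a*x is squared at each step.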

The p-adic case:

It is a great surprise that the Newton iteration also works in modular arithmetic, even though the notion of derivative must be radically altered, and the usual graphical explanation of Newton's method in terms of finding the x-intercept of the tangent line makes no sense.  Indeed, in a sense, it works even better in modular arithmetic because it can give exact answers rather than approximations and because the number of iterations required to get an exact answer can be determined precisely.

Theorem 1 (Newton-Hensel lifting of the modular reciprocal):  If a*x ≡ 1 (mod p), with p not necessarily prime (thus x ≡ a^(-1) (mod p)), then 2*x - x^2*a ≡ a^(-1) (mod p^2).

Proof:  Since p divides a*x - 1, p^2 divides -(a*x-1)^2 = a*(2*x - x^2*a) - 1.  Thus a*(2*x - x^2*a) ≡ 1 (mod p^2), and the conclusion follows.
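Theorem 1 is easy to check numerically.  In this Python snippet the values p, a, and x are arbitrary choices of mine satisfying the hypothesis; note that p = 10 is not prime:

```python
p, a = 10, 7   # p need not be prime; a is a unit mod p
x = 3          # a*x = 21 ≡ 1 (mod 10), so x ≡ a^(-1) (mod 10)

# The lifted value is a^(-1) modulo p^2 = 100.
x2 = (2 * x - x * x * a) % (p * p)
assert (a * x2) % (p * p) == 1
```

Here x2 works out to 43, and indeed 7*43 = 301 ≡ 1 (mod 100).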

Indeed, the idea can be extended to the Newton iteration for the roots of polynomials:

Theorem 2 (Newton-Hensel lifting for modular polynomial roots):  Let f(x) be a polynomial with integer coefficients, f(a) ≡ 0 (mod p), and f'(a) a unit modulo p.  Then f(a - f(a)*f'(a)^(-1)) ≡ 0 (mod p^2).

Proof:  Consider the Taylor expansion of f :

f(x+y) = sum( f^(k)(x)/k! * y^k, k = 0 .. deg(f) ).

When x and the coefficients of f are integers, the factorial denominators always cancel with factors arising from differentiation, so the coefficients of the powers of y in the above expansion are integers.  Substituting x = a and y = z*p and reducing the result modulo p^2, we get

f(a + z*p) ≡ f(a) + f'(a)*z*p (mod p^2).

Now substitute z ≡ -(f(a)/p)*f'(a)^(-1) (mod p).  The division by p makes sense because we are assuming f(a) ≡ 0 (mod p), hence f(a) is divisible by p.  We get

f(a - f(a)*f'(a)^(-1)) ≡ 0 (mod p^2).

So how does this help us solve problems over the integers?  Once the modulus is larger than the possible root over the integers, we check whether the solution also works over the integers.  Note that the modulus is squared in each step, so it grows very fast.  This is analogous to the well-known rule of thumb about Newton's method -- that the number of digits of accuracy doubles with each iteration.  In the p-adic case, that rule of thumb is actually precise: the number of p-adic digits of accuracy exactly doubles with each iteration.
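Here is the lifting of Theorem 2 in action, as a Python sketch (the polynomial f(x) = x^2 - 7 and the helper name are my choices; `pow(..., -1, m)` for the modular inverse requires Python 3.8+).  Starting from the root x = 1 of x^2 - 7 modulo 3, each step squares the modulus:

```python
def lift(x, p):
    """One Newton-Hensel step for f(x) = x^2 - 7: a root mod p
    becomes a root mod p^2 (illustrative helper)."""
    p2 = p * p
    inv = pow(2 * x, -1, p2)   # f'(x) = 2x must be a unit
    return (x - (x * x - 7) * inv) % p2, p2

x, p = 1, 3                    # 1^2 ≡ 7 (mod 3)
for _ in range(3):
    x, p = lift(x, p)          # modulus squares: 9, 81, 6561
    assert (x * x - 7) % p == 0
```

After three steps x is a root of x^2 - 7 modulo 3^8: the number of correct 3-adic digits has doubled at each step, exactly as claimed.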

In what follows, sometimes we need to factor a small prime out of an integer.  The following procedure does that.

>    FactorOut:= proc(n::posint, p::posint)
   # Return e and o such that n = p^e*o and o mod p <> 0.
   local e,o,x;
   o:= n;
   for e from 0 do
      if irem(o,p,'x') <> 0 then break end if;
      o:= x
   end do;
   e, o
end proc:

To compute the square root of n, we use the Newton iteration x := x - (x^2 - n)*(2*x)^(-1).  The inverse of 2*x cannot exist if p is even, so we use p = 3.  That means we must first factor the 3's out of n.  The initial solution is x = 1, because a square prime to 3 cannot be congruent to 2 modulo 3, so the remaining part must be congruent to 1.  The inverse is updated via its own Newton iteration from Theorem 1.

>    Isqrt:= proc(N::posint)
   # Returns ``(sqrt(N))^2 if N is a perfect square, false otherwise,
   # via 3-adic Newton lifting.
   local p, x, `/2/x`, n, e3;

   # The 3-adic algorithm requires that N not be a multiple of 3.
   # So factor out all 3's.
   e3, n:= FactorOut(N,3);
   if irem(e3,2,'e3') <> 0 then return false end if;

   # No square is congruent to 2 mod 3.
   if irem(n,3)=2 then return false end if;

   # Find the solution mod p=3^(2^k) until p > sqrt(n).
   p:= 9; x:= 1; `/2/x`:= 2;  # 1/2 mod 3 = 2
   to ilog2(1+iquo(ilog2(n), 3)) do
      # Newton-Hensel lift for the solution of x^2 = n.
      x:= mods(x - (x^2-n)*`/2/x`, p);
      # Newton-Hensel lift for the next "denominator" `/2/x` in the line above.
      `/2/x`:= mods(2*`/2/x`*(1-x*`/2/x`), p);
      p:= p^2
   end do;
   x:= mods(x - (x^2-n)*`/2/x`, p);
   if x^2 = n then ``(3^e3*abs(x))^2 else false end if
end proc:

Amazingly enough, the above procedure compares favorably timewise with Maple's builtin isqrt.  (The builtin command, however, also computes the nearest integer approximation when the square root is not an integer.)
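For readers following along without Maple, here is a rough Python transcription of the same 3-adic algorithm.  The function name, the `mods` helper (mimicking Maple's symmetric remainder), and the simpler loop bound `p <= 4*n` are my choices, not the worksheet's:

```python
def isqrt3(N):
    """If N is a perfect square, return its integer square root,
    else None.  3-adic Newton-Hensel lifting, as in Isqrt above."""
    # Factor out the 3's: N = 3^e3 * n with n prime to 3.
    e3, n = 0, N
    while n % 3 == 0:
        e3, n = e3 + 1, n // 3
    if e3 % 2 or n % 3 == 2:      # odd power of 3, or n ≡ 2 (mod 3)
        return None
    def mods(a, m):                # symmetric remainder, like Maple's mods
        r = a % m
        return r - m if 2 * r > m else r
    p, x, inv = 9, 1, 2            # x ≡ sqrt(n) and inv ≡ 1/(2x), mod 3
    while p <= 4 * n:              # overshoot: final modulus > 2*sqrt(n)
        x = mods(x - (x * x - n) * inv, p)       # lift the root (Theorem 2)
        inv = mods(2 * inv * (1 - x * inv), p)   # lift 1/(2x)    (Theorem 1)
        p = p * p
    x = abs(mods(x - (x * x - n) * inv, p))      # one last lift
    return 3 ** (e3 // 2) * x if x * x == n else None
```

The loop bound here squares the modulus a few more times than strictly necessary, unlike the carefully counted `ilog2` bound in the Maple code; since each extra pass only squares p, the sketch stays correct at modest extra cost.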

Test on a number of 10,000 digits.

>    n:= rand(10^5000)()^2:

>    time(Isqrt(n));


>    time(isqrt(n));


Maple's iroot  is faster though.

>    time(iroot(n,2,'exact'));


>    evalb(op(op(1,Isqrt(n)))^2=n);


To compute the rth root of n for odd r, we use the Newton iteration x := x - (x^r - n)*(r*x^(r-1))^(-1).  The power x^(r-1) is computed separately to avoid repetition.  A separate Newton iteration is used to update the inverse.  We use p = 2 and factor out all 2's from n.  If r is even, we simply apply Isqrt as many times as needed before using the algorithm for odd exponents.

>    Iroot:= proc(N::posint, R::posint)
   # Returns ``(N^(1/R))^R if N is an Rth power, false otherwise,
   # via 2-adic Newton lifting.
   local n, r, p, next_p, x, `r-1`, `x^(r-1)`, `/r/x^(r-1)`, e, e2, e3, ee2;

   if R=1 then return ``(N) end if;
   n:= N;
   # First, do as much of the computation as possible with Isqrt.
   ee2, r:= FactorOut(R,2);
   if ee2 > 0 then
      e3, n:= FactorOut(n,3);
      if irem(e3, R, 'e3') <> 0 then return false end if;
      to ee2 do
         n:= Isqrt(n);
         if n=false then return false end if;
         n:= op(op(1,n))
      end do
   else
      e3:= 0
   end if;
   # Find a solution mod p = 2^(2^k) until p > n^(1/r).
   e2, n:= FactorOut(n,2);
   if irem(e2, r, 'e2') <> 0 then return false end if;
   p:= 4; x:= 1; `/r/x^(r-1)`:= 1; `r-1`:= r-1; `x^(r-1)`:= 1;
   to ilog2(iquo(ilog2(n),r)+1) do
      next_p:= p^2;

      # Newton-Hensel lift for the solution of x^r = n.
      x:= mods(x - (x*`x^(r-1)`-n)*`/r/x^(r-1)`, p);

      # Newton-Hensel lift for the denominator.
      `x^(r-1)`:= mods(x^`r-1`, next_p);
      `/r/x^(r-1)`:= mods(`/r/x^(r-1)`*(2-r*`x^(r-1)`*`/r/x^(r-1)`), p);

      p:= next_p
   end do;
   x:= mods(x - (x*`x^(r-1)`-n)*`/r/x^(r-1)`, p);
   if x^r = n then ``(2^e2*3^e3*x)^(r*2^ee2) else false end if
end proc:

Unlike Isqrt, Iroot is not competitive timewise with Maple's iroot , which is not builtin but uses extended-precision floating-point computation.  Nonetheless, Iroot is still fast.  Note that p  is always a power of 2.  All of the mod operations could be done by bit manipulations, but Maple does not represent large integers in binary, nor does Maple have an adequate bit-truncation command.
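The odd-exponent core of the algorithm can also be sketched in Python.  This is a simplified transcription, not the worksheet's Maple: the name, the `mods` helper, and the loose loop bound are mine, x^(r-1) is recomputed rather than maintained incrementally, and the even-r reduction through repeated square roots is omitted:

```python
def iroot2(N, r):
    """If N is a perfect rth power (r odd), return its integer rth
    root, else None.  2-adic Newton-Hensel lifting."""
    assert r % 2 == 1, "this sketch handles odd r only"
    # Factor out the 2's: N = 2^e2 * n with n odd.
    e2, n = 0, N
    while n % 2 == 0:
        e2, n = e2 + 1, n // 2
    if e2 % r:
        return None
    def mods(a, m):                # symmetric remainder, like Maple's mods
        s = a % m
        return s - m if 2 * s > m else s
    p, x, inv = 4, 1, 1            # x ≡ n^(1/r), inv ≡ 1/(r*x^(r-1)), mod 2
    while p <= 4 * n:
        x = mods(x - (x ** r - n) * inv, p)                 # lift the root
        inv = mods(inv * (2 - r * x ** (r - 1) * inv), p)   # lift the inverse
        p = p * p
    x = mods(x - (x ** r - n) * inv, p)
    return 2 ** (e2 // r) * x if x ** r == n else None
```

Since every odd n satisfies n ≡ 1 (mod 2), the starting values x = 1 and inv = 1 are always correct modulo 2, which is why no initial case analysis is needed here, unlike the mod-3 start of the square-root code.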

>    n:= rand(10^5000)()^3:

>    time(Iroot(n,3));


>    time(iroot(n,3, 'exact'));


>    evalb(op(op(1,Iroot(n,3)))^3=n);


To detect whether n is a perfect power, we simply check every possible prime exponent up to a logarithmic bound.

>    Ispower:= proc(n::posint)
   # Returns two items.  
   # The first is a true/false whether n is perfect power.
   # The second is a partial factorization n = ``(2)^e*``(3)^f*``(a)^g
   # where gcd(a,6) = 1 and g is maximal.
   local e2,e3,o,B,b,factor,save_factor;
   if n=1 then return true, ``(1)^2 end if;
   e2,o:= FactorOut(n,2);
   e3,o:= FactorOut(o,3);
   if o=1 then
      return evalb(igcd(e2,e3)>1), ``(2)^e2*``(3)^e3
   end if;
   save_factor:= ``(o);
   # Upper bound on the remaining power (the minimal base is
   # now 5).
   B:= iquo(ilog2(o), 2);
   b:= 2;
   while b <= B do
      factor:= Iroot(o,b);
      if factor<>false then
         if igcd(e2,e3,b) = b then
            # Use recursive call (at minimal extra cost) to determine
            # if the factor can be expressed to a higher power.
            return true, ``(2)^e2*``(3)^e3*Ispower(op(op(1,factor)))[2]^b
         end if;
         save_factor:= factor
      end if;
      b:= nextprime(b)
   end do;
   false, ``(2)^e2*``(3)^e3*save_factor
end proc:

>    n:= (6*rand())^47:

>    Ispower(n);

true, ``(2)^141*``(3)^94*``(51287689745)^47

>    n:= rand(10^100)():

>    Ispower(n);

false, ``(6157977345574364035860981231842576594741555536805354334509992273186264663346706935193485395145208389)


If one were purely interested in perfect-power detection, there would be no need to return the partial factorization.  The recursive call is made only to refine that factorization; whether n is a perfect power has already been determined with certainty by the time the recursive call is made.  Some small extra efficiency could be gained by sieving for the possible prime exponents.  See the paper by Bernstein.
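The bare decision procedure is short enough to sketch on its own.  This Python version is not the worksheet's method: where Ispower calls the 2-adic Iroot, this sketch swaps in a floating-point guess-and-check root test (fine for moderate n, unlike the exact p-adic code), and it tries every exponent rather than sieving primes:

```python
def exact_root(n, b):
    """Return x > 1 with x**b == n, else None.  Float guess plus
    exact integer check; the guess can be off by one either way."""
    x = round(n ** (1.0 / b))
    for c in (x - 1, x, x + 1):
        if c > 1 and c ** b == n:
            return c
    return None

def is_perfect_power(n):
    """True iff n = x**b for some integers x and b > 1."""
    if n == 1:
        return True
    b = 2
    while 2 ** b <= n:     # any base is at least 2, so b <= log2(n)
        if exact_root(n, b) is not None:
            return True
        b += 1             # checking only prime b would suffice
    return False
```

The bound b <= log2(n) is the same logarithmic bound used by Ispower, since the smallest possible base is 2.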


1. Eric Bach and Jeffrey Shallit, Algorithmic Number Theory, Volume 1: Efficient Algorithms (MIT Press, 1996).

2. Dario Bini and Victor Pan, Polynomial and Matrix Computations, Volume 1: Fundamental Algorithms (Birkhäuser, 1994).

3. Daniel J. Bernstein, "Detecting perfect powers in essentially linear time", Mathematics of Computation 67:223 (July 1998), pp. 1253-1283.

4. Donald E. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms, 3rd ed. (Addison Wesley Longman, 1998).

5. Joachim von zur Gathen and Jürgen Gerhard, Modern Computer Algebra (Cambridge University Press, 1999).