Integer root extraction and perfect-power detection via p-adic Newton-Hensel lifting
Author: Carl Devore <devore@math.udel.edu>
8 January 2003
If you use this worksheet, please email me at mailto://devore@math.udel.edu , to discuss it, or just to say "hi". If you find this worksheet useful, please consider supporting the work of a poor graduate student who spends thousands of dollars per year on books and computer equipment by sending me a small item from my Amazon.com wishlist http://www.amazon.com/exec/obidos/registry/4SWHEIGN0FN9/ref=wl_s_3/104-5062484-1726354 . You just have to select the items, and they are sent directly to me.
Keywords: Newton's method, Hensel's lemma, p-adic, perfect-power detection, root extraction
Purpose:
This worksheet can serve as an introduction to p-adic methods, as a review of Newton's method, or as a discussion of the number theoretic problem of perfect-power detection.
The problems:
The computation problem (root extraction): Given positive integers n and r, we want to find, if possible, a positive integer x such that x^r = n. If no such x exists, then we do not care about getting an approximate solution.
The decision problem (perfect-power detection): Given a positive integer n, we wish to detect whether there exist integers x and r with r > 1 such that x^r = n, without necessarily finding a suitable x or r. However, in the code that follows, we will find the x and r.
Newton's method:
Recall that if f is a differentiable function, then the Newton iteration for finding an approximate solution to f(x) = 0 is x[k+1] = x[k] - f(x[k])/f'(x[k]).
As an example, consider the simple problem of computing a reciprocal. While it may seem trivial to use Newton's method for this, this example is of fundamental importance and is used extensively in what follows, because it turns out that this Newton iteration can be expressed in a form that does not use division.
Let f(x) = 1/x - a, so that solving f(x) = 0 is equivalent to computing 1/a. The Newton iteration is x[k+1] = x[k] - (1/x[k] - a)/(-1/x[k]^2).
But watch what happens when we expand that: x[k+1] = x[k] + x[k]^2*(1/x[k] - a) = 2*x[k] - a*x[k]^2 = x[k]*(2 - a*x[k]).
There is no division -- just multiplication and subtraction. That idea is used in some computers to implement division, and it has many other uses. (See Knuth, 1998). In the general Newton's method, we need to divide by the derivative. We can avoid the division by performing a preliminary Newton step for reciprocating the derivative. While that is not very useful for floating-point computation, it is useful in modular arithmetic, where the concept of division is slightly different.
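As a small illustration of the division-free iteration just described (in Python rather than Maple; the value of a and the starting guess are arbitrary choices of mine):

```python
# Division-free reciprocal via the iteration x <- x*(2 - a*x).
# The starting guess must be close enough to 1/a for convergence.
a = 7.0
x = 0.1                      # rough initial guess for 1/7
for _ in range(6):
    x = x * (2 - a * x)      # only multiplication and subtraction
print(round(a * x, 12))      # 1.0
```

The error 1 - a*x is squared at every step, which is the floating-point analogue of the exact digit-doubling described below for the p-adic case.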
The p-adic case:
It is a great surprise that the Newton iteration also works in modular arithmetic, even though the notion of derivative must be radically altered, and the usual graphical explanation of Newton's method in terms of finding the x-intercept of the tangent line makes no sense. Indeed, in a sense, it works even better in modular arithmetic because it can give exact answers rather than approximations and because the number of iterations required to get an exact answer can be determined precisely.
Theorem 1 (Newton-Hensel lifting of the modular reciprocal): If a*x ≡ 1 (mod p) (p not necessarily prime) (thus a is a unit modulo p), then a*x*(2 - a*x) ≡ 1 (mod p^2).
Proof: Since p divides a*x - 1, p^2 divides (a*x - 1)^2. Thus a*x*(2 - a*x) = 1 - (a*x - 1)^2 ≡ 1 (mod p^2), and the conclusion follows.
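A quick numerical check of Theorem 1 (a Python sketch; the values a = 3 and p = 5 are arbitrary choices of mine):

```python
# Check Theorem 1: lift an inverse of a modulo p to an inverse modulo p^2.
a, p = 3, 5
x = 2                           # 3*2 = 6 ≡ 1 (mod 5)
x2 = x * (2 - a * x) % p**2     # one lifting step, now modulo 25
print(x2, a * x2 % p**2)        # 17 1
```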
Indeed, the idea can be extended to the Newton iteration for the roots of polynomials:
Theorem 2 (Newton-Hensel lifting for modular polynomial roots): Let f be a polynomial with integer coefficients, x an integer such that f(x) ≡ 0 (mod p), and f'(x) a unit modulo p. Then f(x - f(x)*f'(x)^(-1)) ≡ 0 (mod p^2), where f'(x)^(-1) denotes an inverse of f'(x) modulo p^2.
Proof: Consider the Taylor expansion of f about x:
f(x + y) = f(x) + f'(x)*y + (f''(x)/2!)*y^2 + (f'''(x)/3!)*y^3 + ... .
When x and the coefficients of f are integers, the factorial denominators will always cancel with factors arising from differentiation, so the coefficients of the y's in the above expansion are integers. Substituting y = p*t and taking the result modulo p^2, we get
f(x + p*t) ≡ f(x) + f'(x)*p*t (mod p^2).
Now substitute t = -(f(x)/p)*f'(x)^(-1). The division by p makes sense because we are assuming f(x) ≡ 0 (mod p), hence f(x) is divisible by p. We get
f(x - f(x)*f'(x)^(-1)) ≡ f(x) - f'(x)*f(x)*f'(x)^(-1) ≡ 0 (mod p^2).
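A quick numerical check of Theorem 2 (a Python sketch; the polynomial x^2 - 2 and the prime 7 are arbitrary choices of mine):

```python
# Check Theorem 2 with f(x) = x^2 - 2: 3 is a root modulo 7,
# and f'(3) = 6 is a unit modulo 7.
p, x = 7, 3
f = lambda t: t * t - 2
df = lambda t: 2 * t
v = pow(df(x), -1, p**2)        # f'(x)^(-1) modulo p^2 (Python 3.8+)
x2 = (x - f(x) * v) % p**2      # the lifted root, modulo 49
print(x2, f(x2) % p**2)         # 10 0
```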
So how does this help us solve problems over the integers? Once the modulus is larger than the possible root over the integers, we check whether the solution also works over the integers. Note that the modulus is squared in each step, so it grows very fast. This is analogous to the well-known rule of thumb about Newton's method -- that the number of digits of accuracy doubles with each iteration. In the p-adic case, that rule of thumb is actually precise: the number of p-adic digits of accuracy exactly doubles with each iteration.
In what follows, sometimes we need to factor a small prime out of an integer. The following procedure does that.
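In Python, such a procedure might look like this (the name factor_out is mine):

```python
def factor_out(n, p):
    """Return (e, m) with n = p**e * m and p not dividing m."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e, n
```

For example, factor_out(45, 3) returns (2, 5).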
To compute the square root of n, we use the Newton iteration x[k+1] = (x[k] + n*x[k]^(-1))*2^(-1). The inverse 2^(-1) cannot exist if p is even, thus we use p = 3. That means we have to factor out 3's from n, and we use the initial solution x[0] = 1 because 2 is not a square modulo 3 (so, once the 3's are removed, n must be congruent to 1 modulo 3). The inverse x[k]^(-1) is updated via its own Newton iteration from Theorem 1.
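A Python sketch of the procedure just described (the name and minor details are mine; once the modulus p exceeds n, the integer root, if it exists, must be one of the two residues x and p - x):

```python
def p_adic_isqrt(n):
    """Integer square root of n via 3-adic lifting, or None if n is not a square."""
    if n == 0:
        return 0
    e, m = 0, n
    while m % 3 == 0:            # factor out the 3's
        m //= 3
        e += 1
    if e % 2 or m % 3 != 1:      # 2 is not a square modulo 3
        return None
    p, x, v = 3, 1, 1            # invariants: x*x ≡ m and v*x ≡ 1 (mod p)
    while p <= m:
        p = p * p                            # square the modulus ...
        v = v * (2 - x * v) % p              # ... lift the inverse (Theorem 1)
        x = (x + m * v) * ((p + 1) // 2) % p # ... Newton step for x^2 = m
        v = v * (2 - x * v) % p              # ... re-attach the inverse to the new x
    x = min(x, p - x)            # the integer root, if any, is the smaller residue
    return x * 3 ** (e // 2) if x * x == m else None
```

For example, p_adic_isqrt(144) returns 12, while p_adic_isqrt(10) returns None.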
Amazingly enough, the above procedure compares favorably timewise with Maple's builtin isqrt. But the builtin command also computes the nearest integer approximation if the square root is not an integer.
Test on a number of 10,000 digits.
Maple's iroot is faster though.
To compute the r-th root of n for odd r, we use the Newton iteration x[k+1] = x[k] - (x[k]^r - n)*(r*x[k]^(r-1))^(-1). The power x[k]^(r-1) is computed separately to avoid repetition. A separate Newton iteration is used to update the inverse. We use p = 2 and factor out all 2's from n. If r is even, we just use Isqrt as many times as needed before using the algorithm for odd exponents.
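A Python sketch of the odd-exponent case (the name and details are mine; the inverse of r*x^(r-1) is maintained by the Newton iteration of Theorem 1, as described above):

```python
def iroot_odd(n, r):
    """r-th root of n for odd r >= 3 via 2-adic lifting; None if n is not an r-th power."""
    if n == 0:
        return 0
    e, m = 0, n
    while m % 2 == 0:            # factor out the 2's
        m //= 2
        e += 1
    if e % r:                    # the power of 2 alone rules n out
        return None
    p, x, v = 2, 1, 1            # invariants: x^r ≡ m and v*r*x^(r-1) ≡ 1 (mod p)
    while p <= m:
        p = p * p
        d = r * pow(x, r - 1, p) % p
        v = v * (2 - d * v) % p                  # lift the derivative's inverse (Theorem 1)
        x = (x - (pow(x, r, p) - m) * v) % p     # Hensel step for x^r = m (Theorem 2)
        d = r * pow(x, r - 1, p) % p
        v = v * (2 - d * v) % p                  # re-attach the inverse to the new x
    return x * 2 ** (e // r) if x ** r == m else None
```

Unlike the square-root case, odd r-th roots of odd residues modulo a power of 2 are unique, so no sign adjustment is needed at the end.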
Unlike Isqrt, Iroot is not competitive timewise with Maple's iroot , which is not builtin but uses extended-precision floating-point computation. Nonetheless, Iroot is still fast. Note that p is always a power of 2. All of the mod operations could be done by bit manipulations, but Maple does not represent large integers in binary, nor does Maple have an adequate bit-truncation command.
To detect whether n is a perfect power, we simply check every possible prime exponent up to a logarithmic bound.
If one were purely interested in perfect-power detection, then there would be no need to return the partial factorization. The recursive call is only made to refine this factorization; whether or not n is a perfect power has already been determined with certainty by the time the recursive call is made. Some small extra efficiency could be gained in the above by sieving for the possible prime exponents. See the paper by Bernstein.
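For comparison, a simplified Python sketch of the detection loop, without the partial factorization or the sieving refinement (the names are mine, and the integer r-th root here uses a plain binary search rather than the p-adic routines above):

```python
def perfect_power(n):
    """Return (x, r) with x**r == n and prime r >= 2, or None if no such pair exists."""
    def nth_root(n, r):
        # binary search for an exact integer r-th root
        lo, hi = 1, 1 << (n.bit_length() // r + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            t = mid ** r
            if t == n:
                return mid
            if t < n:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    def primes_up_to(k):
        sieve = [True] * (k + 1)
        for i in range(2, k + 1):
            if sieve[i]:
                yield i
                for j in range(i * i, k + 1, i):
                    sieve[j] = False

    # any base x >= 2 forces the exponent r <= log2(n)
    for r in primes_up_to(n.bit_length()):
        x = nth_root(n, r)
        if x is not None:
            return x, r
    return None
```

For example, perfect_power(64) returns (8, 2) and perfect_power(12) returns None. Only prime exponents need to be checked, since if n = x^(a*b) then n = (x^a)^b.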
References:
1. Eric Bach and Jeffrey Shallit, Algorithmic Number Theory, Volume 1: Efficient Algorithms (MIT Press, 1996).
2. Dario Bini and Victor Pan, Polynomial and Matrix Computations, Volume 1: Fundamental Algorithms (Birkhäuser, 1994).
3. Daniel J. Bernstein, "Detecting perfect powers in essentially linear time", Mathematics of Computation 67:223 (July 1998), pp. 1253-1283. http://cr.yp.to/papers/powers-ams.pdf
4. Donald Ervin Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd ed., Addison Wesley Longman, 1998).
5. Joachim von zur Gathen and Jürgen Gerhard, Modern Computer Algebra (Cambridge University Press, 1999).