Newton's method

Newton's method uses the derivative of a differentiable function to approximate its real or complex roots. For a function $f(x)$, the approximations are defined recursively by \[x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}.\] To begin the recursion, an initial guess $x_0$ must be chosen. Often the choice of $x_0$ determines which of several possible roots is found, and in some cases the initial guess can cause the recursion to cycle or diverge instead of converging to a root.
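
The recursion is easy to carry out by machine. Here is a minimal Python sketch of the iteration; the function name newton and its parameters are illustrative choices, not part of any standard library:

    def newton(f, fprime, x0, steps):
        """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i), starting from x0."""
        x = x0
        for _ in range(steps):
            x = x - f(x) / fprime(x)  # undefined when fprime(x) == 0
        return x

The failure cases discussed below correspond to fprime(x) being zero, the iterates cycling, or the iterates escaping to infinity.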

Derivation

At each step of the recursion, we have $x_i$ and seek a root of $f(x)$. Since every nonconstant linear function has exactly one root, as long as $f'(x_i) \neq 0$ we can construct the tangent-line approximation to $f(x)$ at $x_i$ and take its root as the next approximation. The tangent-line approximation of $f(x)$ about $x_i$ is \[f(x_i) + (x - x_i)f'(x_i).\] We seek the value $x = x_{i+1}$ such that the above expression equals $0$; hence, \[f(x_i) + (x_{i+1} - x_i)f'(x_i) = 0.\] Dividing by $f'(x_i)$, \[\frac{f(x_i)}{f'(x_i)} + x_{i+1} - x_i = 0.\] Therefore, the desired value of $x_{i+1}$ is \[x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}.\]
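
A quick numerical check of this derivation, as a Python sketch (the function $x^2 - x - 1$ is borrowed from the worked example below; the names f, df, and xi are illustrative):

    f  = lambda x: x**2 - x - 1
    df = lambda x: 2*x - 1
    xi = 1.5
    x_next = xi - f(xi) / df(xi)                # the Newton step
    assert f(xi) + (x_next - xi) * df(xi) == 0  # the tangent line vanishes at x_next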

Worked example

Problem: Find the values of the roots of $f(x) = x^2 - x - 1$.

Solution: We could use the quadratic formula to find that the roots are $\frac{1 \pm \sqrt 5}{2}$, but this approach does not immediately yield a decimal value.

To form a guess, we note that $\sqrt 5$ is slightly larger than $2$, so a suitable guess for the greater root is $\frac{1 + 2}{2} = 1.5$.

The derivative of $x^2 - x - 1$ is $2x - 1$ by the power rule for derivatives. Therefore, the recursive formula is

\[x_{i+1} = x_i - \frac{x_i^2 - x_i - 1}{2x_i - 1} = \frac{x_i^2 + 1}{2x_i - 1}.\]

Starting at $x_0 = 1.5$, we apply this formula repeatedly: \begin{alignat*}{2} x_1 &= \frac{1.5^2 + 1}{2 \cdot 1.5 - 1} &&= 1.625, \\  x_2 &= \frac{1.625^2 + 1}{2 \cdot 1.625 - 1} &&= 1.6180556, \\ x_3 &= \frac{1.6180556^2 + 1}{2 \cdot 1.6180556 - 1} &&= 1.6180340, \\ x_4 &= \frac{1.6180340^2 + 1}{2 \cdot 1.6180340 - 1} &&= 1.6180340. \\ \end{alignat*}
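
These iterates can be reproduced with a short Python loop (a sketch; printed to $7$ decimal places, the float64 values agree with the table above, although the underlying binary arithmetic differs slightly from the $8$-digit rounding used here):

    x = 1.5
    for i in range(1, 5):
        x = (x**2 + 1) / (2*x - 1)
        print(f"x_{i} = {x:.7f}")
    # x_1 = 1.6250000
    # x_2 = 1.6180556
    # x_3 = 1.6180340
    # x_4 = 1.6180340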

Because $x_4 = x_3$ as calculated, the ratio $\frac{f(x_3)}{f'(x_3)}$ must be very close to $0$. Since $f'(x_3) = 2.2360680$ is not too large, $f(x_3)$ must be quite close to $0$, so $x_3 = 1.6180340$ is a very good estimate of the greater root.

The sum of the roots is $1$ by Vieta's formulas, so the lesser root is simply $1 - 1.6180340 = -0.6180340$.

Failure cases

Although powerful, Newton's method is delicate and can be very sensitive to the initial guess and type of function, even when the function is differentiable everywhere.

Zero derivative

Suppose in the above example of finding a root of $x^2 - x - 1$, we had started with $x_0 = 0.5$. In the process of calculating $x_1$, then, we would have to divide by $f'(x_0) = 2x_0 - 1 = 0$, creating an undefined result.
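
In code, this failure appears as an immediate division by zero; for example, in Python:

    x = 0.5
    # f'(0.5) = 2*0.5 - 1 = 0, so the Newton step divides by zero:
    x = (x**2 + 1) / (2*x - 1)   # raises ZeroDivisionError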

Wrong root

Suppose we try to estimate the greater root as before, but start with $x_0 = -0.5$. Applying Newton's method to $x^2 - x - 1$ yields the following sequence of estimates:

\begin{alignat*}{2} x_1 &= \frac{(-0.5)^2 + 1}{2 \cdot (-0.5) - 1} &&= -0.625, \\  x_2 &= \frac{(-0.625)^2 + 1}{2 \cdot (-0.625) - 1}  &&= -0.6180556, \\ x_3 &= \frac{(-0.6180556)^2 + 1}{2 \cdot (-0.6180556) - 1} &&= -0.6180340, \\ x_4 &= \frac{(-0.6180340)^2 + 1}{2 \cdot (-0.6180340) - 1} &&= -0.61803395, \\ x_5 &= \frac{(-0.61803395)^2 + 1}{2 \cdot (-0.61803395) - 1} &&= -0.61803406, \\ x_6 &= \frac{(-0.61803406)^2 + 1}{2 \cdot (-0.61803406) - 1} &&= -0.61803395. \\ \end{alignat*}
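
Running the same Python loop as before from $x_0 = -0.5$ confirms this (a sketch; at full float64 precision, printed to $7$ decimal places, the estimates stabilize, so the small oscillation in the table's last digits does not reappear):

    x = -0.5
    for _ in range(6):
        x = (x**2 + 1) / (2*x - 1)
    print(f"{x:.7f}")   # -0.6180340: the lesser root, not the greater one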

These estimates converge to the lesser root, in this case because the initial estimate was closer to the lesser root than the greater root. In general, predicting which root Newton's method will finally converge to is difficult, and Newton's method may converge to a given root even if a closer root is available within the interval between the initial estimate and the given root.

Periodic behavior without finding a root

When finding roots of $x^2 - x - 1$ as above, the estimates eventually stabilize into a predictable pattern: in the $x_0 = 1.5$ case, $x_4 = x_3$, so every subsequent estimate will equal $x_3$; in the $x_0 = -0.5$ case, $x_6 = x_4$, so the subsequent estimates will oscillate, with every even estimate equal to $x_4$ and every odd estimate equal to $x_5$. These behaviors are artifacts of the limited precision of the calculations; with exact arithmetic, the estimates would continue to approach the root without ever becoming periodic.

However, for some functions, Newton's method can cycle without coming close to a root. Consider $f(x) = x^4 - 3x^2 - 2$, which has exactly two real roots $\pm \sqrt{r}$, where $r$ is the unique positive root of $x^2 - 3x - 2$. If we attempt to find the value of $\sqrt{r}$ using Newton's method on $f(x)$ with an initial guess $x_0 = 1$, then, since $f'(x) = 4x^3 - 6x$, \begin{align*} x_1 &= 1 - \frac{1 - 3 - 2}{4 - 6} = -1, \\ x_2 &= -1 - \frac{1 - 3 - 2}{-4 + 6} = 1. \\ \end{align*} Because the value of $x_{i+1}$ depends only on the single previous term $x_i$, by induction $x_{2n} = 1$ and $x_{2n+1} = -1$ for all nonnegative integers $n$, so the estimates never converge to a root.
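
A short Python check makes the cycle visible; every quantity involved is an integer, so floating-point rounding plays no role here:

    f  = lambda x: x**4 - 3*x**2 - 2
    df = lambda x: 4*x**3 - 6*x
    x = 1.0
    for i in range(1, 7):
        x = x - f(x) / df(x)
        print(i, x)   # alternates -1.0, 1.0, -1.0, ... and never nears a root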

Unbounded sequence of estimates

The function $f(x) = e^x$ is always positive, so it has no roots. Since $f(x) = e^x$ satisfies the differential equation $f(x) = f'(x)$ for all $x$, the ratio $\frac{f(x)}{f'(x)}$ is always $1$, so the sequence of estimates produced by Newton's method becomes $x_{i+1} = x_i - 1$, diverging in the negative direction.

The estimates can also decrease (or increase) without bound even if the function has a root. For example, $g(x) = e^x - e^{2x}$ has a root at $x = 0$. We have $g'(x) = e^x - 2e^{2x}$ (using the chain rule), so the steps of Newton's method are \[x_{i+1} = x_i - \frac{e^{x_i} - e^{2x_i}}{e^{x_i} - 2e^{2x_i}}.\] However, for $x_i < \ln \left( \frac{1}{2} \right)$, we have $e^{x_i} - e^{2x_i} > e^{x_i} - 2e^{2x_i} > 0$, so $x_{i+1} < x_i - 1$. Thus, if the initial guess is less than $\ln \left( \frac{1}{2} \right)$, each estimate is guaranteed to be less than the previous by a margin of more than $1$, so the estimates again diverge toward $-\infty$.
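
The following Python sketch illustrates this divergence; the starting value $-1$ is an arbitrary choice below the threshold $\ln \left( \frac{1}{2} \right) \approx -0.693$:

    import math
    g  = lambda x: math.exp(x) - math.exp(2*x)
    dg = lambda x: math.exp(x) - 2*math.exp(2*x)
    x = -1.0
    for _ in range(5):
        x = x - g(x) / dg(x)
        print(x)   # each estimate is smaller than the previous by more than 1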

Applications in Number Theory (Advanced)

Newton's method has a direct analogue in number theory: Hensel's Lemma, which operates in the $p$-adic integers $\mathbb{Z}_p$. In particular, take some polynomial $f\in\mathbb{Z}[x]$ and an integer $x$ such that $f(x)\equiv 0\pmod{p}$. Our goal is to find some $\hat{x}$ such that $f(\hat{x})\equiv 0\pmod{p^2}$. In the $p$-adics, however, we cannot simply "lift" a solution arbitrarily: we must approximate a solution at the next power of $p$, and then refine it iteratively.

Here is what a single lifting step looks like. WLOG assume that $x$ is some $a_0$ between $0$ and $p-1$. We search for $\hat{x}=a_0+a_1p$ such that $f(\hat{x})\equiv 0\pmod{p^2}$. Note the following critical step:

\[f(\hat{x})=f(a_0+a_1p)=f(a_0)+f'(a_0)a_1p+(a_1p)^2b\]

for some $b\in\mathbb{Z}$. By assumption we know that $f(a_0)\equiv 0\pmod{p}$, so let $f(a_0)=pt$ for some $t\in\mathbb{Z}$. The desired congruence holds modulo $p^2$ exactly when $t+f'(a_0)a_1\equiv 0\pmod{p}$. When $f'(a_0)\not\equiv 0\pmod{p}$, this yields $a_1\equiv -t/f'(a_0)\pmod{p}$, which means that

\[\hat{x}=a_0+a_1p=a_0-\frac{pt}{f'(a_0)}=x-\frac{f(a_0)}{f'(a_0)}.\]

This is exactly a step of Newton's method! We have found some $\hat{x}$ such that $f(\hat{x})\equiv 0\pmod{p^2}$. But this isn't the whole story.
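
The lifting step is easy to carry out concretely. Below is a minimal Python sketch for the illustrative instance $f(x) = x^2 - 2$, $p = 7$, $a_0 = 3$ (values chosen here for demonstration, not taken from the text above):

    p = 7
    f  = lambda x: x**2 - 2
    df = lambda x: 2*x

    a0 = 3                                # f(3) = 7, so f(a0) ≡ 0 (mod 7)
    t  = f(a0) // p                       # f(a0) = p*t
    a1 = (-t * pow(df(a0), -1, p)) % p    # a1 ≡ -t / f'(a0)  (mod p)
    x_hat = a0 + a1 * p                   # here x_hat = 10
    assert f(x_hat) % p**2 == 0           # 10**2 = 100 ≡ 2 (mod 49)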


Theorem: Let $f(x)\in\mathbb{Z}_p[x]$ and $x\in\mathbb{Z}_p$ be such that $f(x)\equiv 0\pmod{p^n}$, and let $v$ denote the $p$-adic valuation on $\mathbb{Z}_p$. If $k=v(f'(x))<n/2$, then $\hat{x}=x-f(x)/f'(x)$ satisfies (1) $f(\hat{x})\equiv 0\pmod{p^{n+1}}$, (2) $\hat{x}\equiv x\pmod{p^{n-k}}$, and (3) $k=v(f'(x))=v(f'(\hat{x}))$.

Proof: Let $f(x)=p^ny$ for some $y\in\mathbb{Z}_p$ and $f'(x)=p^ku$ where $u\in\mathbb{Z}_p^{\times}$. By definition we see that

\[\hat{x}-x=-\frac{f(x)}{f'(x)}=-\frac{p^ny}{p^ku}=-p^{n-k}yu^{-1}\in p^{n-k}\mathbb{Z}_p.\]

This proves (2). Next, in the Taylor expansion of $f$ about $x$, the first-order term $f'(x)(\hat{x}-x)$ equals $-f(x)$ by the definition of $\hat{x}$, so

\[f(\hat{x})=f(x)-\frac{f(x)f'(x)}{f'(x)}+(\hat{x}-x)^2t=(\hat{x}-x)^2t\]

where $t\in\mathbb{Z}_p$. Also notice that

\[f(\hat{x})=(\hat{x}-x)^2t\in p^{2n-2k}\mathbb{Z}_p,\]

and since $k<n/2$ forces $n-2k\ge 1$, we have $p^{2n-2k}\mathbb{Z}_p=p^{n+1}\cdot p^{n-2k-1}\mathbb{Z}_p\subseteq p^{n+1}\mathbb{Z}_p$, which finishes (1). To prove (3) we compute the valuation of $f'(\hat{x})$. Write $\hat{x}-x=p^{n-k}z$ with $z=-yu^{-1}\in\mathbb{Z}_p$. Taking the Taylor expansion of $f'$ at the point $x$ gives, for some $s\in\mathbb{Z}_p$,

\[f'(\hat{x})=f'(x+(\hat{x}-x))=f'(x)+(\hat{x}-x)s=p^ku+p^{n-k}zs=p^k(u+p^{n-2k}zs)=p^kw.\]

But since $k<n/2$ implies $n-2k>0$, and $u\in\mathbb{Z}_p^{\times}$, we have

\[w=u+p^{n-2k}zs\in u+p\mathbb{Z}_p\subseteq \mathbb{Z}_p^{\times},\]

which proves that $k=v(f'(x))=v(f'(\hat{x}))$. $\square$

This theorem supplies the formal motivation for Hensel's Lemma. Moreover, the proof of the lemma below is simply an iteration of the construction we have already carried out.


Hensel's Lemma: Assume $f\in\mathbb{Z}_p[x]$ and $x\in\mathbb{Z}_p$ satisfy $f(x)\equiv 0\pmod{p^n}$. If $k=v(f'(x))<n/2$, then there exists a unique root $\xi\in\mathbb{Z}_p$ of $f$ such that $\xi\equiv x\pmod{p^{n-k}}$ and $k=v(f'(x))=v(f'(\xi))$.

Proof: We first prove the existence of $\xi$. Let $x_0=x$ be given. By the theorem above we can construct $x_1\in\mathbb{Z}_p$ such that \[x_1\equiv x_0\pmod{p^{n-k}}~\text{and}~f(x_1)\equiv 0\pmod{p^{n+1}},~v(f'(x_0))=v(f'(x_1))=k.\] Iterating this construction produces a sequence $(x_m)_{m\ge 0}$ that converges to some $\xi\in\mathbb{Z}_p$ with $f(\xi)=0$ and $\xi\equiv x\pmod{p^{n-k}}$. Now we prove that $\xi$ is unique. For the sake of contradiction, suppose that there are two such roots $\xi$ and $\omega$ satisfying

\[\xi\equiv \omega\pmod{p^{n-k}}.\]

Since it was given that $k<n/2$, we have $2k<n$. This means that $n-k\ge k+1$, so we have at the very least

\[\xi\equiv \omega\pmod{p^{k+1}}.\]

Now we have

\[f(\omega)=f(\xi)+f'(\xi)(\omega-\xi)+(\omega-\xi)^2a\]

for some $a\in\mathbb{Z}_p$. Since $\xi$ and $\omega$ were assumed to be roots, we factor to get

\[(\omega-\xi)(f'(\xi)+(\omega-\xi)a)=0,\]

but since $v(f'(\xi))=k$ and $v((\omega-\xi)a)\ge k+1$, the second factor has valuation exactly $k$, so

\[f'(\xi)+(\omega-\xi)a\ne 0.\]

Because $\mathbb{Z}_p$ is an integral domain, this means that $\omega-\xi=0$, that is, $\omega=\xi$, so uniqueness holds. $\square$
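
The existence argument translates directly into code. The Python sketch below iterates the lifting step for the same illustrative instance $f(x) = x^2 - 2$, $p = 7$; since $k = v(f'(3)) = 0$ here, each step in fact doubles the power of $p$ to which the root is known, faster than the single extra power the theorem guarantees:

    p = 7
    f  = lambda x: x**2 - 2
    df = lambda x: 2*x

    x, mod = 3, p                        # f(x) ≡ 0 (mod 7), f'(x) a unit mod 7
    for _ in range(4):
        mod = mod**2
        x = (x - f(x) * pow(df(x), -1, mod)) % mod   # the Newton/Hensel step
        assert f(x) % mod == 0
    print(x, mod)                        # a root of x**2 - 2 modulo 7**16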

See also

Secant method
Bisection method