There exists a $\xi$ in $[a,b]$ with $f(\xi) = 0$.
ANS: TRUE, IVT
There exists a $\xi$ in $[a,b]$ with $f'(\xi) = 0$.
ANS: FALSE; this smells like the mean value theorem, but isn't.
There exists a $\xi$ in $[a,b]$ with $f'(\xi) \cdot (b-a) = f(b) - f(a)$.
ANS: FALSE, though if we assumed $f'(x)$ exists, then this is the mean value theorem.
What are the assumptions on $f$ used in your proof?
ANS: Did this in class assuming $C^2$, and using the following to solve for $f'(x)$:
$$
f(x+h) = f(x) + f'(x)h + \mathcal{O}(h^2), \quad f(x-h) = f(x) - f'(x)h + \mathcal{O}(h^2).
$$
We needed a very large value of $k$. What if we tried this over a smaller interval, say $0 \leq \xi \leq 1/2$, instead? How big would $k$ need to be then?
We used $f^{(k)}(x) = \pm 1 / (k (1+x)^k)$.
ANS: We plug in and need to bound:
$$
|\pm 1| \cdot \frac{1}{(k+1)(k+1)!} \cdot \frac{1}{(1+\xi)^{k+1}} \cdot x^{k+1}.
$$
Taking $x=1/2$, we can bound this with
$$
\frac{1}{(k+1)(k+1)!} \cdot 1 \cdot (1/2)^{k+1}.
$$
Solving with the computer we get $k=13$, as $2^{-53} = 1.11\dots \cdot 10^{-16}$ and:
```
f(k) = 1/(k+1)/factorial(k+1) * (1/2)^(k+1)
xs = 1:15
ys = map(f, xs)
[xs ys]
```

## Chapter 2

### Some sample problems

Can you think of why the direct approach might cause issues for some values of $x$ in that range?
ANS: Near $x=0$ we have subtraction of like-sized quantities. This is a possible source of error. The new expression doesn't involve that; the issues it does have near $x=0$ are addressed by using `expm1`.
ANS: Use $\log(x/y)$ to avoid issues if $x$ and $y$ are close.
ANS: Using a Taylor expansion, we see that if $x$ is close to $0$ this is just $x^{-3}\cdot(-x^3/3!) = -1/6$.
.
ANS: The issue is near $0$: the two expansions are $x - x^3/3! + \mathcal{O}(x^5)$ and $x + x^3/3 + \mathcal{O}(x^5)$, so near $0$ we could use their difference, $-x^3/2$.
ANS: We had the theorem in class that said if $2^{-q} \leq 1 - y/x$ then at most $q$ binary bits are lost. So we have to solve for the $x$ which makes $2^{-2} = 1 - 1/\sqrt{x^2 + 1}$. Being lazy, we have:
```
using SymPy
u = symbols("u")
solve(1 - 1/sqrt(u^2 + 1) - 1/4, u)
```
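A quick numeric check (the closed form $\sqrt{7}/3$ is my own simplification of the positive solution, not from the notes):
```
u = sqrt(7)/3                # positive solution of 1 - 1/sqrt(u^2 + 1) = 1/4
1 - 1/sqrt(u^2 + 1)          # ≈ 0.25 = 2^-2, so at most 2 bits are lost for larger x
```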
What value of $k$ will ensure that the error over $[0, 1/4]$ is no more than $10^{-3}$?
ANS: The error term is the last term:
$$
|(-1)^{k+1}\xi^{2k+1}/(2k+1)!| = \xi^{2k+1} \cdot \frac{1}{(2k+1)!} \leq (1/4)^{2k+1} \cdot \frac{1}{(2k+1)!}.
$$
What $k$'s make this less than $10^{-3}$? We check with the computer:
```
f(k) = (1/4)^(2k+1) * 1/factorial(2k+1)
f(1), f(2)
```
So $k=2$ works.

That is, floating point multiplication is not associative. You can verify by testing `(0.1 * 0.2)*0.3` and `0.1 * (0.2 * 0.3)`.
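A minimal check of this claim (just evaluating both groupings):
```
x = (0.1 * 0.2) * 0.3
y = 0.1 * (0.2 * 0.3)
x == y, x - y        # the two groupings need not agree exactly in Float64
```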
ANS: Suppose $x$, $y$ and $z$ are floating point numbers. Then $fl(xy) = xy(1+\delta_1)$ and $fl(yz) = yz(1 + \delta_2)$, where both $\delta$'s are small but need not be the same. So the left hand side is:
$$
xy(1+\delta_1) \cdot z (1 + \delta_3) = xyz(1 + \delta_1)\cdot(1+\delta_3).
$$
Whereas the right hand side is:
$$
x(yz(1+\delta_2))(1+\delta_4) = xyz (1+\delta_2)\cdot (1+\delta_4).
$$
Since the $\delta$'s can't be assumed equal, the answers aren't the same every time.

That is, if you computed the difference quotient $(f(x+h)-f(x))/h$ in floating point, would you expect the values to converge as $h$ gets smaller and smaller? Why?
ANS: NO! We get $fl(f(x+h))$ is basically $f(x+h)(1+\delta_1)$ and $fl(f(x)) = f(x)(1+\delta_2)$. So all told, the difference in the numerator is
$$
f(x + h) - f(x) + f(x+h) \cdot \delta_1 - f(x) \cdot \delta_2 = f(x + h) - f(x) + c\delta,
$$
where $c$ is some constant that we don't make precise; the point is that there can be an error of size a constant times $\delta$. This is small, but it is a problem, as its size does not depend on $h$. If we let $h$ "go to" zero, the contributed error is $c\delta/h$, which gets large.
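A quick numeric illustration (not part of the original answer; the function $\sin$ and the point $x=1$ are just a chosen example): the forward-difference error first shrinks with $h$, then grows again once the $c\delta/h$ term dominates.
```
f(x) = sin(x)
x = 1.0
hs = [10.0^(-i) for i in 1:15]
errs = [abs((f(x + h) - f(x))/h - cos(x)) for h in hs]   # compare to the exact derivative
[hs errs]
```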
ANS: Using the derivative, $[\log(y(x))]' = y'(x)/y(x)$, we get from a first order Taylor expansion:
$$~
\log(y(x+h)) - \log(y(x)) \approx y'(x)/y(x) \cdot h.
~$$
But $y'(x) \approx (y(x+h) - y(x))/h$, so the above becomes:
$$~
y'(x)/y(x) \cdot h \approx \frac{y(x+h) - y(x)}{h} \cdot \frac{1}{y(x)} \cdot h = \frac{y(x+h) - y(x)}{y(x)}.
~$$
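A small check (the function $y(x) = e^{x^2}$ and the values are just an example I chose): the log difference and the relative change nearly agree for small $h$.
```
y(x) = exp(x^2)
x, h = 1.0, 1e-3
log(y(x + h)) - log(y(x)), (y(x + h) - y(x))/y(x)    # both ≈ 0.002
```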
Let $f(x) = x^2 - 2$. Starting with $a_0, b_0 = 1, 2$, find $a_4$, $b_4$.
ANS: This is for the bisection method:
We can see that
```
a0, b0 = 1//1, 2//1
c0 = (a0 + b0)/2   ## f(c0) > 0
a1, b1 = a0, c0
c1 = (a1 + b1)/2   ## f(c1) < 0
a2, b2 = c1, b1
c2 = (a2 + b2)/2   ## f(c2) < 0
a3, b3 = c2, b2
c3 = (a3 + b3)/2   ## f(c3) > 0
a4, b4 = a3, c3
c4 = (a4 + b4)/2
a4, b4, c4, abs(sqrt(2) - c4) <= 1/2^5 * (b0 - a0)
```
(11//8, 23//16, 45//32, true)
Let $e_n$ be $c_n - c$. The order of convergence of $c_n$ is $q$ provided $\lim_{n\rightarrow\infty} |e_{n+1}|/|e_n|^q = A$ for some positive constant $A$.
Using the bound above, what is the obvious guess for the order of convergence?
ANS: We have $(b_n - a_n) / (b_{n-1} - a_{n-1}) = 1/2$, so we expect $|e_{n+1}|/|e_n|$ to be around $1/2$ too. That is linear convergence ($q=1$).
Explain why the bisection method is no help in finding the zeros of
.
ANS: The function doesn't cross $0$ (it doesn't change sign), so we can't find a bracketing interval $a_0$, $b_0$.
In floating point, the computation of the midpoint via $(a+b)/2$ is
discouraged and using $a + (b-a)/2$ is suggested. Why?
ANS: The errors follow $fl(a+b) = (a+b)(1+\delta)$, so if $a$ and $b$ are big, then the error $fl(a+b) - (a+b) = (a+b)\delta$ is bigger. This is most dramatic with overflow.
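A sketch of the overflow case (the values are chosen only so the sum exceeds the largest Float64):
```
a, b = 1.5e308, 1.7e308        # both representable, but a + b overflows
(a + b)/2, a + (b - a)/2       # (Inf, ≈1.6e308)
```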
Mathematically if $a < b$, it is always the case that there exists a $c = (a+b)/2$ and $a < c < b$. Is this also always the case in floating point? Can you think of an example of when it wouldn't be?
ANS: No. If $a = 1$ and $b = 1^+$ (the very next floating point number after $1$), then we can't fit a value in between, so the computed midpoint is one of the endpoints.
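A quick check of that example, using `nextfloat` to get the adjacent Float64:
```
a = 1.0
b = nextfloat(a)        # the very next Float64 after 1.0
m = (a + b)/2
m == a || m == b        # true: the computed midpoint is an endpoint
```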
To compute $\pi$ as a solution to $\sin(x) = 0$, one might use the bisection method with $a_0, b_0 = 3,4$. Were you to do so, how many steps would it take to find an error of no more than $10^{-16}$?
ANS: We saw that we need to solve for $n$ with
$$~
2^{-(n+1)} (b_0 - a_0) \leq 10^{-16}
~$$
This gave:
$$~
n \geq 16 \cdot \frac{\log(10)}{\log(2)} - 1
~$$
Or in this case:
```
p = 16
ceil(p * log(10)/log(2) - 1)
```
53.0
A simple zero for a function $f(x)$ is one where $f'(x) \neq 0$. Some algorithms have different convergence properties for functions with only simple zeros as compared to those with non-simple zeros. Would the bisection algorithm have a difference?
ANS: Well, yes and no. The error bound doesn't depend on $f'(x)$ so the answer is no. However, when a zero is not simple, the function may not cross the $x$ axis. That can be an issue.
If you answered yes above, you could still be right, even though you'd be wrong mathematically (Why? look at the bound on the error and the assumptions on $f$.). This is because for functions with non simple zeros, you can have a lot of numeric issues creep in. The book gives an example of a function like $f(x) = (x-1)^5$. Explain what is going on with this graph near $x=1$:
```
using Plots
f(x) = x^5 - 5x^4 + 10x^3 - 10x^2 + 5x - 1
plot(f, 0.999, 1.001)
```
ANS: The floating point computation has a lot of error, as we aren't doing it exactly (or efficiently). Roughly we have
$$~
fl(f(x)) = f(x) + \mathcal{O}(\epsilon)
~$$
Suppose $x<1$ but close to $1$. When $f(x)$ is near $0$, the error term, which may be positive or negative, can push the floating point value above the $x$ axis even though mathematically $f(x) < 0$.

For $f(x) = x^2 - 2$, and $x_0 = 1$ and $x_1 = 1.5$, compute 3 steps of a) the bisection method, b) Newton's method, c) the secant method.
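No worked answer appears in the notes for this one; here is a minimal sketch of the three computations (the helper names `bisect3`, `newton3`, and `secant3` are my own, not from the notes):
```
f(x) = x^2 - 2
fp(x) = 2x                        # f'(x), needed for Newton's method

function bisect3(a, b)            # three bisection steps; return the final midpoint
    for _ in 1:3
        c = a + (b - a)/2
        f(a) * f(c) < 0 ? (b = c) : (a = c)
    end
    a + (b - a)/2
end

function newton3(x)               # three Newton steps from x
    for _ in 1:3
        x = x - f(x)/fp(x)
    end
    x
end

function secant3(x0, x1)          # three secant steps from x0, x1
    for _ in 1:3
        x0, x1 = x1, x1 - f(x1) * (x1 - x0)/(f(x1) - f(x0))
    end
    x1
end

bisect3(1.0, 1.5), newton3(1.5), secant3(1.0, 1.5), sqrt(2)
```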
The function $f(x) = \sin(x)$ has $[3,4]$ as a bracketing interval. Give a bound on the error $c_n - r$ after 10 steps of the bisection method.
ANS: With $n=10$ we have $|c_{10} - r| \leq \frac{1}{2}(b_{10}-a_{10}) = \frac{1}{2} \cdot 2^{-10}(b_0-a_0)=2^{-11}$.
The function $f(x) = x^2 - s$, $s > 0$, has $\sqrt{s}$ as a solution. Compute $1/2 \cdot f''(\xi)/f'(x_0)$ for $x_0 = s$. Compute the error, $e_1$.
ANS: We have $f'(x_0) = 2s$ (as $f'(x) = 2x$ and $x_0 = s$) and $f''(\xi) = 2$, so $1/2\cdot 2/(2s) = 1/(2s)$. Then $e_1 \approx 1/(2s)\cdot e_0^2 = 1/(2s)\cdot(s-\sqrt{s})^2$.
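A quick check with $s = 2$ (my choice of value); since $f''$ is constant, the predicted $e_1$ matches one Newton step exactly:
```
s = 2
x0 = s
x1 = x0 - (x0^2 - s)/(2x0)     # one Newton step for f(x) = x^2 - s
e0, e1 = x0 - sqrt(s), x1 - sqrt(s)
e1, e0^2/(2s)                  # both ≈ 0.08578...
```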
For $f(x) = \sin(x)$, find an interval $[-\delta, \delta]$ for which the
Newton iterates will converge quadratically to $0$.
ANS: We need to solve $\delta \cdot C(\delta) < 1$, where $C(\delta) = \frac{1}{2} \cdot \max_{|x| \leq \delta}|\sin(x)| / \min_{|y| \leq \delta}|\cos(y)|$. We note:
$$~
C(\delta) \leq 1/2 \cdot \frac{1}{1 - \delta^2/2}.
~$$
This uses $|\sin(x)| \leq 1$ and $\cos(x) > 1 - x^2/2$ near $0$. Solving then
$$~
\delta \cdot \frac{1}{2}\frac{1}{1 - \delta^2/2} = 1,
~$$
we get $\delta^2 + \delta - 2 = 0$, which is solved by $\delta = 1$. So any $0 < \delta < 1$ should work.
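To see it numerically (a sketch; the starting value $0.9$ is just a choice inside the interval):
```
xs = [0.9]                             # x0 inside (-1, 1)
for i in 1:5
    push!(xs, xs[end] - tan(xs[end]))  # Newton step: x - sin(x)/cos(x) = x - tan(x)
end
xs                                     # the iterates fall to 0 at least quadratically
```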
Newton's method is applied to the function $f(x) = \log(x) - s$ to find $e^s$. If $x_n < e^s$ show $x_{n+1} < e^s$. If $e_0 > 0$ yet $x_1 > 0$, show $e_1 < 0$. (mirror the proof of one of the theorems)
ANS: This has $f'>0$ and $f''< 0$, so we have
$$
e_{n+1} = - \lambda e_n^2,
$$
so $e_1$, $e_2$, $\dots$ are all negative, meaning $x_i < r$ for $i \geq 1$. Since $f'>0$ and $f(x_i) < 0$ when $i \geq 1$, we must have that the $x_i$ are increasing for $i\geq 1$.
Suppose a student tries the following for Newton's method: $x_{n+1}=f(x_n)/f'(x_n)$. Is this method guaranteed to converge? (Is the mapping clearly contractive?) If by chance it did converge, what equation would the fixed point satisfy?
ANS: This is not clearly contractive (take $f(x) = x$ and it won't be). However, if it did converge, we would have $s=f(s)/f'(s)$, or $sf'(s) = f(s)$. The fixed point would be unlikely to be a zero of $f$.
Is $F(x) = \sqrt{x}$ contractive over $C=[0,1]$? Show it is or
isn't.
ANS: We have $F(x)-F(y) = F'(\xi)(x-y)$, so we would need some guarantee that $|F'(\xi)| < 1$, but this won't hold for $x < 1/4$. Looking there we see:
```
a, b = 1/8, 1/4
abs(sqrt(a) - sqrt(b)) / (b - a)   # bigger than 1
```
1.1715728752538097
The expression $\sqrt{p + \sqrt{p + \sqrt{p + \cdots}}}$ converges. What does the answer satisfy? (Express as $x_{n+1} = \sqrt{p + x_n}$.)
ANS: From $s = \sqrt{p+s}$ we get $s$ must solve $s^2 - s - p = 0$ with $s > 0$.
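A quick illustration (with $p = 2$, my own choice): iterating $x_{n+1} = \sqrt{p + x_n}$ settles on the positive root of $s^2 - s - p = 0$.
```
p = 2
xs = [1.0]                          # any nonnegative starting value works
for i in 1:30
    push!(xs, sqrt(p + xs[end]))    # x_{n+1} = sqrt(p + x_n)
end
xs[end], (1 + sqrt(1 + 4p))/2       # both ≈ 2, the positive root of s^2 - s - p = 0
```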