No. The function does not cross $0$.
Yes, but the proofs of this that we saw assume a simple zero, and $r=1$ is not a simple zero.
Even though starting at any $x_0 > 0$ is guaranteed to converge, the bound governing the convergence is quite large when the second derivative is large (which happens for $x_n \gg 1$) and when $f'$ is close to zero (which happens when $x_0$ is less than $1$).
We know that if both $f'$ and $f''$ are positive, there will be convergence.
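As a quick illustration (only a sketch; the specific function $f(x) = x^2 - 1$ is an assumed stand-in, chosen because $f' > 0$ and $f'' > 0$ for $x > 0$), a few Newton steps started at a large $x_0$:

f(x) = x^2 - 1                # an assumed stand-in with f', f'' > 0 on x > 0
fp(x) = 2x
nm(x) = x - f(x) / fp(x)      # one Newton step
x0 = 10.0                     # a start with x0 >> 1
x1 = nm(x0); x2 = nm(x1); x3 = nm(x2); x4 = nm(x3)
x0, x1, x2, x3, x4            # the iterates decrease toward the zero at 1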
Let's see on the computer:
f(x) = sin(x)
appfp(x,y) = (f(y) - f(x))/(y-x)
x0, x1 = 3//1, 4//1
x2 = x1 - f(x1)/appfp(x1, x0)
x3 = x2 - f(x2)/appfp(x2, x1)
x0, x1, x2, x3
(3//1,4//1,3.157162792479947,3.1394590982180786)
We have this error:
err = x3 - pi
-0.0021335553717145572
We need to compare the rates adjusted for the cost per step (a Newton step uses two function evaluations, $f$ and $f'$; a secant step needs only one new one):
## newton
mu_n = 2; steps_n = 2
## secant
mu_s = (1 + sqrt(5))/2; steps_s = 1
mu_n^(1/steps_n), mu_s^(1/steps_s)
(1.4142135623730951,1.618033988749895)

Since $1.618\ldots > 1.414\ldots$, the secant method gains more accuracy per function evaluation.
Divide this by $e_n$, then use the result for both the case $n$ and the case $n-1$. Use these pieces to identify the limit as $n$ goes to $\infty$ of

$$~
\frac{f(x_n)/e_n - f(x_{n-1})/e_{n-1}}{x_n - x_{n-1}}.
~$$

We did this in class. It is also in the book, in the estimate for the error in the secant method. The limit is $\frac{1}{2} f''(r)$.
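Sketching that computation: expanding $f$ in a Taylor series about $r$ (where $f(r) = 0$ and $e_n = x_n - r$) gives

$$~
\frac{f(x_n)}{e_n} = \frac{e_n f'(r) + \frac{e_n^2}{2} f''(r) + \cdots}{e_n} = f'(r) + \frac{e_n}{2} f''(r) + \cdots,
~$$

and since $x_n - x_{n-1} = e_n - e_{n-1}$, the $f'(r)$ terms cancel in the numerator and

$$~
\frac{f(x_n)/e_n - f(x_{n-1})/e_{n-1}}{x_n - x_{n-1}} = \frac{\frac{1}{2} f''(r)(e_n - e_{n-1}) + \cdots}{e_n - e_{n-1}} \rightarrow \frac{1}{2} f''(r).
~$$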
We showed this in class:

$$~
|F(x) - F(y)| = |x/2 - y/2 + f(x) - f(y)| \leq |x/2 - y/2| + |f(x) - f(y)| = (1/2)|x-y| + |f'(\xi)||x-y| \leq (1/2 + 1/4) |x-y| = (3/4)|x-y|,
~$$

so $F$ is a contraction.
We showed in class that if $F'(s) \neq 0$ at the fixed point $s$, the order of convergence is linear. Here

$$~
F'(x) = (x/2 + f(x))' = 1/2 + f'(x) \geq 1/2 - 1/4 = 1/4 > 0.
~$$

So in particular, at the fixed point $s$, $F'(s) \geq 1/4 > 0$, and the order is linear.
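To spell out the linear order (a standard mean value theorem step, with $e_n = x_n - s$): one step of the iteration gives

$$~
e_{n+1} = x_{n+1} - s = F(x_n) - F(s) = F'(\xi_n)(x_n - s) = F'(\xi_n) e_n,
~$$

so $|e_{n+1}|/|e_n| \rightarrow |F'(s)| > 0$. The error shrinks by a roughly constant factor each step, which is exactly linear convergence.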
(As the book says, this is $x_{n+1} = \sqrt{p + x_n}$.)
We saw in class that for $p > 0$ the fixed point is where the graph of $F(x) = \sqrt{p + x}$ intersects the line $y = x$, that is, at a solution of $x = \sqrt{p + x}$. We solved this using the quadratic formula.
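For reference, the algebra: squaring both sides of $x = \sqrt{p + x}$ gives a quadratic whose positive root is the fixed point,

$$~
x^2 = p + x, \quad x^2 - x - p = 0, \quad x = \frac{1 + \sqrt{1 + 4p}}{2},
~$$

where we take the $+$ root, as the fixed point must be nonnegative.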
Say $A=LM$ where both $L$ and $M$ are lower triangular. We need to show that if $j > i$, then $a_{ij} = 0$. But

$$~
a_{ij} = \sum_k l_{ik} m_{kj}.
~$$
For $k > i$ we have $l_{ik} = 0$, and for $k < j$ we have $m_{kj} = 0$. Since $j > i$, every index $k$ satisfies $k > i$ or $k < j$ (if $k \leq i$, then $k \leq i < j$), so each product $l_{ik} m_{kj}$ is zero and hence so is the sum.
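A quick numerical spot check of this fact (just a sketch; tril comes from the LinearAlgebra standard library on recent Julia versions and extracts the lower triangular part of a random matrix here):

using LinearAlgebra                # for tril on recent Julia versions
LL = tril(rand(4,4))               # a random lower triangular matrix
MM = tril(rand(4,4))               # another one
P = LL * MM
P - tril(P)                        # the strictly upper part of the product is all zeros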
We solve by forward substitution:
x1 = 7/1                  # 1x1 = 7
x2 = (8 - 2x1)/3          # 2x1 + 3x2 = 8
x3 = (9 - 4x1 - 5x2)/6    # 4x1 + 5x2 + 6x3 = 9
[x1, x2, x3]
3-element Array{Float64,1}:
  7.0
 -2.0
 -1.5
Did you need to use a permutation matrix?
We solved this in class with partial pivoting. It got a bit messier that way. Here we solve without pivoting:
Aorig = [2//1 0 -8; -4 -1 0; 7 4 -5]
A = copy(Aorig)
L = diagm([1//1, 1, 1])
L[2,1] = A[2,1] / A[1,1]
A[2,:] = A[2,:] - L[2,1] * A[1,:]
L[3,1] = A[3,1] / A[1,1]
A[3,:] = A[3,:] - L[3,1] * A[1,:]
L[3,2] = A[3,2] / A[2,2]
A[3,:] = A[3,:] - L[3,2] * A[2,:]
U = A
L, U, Aorig - L*U
(3x3 Array{Rational{Int64},2}:
  1//1   0//1  0//1
 -2//1   1//1  0//1
  7//2  -4//1  1//1,

 3x3 Array{Rational{Int64},2}:
  2//1   0//1    -8//1
  0//1  -1//1   -16//1
  0//1   0//1   -41//1,

 3x3 Array{Rational{Int64},2}:
  0//1  0//1  0//1
  0//1  0//1  0//1
  0//1  0//1  0//1)
Use your answer to solve $Ax=[1,2,3]^T$.
We solve $Ly=b$ and then $Ux = y$. First:
b = [1,2,3]
y = L \ b
3-element Array{Rational{Int64},1}:
  1//1
  4//1
 31//2
And then
x = U \ y

3-element Array{Rational{Int64},1}:
 -83//82
  84//41
 -31//82
By hand, we need to do forward substitution to solve for $y$ and back substitution to solve for $x$.
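A sketch of what that looks like in code (the helper functions forwardsub and backsub below are illustrative, not from the book, and assume nonzero diagonal entries):

# solve L y = b for lower triangular L by forward substitution
function forwardsub(L, b)
    n = length(b)
    y = zeros(n)
    for i in 1:n
        s = float(b[i])
        for j in 1:i-1
            s -= L[i,j] * y[j]
        end
        y[i] = s / L[i,i]
    end
    return y
end

# solve U x = y for upper triangular U by back substitution
function backsub(U, y)
    n = length(y)
    x = zeros(n)
    for i in n:-1:1
        s = float(y[i])
        for j in i+1:n
            s -= U[i,j] * x[j]
        end
        x[i] = s / U[i,i]
    end
    return x
end

backsub(U, forwardsub(L, b))   # reproduces, in floating point, the x found above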
This is not true in general. Just take $L=I$, then $UL=U$ is upper triangular.
This is not the case. Take $i=2, j=1$; then $a_{ij} = 2$ while $a_{22} a_{11} = 1 \cdot 1 = 1$.
Does $A$ have a Cholesky decomposition?
No, the matrix is not symmetric positive definite.
using SymPy
A = [Sym(4) 1 2; 1 2 3; 2 1 2]
The LU decomposition is given by:
L, U, p = lu(A)
(⎡ 1    0   0⎤
 ⎢1/4   1   0⎥
 ⎣1/2  2/7  1⎦,

 ⎡4   1    2 ⎤
 ⎢0  7/4  5/2⎥
 ⎣0   0   2/7⎦,

 [1,2,3])
From this, we have inverses:
inv(L)
and
inv(U)
Show that we can find $A^{-1}$ using these two matrices. What is the value?
We have $A^{-1} = (LU)^{-1} = U^{-1} L^{-1}$, so we need to multiply:
inv(U) * inv(L)
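As a sanity check (assuming the L, U, and A from above are still defined), multiplying the product by $A$ should return the identity:

Ainv = inv(U) * inv(L)
Ainv * A          # should be the 3x3 identity matrix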
This is the largest of 10, 5, and 16, so it is 16.
For this we have to find $\|A\|_\infty \cdot \| A^{-1} \|_\infty$. We can tell that $\|A\|_\infty$ is the largest of the absolute row sums $1$, $5$, and $15$, so it is $15$. We have
A = [Sym(1) 0 0; 2 3 0; 4 5 6]
inv(A)
So the norm of $A^{-1}$ is the largest of $1$, $1$, and $5/9$, so it is $1$. Hence $\kappa(A, \infty) = 15 \cdot 1 = 15$.
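This can be double checked numerically (a sketch; on a recent Julia, opnorm with Inf from the LinearAlgebra standard library computes the induced $\infty$-norm, the maximum absolute row sum):

using LinearAlgebra
Af = [1.0 0 0; 2 3 0; 4 5 6]                # a floating point copy of A
opnorm(Af, Inf) * opnorm(inv(Af), Inf)      # kappa(A, Inf); approximately 15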
This is of the general form $x^{(k+1)} = Gx^{(k)} + c$. Using the criterion that the spectral radius of $G$ must be less than 1, how does this translate to a condition on $Q$ and $B$ that will ensure convergence?
We did this in class.
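One common way to make this concrete (assuming the splitting notation $A = Q - B$, which may differ slightly from the notation in the question): the iteration $Qx^{(k+1)} = Bx^{(k)} + b$ can be rewritten as

$$~
x^{(k+1)} = Q^{-1} B x^{(k)} + Q^{-1} b,
~$$

so $G = Q^{-1}B$ and $c = Q^{-1}b$, and convergence for every starting vector is guaranteed when the spectral radius of $Q^{-1}B$ is less than $1$.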
A = [1 2; 3 4]
2x2 Array{Int64,2}:
 1  2
 3  4
and
b = [1,2]
2-element Array{Int64,1}: 1 2
A guess at an answer is $x^{(0)} = [-1/4, 1/4]$. Compute the residual value $r^{(0)} = b - Ax^{(0)}$. The error $e^{(0)}$ satisfies $Ae^{(0)} = r^{(0)}$. Find $e^{(0)}$.
We can compute this directly:
x0 = [-1/4, 1/4]
r0 = b - A*x0
2-element Array{Float64,1}: 0.75 1.75
And solving for $e^{(0)}$ can be done by
A \ r0
2-element Array{Float64,1}: 0.25 0.25
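And indeed, adding this error to the guess recovers the true solution (a small check; e0 is just a name for the value computed above):

e0 = A \ r0
x0 + e0, A \ b      # both give, up to floating point, the true solution [0.0, 0.5]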
A = [1 1/2; 1/3 1/4]
2x2 Array{Float64,2}:
 1.0       0.5
 0.333333  0.25
The Jacobi method takes $Q$ to be the diagonal of $A$. Is it guaranteed to converge for some $b$? (Use the $\|\cdot \|_\infty$ norm.)
You might be interested in this computation:
Q = diagm(diag(A))    # [1 0; 0 1/4]
I - inv(Q) * A
2x2 Array{Float64,2}:
  0.0      -0.5
 -1.33333   0.0
Convergence is guaranteed if $\|I - Q^{-1} A \| < 1$ for some matrix norm. Here $\|I - Q^{-1} A\|_\infty = \max(1/2, 4/3) = 4/3 > 1$, so the $\infty$-norm test does not guarantee convergence. (In fact the spectral radius of $I - Q^{-1}A$ is $\sqrt{2/3} \approx 0.82 < 1$, so the Jacobi iteration does converge; the $\infty$-norm alone just doesn't show it.)
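To make that concrete (a sketch using opnorm and eigvals from the LinearAlgebra standard library on a recent Julia):

using LinearAlgebra
G = I - inv(Q) * A
opnorm(G, Inf), maximum(abs.(eigvals(G)))   # approximately (1.33, 0.82): the norm test fails, but the spectral radius is < 1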
This is done if we can show $\| I - A^{-1} B\| < 1$. But

$$~
\| I - A^{-1} B\| = \| A^{-1}A - A^{-1} B\| = \|A^{-1} (A-B)\| \leq \|A^{-1}\| \cdot \|A-B\| < \|A^{-1}\| \cdot \|A^{-1}\|^{-1} = 1.
~$$
We show that $(A^{-1})^T$ is a right inverse, and hence an inverse, of $A^T$, using the fact that $(MN)^T = N^T M^T$:

$$~
A^T \cdot (A^{-1})^T = (A^{-1} A)^T = I^T = I.
~$$
We pick $\epsilon = (1 - \|I - A\|)/2$, where by assumption $\|I-A\| < 1$ (this is what guarantees that $A$ is invertible). Then

$$~
\| I - B \| = \| I - A + A - B\| \leq \|I - A\| + \|A-B\| \leq \|I-A\| + \frac{1 - \|I-A\|}{2} = \frac{1 + \|I-A\|}{2} < 1,
~$$

so $B$ is invertible as well.