Test 2 for MTH 335 covers a bit of chapter 3, chapter 4 and a bit of chapter 5.
To earn full credit, you must show all of your work and not just an answer.
You are expected to do this exam without consulting anyone else. You should also not bring in new techniques not covered in class or the book, unless you very carefully show how you know these new techniques.
Please turn in your work so that the problems are in order.
Question 1:
A method for finding a zero of a function $v=v(x)$ can be written as a fixed point problem through:
$$~ F(x) = x - \frac{v}{v'} - \frac{v^2 v''}{2(v')^3} ~$$That is, if $x_0$ is given and $x_n = F(x_{n-1})$ converges to $s$, then $v(s) = 0$.
Assume $F$ has the following 3 derivatives:
$$~ F' = -\frac{v^2 v'''}{2(v')^3} + \frac{3v^2(v'')^2}{2(v')^4}, ~$$ $$~ F'' = \frac{v}{(v')^2} \cdot \left( -\frac{v\, v''''}{2v'} + \frac{9v\, v''\, v'''}{2(v')^2} - \frac{6v(v'')^3}{(v')^3} - v''' + \frac{3(v'')^2}{v'}\right), ~$$ $$~ F''' = \frac{1}{v'} \cdot \left( -\frac{v^2 v'''''}{2(v')^2} + \frac{6v^2 v'' v''''}{(v')^3} + \frac{9v^2(v''')^2}{2(v')^3} - \frac{36v^2 (v'')^2v'''}{(v')^4} + \frac{30v^2 (v'')^4}{(v')^5} - \frac{2v\, v''''}{v'} + \frac{17 v\, v''\, v'''}{(v')^2} - \frac{21v(v'')^3}{(v')^3} - v''' + \frac{3(v'')^2}{v'} \right). ~$$Based on the above, what is the order of convergence of $F$ as $x_n \rightarrow s$? Be specific as to how you know, and state any assumptions you need beyond $v$ having sufficiently many derivatives.
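As a sanity check, the iteration can be tried numerically. The sketch below (Python, for illustration) uses the form $F(x) = x - v/v' - v^2 v''/(2(v')^3)$, which is the form consistent with the $F'$ shown above, on the hypothetical function $v(x) = x^2 - 2$ with root $\sqrt{2}$; watching how fast the errors shrink hints at the order of convergence.

```python
# Try the iteration F(x) = x - v/v' - v^2 v'' / (2 (v')^3) on the
# sample function v(x) = x^2 - 2 (a hypothetical choice; any smooth
# function with a simple root would do) and record the errors.
import math

def F(x):
    v, vp, vpp = x**2 - 2, 2*x, 2.0   # v, v', v'' for v(x) = x^2 - 2
    return x - v/vp - v**2 * vpp / (2 * vp**3)

x = 1.5
errors = []
for _ in range(4):
    x = F(x)
    errors.append(abs(x - math.sqrt(2)))

print(errors)  # how fast do successive errors shrink?
```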
Question 2:
For what values of $a$ will the following matrix be symmetric and positive definite? Write down the values of $a$ for which you can prove this, along with a proof.
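(The matrix for this question did not survive extraction.) One standard way to check symmetric positive definiteness is Sylvester's criterion: a symmetric matrix is positive definite exactly when all of its leading principal minors are positive. The sketch below (Python) applies it to a purely hypothetical matrix depending on $a$; swap in the actual matrix from the exam.

```python
# Sylvester's criterion: symmetric + all leading principal minors
# positive  <=>  positive definite. The matrix A(a) below is
# HYPOTHETICAL -- replace it with the matrix from the question.

def det(M):
    # Laplace expansion along the first row (fine for small matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1)**j * M[0][j] * det(minor)
    return total

def is_spd(M):
    n = len(M)
    symmetric = all(M[i][j] == M[j][i] for i in range(n) for j in range(n))
    minors_positive = all(det([row[:k] for row in M[:k]]) > 0
                          for k in range(1, n + 1))
    return symmetric and minors_positive

A = lambda a: [[2, a, 0], [a, 2, a], [0, a, 2]]   # hypothetical example
print(is_spd(A(1)), is_spd(A(2)))   # True False (SPD exactly when a^2 < 2)
```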
Question 3:
By hand, find the Cholesky decomposition of the symmetric positive definite matrix
$$~ A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}. ~$$
Use this decomposition to solve $Ax=b$ with $b=[1, 3, 1]$. (Assume $[a,b,c]$ means a column vector.)
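To check the hand computation, here is a sketch (Python, plain lists) of the Cholesky factorization $A = LL^T$ followed by the two triangular solves $Ly = b$ and $L^T x = y$:

```python
# Cholesky factorization A = L L^T, then forward and back substitution,
# as a check on the by-hand work for this question.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]
b = [1, 3, 1]
L = cholesky(A)

# Forward substitution: L y = b
y = [0.0] * 3
for i in range(3):
    y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]

# Back substitution: L^T x = y
x = [0.0] * 3
for i in reversed(range(3)):
    x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, 3))) / L[i][i]

print(x)
```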
Question 4:
Use scaled row pivoting and Gaussian elimination to find $P$, $L$, and $U$ so that $PA=LU$ when $A$ is given by
$$~ A = \begin{bmatrix} -9 & 1 & 17 \\ 3 & 2 & -1 \\ 6 & 8 & 1 \end{bmatrix}. ~$$
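As a way to verify the hand computation, here is an illustrative sketch (Python, plain lists) of Gaussian elimination with scaled row pivoting that produces $P$, $L$, $U$ and checks $PA = LU$:

```python
# Gaussian elimination with scaled row pivoting (illustrative sketch).
# At step k the pivot row maximizes |U[i][k]| / scale[i], where scale[i]
# is the largest absolute entry of row i of the original matrix.
def scaled_pivot_lu(A):
    n = len(A)
    U = [[float(v) for v in row] for row in A]
    L = [[0.0] * n for _ in range(n)]
    perm = list(range(n))
    scale = [max(abs(v) for v in row) for row in U]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(U[i][k]) / scale[i])
        for M in (U, L):                      # swap rows k and p
            M[k], M[p] = M[p], M[k]
        scale[k], scale[p] = scale[p], scale[k]
        perm[k], perm[p] = perm[p], perm[k]
        L[k][k] = 1.0
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    P = [[1.0 if c == perm[r] else 0.0 for c in range(n)] for r in range(n)]
    return P, L, U

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

A = [[-9, 1, 17], [3, 2, -1], [6, 8, 1]]
P, L, U = scaled_pivot_lu(A)
PA, LU = matmul(P, A), matmul(L, U)
ok = all(abs(PA[r][c] - LU[r][c]) < 1e-10 for r in range(3) for c in range(3))
print(ok)  # True
```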
Question 5:
Fix $n$, and let $\delta_{ij}$ be the $n\times n$ matrix whose entries are all $0$ except for a $1$ in row $i$, column $j$. We can then express our two elementary row operations as:
Permute rows $i$ and $j$: $P_{ij} = I - \delta_{ii} - \delta_{jj} + \delta_{ij} + \delta_{ji}$.
Multiply row $i$ by $c$ and add to row $j$ (with $j > i$): $R_{ij} = I + c\delta_{ji}$.
For what values of $i,j,k,l$ is $\delta_{ij} \cdot \delta_{kl}$ nonzero? When it is nonzero, what is the product in terms of $\delta$?
Using this fact, show that $P_{ij}P_{ij}$ is the identity.
Using this fact, suppose $i \leq j,k,l$. What conditions on $i,j,k,l$ will ensure $R_{ij}\cdot P_{kl} = P_{kl} \cdot R_{ij}$?
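The $\delta$-matrix facts above are easy to sanity-check numerically. The sketch below (Python, with a small hypothetical size $n=4$) verifies that $\delta_{ij}\delta_{kl}$ is $\delta_{il}$ when $j=k$ and zero otherwise, and that $P_{ij}P_{ij} = I$:

```python
# Numerical check of the delta-matrix identities with n = 4 (hypothetical).
n = 4

def delta(i, j):
    return [[1.0 if (r, c) == (i, j) else 0.0 for c in range(n)] for r in range(n)]

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def madd(*Ms):
    return [[sum(M[r][c] for M in Ms) for c in range(n)] for r in range(n)]

def mscale(a, M):
    return [[a * M[r][c] for c in range(n)] for r in range(n)]

I = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
Z = [[0.0] * n for _ in range(n)]

# delta_01 * delta_12 = delta_02 (inner indices match); delta_01 * delta_02 = 0
print(matmul(delta(0, 1), delta(1, 2)) == delta(0, 2))   # True
print(matmul(delta(0, 1), delta(0, 2)) == Z)             # True

# P_ij = I - delta_ii - delta_jj + delta_ij + delta_ji squares to I
i, j = 1, 3
P = madd(I, mscale(-1.0, delta(i, i)), mscale(-1.0, delta(j, j)),
         delta(i, j), delta(j, i))
print(matmul(P, P) == I)  # True
```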
Question 6:
Consider $\Pi_n$, the vector space of all polynomials of degree $n$ or less. Define a "dot" product by $dot(p,q) = \int_{-1}^1 p(x) q(x)\, dx$, and define $\| p \| = \sqrt{dot(p,p)}$.
Show that $\| p \|$ satisfies conditions (1), (2), and (3) at the top of p. 187 in the book.
Find $\| p \|$ when $p=x^2 - 1$.
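A hand computation of this integral can be checked numerically. The sketch below (Python) approximates $dot(p,p)$ with composite Simpson's rule on $[-1,1]$ and takes the square root:

```python
# Check the norm computation numerically: approximate
# dot(p, p) = integral of p(x)^2 over [-1, 1] with composite
# Simpson's rule, then take the square root.
import math

def simpson(f, a, b, n=1000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k*h) for k in range(1, n // 2))
    return s * h / 3

p = lambda x: x**2 - 1
norm_p = math.sqrt(simpson(lambda x: p(x)**2, -1, 1))
print(norm_p)   # compare against your hand-computed value
```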
Question 7:
Given $A$, $b$ and a $Q$ with $\| I - Q^{-1}A \| <1$, the iterative algorithm
$$~ Q x^{n} = (Q-A)x^{n-1} + b ~$$will converge to $x$, a solution of $Ax=b$.
Let A be given by
$$~ A = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix}, ~$$
let $Q$ be the diagonal part of $A$, and $b=[1, 1]$.
If $x^0 = [0, 0]$ compute $x^1$ and $x^2$ (show your work). What do these values converge to? Is it a solution to $Ax =b$? (Why or why not?)
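The first two iterates can be checked mechanically. This sketch (Python) applies $Q x^{n} = (Q-A)x^{n-1} + b$ with $Q$ the diagonal part of $A$, leaving the convergence question to the reader:

```python
# Compute x^1 and x^2 for Q x^n = (Q - A) x^{n-1} + b, with
# Q = diag(A). Whether the iterates converge to a solution of
# A x = b is exactly what the question asks you to decide.
A = [[1, 2], [3, 5]]
b = [1, 1]
Q = [1, 5]                      # diagonal entries of A
x = [0.0, 0.0]                  # x^0
iterates = []
for _ in range(2):
    # Row i of (Q - A) x has a zero diagonal term, leaving -A[i][j] x[j], j != i
    rhs = [sum(-A[i][j] * x[j] for j in range(2) if j != i) + b[i]
           for i in range(2)]
    x = [rhs[i] / Q[i] for i in range(2)]
    iterates.append(x)
print(iterates)   # [x^1, x^2]
```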
Question 8:
We saw in class that if $A$ is $n \times n$ and has $n$ distinct eigenvalues, with $\lambda_1$ the largest in absolute value, then $A^kx \approx \lambda_1^k a_1 x_1$, where $x_1$ is an eigenvector for $\lambda_1$ and $a_1$ is the coefficient of $x$ in the direction of $x_1$. We can use this to converge on the largest eigenvalue through matrix multiplication with this algorithm:
```julia
for k in 1:M
    y = A*x
    r = y[1] / x[1]
    x = y
end
r
```
Start with $A$ given by
$$~ A = \begin{bmatrix} 1 & 2 \\ 3 & 5 \end{bmatrix} ~$$
and $x=[1,1]$. Perform 3 steps ($M=3$) to compute values for $r_1$, $r_2$, and $r_3$. The eigenvalues of $A$ are $-0.16...$ and $6.16...$. Does it appear that $r_n$ is converging to the largest one? Show all your work.
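The algorithm above, transcribed from the course's Julia snippet into Python for illustration, produces the three ratios directly:

```python
# Three steps of the power-method ratio r = y[1]/x[1] for
# A = [[1, 2], [3, 5]] starting from x = [1, 1].
A = [[1, 2], [3, 5]]
x = [1, 1]
M = 3
ratios = []
for k in range(M):
    y = [A[0][0]*x[0] + A[0][1]*x[1],
         A[1][0]*x[0] + A[1][1]*x[1]]
    ratios.append(y[0] / x[0])    # first-component ratio, as in class
    x = y
print(ratios)   # compare against the eigenvalues of A
```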
Question 9:
To solve $Ax=b$ with a full $n\times n$ matrix, we saw in class that it takes basically $n^3/3$ ops to find $U$ and $n^2/2$ ops to perform the back substitution. For a tri-diagonal matrix (where no more than $p=3$ entries in a row are non-zero), it takes $np(2p+1)$ steps to find $U$ and $n(p+1)$ steps to do the back substitution.
For $n=10{,}000$, compare the number of ops to solve $Ax=b$ when $A$ is a full matrix to the number of ops to solve $Ax=b$ when $A$ is tri-diagonal ($p=3$). Compute both, and then find their ratio.
In a simulation, it took $3.88 \cdot 10^{-4}$ seconds to solve $Ax=b$ when $A$ was tri-diagonal and $12.45\cdot 10^0$ seconds when $A$ was a full matrix. Is the ratio of times more, less, or basically the same as you would expect from your calculation above?
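The arithmetic here is easy to get wrong by a power of ten, so a quick check (Python) of the two op counts and the two ratios may help:

```python
# Op counts from the question: full matrix needs n^3/3 + n^2/2,
# tri-diagonal needs n*p*(2p+1) + n*(p+1), with n = 10,000 and p = 3.
n, p = 10_000, 3
full = n**3 / 3 + n**2 / 2
tri = n * p * (2 * p + 1) + n * (p + 1)
print(full, tri, full / tri)          # predicted op-count ratio

# Observed timing ratio from the simulation in the question:
time_ratio = 12.45 / 3.88e-4
print(time_ratio)                     # compare with the op-count ratio
```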
Question 10:
Let $\kappa$ be the condition number, $\kappa(A) = \|A\| \cdot \|A^{-1}\|$, for some matrix norm.
If $A$ and $B$ are two non-singular square matrices of the same size, prove that: $$ \kappa(AB) \leq \kappa(A) \kappa(B). $$
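Before attempting the proof, it can help to see the inequality hold on a concrete case. The sketch below (Python) computes $\kappa$ in the infinity norm for two hypothetical $2\times 2$ matrices and compares $\kappa(AB)$ with $\kappa(A)\kappa(B)$:

```python
# Condition number in the infinity norm (max absolute row sum) for
# hypothetical 2x2 matrices, checking kappa(AB) <= kappa(A) kappa(B).
def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def norm_inf(M):
    return max(sum(abs(v) for v in row) for row in M)

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def kappa(M):
    return norm_inf(M) * norm_inf(inv2(M))

A = [[1, 2], [3, 5]]    # example matrices (hypothetical choices)
B = [[2, 1], [1, 1]]
print(kappa(matmul(A, B)), kappa(A) * kappa(B))
```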