Currently, I am taking CPSC 302 at UBC, which is Numerical Approximations in Linear Algebra.
So far, the course has covered round-off and convergence errors, as well as machine epsilon and the floating-point representation. Next, we covered methods for solving f(x) = 0, such as the bisection method (which relies on the Intermediate Value Theorem) as well as Newton's Method and the Secant Method. The assignment had us modify Newton's Method to make it more robust: if an iteration did not improve the solution by more than 0.5, we switched over to the bisection method to obtain a tighter bracket on the root, then resumed Newton's Method with the new bounds.
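To give a flavour of the idea, here is a rough sketch of a safeguarded Newton's Method. It doesn't reproduce the assignment's exact 0.5-improvement test; instead it uses a common variant of the same safeguard, falling back to a bisection step whenever the Newton step would leave the current bracket:

```python
def safeguarded_newton(f, df, a, b, tol=1e-12, max_iter=100):
    """Newton's Method safeguarded by a bisection fallback on [a, b].

    Assumes f(a) and f(b) have opposite signs (a valid bracket).
    This is a sketch, not the exact switching rule from the assignment.
    """
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = df(x)
        x_new = x - fx / d if d != 0 else a  # tentative Newton step
        if not (a < x_new < b):
            x_new = 0.5 * (a + b)  # step left the bracket: bisect instead
        # shrink the bracket so the root stays inside [a, b]
        if f(x_new) * f(a) < 0:
            b = x_new
        else:
            a = x_new
        x = x_new
    return x

# demo: root of x^2 - 2 on [0, 2]
root = safeguarded_newton(lambda x: x * x - 2, lambda x: 2 * x, 0.0, 2.0)
# root ≈ 1.41421356
```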
What I want to talk about today comes from our lessons on matrices (the 3rd section of the course), where we discussed permutations and what they are useful for. One thing a permutation is good for is the stability of an algorithm, say the Gaussian elimination algorithm. We then started to talk about how permutations are not only good for stability, but also for reducing the number of operations needed to reduce a matrix to upper triangular form so that we can apply backward substitution. The example given in class had this sparsity pattern (× marks a non-zero entry):

× × × ×
× × 0 0
× 0 × 0
× 0 0 ×

and our permuted matrix was

× 0 0 ×
0 × 0 ×
0 0 × ×
× × × ×
What we did first was analyze the sparsity of the L matrix of the LU decomposition of this matrix. What we saw was that L was a lower triangular matrix with, in general, all of the entries below the diagonal non-zero. However, looking at the permuted matrix, its L matrix would have only the 3 non-zero entries located in the 4th row. Since the L matrix of the permuted matrix has fewer non-zero elements, the work needed to reduce the permuted matrix to upper triangular form is less than the work needed for the un-permuted matrix.
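This is easy to check numerically. The sketch below uses a made-up arrowhead matrix with the same sparsity pattern as the class example (the actual numbers from class aren't reproduced here) and a plain no-pivoting LU factorization:

```python
import numpy as np

def lu_no_pivot(A):
    """Plain LU factorization without pivoting; returns (L, U)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

# A made-up arrowhead matrix: dense first row and column, otherwise diagonal
A = np.array([[4., 1., 2., 3.],
              [5., 6., 0., 0.],
              [7., 0., 8., 0.],
              [9., 0., 0., 10.]])

# The permuted version: reverse the row and column order
P = np.eye(4)[::-1]
B = P @ A @ P.T

LA, UA = lu_no_pivot(A)
LB, UB = lu_no_pivot(B)
print(np.count_nonzero(np.tril(LA, -1)))  # 6: L fills in completely
print(np.count_nonzero(np.tril(LB, -1)))  # 3: only the last row of L
```

The un-permuted ordering gives a completely dense strictly-lower part of L, while the reversed ordering leaves only the three entries in the last row.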
Then, our prof showed us something just mind-blowing. He showed that the sparsity pattern of a matrix can be represented as a graph: one node per row/column, with an edge between nodes i and j whenever the (i, j) entry is non-zero. For the first matrix, the graph is a star with node 1 at the centre, connected to nodes 2, 3, and 4. And for the permuted matrix, it is the same star, but centred at node 4.
What these two graphs are saying is that there are non-zero elements (1,3), (1,2), (1,4), (3,1), (2,1), (4,1) in the un-permuted matrix. The permuted matrix's graph says the same thing, only for different elements: (3,4), (4,1), etc. Following Gaussian elimination on our first matrix, we see that we are trying to eliminate, in a sense, the connection between node 1 and node 2 (i.e., in the first matrix, we are trying to get rid of the 5). But doing that row operation fills in the rest of row 2, destroying the zeros we had in the row. In fact, applying Gaussian elimination to the first column fills the matrix so that it no longer has any zeros in the lower half. If we look at that in terms of the graph, we are building more connections between the nodes (and thus, more work is needed to eliminate those connections).
However, when we apply the same Gaussian elimination row operations to the permuted matrix, no fill-in occurs: our row operations never turn a zero element into a non-zero element. All the non-zero elements change, but since they were non-zero anyway, this doesn't change our graph. What this means is that no new connections are built as we apply row operations to the permuted matrix; in other words, we are not increasing the amount of work needed to reduce the permuted matrix to upper triangular form.
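The fill-in can be counted directly. This sketch runs Gaussian elimination on the same made-up arrowhead matrix as before (again, not the numbers from class) and records every entry that starts as a zero but becomes non-zero at some point during the elimination:

```python
import numpy as np

def count_fill_in(A):
    """Run Gaussian elimination and count zero entries that ever become non-zero."""
    U = A.astype(float).copy()
    n = U.shape[0]
    was_zero = (U == 0.0)
    filled = np.zeros_like(was_zero)
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]
            U[i, k:] -= m * U[k, k:]
            U[i, k] = 0.0  # this entry is eliminated exactly
            filled |= was_zero & (U != 0.0)
    return int(np.count_nonzero(filled))

# made-up arrowhead matrix and its reversed-order permutation
A = np.array([[4., 1., 2., 3.],
              [5., 6., 0., 0.],
              [7., 0., 8., 0.],
              [9., 0., 0., 10.]])
P = np.eye(4)[::-1]
B = P @ A @ P.T

print(count_fill_in(A))  # 6: every off-arrow zero fills in
print(count_fill_in(B))  # 0: no fill-in at all
```

In graph terms, the 6 fill-ins for the un-permuted ordering are exactly the new edges created among nodes 2, 3, and 4, while the reversed ordering creates none.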
It's amazing what applications of mathematics can overlap to provide us with different and interesting perspectives.