I also watch Cleve Moler’s MathWorks blog, Cleve’s Corner, like a hawk. So when he recently posted about the Rosser Matrix, I was left disappointed by what he didn’t talk about. The matrix itself is interesting because of its place in eigenvalue history. Eigenvalue: the word is just awesome. If it’s not comfortable for you, just think of the eigenvalues of a matrix like they are your…values. When you go out in the world, you make an impact and push things in the direction of the values you believe in, and certain values are more important to you than others. Matrices do the same thing with the (eigen)values they espouse.
So it’d be great if we could compute the eigenvalues of a matrix: they tell us a lot (or at least something) about who that matrix is. These days, this is straightforward: there are many (even free) computational tools to do it. Back in the day, however, eigenvalues were a difficult thing to find, and some were harder than others. For example, eigenvalues that are really close to one another are hard to pin down precisely, and when an eigenvalue is repeated (that’s a thing) we’d like to find every copy of it.
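Just to show how little work it takes today, here’s a minimal sketch (assuming NumPy is available; this is my example, not anything from Cleve’s post):

```python
import numpy as np

# a small symmetric matrix with known eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigvalsh is NumPy's solver for symmetric (Hermitian) matrices;
# it returns the eigenvalues in ascending order
lam = np.linalg.eigvalsh(A)
print(lam)  # → [1. 3.]
```

One function call and you’re done, which is exactly why the struggles below feel like another world.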
So, back to Rosser. In 1950 he makes this test matrix that’s got a lot of good stuff in it, stuff he and his collaborators could compute exactly:
- A double eigenvalue.
- Three nearly equal eigenvalues.
- Dominant eigenvalue of opposite sign.
- A zero eigenvalue.
- A small, nonzero eigenvalue.
Then they could benchmark proposed eigenvalue-finding algorithms (which would run for days on behemoth computers) by how close they came to the actual eigenvalues.
I love this steampunk mathematics, but the juiciest parts seemed to be left out of Cleve’s post: what algorithms were they actually using back then, and (more importantly) how does one make a test matrix? It appeared that it wasn’t just Cleve leaving out the good stuff, either: MATLAB itself doesn’t tell us anything interesting about how to make the Rosser matrix:
```matlab
% Copyright 1984-2005 The MathWorks, Inc.
% $Revision: 188.8.131.52 $ $Date: 2005/11/18 14:15:39 $
R = [ 611.  196. -192.  407.   -8.  -52.  -49.   29.
      196.  899.  113. -192.  -71.  -43.   -8.  -44.
     -192.  113.  899.  196.   61.   49.    8.   52.
      407. -192.  196.  611.    8.   44.   59.  -23.
       -8.  -71.   61.    8.  411. -599.  208.  208.
      -52.  -43.   49.   44. -599.  411.  208.  208.
      -49.   -8.    8.   59.  208.  208.   99. -911.
       29.  -44.   52.  -23.  208.  208. -911.   99.];
```
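Before going further, it’s worth checking the list of properties against the matrix itself. Here’s a sketch in NumPy (my translation, not part of rosser.m); every advertised property shows up in the computed spectrum:

```python
import numpy as np

# the Rosser matrix, entered from rosser.m
R = np.array([
    [ 611,  196, -192,  407,   -8,  -52,  -49,   29],
    [ 196,  899,  113, -192,  -71,  -43,   -8,  -44],
    [-192,  113,  899,  196,   61,   49,    8,   52],
    [ 407, -192,  196,  611,    8,   44,   59,  -23],
    [  -8,  -71,   61,    8,  411, -599,  208,  208],
    [ -52,  -43,   49,   44, -599,  411,  208,  208],
    [ -49,   -8,    8,   59,  208,  208,   99, -911],
    [  29,  -44,   52,  -23,  208,  208, -911,   99]], dtype=float)

# symmetric matrix -> eigvalsh gives real eigenvalues in ascending order
lam = np.linalg.eigvalsh(R)
print(lam)
# roughly: -1020.049, 0, 0.098, 1000, 1000, 1019.902, 1020, 1020.049
```

There they all are: the double eigenvalue (1000), three nearly equal ones (1019.902, 1020, 1020.049), a dominant one of opposite sign (−1020.049), a zero, and a small nonzero one (0.098).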
After slightly more digging than expected, I found Rosser’s original paper on the subject (and an incredible bible of math I hadn’t heard of before). The first thing I noticed was that there were many other people involved than just Rosser, none of whom were slouches: Lanczos has eponymous algorithms, Hestenes crushed some linear systems with him, and Karush killed it at nonlinear programming. Another name I saw that deserves mention here is in the footnote below:
There isn’t much on the internet about Miss Gordon, but it appears she was working at INA along with Lanczos. In his paper on “his” algorithm (not yet named as such), to which the Rosser matrix paper is a direct follow-on, another footnote talks about her in much more grateful detail:
While she didn’t go down in the record books like Lanczos and friends, it’s great to see that her work behind the scenes was appreciated and talked about, a part of mathematical history we don’t talk about now as much as we should. For another peek into this corner of the mathematical world, check out the list of all of the NBS/NIST staff members mentioned in A Century of Excellence in Measurements, Standards, and Technology: A Chronicle of Selected NBS/NIST Publications, 1901-2000 [Text, Google Books].
With all this information at my fingertips, I could get a much clearer picture of how to get your hands dirty and find an eigenvalue. It’s only in another appendix, however, that Rosser tells us how to actually make a test matrix, the key ingredient that was used to benchmark algorithms across decades of computational and mathematical advancement. There, on the bottom of page 293, are the 64 entries of the matrix (color coded in the image above), just as they are in rosser.m:
I had to see how it actually worked, so in the paste below you’ll find a MATLAB Rosser recipe, the way sausage is actually made (you can skip the code for a visual explanation):
```matlab
% Created by David A. Gross. Inspired by [1].
% Construction from [2,3].
%
% REFERENCES
%
% [1] http://blogs.mathworks.com/cleve/2014/01/06/ ...
%     the-rosser-matrix/, accessed on 2014/01/07
%
% [2] Rosser, J.B.; Lanczos, C.; Hestenes, M.R.; Karush, W.
%     Separation of close eigenvalues of a real symmetric matrix
%     (1951), J. Res. Natl. Bur. Stand., Vol. 47, No. 4, p. 291,
%     Appendix 1, https://archive.org/details/jresv47n4p291,
%     accessed on 2014/01/07
%
% [3] T. Muir, History of Determinants III, 289 (Macmillan
%     and Co., Ltd., London, 1920), http://igm.univ-mlv.fr/ ...
%     ~al/Classiques/Muir/History_3/, accessed on 2014/01/07

% make our eigenvalues in 2x2 matrices
M1 = [102  1 ;  1 -102]; % lambda = ± sqrt(102^2 + 1)
M2 = [101  1 ;  1  101]; % lambda = 101 ± 1
M3 = [  1 10 ; 10  101]; % lambda = 51 ± sqrt(51^2 - 1)
M4 = [ 98 14 ; 14    2]; % lambda = 100, 0

B = zeros(8);

% explode M[1...4] into an 8x8 matrix
B([1,6],[1,6]) = M1;
B([2,8],[2,8]) = M2;
B([4,5],[4,5]) = M3;
B([3,7],[3,7]) = M4;

sylvester88_A = @(a,b,c,d) [ ...
    a  b  c  d ; ...
    b -a -d  c ; ...
    c  d -a -b ; ...
    d -c  b -a ];

sylvester44 = @(a,b,c,d) [ ...
    a  b  c  d ; ...
    b -a  d -c ; ...
    c -d -a  b ; ...
    d  c -b -a ];

% make Sylvester's "penorthogonant" of determinant 10^8
P = blkdiag(sylvester88_A(2,1,1,2),sylvester44(1,-1,-2,2)); % P'*P = 10I
R = P'*B*P;
```
It’s quite cool, actually. Four 2×2 symmetric matrices are constructed to have the desired eigenvalues, and those matrices are exploded into an 8×8 sparse matrix (sparse in that it’s all zeros where there aren’t any dots):
Lastly, a special matrix (magenta & yellow, above) is smashed on either side of our sparse matrix and BAM! You’ve got yourself a full test matrix with the eigenvalues you wanted.
There’s a lot of this that’s wonderfully clear and clever, in hindsight: how Rosser forced and hand-calculated the eigenvalues he wanted, how he kept the matrix symmetric. But there are many things that were left up to Rosser to decide, almost artistically, about how the matrix should be made. The special smashing matrix, for example, actually scales up the eigenvalues of the original four matrices by a constant factor. A guy named Sylvester said that it was easy to make it scale up by powers of 2, but that if you were careful you could make a matrix that scales up by any number you want. Rosser had to cleverly find the integer entries of that special matrix that would give him a scale that meaningfully preserved the original eigenvalues he picked (for usability and clarity) and he chose to scale them up by 10.
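That scaling claim is easy to sanity-check. Below is a NumPy sketch of the same construction (my translation, not Rosser’s code): because the rows of each Sylvester block are mutually orthogonal with squared norm 10, we get PᵀP = 10I, so PᵀBP equals 10·(P⁻¹BP), which is similar to 10B, and every eigenvalue of B gets multiplied by 10.

```python
import numpy as np

# the four 2x2 blocks, exploded into B exactly as in the MATLAB recipe
# (0-based indices here, so MATLAB's [1,6] becomes [0,5], etc.)
B = np.zeros((8, 8))
B[np.ix_([0, 5], [0, 5])] = [[102,  1], [ 1, -102]]
B[np.ix_([1, 7], [1, 7])] = [[101,  1], [ 1,  101]]
B[np.ix_([3, 4], [3, 4])] = [[  1, 10], [10,  101]]
B[np.ix_([2, 6], [2, 6])] = [[ 98, 14], [14,    2]]

# the two Sylvester blocks, written out with the recipe's parameters:
# sylvester88_A(2,1,1,2) and sylvester44(1,-1,-2,2)
S1 = np.array([[ 2,  1,  1,  2],
               [ 1, -2, -2,  1],
               [ 1,  2, -2, -1],
               [ 2, -1,  1, -2]], dtype=float)
S2 = np.array([[ 1, -1, -2,  2],
               [-1, -1,  2,  2],
               [-2, -2, -1, -1],
               [ 2, -2,  1, -1]], dtype=float)

# block-diagonal "penorthogonant" P, as blkdiag does in MATLAB
P = np.zeros((8, 8))
P[:4, :4] = S1
P[4:, 4:] = S2

print(np.allclose(P.T @ P, 10 * np.eye(8)))   # → True: P'P = 10I

# eigenvalues of P'BP are exactly 10x the eigenvalues of B
R = P.T @ B @ P
print(np.allclose(np.linalg.eigvalsh(R),
                  10 * np.linalg.eigvalsh(B)))  # → True
```

So the hand-picked eigenvalues of the little 2×2 blocks (102, 100, 0, and friends) come out of the full matrix multiplied by a clean factor of 10, which is what keeps the final spectrum readable.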
Another artistic choice Rosser made was how to explode the original matrices into the sparse 8×8 matrix. What I mean is all of the following matrix explosions:
(and 2,511 other possibilities) have the same eigenvalues, but would make completely different full matrices after being smashed. Computational eigenvalue history would have looked very–well, slightly–different had Rosser picked any of these as the base for his test matrix. Maybe there’s something deeper to uncover here about his choices, but I’d like to think that Rosser loved matrices as much as I do, and that’s just one that he liked more than the rest.
If you don’t love matrices now, that’s ok. What originally started as a small coding exercise turned into a much deeper and richer look at the history of computational linear algebra (matrix algorithm stuff). I hope that some of you take the code and have fun making matrices that match up with your own values, and that others learned a little about how math was done almost 65 years ago.