Maths - Reorthogonalising a matrix Forum message from Hauke

By: Hauke (haukeheibel) - 2007-06-29 01:05
Hi Martin, 
 
I took a look at the discussion about re-orthogonalization of matrices. I think that the Taylor series expansion will never work exactly, since you are omitting a term (the part converging towards zero) which will always keep you from really being in SO3, i.e. det(R)=1. I recently stumbled over the following way to do it. It is much shorter and is guaranteed to yield an orthogonal matrix with determinant one (so no reflection). Assume R' is a matrix which is meant to be a rotation matrix but, due to numerical error, is not. If we apply an SVD to R' we get R' = U*S*V^T, where U and V are both orthogonal with determinant +-1 (the SVD may yield reflections for U and V). If R' were exactly from O3 then S would be the identity, since the singular values of an orthogonal matrix are all equal to 1. Since we want det(R)=1 we can now simply set S = diag([1 1 det(U*V^T)]). The last entry cannot be fixed to 1; it must be either +1 or -1, depending on the determinants of U and V. So finally your SO3 matrix is R = U*diag([1 1 det(U*V^T)])*V^T. 
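 
As an illustration of this procedure, here is a minimal numpy sketch (the function name, the test rotation and the noise perturbation are purely illustrative, not part of Hauke's message):

    import numpy as np

    def reorthogonalize(R):
        # Project a nearly-orthogonal 3x3 matrix onto SO(3) via the SVD:
        # R = U * diag(1, 1, det(U*V^T)) * V^T
        U, S, Vt = np.linalg.svd(R)                       # R = U @ diag(S) @ Vt
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])    # fixes a possible reflection
        return U @ D @ Vt

    # Example: a rotation about z perturbed by numerical noise.
    a = 0.3
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    R = reorthogonalize(Rz + 1e-4 * np.random.randn(3, 3))
    print(np.allclose(R @ R.T, np.eye(3)))                # True: orthogonal
    print(np.linalg.det(R))                               # ~1.0: no reflection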
 
Regards, 
Hauke

By: Martin Baker (martinbakerProject Admin) - 2007-06-29 10:26
Hi Hauke, 
 
Thanks very much for this; I was suspicious of the Taylor series without knowing why.  
SVD is new to me, so I guess I need to find out more about it so that I can understand this better. 
I did a quick search: 
http://en.wikipedia.org/wiki/Singular_value_decomposition 
http://www.uwlax.edu/faculty/will/svd/svd/index.html 
 
I'll do some reading; in the meantime I'll put your message on the webpage, if that's alright with you. 
 
Cheers, 
 
Martin
By: Hauke (haukeheibel) - 2007-06-30 03:03
Hi Martin, 
 
Feel free to put the procedure on the web-site. You should just note that it is absolutely not new... ;) 
 
Of the web-sites, I think the wiki one is quite good. I did recently take a look at an existence proof, but in general I don't care since I usually just "use" the SVD as a tool and that's that. I am no mathematician. 
 
If you want to point users to existing implementations you might find the following links of interest: 
 
http://math.nist.gov/tnt/index.html 
http://sourceforge.net/projects/opencvlibrary/ 
 
Of course there are many more implementations around, but these are the ones I have used. 
 
 
Regards, 
Hauke

By: Martin Baker (martinbakerProject Admin) - 2007-06-30 07:04
Hi Hauke, 
 
Yes, I can't say I understood the existence proof, but it would be nice to find an algorithm to explain what this code is doing. 
 
It seems to me that a lot of the complexity in this code comes about because SVD works on a non-square matrix. I read somewhere that for a square matrix the SVD is equivalent to eigen-decomposition. In that case: 
R' = U * S * U^T 
In other words the same as SVD but with V=U 
where: 
S is a diagonal matrix made up from the eigenvalues. 
U is a matrix made up from the eigenvectors. 
 
Unfortunately I think there are some conditions for this to be true so possibly this does not help? Anyway it would be nice to relate this to eigenvalues and eigenvectors. 
I'll continue investigating this; it's not only useful for the reorthogonalisation page, I think the theory is also interesting for the physics pages about the inertia matrix. 
 
Thanks, 
 
Martin

By: Hauke (haukeheibel) - 2007-07-02 00:49
Hi Martin, 
 
For some reason my last message ended up in nirvana... 
 
I think the SVD code itself is, even for square matrices, a little bit more complex. Actually I would suggest not putting it on the web-site; as said before, it exists in many libraries. 
 
As far as I know, the singular values correspond to the square roots of the eigenvalues of A^T*A (equivalently of A*A^T). And regarding the orthogonal matrices, you have A = U*S*V^T, so A*A^T = U*S*S^T*U^T and A^T*A = V*S^T*S*V^T. So what you have written holds only for the eigenvalue decomposition of A*A^T. 
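 
A quick numerical check of that relationship (a sketch with numpy; the random test matrix is just for illustration):

    import numpy as np

    A = np.random.randn(3, 3)
    sv = np.linalg.svd(A, compute_uv=False)    # singular values of A
    ev = np.linalg.eigvalsh(A.T @ A)           # eigenvalues of the symmetric matrix A^T*A
    # The singular values of A are the square roots of the eigenvalues of A^T*A.
    print(np.allclose(np.sort(sv), np.sort(np.sqrt(ev))))   # True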
 
Furthermore I have little left to contribute. Just some applications the SVD can be used for: 
- computation of the determinant (up to sign, the product of the singular values; probably there are easier ways to do it...) 
- computation of the condition number (quotient of the biggest and smallest singular value) 
- minimization of |Ax| subject to |x|=1 (L2 norm) 
- computation of a pseudo-inverse 
... 
 
Probably the list can be made much longer. These are the main tasks where I use the SVD. 
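 
For what it is worth, here is how those applications look in numpy (a sketch under the usual SVD conventions, not Hauke's code):

    import numpy as np

    A = np.random.randn(4, 3)
    U, s, Vt = np.linalg.svd(A)            # A = U @ Sigma @ Vt, s holds the singular values

    # Determinant of a square matrix: product of singular values, sign from det(U) and det(V).
    B = np.random.randn(3, 3)
    Ub, sb, Vtb = np.linalg.svd(B)
    print(np.isclose(np.prod(sb) * np.linalg.det(Ub) * np.linalg.det(Vtb),
                     np.linalg.det(B)))                      # True

    # Condition number: largest singular value over smallest.
    print(np.isclose(s[0] / s[-1], np.linalg.cond(A)))       # True

    # Minimise |Ax| subject to |x| = 1: right singular vector of the smallest singular value.
    x = Vt[-1]
    print(np.linalg.norm(A @ x), s[-1])                      # both ~ the smallest singular value

    # Pseudo-inverse: invert the non-zero singular values.
    A_pinv = Vt.T @ np.diag(1.0 / s) @ U[:, :3].T
    print(np.allclose(A_pinv, np.linalg.pinv(A)))            # True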
 
Actually there is one thing I have always been looking for, which is a nice geometric explanation of the SVD. The wiki explanation might be a good start, but unfortunately I have not yet found (or rather, taken) the time to dig deeper into the topic. 
 
Regards, 
Hauke

By: Martin Baker (martinbakerProject Admin) - 2007-07-03 01:11
Hauke, 
 
It would be really good to add some pages about matrix factoring containing these topics and cross linking to the various applications. I think I'll try that. 
 
I agree it would be good to have a better geometric explanation of the SVD. I think perhaps eigenvalue decomposition has a reasonable geometric interpretation as follows: 
Since the full matrix represents the scaling by the eigenvalues in the direction of the corresponding eigenvectors, this can be factored into the following sequence: 
1) rotate so that the eigenvectors align with the x,y and z coordinates. 
2) diagonal matrix scales in the direction of the x,y and z coordinates. 
3) rotate back to the original alignment. 
Although the SVD is more general than this, the diagonal matrix still seems to provide the scaling in the direction of the x, y and z coordinates. I guess this is as close as we will get to being able to factor a general matrix into: rotation, scaling, shear, reflection, etc. 
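 
A small numpy illustration of that rotate / scale / rotate-back picture; note it is exact for a symmetric matrix (such as an inertia matrix), where the eigenvectors really are orthogonal (the example matrix here is made up):

    import numpy as np

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 0.0],
                  [0.0, 0.0, 2.0]])        # symmetric example matrix
    eigvals, Q = np.linalg.eigh(A)         # columns of Q are orthonormal eigenvectors
    D = np.diag(eigvals)                   # pure scaling along the x, y and z axes

    # Q^T rotates the eigenvectors onto the coordinate axes, D scales,
    # Q rotates back to the original alignment.
    print(np.allclose(Q @ D @ Q.T, A))     # True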
 
Thanks, 
 
Martin
  1. ermishinf

    2009-10-01 18:33:06 GMT

    Martin,

    This post concerns the page "Maths - Reorthogonalising a matrix".

    I have some ideas about the correction matrix. As C is symmetric, we can use arbitrary powers of a matrix. To calculate a power of a matrix an eigenvalue decomposition can be used. The matrix is written as Q*D*Q^T, where Q is orthogonal and D is diagonal. Then any function F (that can be expanded in a Taylor series) of the matrix can be calculated as Q*F(D)*Q^T, where F is applied to D element-wise. If such a method is used for computation of the correction matrix, we get an exactly orthogonal matrix (except for rounding errors). But it is more computationally expensive than just using the first two terms of the Taylor expansion, though more precise. I haven't yet compared it to orthogonalization with SVD, but it seems that eigenvalue decomposition is somewhat simpler.

    >> Is this more orthogonal than the original matrix, I can't really tell, it looks as though it might be?

    In the given example it seems to be so, because applying the reorthogonalization to a matrix several times converges to an orthogonal matrix, and this matrix equals the one computed with the eigenvalue decomposition. So, using the first two terms of the Taylor expansion may not be so bad if we need a fast algorithm.
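
    For illustration, here is a small numpy sketch of that repeated application, assuming the two-term Taylor correction is C ~ (3I - M*M^T)/2, i.e. the first two terms of the expansion of (M*M^T)^(-1/2) (the test matrix is made up):

        import numpy as np

        def taylor_step(M):
            C = 0.5 * (3.0 * np.eye(3) - M @ M.T)    # approximate correction matrix
            return C @ M                             # one reorthogonalization step

        a = 0.5
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        M = R + 1e-2 * np.random.randn(3, 3)         # almost, but not quite, orthogonal
        for k in range(5):
            M = taylor_step(M)
            print(k, np.linalg.norm(M @ M.T - np.eye(3)))   # error shrinks towards rounding error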


  2. martinbaker

    2009-10-02 15:18:21 GMT

    Thanks for this. I don't understand as much of this as I would like, so I appreciate any help.

    I am not quite clear why we can set: O=C*M where:

    • O is orthogonal: O^t = O^-1
    • C is symmetric: C^t = C
    • M is almost orthogonal but not quite.

    As you say, eigenvalue decomposition looks similar to SVD:

    • eigenvalue decomposition: QDQ^T
    • SVD: [U][D][V]^t

    The difference appears to me to be (at first sight) that SVD is a more general version of eigenvalue decomposition where U is not equal to V?

    I guess I need to find out the general rules for these matrices:

    • orthogonal*orthogonal=orthogonal
    • symmetric*symmetric=symmetric
    • orthogonal*symmetric=?

    Also, why can any function F be applied via Q*F(D)*Q^T?

    Do you know a good web source for this topic?

    Thanks,

    Martin


  3. ermishinf

    2009-10-04 13:35:00 GMT

    Sorry, I mistakenly supposed that we need to decompose an orthogonal matrix. So in the general case the eigenvalue decomposition is A=QDQ^-1 (but not QDQ^T), where Q is not necessarily orthogonal.

    >> I am not quite clear why we can set: O=C*M where:
    >> O is orthogonal: O^t = O^-1
    >> C is symmetric: C^t = C
    >> M is almost orthogonal but not quite.

    We just assume that for a given M there exists a symmetric C such that C*M gives us some orthogonal matrix. Then we prove that such a C exists and find it. Of course, instead of a symmetric matrix any other type of matrix could be chosen, but it is convenient for us if it is symmetric (otherwise we wouldn't be able to use this method). Maybe some other types will give us interesting results.

    >> As you say, eigenvalue decomposition looks similar to SVD:
    >> eigenvalue decomposition: QDQ^T
    >> SVD: [U][D][V]^t
    >> The difference appears to me to be (at first sight) that SVD is a more general
    >> version of eigenvalue decomposition where U is not equal to V?

    Eigenvalue decomposition of a non-orthogonal matrix gives us QDQ^-1. If we decompose an orthogonal matrix, we get QDQ^-1=QDQ^T, where Q is orthogonal. So SVD is the general case of eigenvalue decomposition only for orthogonal matrices. In our case we have to decompose MM^T, which is not orthogonal.

    >> I guess I need to find out the general rules for these matrices:
    >> orthogonal*orthogonal=orthogonal
    >> symmetric*symmetric=symmetric
    >> orthogonal*symmetric=?

    I suppose that in the latter case we can get any square matrix.

    >> Also, why can any function F be applied via Q*F(D)*Q^T?

    If we want to calculate the square of a matrix A=QDQ^-1, we get A^2=QD(Q^-1Q)DQ^-1=QDEDQ^-1=QD^2Q^-1. This is true for all other powers: A^n=QD^nQ^-1, for integer n>=0. If we define a function of a matrix through its Taylor series, then F(A)=a0*E+a1*A+a2*A^2+... = Q*(a0*E+a1*D+a2*D^2+...)*Q^-1 = Q*F(D)*Q^-1. This is from Wikipedia, section Functional calculus.

    But this is not exactly what we need. We need to find a C such that (MM^T)^-1=C^2. Let us decompose MM^T=QDQ^-1; then QD^-1Q^-1=C^2, i.e. (QD^-0.5Q^-1)(QD^-0.5Q^-1)=C^2. So we can choose C=QD^-0.5Q^-1. We can raise D to the power -0.5 because MM^T has real non-negative eigenvalues, and as M is almost orthogonal it cannot have zero eigenvalues.
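
    A short numpy sketch of this exact correction (M and the function name are illustrative); it is also a direct instance of the functional-calculus rule above, with F(x)=x^-0.5 applied to the eigenvalues of M*M^T:

        import numpy as np

        def reorthogonalize_eig(M):
            # Exact correction C = (M*M^T)^(-1/2) via the eigenvalue decomposition
            # M*M^T = Q*D*Q^-1; since M*M^T is symmetric, Q is orthogonal and Q^-1 = Q^T.
            eigvals, Q = np.linalg.eigh(M @ M.T)      # real, non-negative eigenvalues
            C = Q @ np.diag(eigvals ** -0.5) @ Q.T    # C = Q * D^(-0.5) * Q^-1
            return C @ M

        a = 0.5
        R = np.array([[np.cos(a), -np.sin(a), 0.0],
                      [np.sin(a),  np.cos(a), 0.0],
                      [0.0,        0.0,       1.0]])
        M = R + 1e-3 * np.random.randn(3, 3)
        O = reorthogonalize_eig(M)
        print(np.allclose(O @ O.T, np.eye(3)))        # True: O is orthogonal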

    >> Do you know a good web source for this topic?

    I can only suggest Wikipedia and Wolfram MathWorld.


  4. martinbaker

    2009-10-04 17:10:23 GMT

    Thanks very much. Even though the original idea didn't work out, you have given me some interesting things to think about.

    Cheers - Martin


This site may have errors. Don't use for critical systems.

Copyright (c) 1998-2023 Martin John Baker - All rights reserved - privacy policy.