from: manfredvw

date 2011-01-04 12:12:16 UTC

I've been learning about GA, and have mainly a "layman mathematician's" interest. I was looking at

https://www.euclideanspace.com/maths/algebra/clifford/algebra/functions/inverse/index.htm

and other places on the web, and see that there is very little publicly

visible (without looking into tar files) with regard to actually

calculating a general multivector inverse. Having experimented a

little, I have gone a good deal further than the page referred to above -

I have found straightforward expressions for the inverse for any

dimension up to 5 with any metric (signature), and should be able to

extend this to higher dimensions with some tinkering.

Here are examples, where the variable 'a' is a general multivector.

When a division is shown, the divisor is necessarily a pure scalar. In

every case, only geometric multiplication and a variety of conjugations

are used.

0D: `inv(a) = 1/a`

1D & 2D: `inv(a) = ci(a) / (a * ci(a))`

3D & 4D: `inv(a) = ci(a) * cc(a * ci(a)) / (a * ci(a) * cc(a * ci(a)))`

`ci()` is a conjugation that inverts all except the scalar component.

`cc()` is the Clifford conjugation - it inverts grades 1,2 in up to 4D.
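To make the 1D & 2D formula concrete, here is a minimal self-contained sketch for the case Cl(2,0), i.e. both basis vectors squaring to +1. The struct and names here are purely illustrative, not taken from any actual implementation:

```cpp
#include <cassert>
#include <cmath>

// A general multivector in Cl(2,0): A = s + x*e1 + y*e2 + b*e12.
struct MV2 { double s, x, y, b; };

// Geometric product, written out from the multiplication table
// (e1*e1 = e2*e2 = +1, e12*e12 = -1).
MV2 gp(MV2 const& a, MV2 const& c) {
    return {
        a.s*c.s + a.x*c.x + a.y*c.y - a.b*c.b,  // scalar
        a.s*c.x + a.x*c.s - a.y*c.b + a.b*c.y,  // e1
        a.s*c.y + a.y*c.s + a.x*c.b - a.b*c.x,  // e2
        a.s*c.b + a.b*c.s + a.x*c.y - a.y*c.x   // e12
    };
}

// ci(): negate everything except the scalar component.
MV2 ci(MV2 const& a) { return { a.s, -a.x, -a.y, -a.b }; }

// inv(a) = ci(a) / (a * ci(a)); the divisor is a pure scalar here.
MV2 inv(MV2 const& a) {
    MV2 c = ci(a);
    double d = gp(a, c).s;           // = s^2 - x^2 - y^2 + b^2
    assert(std::fabs(d) > 1e-12);    // d == 0 means a has no inverse
    return { c.s/d, c.x/d, c.y/d, c.b/d };
}
```

For example, with a = 1 + 2 e1 + 3 e2 + 4 e12 the scalar divisor is 1 - 4 - 9 + 16 = 4, and gp(a, inv(a)) comes back as the scalar 1.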

There are clearly many repeated sub-expressions, which would be

calculated accordingly.

The construction extends to 5D using another conjugation (the reverse),

and if done properly I suspect to 6D.

The total complexity seems to be about as low as one can get (after

mechanical optimisation), and does not require matrix inverse,

determinant etc. (though I guess that technically the denominator is the

determinant of an optimal real-matrix representation for those who want

to quibble).

Note that the scalar denominator being zero distinguishes non-invertible

from invertible elements. In the process of the calculation, many of

the components are inherently zero, which reduces the actual number of

scalar multiplications needed quite substantially.

I may be way behind the times, but if anyone (Martin?) is interested

(including about the component expressions), drop me a line.

Manfred

from martinbaker

date 2011-01-04 16:00:25 UTC

Hi Manfred,

Yes, I certainly am interested and it would also be good to improve this

web page.

It would be good to have definitions and formula/algorithms for all

types of inversions (conjugation, reversal, grade involution, outer

product inverse) and especially the inverse of the geometric product.

When you say that it works with any metric do you mean:

a) metrics where pure vector bases square to plus or minus one

(orthogonal)?

b) metrics where pure vector square is given by quadratic form?

c) most general metric where pure vector product is given by bilinear

form?

I am also interested to hear how you are doing your experiments.

Would it be alright if I put this on the website (linked to this page)?

Martin

from: manfredvw

date 2011-01-04 17:07:31 UTC

Hi Martin

I meant (a) - in particular that one has the freedom of choosing the

signature of the algebra (i.e. which dimensions are spacelike and which

are timelike). As I understand it, (b) and (c) are the same except that

the basis vectors are linear combinations of an orthogonal basis, and I

imagine it would be straightforward to use a dual basis to generalize to

those cases.

You are welcome to publish whatever I give you on your website. I am

having difficulty finding useful code on the web, and would be happy to

have the detail published so that others learning can benefit. Your

site is exceptional in that it goes through the useful explanations in

understandable detail, even though it is clearly a work in progress.

I'm not sure what you would like - a more detailed outline of the

mathematical result I have already given, or detailed source code.

My stuff is in the form of C++ templates, with the number of dimensions

and the signature as template parameters. I prefer this to your

approach, where you seem to name each element in a dimension-specific

class, but should have similar efficiency with most compilers.

Generalised multivectors are most easily represented using "bitmap

ordering" in this context - which is at odds with your naming each

component separately. For consistency, you might have to convert my

code, but this may result in loss of the freedom of choice of the

signature.
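To illustrate what "bitmap ordering" means, here is a tiny sketch (illustrative only): each bit of a component's index records whether the corresponding basis vector is a factor of the blade, so e0 = 0x1, e2 = 0x4, e02 = 0x5, and so on.

```cpp
#include <cassert>

// Portable popcount: clears the lowest set bit until none remain.
inline unsigned popcount(unsigned x) {
    unsigned n = 0;
    for (; x != 0; x &= x - 1) ++n;
    return n;
}

// The grade of a basis blade is simply its number of set bits.
inline unsigned grade(unsigned bitmap) { return popcount(bitmap); }

// Up to sign, the geometric product of two basis blades is their XOR:
// shared basis vectors square out to +/-1 and the remaining ones survive.
inline unsigned blade_product(unsigned a, unsigned b) { return a ^ b; }
```

For instance, e02 * e2 is blade_product(0x5, 0x4) = 0x1, i.e. ±e0, and grade(0x5) = 2.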

To give you a sense, my header file is shown below. I have also

included a snippet that calculates the 3D inverse written out from the

formulae I gave last time.

Manfred

//

// This class is intended for generality, and not for optimization. Hence,

// it does not make any use of "special forms" (pure vectors, rotors, blades

// etc.) that allow substantial optimizations. Yet, in the general

// multivector case it is hopefully not wasteful in processing time.

//

// A general guideline of defining the assignment operators internally to

// the class and non-assignment operators externally has been followed.

// This guideline has the following motivations:

// * Operators (assignments excluded) are intuitively uniform in terms of

// implicit conversions of parameters across all operators. In C++, this

// is not true of member functions (the first "parameter", namely the

// object, will not participate in implicit conversions other than

// upcasts), whereas the implicit casting rules apply uniformly to

// non-member function parameters, with the exception that with template

// functions implicit conversion of the parameters is more strongly

// constrained.

// * Defining friend functions of template classes is proving problematic

// (VC++ 2005), making it preferable to define operators in terms of public

// methods of the class.

// * Non-member template functions are tedious to instantiate explicitly

// for all anticipated template parameter combinations, so it makes sense

// to keep these short and define them in the header file.

//

// Author: Manfred von Willich

#ifndef GA_MULTIVECTOR_H

namespace Math {

namespace Geometric_algebra {

//

// The metric is pseudo-orthonormal with the signature set by the mask s.

// In s, bit i set indicates the signature is '-' (else '+') for ei.

//

template <unsigned n, unsigned s>

class Multivector sealed {

public:

// Multivector (Multivector const&); -- compiler generated

Multivector (double a = 0.0, unsigned index = 0);

~Multivector ();

// Assignment operators

// Multivector& operator= (Multivector const&); -- compiler generated

Multivector<n,s>& operator*= (double b); // scalar product

Multivector<n,s>& operator+= (Multivector<n,s> const& b); // multivector addition

Multivector<n,s>& operator-= (Multivector<n,s> const& b); // multivector subtraction

Multivector<n,s>& operator*= (Multivector<n,s> const& b); // geometric product

// Unary operators

Multivector<n,s> operator~ () const; // reverse conjugate (negate grades 2,3,6,7 etc.)

Multivector<n,s> operator! () const; // multiplicative inverse

private:

Multivector<n,s> conj_b () const; // clifford conjugate (negate grades 1,2,5,6 etc.)

Multivector<n,s> conj_c () const; // special conjugate

// Components are binary-indexed, e.g. 0:scalar, 5:gamma02

static unsigned const ga_dim = 1u << n;

double component [ga_dim];

}; // class Multivector

// The index is based on bit-ordering - one bit per vector dimension.

static unsigned const e = 0x0000u;

static unsigned const e0 = 0x0001u;

static unsigned const e1 = 0x0002u;

static unsigned const e01 = 0x0003u;

static unsigned const e2 = 0x0004u;

static unsigned const e02 = 0x0005u;

static unsigned const e12 = 0x0006u;

static unsigned const e012 = 0x0007u;

static unsigned const e3 = 0x0008u;

static unsigned const e03 = 0x0009u;

static unsigned const e13 = 0x000Au;

static unsigned const e013 = 0x000Bu;

static unsigned const e23 = 0x000Cu;

static unsigned const e023 = 0x000Du;

static unsigned const e123 = 0x000Eu;

static unsigned const e0123 = 0x000Fu;

static unsigned const e4 = 0x0010u;

static unsigned const e04 = 0x0011u;

static unsigned const e14 = 0x0012u;

static unsigned const e014 = 0x0013u;

static unsigned const e24 = 0x0014u;

static unsigned const e024 = 0x0015u;

static unsigned const e124 = 0x0016u;

static unsigned const e0124 = 0x0017u;

static unsigned const e34 = 0x0018u;

static unsigned const e034 = 0x0019u;

static unsigned const e134 = 0x001Au;

static unsigned const e0134 = 0x001Bu;

static unsigned const e234 = 0x001Cu;

static unsigned const e0234 = 0x001Du;

static unsigned const e1234 = 0x001Eu;

static unsigned const e01234 = 0x001Fu;

static unsigned const e5 = 0x0020u;

static unsigned const e05 = 0x0021u;

static unsigned const e15 = 0x0022u;

static unsigned const e015 = 0x0023u;

static unsigned const e25 = 0x0024u;

static unsigned const e025 = 0x0025u;

static unsigned const e125 = 0x0026u;

static unsigned const e0125 = 0x0027u;

static unsigned const e35 = 0x0028u;

static unsigned const e035 = 0x0029u;

static unsigned const e135 = 0x002Au;

static unsigned const e0135 = 0x002Bu;

static unsigned const e235 = 0x002Cu;

static unsigned const e0235 = 0x002Du;

static unsigned const e1235 = 0x002Eu;

static unsigned const e01235 = 0x002Fu;

static unsigned const e45 = 0x0030u;

static unsigned const e045 = 0x0031u;

static unsigned const e145 = 0x0032u;

static unsigned const e0145 = 0x0033u;

static unsigned const e245 = 0x0034u;

static unsigned const e0245 = 0x0035u;

static unsigned const e1245 = 0x0036u;

static unsigned const e01245 = 0x0037u;

static unsigned const e345 = 0x0038u;

static unsigned const e0345 = 0x0039u;

static unsigned const e1345 = 0x003Au;

static unsigned const e01345 = 0x003Bu;

static unsigned const e2345 = 0x003Cu;

static unsigned const e02345 = 0x003Du;

static unsigned const e12345 = 0x003Eu;

static unsigned const e012345 = 0x003Fu;

// Unary operators

template <unsigned n, unsigned s>

inline Multivector<n,s> operator+ (Multivector<n,s> const& a) { // identity

return a;

}

template <unsigned n, unsigned s>

inline Multivector<n,s> operator- (Multivector<n,s> const& a) { // negation

return Multivector<n,s>(0.0) -= a;

}

// Binary operators

template <unsigned n, unsigned s>

inline Multivector<n,s> operator* (Multivector<n,s> const& a, double b) {

return Multivector<n,s>(a) *= b;

}

template <unsigned n, unsigned s>

inline Multivector<n,s> operator* (double a, Multivector<n,s> const& b) {

return Multivector<n,s>(b) *= a;

}

template <unsigned n, unsigned s>

inline Multivector<n,s> operator+ (Multivector<n,s> const& a, Multivector<n,s> const& b) {

return Multivector<n,s>(a) += b;

}

template <unsigned n, unsigned s>

inline Multivector<n,s> operator- (Multivector<n,s> const& a, Multivector<n,s> const& b) {

return Multivector<n,s>(a) -= b;

}

template <unsigned n, unsigned s>

inline Multivector<n,s> operator* (Multivector<n,s> const& a, Multivector<n,s> const& b) {

return Multivector<n,s>(a) *= b;

}

// Right division: a/b

template <unsigned n, unsigned s>

inline Multivector<n,s> operator/ (Multivector<n,s> const& a, Multivector<n,s> const& b) {

return a * !b;

}

// Left division: a\b

template <unsigned n, unsigned s>

inline Multivector<n,s> operator% (Multivector<n,s> const& a, Multivector<n,s> const& b) {

return !a * b;

}

} // namespace Geometric_algebra

} // namespace Math

#define GA_MULTIVECTOR_H

#endif //GA_MULTIVECTOR_H

Inverse in 3D:

// Note: multiplying by product_sign() compiles to a unary + or - given constant parameters

// This allows use of the selected signature s.

double y_0

= (a.component[0] * a.component[0] * product_sign(s, 0, 0))

- (a.component[1] * a.component[1] * product_sign(s, 1, 1))

- (a.component[2] * a.component[2] * product_sign(s, 2, 2))

- (a.component[3] * a.component[3] * product_sign(s, 3, 3))

- (a.component[4] * a.component[4] * product_sign(s, 4, 4))

- (a.component[5] * a.component[5] * product_sign(s, 5, 5))

- (a.component[6] * a.component[6] * product_sign(s, 6, 6))

+ (a.component[7] * a.component[7] * product_sign(s, 7, 7));

double y_7

= (a.component[0] * a.component[7] * product_sign(s, 0, 7))

- (a.component[1] * a.component[6] * product_sign(s, 1, 6))

- (a.component[2] * a.component[5] * product_sign(s, 2, 5))

- (a.component[3] * a.component[4] * product_sign(s, 3, 4))

- (a.component[4] * a.component[3] * product_sign(s, 4, 3))

- (a.component[5] * a.component[2] * product_sign(s, 5, 2))

- (a.component[6] * a.component[1] * product_sign(s, 6, 1))

+ (a.component[7] * a.component[0] * product_sign(s, 7, 0));

// d = y * y_c

double d_0

= (y_0 * y_0 * product_sign(s, 0, 0))

- (y_7 * y_7 * product_sign(s, 7, 7));

// t = y_c * !d

double scaling = 1.0 / d_0;

double t_0 = +y_0 * scaling;

double t_7 = -y_7 * scaling;

// r = a_c * t

r.component[0]

= (+a.component[0] * t_0 * product_sign(s, 0, 0))

+ (+a.component[7] * t_7 * product_sign(s, 7, 7));

r.component[1]

= (-a.component[1] * t_0 * product_sign(s, 1, 0))

+ (-a.component[6] * t_7 * product_sign(s, 6, 7));

r.component[2]

= (-a.component[2] * t_0 * product_sign(s, 2, 0))

+ (-a.component[5] * t_7 * product_sign(s, 5, 7));

r.component[3]

= (-a.component[3] * t_0 * product_sign(s, 3, 0))

+ (-a.component[4] * t_7 * product_sign(s, 4, 7));

r.component[4]

= (-a.component[4] * t_0 * product_sign(s, 4, 0))

+ (-a.component[3] * t_7 * product_sign(s, 3, 7));

r.component[5]

= (-a.component[5] * t_0 * product_sign(s, 5, 0))

+ (-a.component[2] * t_7 * product_sign(s, 2, 7));

r.component[6]

= (-a.component[6] * t_0 * product_sign(s, 6, 0))

+ (-a.component[1] * t_7 * product_sign(s, 1, 7));

r.component[7]

= (+a.component[7] * t_0 * product_sign(s, 7, 0))

+ (+a.component[0] * t_7 * product_sign(s, 0, 7));

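product_sign itself is not reproduced above. As a sketch of how such a function can be implemented in the bitmap representation (this is an illustration, and may differ in detail from the actual function): the reordering sign counts the transpositions needed to bring the concatenated factors into canonical order, and each basis vector shared by the two blades contributes its square, +1 or -1 per the mask s.

```cpp
#include <cassert>

// Portable popcount: clears the lowest set bit until none remain.
inline unsigned popcount(unsigned x) {
    unsigned n = 0;
    for (; x != 0; x &= x - 1) ++n;
    return n;
}

// Sign of the geometric product of basis blades a and b (bitmap-indexed),
// for the pseudo-orthonormal metric given by the signature mask s.
inline int product_sign(unsigned s, unsigned a, unsigned b) {
    unsigned flips = 0;
    // Count pairs (i in a, j in b) with i > j: the transpositions needed
    // to sort the product into canonical (ascending) order.
    for (unsigned t = a >> 1; t != 0; t >>= 1)
        flips += popcount(t & b);
    // Each basis vector common to a and b squares to -1 iff its bit is set in s.
    flips += popcount(s & a & b);
    return (flips & 1u) ? -1 : +1;
}
```

As a check, product_sign(0, 2, 1) = -1 (e1 e0 = -e0 e1) and product_sign(1, 1, 1) = -1 (e0 squares to -1 when bit 0 of s is set).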

from martinbaker

date 2011-01-04 18:57:32 UTC

Manfred

> I meant (a) - in particular that one has the freedom of choosing the

> signature of the algebra (i.e. which dimensions are spacelike and which

> are timelike). As I understand it, (b) and (c) are the same except that

> the basis vectors are linear functions of the orthogonal case, and I

> imagine it would be straightforward to use a dual basis to generalize to

> those cases.

Well, for (b) and (c) the geometric product is simple to calculate for

pure vectors: it is just the inner product plus the outer product, where

the outer product is just the Grassmann product and the inner product is

given by a matrix representing the bilinear form, with rows and columns

corresponding to the bases of the vector space. However, the

higher-order components (bivector and above) are very messy to

calculate, involving the contraction inner product and a lot of highly

recursive code. So I suspect calculating the inverse geometric product

in the most general case might be very difficult.

It would be good to calculate the inverse geometric product up to (b)

because then it can be used for conformal and projective space which

have a lot of practical applications. I read somewhere that the reversal

can be used for this but I don't know how to prove it.

> You are welcome to publish whatever I give you on your website. I am

> having difficulty finding useful code on the web, and would be happy to

> have the detail published so that others learning can benefit. Your site

> is exceptional in that it goes through the useful explanations in

> understandable detail, even though it is clearly a work in progress.

Yes, you are right about it being a work in progress and I need all the

help I can get, I think it could be helpful to others to link a copy of

this thread (so I will do that) and, as you say, it may encourage others

to help.

> I'm not sure what you would like - a more detailed outline of the

> mathematical result I have already given, or detailed source code

>

> My stuff is in the form of C++ templates, with the number of dimensions and

> the signature as template parameters. I prefer this to your approach,

> where you seem to name each element in a dimension-specific class, but

> should have similar efficiency with most compilers. Generalised

> multivectors are most easily represented using "bitmap ordering" in this

> context - which is at odds with your naming each component

> separately. For consistency, you might have to convert my code, but this

> may result in loss of the freedom of choice of the signature.

It's a long time since I programmed in C++ (I have bad and distant

memories of trying to debug memory leaks) so the code probably would not

be of direct use to me, but it could be of use to others so I would be

happy to put it on the site. I programmed the general case of the

geometric product using an open source program called FriCAS (Axiom),

see this page:

https://www.euclideanspace.com/maths/standards/program/mycode/clifford/index.htm

I was lucky enough to get a lot of help from Dr Bertfried Fauser to do

this but he was very busy and I never really understood the various

types of inverse, how they relate and how to calculate them. So I guess

that's what I am looking for.

Sometime I would like to reorganise the Clifford Algebra part of the

website, when I originally did it, it seemed a good idea to have

separate pages for each dimension, but now that does not seem such a

good idea. So when I get a bit more inspiration about how to organise it

I think I will need a big reorganisation.

Martin

from: manfredvw

date 2011-01-04 21:45:43 UTC

Martin

> ... However

> the higher order component (bivector and above) are very messy to calculate,

> involving the contraction inner product and a lot of highly recursive code.

> So I suspect calculating the inverse geometric product in the most general case

> might be very difficult.

Thinking a little more closely relating to (c), you are probably right.

So I will retract my blithe statement in this case.

Yet in the case of (b) i.e. orthogonal basis with arbitrary but diagonal

quadratic form (given by the squares of the basis vectors), the product

of any pair of blades will simply have a scaling factor determined by

the signature, and would be provided by tweaking the signature (s in my

code) to a quadratic form (a series of n real constants). The only

other modification would be to the function product_sign in my code,

which would multiply together the necessary values in the quadratic form

to make a single real multiplier. But I must try this before I get too

confident.

> It would be good to calculate the inverse geometric product upto (b) because

> then it can be used for conformal and projective space which have a lot of practical

> applications. I read somewhere that the reversal can be used for this but I

> don't know how to prove it.

As I have indicated above supporting (b) might be straightforward. The

argument applies to all the operations in my code. Not that I

understand why the conformal and projective cases need a scaled basis

(quadratic form deviating from +1 and -1) - I would have thought any

basis can easily be changed to a case (a) type.

> Its a long time since I programmed in C++ (I have bad and distant memories of

> trying to debug memory leaks) so the code probably would not be of direct use

> to me, but it could be of use to others so I would be happy to put it on the

> site.

I'm an embedded programmer - I generally avoid the heap altogether. No

memory leaks! ;-)

> ... FriCAS (Axiom), see this page:

> https://www.euclideanspace.com/maths/standards/program/mycode/clifford/index.htm

I'm probably being obtuse - I see no readily accessible code, and I tend

to work at the nuts-and-bolts end of programs. So I get little idea of

the detail of the implementation.

> ... I never really understood the various types of inverse,

> how they relate and how to calculate them. So I guess that's what I am looking

> for.

Don't you mean "various types of conjugation"? There is only one

multiplicative inverse that I'm aware of, but there are numerous

conjugations - essentially changing the sign of selected components in a

multivector, at least with an orthogonal basis. As I'm finding, many

conjugations are needed for calculating the inverse. Negation (additive

inverse) and reversal are two examples of conjugations if defined this

way.

A really compact description of numerous concepts, operations,

conjugations etc.:

http://www.iancgbell.clara.net/maths/geoalg1.htm

I'm really rather illiterate when it comes to web site interactions etc.

I probably couldn't manage an FTP transfer, so for code transfer

you'll have to make a suggestion (e.g. email or pasting it into the

thread). I'd probably best develop some ideas first, but you may make

suggestions for what would be useful (e.g. some generalisation of

metric, remove templates and make a separate class for each number of

dimensions).

Manfred

from martinbaker

date 2011-01-05 11:53:08 UTC

Manfred,

> Yet in the case of (b) i.e. orthogonal basis with arbitrary but diagonal

> quadratic form (given by the squares of the basis vectors), the product of

> any pair of blades will simply have a scaling factor determined by the

> signature, and would be provided by tweaking the signature (s in my code)

> to a quadratic form (a series of n real constants). The only other

> modification would be to the function product_sign in my code, which would

> multiply together the necessary values in the quadratic form to make a

> single real multiplier. But I must try this before I get too confident.

I think that to transform case (b) into case (a) we have to rotate

rather than scale the bases.

If we take the null basis as required for conformal space for example,

if we have one space-like and one time-like basis and we rotate the

bases by 45 degrees we get two light-like bases (square to zero) so to

calculate the geometric product in this case we either have to:

* Use the contraction inner product and a lot of highly recursive code as for the general case.

* Or rotate the bases to get to (a), do the multiplication, rotate the bases back.

> I'm probably being obtuse - I see no readily accessible code, and I tend to

> work at the nuts-and-bolts end of programs. So I get little idea of the

> detail of the implementation.

FriCAS/Axiom has a language called SPAD which is not easy to learn; it's

not very well documented and it's not easy to work with (no IDE, poor

error messages). It is very powerful though, for example, the Clifford

algebra is defined over a field so it can be used to work with float

types (as an approximation to reals) but it can also work in terms of

symbolic values, so it should be able to derive formulae, I am hoping

this will be useful to help me improve the website.

> Don't you mean "various types of conjugation"? There is only one

> multiplicative inverse that I'm aware of, but there are numerous

> conjugations - essentially changing the sign of selected components in a

> multivector, at least with an orthogonal basis. As I'm finding, many

> conjugations are needed for calculating the inverse. Negation (additive

> inverse) and reversal are two examples of conjugations if defined this

> way.

>

> A really compact description of numerous concepts, operations, conjugations

> etc.:

> http://www.iancgbell.clara.net/maths/geoalg1.htm

I would have thought that each type of multiplication (geometric,

exterior and so on) would have its own inverse although, I agree, that

the geometric inverse is the main interest.

I've never been sure whether conjugation is defined first in terms of

inverting bases or in terms of a×a* = scalar; with complex conjugates

it's all the same. The way it's defined on Ian Bell's website seems a

good way to do it.

So I guess what I need to do is understand the reversal (reversing

conjugation) (ab)^ = (b^)(a^), when it is equivalent to geometric

inverse and when it has the signature:

(+,+,-,-, +,+,-,-, +,+,-,- .....)

> I'm really rather illiterate when it comes to web site interactions etc. I

> probably couldn't manage an FTP transfer, so for code transfer you'll have

> to make a suggestion (e.g. email or pasting it into the thread).

I could include a zip file with your code on my website but it would be

better if you put it on SourceForge or Github or something like that in

your own right. Then you could update when you want and get full credit

for your own work. It's a long time since I uploaded stuff to

SourceForge, I seem to remember that there is a bit of arcane stuff to

do with ssh keys and generating a public key, however, its not too bad

if you follow the instructions on SourceForge. There may be sites that

are easier to upload to? It might be worth looking at http://code.google.com/

although I've not tried it myself.

> I'd probably best develop some ideas first, but you may make suggestions for

> what would be useful (e.g. some generalisation of metric, remove templates

> and make a separate class for each number of dimensions).

I guess it depends on the purpose of the program, is it as a study tool

for the mathematical theory, in which case it needs to be as general as

possible, or is it to use practical applications such as realtime

raytracing in which case it needs to be more specific to a fixed number

of dimensions.

Martin

from brombo14

date 2011-01-05 13:09:36 UTC

With regard to geometric algebra software you should look at http://docs.sympy.org/dev/modules/galgebra/GA/GAsympy.html

For identifying if a multivector is a blade, see "Geometric Algebra for

Computer Science" by Dorst et al., starting on page 532 (section 21.5).

Finally, if the multivector is a spinor or rotor it is simple to

calculate the inverse. If A is a multivector and A*reverse(A) is a

scalar not equal to zero then A has a simple inverse. I do not know

whether A*A.reverse() not being a scalar implies that A does not have an inverse.

from: manfredvw

date 2011-01-05 13:52:18 UTC

Martin

> I think that to transform case (b) into case (a) we have to rotate rather than

> scale the bases.

>

> If we take the null basis as required for conformal space for example, if we

> have one space-like and one time-like basis and we rotate the bases by 45degrees

> we get two light-like bases (square to zero) so to calculate the geometric product

> in this case we either have to:

> * Use the contraction inner product and a lot of highly recursive code as for

> the general case.

> * Or rotate the bases to get to (a), do the multiplication, rotate the bases

> back.

What I was thinking of for (b) was not this. As I understand it, a null

basis is never orthogonal. I seem to have incorrectly reinterpreted

your definition of (b). So, yes: I agree with you, my software will not

handle it. This is not, as yet, a direction I'm looking at.

> I would have thought that each type of multiplication (geometric, exterior and

> so on) would have its own inverse although, I agree, that the geometric inverse

> is the main interest.

The exterior product, and all the other products that come to mind apart

from the geometric product, throw away information. Consequently they do

not have inverses - only the geometric product permits an inverse.

> I've never been sure whether the definition of conjugation is defined first

> in terms of inverting bases or a×a* = scalar, with complex conjugates its all

> the same, the way its defined on Ian Bells website seems a good way to do it.

Many other sources also refer to multiple types of conjugation in

geometric algebra. So I like the definition to include any set pattern

of sign changes on components (not necessarily by grade either - they

can vary within a grade). I regard them as tools for manipulation, not

fundamental operations.

> So I guess what I need to do is understand the reversal (reversing conjugation)

> (ab)^ = (b^)(a^), when it is equivalent to geometric inverse and when it has

> the signature:

> (+,+,-,-, +,+,-,-, +,+,-,- .....)

Reversal is defined as what is obtained when a multivector is expressed

as the sum of products of 1-vectors, and the vectors in these products

are then reversed in order. The reversal conjugation is when grades

2,3,6,7,... are negated, though I would not use the term "signature" in

this context. They are equivalent operations, hence the name.
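Incidentally, these grade-based patterns have simple closed forms; a small sketch (illustrative only, not taken from any particular implementation):

```cpp
#include <cassert>

// Sign applied to a grade-g component by each conjugation:
// grade involution negates odd grades:        + - + - + - ...
inline int grade_involution_sign(unsigned g) { return (g % 2) ? -1 : +1; }
// reversal negates grades 2,3,6,7,...:        + + - - + + - - ...
inline int reversal_sign(unsigned g)         { return ((g*(g-1)/2) % 2) ? -1 : +1; }
// Clifford conjugation negates grades 1,2,5,6,...: + - - + + - - + ...
inline int clifford_conj_sign(unsigned g)    { return ((g*(g+1)/2) % 2) ? -1 : +1; }
```

The reversal sign (-1)^(g(g-1)/2) is exactly what falls out of reversing the order of g anticommuting 1-vector factors.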

For a versor, reversal is the same as the geometric inverse except for a

scalar multiplier. This can be seen from the definition of a versor as

the product of any number of 1-vectors. Multiply it by its reversal,

and by associativity you end up with the product of the squares of all

the vectors, which is a scalar. If the versor is normalised, then they

are equivalent up to a sign. In case of confusion of the term

signature, this is independent of the signature of the algebra.

> I guess it depends on the purpose of the program, is it as a study tool for

> the mathematical theory, in which case it needs to be as general as possible,

> or is it to use practical applications such as realtime raytracing in which

> case it needs to be more specific to a fixed number of dimensions.

I don't intend to put together a project - this is simply a mathematical

investigation for me. I suggest the most appropriate route would be

for me to generate text (mainly the math) to fill in a few of your gaps,

enough for anyone to quickly put together the code in a language of

their choice, assuming they're using an orthonormal basis. It will be

simplest for me to post this text in this thread. You could note a

credit if you include it on your site if you wish.

Manfred

from: manfredvw

date 2011-01-05 14:50:21 UTC

Hi brombo14

> With regard to geometric algebra software you should look at

> http://docs.sympy.org/dev/modules/galgebra/GA/GAsympy.html

> For identifying if a multivector is a blade see "Geometric Algebra for

> Computer Science" by Dorst et al., starting on page 532 (section 21.5).

Thanks for the URL and the input. I see no mention of the geometric

inverse on it, other than with respect to the reverse of a rotor. I do

not yet have "Geometric Algebra for Computer Science", though I am

strongly considering acquiring it.

> Finally, if the multivector

> is a spinor or rotor it is simple to calculate the inverse. If A is a multivector

> and A*reverse(A) is a scalar not equal to zero then A has a simple inverse.

I am not convinced that arbitrary spinors have simple inverses in the

sense that A*A.reverse() will be a scalar. Spinors are generally

defined to be arbitrary even-graded multivectors. Versors (including

rotors) in general do have simple inverses of the form

A.reverse()*(1/(A*A.reverse())), where the denominator of the fraction

will be a scalar.

> I do not know if A*A.reverse() is not a scalar implies that A does not have

> an inverse.

A multivector always has an inverse unless it is singular (null). There

is a more complicated test for singularity. Only when the multivector

is of a special form will the reverse be closely related to the inverse.

I have been looking at calculating inverses of general multivectors,

and the general expression up to 4D is given at the start of this thread

- you will see that it gets quite a lot more complicated than reversal.

Manfred

from brombo14

date 2011-01-05 15:53:05 UTC

I was thinking of the physical definition of a spinor as rotation and

dilation (see page 106, equation 8.11, in "Clifford Algebra to Geometric

Calculus" by Hestenes and Sobczyk). For the inverse of a multivector see

Dorst page 530 (section 21.2).

from: manfredvw

date 2011-01-05 16:30:33 UTC

I agree about the spinor with the revised definition (as per "Clifford

Algebra to Geometric Calculus" sandwiching a vector between the spinor

and its reversal producing another vector). I think this is more

general than a dilating rotor, but the definition using the reversal

seems to guarantee that the reversal will be the inverse multiplied by a

scalar.

The poorly remembered glimpse I had of Dorst p530 gave the general

inverse in terms of the 2^n x 2^n matrix representation of the

multivector. This seems very inefficient, as I have found substantially

simpler computations for up to 5D. So perhaps I'm not reinventing the

wheel.

from: manfredvw

date: 2011-01-07 18:59:51 UTC

Well, here it is.

Formatting note: subscripting etc. has been lost. Infer from context: a trailing numeric string is a subscript, and all symbols are a single letter with an optional subscript. The "n" in "fn" is a subscript. I may have made errors inserting exponentiation (^).

Geometric Algebra: Computing the inverse of a general multivector

by Manfred von Willich

Synopsis

This is intended to be a brief outline of a way of computing general multivector inverses much more efficiently for small n than inverting a 2^n×2^n matrix. We do this by finding a function fn(A) such that d = A fn(A) is a scalar, and then computing A^−1 = fn(A)/d; d = 0 indicates that no inverse exists. The many zero components of the intermediate products are omitted from the calculation, reducing the amount of computation. This has been successful up to 5D, and may extend further with a suitable choice of conjugations.

Assumptions and notation

We assume orthogonal basis vectors ei squaring to ei^2 = si.

The variable n denotes the dimension of the generating vector space.

A = [a,a0,a1,a01,a2,…] = a + a0e0 + a1e1 + a01e0e1 + a2e2 + …, with components

listed in binary order.

‹X›: The scalar component of X (in this context all other components are

zero anyway).

X~: Sign changes in X by grade are + + − − + + (reversal conjugation).

X*: Sign changes in X by grade are + − − + + − (Clifford conjugation).

X$: Sign changes in X by grade are + ± ± − − ± (± for immaterial, applied only to zero components).

X%: Sign changes in X by grade are + − ± ± − +.

X#: Sign changes in X by grade are + ± ± ± ± −.
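A sketch of how these per-grade sign patterns could be applied to a multivector stored in the binary component order used below. The SIGNS table and the conjugate helper are my own transcription; the immaterial ± entries are written as +1 since they only ever multiply zero components:

```python
# Per-grade sign tables for the five conjugations, grades 0..5.
# Entries marked immaterial in the text are written as +1.
SIGNS = {
    '~': [+1, +1, -1, -1, +1, +1],   # reversal
    '*': [+1, -1, -1, +1, +1, -1],   # Clifford conjugation
    '$': [+1, +1, +1, -1, -1, +1],
    '%': [+1, -1, +1, +1, -1, +1],
    '#': [+1, +1, +1, +1, +1, -1],
}

def conjugate(A, name):
    # A is a list of 2^n components in binary order; the grade of
    # component i is the number of set bits in i.
    return [SIGNS[name][bin(i).count('1')] * a for i, a in enumerate(A)]

# Clifford conjugation of a 2D multivector [a, a0, a1, a01]:
# the grades are 0,1,1,2, so the signs are +,-,-,-.
print(conjugate([1.0, 2.0, 3.0, 4.0], '*'))   # [1.0, -2.0, -3.0, -4.0]
```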

Defining identities

d=‹Afn(A)› and A^−1=fn(A)/d

f0(A)=[1]

f1(A)=A*

f2(A)=A*

f3(A)=A*(AA*)$

f4(A)=A*(AA*)$ or f4(A)=A~(AA~)%

f5(A)=A~(AA~)%(AA~(AA~)%)#
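These defining identities can be checked numerically with a small brute-force geometric product over basis blades in binary order. This is a sketch with helper names of my own (reorder_sign, gp, conj, f, inverse), not the author's code:

```python
def reorder_sign(i, j):
    # Sign from moving the basis vectors of blade j past those of blade i.
    # Blades are bitmasks over the basis vectors, in the binary order above.
    i >>= 1
    s = 0
    while i:
        s += bin(i & j).count('1')
        i >>= 1
    return -1 if s & 1 else 1

def gp(A, B, sig):
    # Brute-force geometric product over signature sig (ei^2 = sig[i]).
    n = len(sig)
    C = [0.0] * (1 << n)
    for i, ai in enumerate(A):
        for j, bj in enumerate(B):
            s = reorder_sign(i, j)
            for k in range(n):
                if (i & j) >> k & 1:
                    s *= sig[k]
            C[i ^ j] += s * ai * bj
    return C

def conj(A, signs):
    # Apply a per-grade sign table (indexed by grade = popcount of the index).
    return [signs[bin(i).count('1')] * a for i, a in enumerate(A)]

REV  = [1, 1, -1, -1, 1, 1]    # ~  (reversal)
CLIF = [1, -1, -1, 1, 1, -1]   # *  (Clifford conjugation)
DLR  = [1, 1, 1, -1, -1, 1]    # $  (immaterial entries written as +)
PCT  = [1, -1, 1, 1, -1, 1]    # %
HSH  = [1, 1, 1, 1, 1, -1]     # #

def f(A, sig):
    # fn(A) as defined above, for n = len(sig) up to 5.
    n = len(sig)
    if n <= 2:
        return conj(A, CLIF)
    if n <= 4:
        Ac = conj(A, CLIF)
        return gp(Ac, conj(gp(A, Ac, sig), DLR), sig)
    Ar = conj(A, REV)
    Y = gp(A, Ar, sig)
    Z = gp(Y, conj(Y, PCT), sig)
    return gp(gp(Ar, conj(Y, PCT), sig), conj(Z, HSH), sig)

def inverse(A, sig):
    fA = f(A, sig)
    d = gp(A, fA, sig)[0]   # A fn(A) is a scalar; d == 0 means not invertible
    return [x / d for x in fA]
```

For test multivectors over assorted signatures up to n = 5, gp(A, inverse(A, sig), sig) should come back as [1, 0, 0, …] to rounding error.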

Derivation

n = 0

A = [a]

f0(A) = [1]

d = ‹Af0(A)›=a

A^−1 = f0(A) / d = [1] / d

n = 1

A = [a,a0]

f1(A) = A* = [a,−a0]

d = ‹Af1(A)› = ‹[a,a0][a,−a0]› = a^2−s0a0^2

A^−1 = f1(A) / d = [a,−a0] / d
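A quick numerical check of the 1D formula (the variable names and sample values are mine):

```python
# Check inv(A) = [a, -a0] / (a^2 - s0*a0^2) in 1D by direct multiplication.
# In Cl(1): (x + y e0)(u + v e0) = (xu + s0*y*v) + (xv + yu) e0.
a, a0, s0 = 2.0, 3.0, 1.0
d = a * a - s0 * a0 * a0          # here d = 4 - 9 = -5
inv = (a / d, -a0 / d)
prod = (a * inv[0] + s0 * a0 * inv[1], a * inv[1] + a0 * inv[0])
print(prod)                        # approximately (1.0, 0.0)
```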

n = 2

A = [a,a0,a1,a01]

f2(A) = A* = [a,−a0,−a1,−a01]

d = ‹Af2(A)› = ‹AA*› = ‹[a,a0,a1,a01][a,−a0,−a1,−a01]› =

a^2−s0a0^2−s1a1^2+s0s1a01^2

A^−1 = f2(A) / d = [a,−a0,−a1,−a01] / d

n = 3

A = [a,a0,a1,a01,a2,a02,a12,a012]

Z = AA* = [z,0,0,0,0,0,0,z012]

z = a^2−s0a0^2−s1a1^2+s0s1a01^2−s2a2^2+s0s2a02^2+s1s2a12^2−s0s1s2a012^2

z012 = 2(aa012−a0a12+a1a02−a01a2)

f3(A) = A*(AA*)$ = A*Z$

d = ‹Af3(A)› = ‹AA*Z$› = ‹ZZ$› =

‹[z,0,0,0,0,0,0,z012][z,0,0,0,0,0,0,−z012]› = z^2+s0s1s2z012^2

A^−1 = f3(A) / d = A*Z$ / d =

[a,−a0,−a1,−a01,−a2,−a02,−a12,a012][z,0,0,0,0,0,0,−z012] / d

= [az+s0s1s2a012z012, −a0z−s1s2a12z012, −a1z+s0s2a02z012,

−a01z+s2a2z012,

−a2z−s0s1a01z012, −a02z−s1a1z012, −a12z+s0a0z012,a012z−az012] / d

n = 4

A = [a,a0,a1,a01,a2,a02,a12,a012,a3,a03,a13,a013,a23,a023,a123,a0123]

Z = AA* = [z,0,0,0,0,0,0,z012,0,0,0,z013,0,z023,z123,z0123]

z =

a^2−s0a0^2−s1a1^2+s0s1a01^2−s2a2^2+s0s2a02^2+s1s2a12^2−s0s1s2a012^2−s3a3^2

+s0s3a03^2+s1s3a13^2−s0s1s3a013^2+s2s3a23^2−s0s2s3a023^2−s1s2s3a123^2+s0s1s2s3a0123^2

z012 = 2(aa012−a0a12+a1a02−a01a2)+2s3(−a3a0123+a03a123−a13a023+a013a23)

z013 = 2(aa013−a0a13+a1a03−a01a3)+2s2(a2a0123−a02a123+a12a023−a012a23)

z023 = 2(aa023−a0a23+a2a03−a02a3)+2s1(−a1a0123+a01a123−a12a013+a012a13)

z123 = 2(aa123−a1a23+a2a13−a12a3)+2s0(a0a0123−a01a023+a02a013−a012a03)

z0123 = 2(aa0123+a0a123−a1a023−a01a23+a2a013+a02a13−a12a03−a012a3)

f4(A) = A*(AA*)$ = A*Z$

d = ‹Af4(A)› = ‹ZZ$› =

z^2+s0s1s2z012^2+s0s1s3z013^2+s0s2s3z023^2+s1s2s3z123^2−s0s1s2s3z0123^2

A^−1 = f4(A) / d = A*Z$ / d

The detailed working is not shown.
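The sparsity claims for n = 4 can be spot-checked numerically: Z = AA* should be nonzero only at the six listed positions (indices {0, 7, 11, 13, 14, 15} in binary order, i.e. the scalar, the four grade-3 blades, and e0123), and ZZ$ should be purely scalar. A self-contained sketch with my own helper names:

```python
def reorder_sign(i, j):
    # Sign from moving the basis vectors of blade j past those of blade i.
    i >>= 1
    s = 0
    while i:
        s += bin(i & j).count('1')
        i >>= 1
    return -1 if s & 1 else 1

def gp(A, B, sig):
    # Brute-force geometric product over signature sig (ei^2 = sig[i]).
    n = len(sig)
    C = [0.0] * (1 << n)
    for i, ai in enumerate(A):
        for j, bj in enumerate(B):
            s = reorder_sign(i, j)
            for k in range(n):
                if (i & j) >> k & 1:
                    s *= sig[k]
            C[i ^ j] += s * ai * bj
    return C

sig = (1.0, -1.0, 1.0, -1.0)                 # an arbitrary mixed signature
A = [0.1 * (i + 1) for i in range(16)]       # a dense test multivector
CLIF = [1, -1, -1, 1, 1]                     # Clifford conjugation, grades 0..4
DLR  = [1, 1, 1, -1, -1]                     # the $ conjugation, grades 0..4
Ac = [CLIF[bin(i).count('1')] * a for i, a in enumerate(A)]
Z = gp(A, Ac, sig)
nonzero = {i for i, z in enumerate(Z) if abs(z) > 1e-9}
print(nonzero <= {0, 7, 11, 13, 14, 15})     # True
Zd = [DLR[bin(i).count('1')] * z for i, z in enumerate(Z)]
ZZd = gp(Z, Zd, sig)
print(all(abs(x) < 1e-9 for x in ZZd[1:]))   # True
```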

n = 5

A = [a,a0,a1,a01,a2,…,a01234]

Y = AA~ (has 12 of 32 nonzero components)

Z = YY% (has 2 of 32 nonzero components)

f5(A) = A~(AA~)%(AA~(AA~)%)# = A~Y%Z#

d = ‹Af5(A)› = ‹ZZ#›

A^−1 = f5(A) / d = A~Y%Z# / d

The detailed working is not shown.

Resulting formulae for computation

Note: in the following, the number of multiplications may be reduced

further by factorizing subexpressions, but this depends on the signs of

si.

0D

t = 1 / a

A^−1 = [t]

1D

t = 1 / (a^2−s0a0^2)

A^−1 = [ta,−ta0]

2D

t = 1 / (a^2−s0a0^2−s1a1^2+s0s1a01^2)

A^−1 = [ta,−ta0,−ta1,−ta01]

3D

z = a^2−s0a0^2−s1a1^2+s0s1a01^2−s2a2^2+s0s2a02^2+s1s2a12^2−s0s1s2a012^2

z012 = 2(aa012−a0a12+a1a02−a01a2)

m = 1 / (z^2+s0s1s2z012^2)

t = mz

t012 = mz012

A^−1 = [at+s0s1s2a012t012, −a0t−s1s2a12t012, −a1t+s0s2a02t012,
−a01t+s2a2t012, −a2t−s0s1a01t012, −a02t−s1a1t012, −a12t+s0a0t012,
a012t−at012]
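A sketch of the 3D computation as a function, checked against a direct geometric product. All names (inv3, gp3, reorder_sign) are mine; note that every component uses the scaled values t and t012:

```python
# 3D inverse via the scalar formulas above (t = m*z, t012 = m*z012),
# components in binary order [1, e0, e1, e01, e2, e02, e12, e012].

def inv3(A, s0, s1, s2):
    a, a0, a1, a01, a2, a02, a12, a012 = A
    z = (a*a - s0*a0*a0 - s1*a1*a1 + s0*s1*a01*a01 - s2*a2*a2
         + s0*s2*a02*a02 + s1*s2*a12*a12 - s0*s1*s2*a012*a012)
    z012 = 2 * (a*a012 - a0*a12 + a1*a02 - a01*a2)
    m = 1.0 / (z*z + s0*s1*s2*z012*z012)
    t, t012 = m * z, m * z012
    return [a*t + s0*s1*s2*a012*t012,
            -a0*t - s1*s2*a12*t012,
            -a1*t + s0*s2*a02*t012,
            -a01*t + s2*a2*t012,
            -a2*t - s0*s1*a01*t012,
            -a02*t - s1*a1*t012,
            -a12*t + s0*a0*t012,
            a012*t - a*t012]

# Brute-force geometric product in Cl(s0, s1, s2) to check the result.
def reorder_sign(i, j):
    i >>= 1
    s = 0
    while i:
        s += bin(i & j).count('1')
        i >>= 1
    return -1 if s & 1 else 1

def gp3(A, B, sig):
    C = [0.0] * 8
    for i, ai in enumerate(A):
        for j, bj in enumerate(B):
            s = reorder_sign(i, j)
            for k in range(3):
                if (i & j) >> k & 1:
                    s *= sig[k]
            C[i ^ j] += s * ai * bj
    return C

sig = (1.0, 1.0, -1.0)
A = [0.3, -1.2, 0.5, 0.7, -0.4, 1.1, -0.6, 0.2]
P = gp3(A, inv3(A, *sig), sig)   # approximately [1, 0, 0, 0, 0, 0, 0, 0]
```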

4D & 5D: Not shown

Complexity

We count only the number of multiplications, excluding by a constant and

divide by the number of multivector components to get the

multiplications per component.

n=0: 0/1 = 0.0

n=1: 4/2 = 2.0

n=2: 8/4 = 2.0

n=3: 32/8 = 4.0

n=4: 164/16 = 10.25

n=5: 932/32 ≈ 29.1 (estimate)