Maths - Tensor Indices

One of the difficult things to understand about tensors is the terminology, especially that related to the indices. Here we introduce contravariant and covariant tensor indices.

We have already seen how the rank and dimension of tensors relate to their indices.

We now see that it is useful to have two types of indices: contravariant (written as superscripts) and covariant (written as subscripts).

Vectors and Covectors

Usually a vector is denoted as a column like this:

$$\begin{pmatrix} a \\ b \\ c \end{pmatrix}$$

This can represent, say, a position relative to the origin at x=a, y=b, z=c.

Sometimes we might write this as (a, b, c) just to make it more convenient to write on the page.

However, what is the meaning of a true row vector like this?

$$\begin{pmatrix} a & b & c \end{pmatrix}$$

It would take the values of x, y and z and give a scalar with the following value:

x*a + y*b + z*c

We can express these in tensor terminology as follows. For a vector we use superscripts to denote the elements:

$$\begin{pmatrix} t^1 \\ t^2 \\ t^3 \end{pmatrix}$$

and for a covector we can use subscripts:

$$\begin{pmatrix} t_1 & t_2 & t_3 \end{pmatrix}$$

which gives a scalar value of:

$s = \sum_i t_i e^i$

where $t_i$ are the components of the covector and $e^i$ are the components of the vector it acts on.

Matrix as a Tensor

If we combine vectors and covectors we can write a matrix as a tensor:

$$\begin{pmatrix} t^1{}_1 & t^1{}_2 & t^1{}_3 \\ t^2{}_1 & t^2{}_2 & t^2{}_3 \\ t^3{}_1 & t^3{}_2 & t^3{}_3 \end{pmatrix}$$

with the superscripts denoting the rows and the subscripts denoting the columns. A matrix with one contravariant index and one covariant index is known as a 'linear operator'. It is possible to define other matrices, with two contravariant or two covariant indices, but for now, back to the linear operator:

$$\begin{pmatrix} p'^1 \\ p'^2 \\ p'^3 \end{pmatrix} = \begin{pmatrix} t^1{}_1 & t^1{}_2 & t^1{}_3 \\ t^2{}_1 & t^2{}_2 & t^2{}_3 \\ t^3{}_1 & t^3{}_2 & t^3{}_3 \end{pmatrix} \begin{pmatrix} p^1 \\ p^2 \\ p^3 \end{pmatrix}$$

This is equivalent to these linear equations:

$p'^1 = t^1{}_1 p^1 + t^1{}_2 p^2 + t^1{}_3 p^3$
$p'^2 = t^2{}_1 p^1 + t^2{}_2 p^2 + t^2{}_3 p^3$
$p'^3 = t^3{}_1 p^1 + t^3{}_2 p^2 + t^3{}_3 p^3$

Remember that, in this context, superscripts are treated as indices and not exponents. The above equations can be represented as:

$p'^1 = \sum_i t^1{}_i p^i$
$p'^2 = \sum_i t^2{}_i p^i$
$p'^3 = \sum_i t^3{}_i p^i$

where 'i' is summed over the values 1, 2 and 3. In general, where an index is repeated it is summed over its range, in this case 3 dimensions.

These three equations can be combined into one equation:

$p'^k = \sum_i t^k{}_i p^i$

This can be interpreted as follows: the repeated index 'i' is summed in each equation, in this case over the values 1, 2 and 3. The remaining (free) superscript index 'k' labels the separate equations.

Einstein's Summation Convention

A repeated index on one side of the equation indicates summation. So we can remove the summation symbol ∑ in the above equations and it will be implied.

So the equation:

$p'^k = t^k{}_i p^i$

represents the complete matrix equation above.

Basis

A vector 'a' can be defined as a linear combination of a set of basis vectors: $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 \dots$

This can be written as a vector equation:

$\mathbf{a} = a^1 \mathbf{e}_1 + a^2 \mathbf{e}_2 + a^3 \mathbf{e}_3 + \dots$

or in terms of linear equations:

$$\begin{pmatrix} a^i \\ a^j \\ a^k \end{pmatrix} = a^1 \begin{pmatrix} (e_1)^i \\ (e_1)^j \\ (e_1)^k \end{pmatrix} + a^2 \begin{pmatrix} (e_2)^i \\ (e_2)^j \\ (e_2)^k \end{pmatrix} + a^3 \begin{pmatrix} (e_3)^i \\ (e_3)^j \\ (e_3)^k \end{pmatrix}$$

Don't forget: superscripts are indices in this context, not exponents.

So the vector a is represented as a linear combination of the basis vectors. These basis vectors need to be linearly independent of each other. It can provide simplifications if the basis vectors are orthogonal (mutually perpendicular), although this is not a requirement. We will treat the orthogonal and non-orthogonal cases separately. A numerical sketch of finding the components follows.

Dual Basis

We now define vector 'a' in terms of an alternative set of basis vectors: $\mathbf{e}^1, \mathbf{e}^2, \mathbf{e}^3 \dots$ which are the dual of the above basis.

This can be written as a vector equation:

$\mathbf{a} = a_1 \mathbf{e}^1 + a_2 \mathbf{e}^2 + a_3 \mathbf{e}^3 + \dots$

or in terms of linear equations:

$a^i = a_1 (e^1)^i + a_2 (e^2)^i + a_3 (e^3)^i$
$a^j = a_1 (e^1)^j + a_2 (e^2)^j + a_3 (e^3)^j$
$a^k = a_1 (e^1)^k + a_2 (e^2)^k + a_3 (e^3)^k$

The basis vectors and the dual basis vectors are related by:

$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i{}_j$

where

$\delta^i{}_j$ = Kronecker delta (the identity element: 1 if i = j, 0 otherwise)

We can calculate the dual basis from:

$V$ = volume of the parallelepiped = $\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)$

so:

$\mathbf{e}^1 = \frac{1}{V} (\mathbf{e}_2 \times \mathbf{e}_3)$

and so on cyclically for the other dual basis vectors: $\mathbf{e}^2 = \frac{1}{V} (\mathbf{e}_3 \times \mathbf{e}_1)$ and $\mathbf{e}^3 = \frac{1}{V} (\mathbf{e}_1 \times \mathbf{e}_2)$.

Example 1

If the basis vectors are:

$$\mathbf{e}_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad \mathbf{e}_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad \mathbf{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

then what are the dual basis vectors?

$V = \mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3) = 1$

using $\mathbf{e}^1 = \frac{1}{V} (\mathbf{e}_2 \times \mathbf{e}_3)$ and its cyclic counterparts we get:

$\mathbf{e}^1 = \mathbf{e}_2 \times \mathbf{e}_3 = (1, 0, 0)$

$\mathbf{e}^2 = \mathbf{e}_3 \times \mathbf{e}_1 = (0, 1, 0)$

$\mathbf{e}^3 = \mathbf{e}_1 \times \mathbf{e}_2 = (0, 0, 1)$

So for an orthonormal basis the dual basis coincides with the original basis.

Example 2

If the basis vectors are:

$$\mathbf{e}_1 = \begin{pmatrix} 0.7071 \\ 0.7071 \\ 0 \end{pmatrix}, \quad \mathbf{e}_2 = \begin{pmatrix} -0.7071 \\ 0.7071 \\ 0 \end{pmatrix}, \quad \mathbf{e}_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}$$

then what are the dual basis vectors?

$V = \mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3) = 1$

using $\mathbf{e}^1 = \frac{1}{V} (\mathbf{e}_2 \times \mathbf{e}_3)$ and its cyclic counterparts we get:

$\mathbf{e}^1 = \mathbf{e}_2 \times \mathbf{e}_3 = (0.7071, 0.7071, 0)$

$\mathbf{e}^2 = \mathbf{e}_3 \times \mathbf{e}_1 = (-0.7071, 0.7071, 0)$

$\mathbf{e}^3 = \mathbf{e}_1 \times \mathbf{e}_2 = (0, 0, 1)$

Components of a Tensor

These methods define the tensor in terms of its components (the elements of the vector, matrix, etc.). It helps if we have a single algebraic expression from which all the component values can be determined, and it is most convenient when this can be expressed in terms of a dot product.

Transformations and Vectors

If x is a physical vector then it can be represented by the following expressions:

$\mathbf{x} = x^i \mathbf{e}_i = x_i \mathbf{e}^i = x'^i \mathbf{e}'_i = x'_i \mathbf{e}'^i$

where $\mathbf{e}_i$ is a basis, $\mathbf{e}^i$ is its dual basis, and $\mathbf{e}'_i$, $\mathbf{e}'^i$ are a second (primed) basis and its dual.

We can invert these expansions, using the two dual or reciprocal bases, to extract the components:

$\mathbf{x} \cdot \mathbf{e}^i = x^i$

$\mathbf{x} \cdot \mathbf{e}_i = x_i$

So the dual basis $\mathbf{e}^i$ is, in general, a different basis from $\mathbf{e}_i$.

When we are using the dual frame then:

$x_i = g_{ij} x^j$

$x^i = g^{ij} x_j$

$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{pmatrix} \begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix}$$

where

$g$ is known as the Gram matrix, with entries $g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$; it converts a vector's contravariant components to its covariant (reciprocal) components. If its determinant is nonzero then the basis vectors are linearly independent.

Contravariant tensor

If the basis vectors are transformed according to the relation:

$\mathbf{e}_i = t^j{}_i \mathbf{e}'_j$

And the components $x^i$ of a vector $\mathbf{x}$ are transformed according to the relation:

$x^i = t'^i{}_j x'^j$ (where $t'$ denotes the inverse of $t$)

then the index 'i' is contravariant; that is, the components transform in the opposite way to the bases.

The tangent of a differentiable function is a contravariant vector.

If we take the example of a vector field, its components transform as:

$T'^i = T^r \dfrac{\partial x'^i}{\partial x^r}$

$= \sum_r T^r \dfrac{\partial x'^i}{\partial x^r}$

$= T^1 \dfrac{\partial x'^i}{\partial x^1} + T^2 \dfrac{\partial x'^i}{\partial x^2} + T^3 \dfrac{\partial x'^i}{\partial x^3} + \dots$

Example: take the mapping:

x' = x*cos(θ) - y*sin(θ)
y' = x*sin(θ) + y*cos(θ)

and put this in tensor notation:

$x'^0 = x^0 \cos(\theta) - x^1 \sin(\theta)$
$x'^1 = x^0 \sin(\theta) + x^1 \cos(\theta)$

so

$T'^i = T^r \dfrac{\partial x'^i}{\partial x^r}$ gives:

$T'^0 = T^0 \cos(\theta) - T^1 \sin(\theta)$
$T'^1 = T^0 \sin(\theta) + T^1 \cos(\theta)$

That is, the components of a contravariant vector transform with the same rotation as the coordinates.

Covariant tensor

If the basis vectors are transformed according to the relation:

$\mathbf{e}_i = t^j{}_i \mathbf{e}'_j$

And the components $x_i$ of a vector $\mathbf{x}$ are transformed according to the relation:

$x_i = t^j{}_i x'_j$

then the index 'i' is covariant; that is, the components transform in the same way as the bases.

The gradient of a differentiable function is a covariant vector.

$T'_i = T_r \dfrac{\partial x^r}{\partial x'^i}$

 

