
Tensors and Multilinear Algebra

Tensors generalize scalars, vectors, and linear operators into a single framework. While elementary linear algebra focuses on linear maps between two vector spaces, multilinear algebra treats functions that depend linearly on several variables simultaneously. This framework is essential for general relativity, continuum mechanics, and quantum information theory.

1. Multilinear Maps

Let $V_1, \dots, V_k$ and $W$ be vector spaces over a field $F$. A map $T : V_1 \times \cdots \times V_k \to W$ is multilinear if it satisfies the linearity condition in each slot independently. For any $\alpha, \beta \in F$ and $v_i, v_i' \in V_i$:

$$T(v_1, \dots, \alpha v_i + \beta v_i', \dots, v_k) = \alpha\, T(v_1, \dots, v_i, \dots, v_k) + \beta\, T(v_1, \dots, v_i', \dots, v_k)$$

The set of all such multilinear maps forms a vector space, denoted $L(V_1, \dots, V_k; W)$. When $W = F$, these are called multilinear forms.
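
As a quick numerical illustration (a minimal sketch; the bilinear form $B(u, v) = u^\top M v$ and the matrix M below are arbitrary choices, not part of the definition), the following verifies linearity in the first slot:

import numpy as np

rng = np.random.default_rng(0)
M = rng.random((3, 3))           # an arbitrary matrix defining B(u, v) = u^T M v
B = lambda u, v: u @ M @ v       # a bilinear map B: V x V -> F

u1, u2, v = rng.random(3), rng.random(3), rng.random(3)
a, b = 2.0, -1.5

# Linearity in the first slot: B(a*u1 + b*u2, v) == a*B(u1, v) + b*B(u2, v)
assert np.isclose(B(a * u1 + b * u2, v), a * B(u1, v) + b * B(u2, v))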

2. Formal Construction of the Tensor Product

The tensor product $V \otimes W$ is the “friendliest” space that can represent all bilinear maps from $V \times W$. Its construction ensures that multilinear maps can be studied purely through linear tools.

The Free Vector Space

Consider the Cartesian product $V \times W$. We construct the free vector space $F(V \times W)$, which consists of all finite formal linear combinations of ordered pairs $(v, w)$. This space is massive: it treats every pair $(v, w)$ as a linearly independent basis vector.

Quotienting by Bilinear Relations

To enforce bilinearity, we define a subspace $R \subset F(V \times W)$ spanned by the following relations for all $v, v' \in V$, $w, w' \in W$, and $c \in F$:

$$(v + v',\, w) - (v, w) - (v', w), \qquad (v,\, w + w') - (v, w) - (v, w'),$$
$$(cv,\, w) - c\,(v, w), \qquad (v,\, cw) - c\,(v, w)$$

The tensor product is the quotient space:

$$V \otimes W := F(V \times W)\, /\, R$$

The equivalence class of $(v, w)$ in this quotient is denoted by the tensor product symbol: $v \otimes w$.
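
In coordinates, the class $v \otimes w$ is realized by the outer product of component arrays, and the relations above become matrix identities. A minimal sketch (the vectors are illustrative):

import numpy as np

v, v2 = np.array([1.0, 2.0]), np.array([0.5, -1.0])
w = np.array([3.0, 4.0])
c = 2.0

# (v + v', w) ~ (v, w) + (v', w): additivity in the first slot
assert np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w))
# (c v, w) ~ c (v, w): scalars pass through the tensor product
assert np.allclose(np.outer(c * v, w), c * np.outer(v, w))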

3. The Universal Property

The utility of the tensor product is captured by its Universal Property: for any bilinear map $B : V \times W \to Z$, there exists a unique linear map $\tilde{B} : V \otimes W \to Z$ such that $B$ factors through the canonical bilinear map $\otimes : V \times W \to V \otimes W$, $(v, w) \mapsto v \otimes w$:

$$B = \tilde{B} \circ \otimes$$

In other words, $B(v, w) = \tilde{B}(v \otimes w)$. This property allows us to replace the non-linear (“bilinear”) nature of $B$ with the strictly linear map $\tilde{B}$.
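
A concrete check of this factorization (a sketch; the bilinear map $B(v, w) = v^\top M w$ and the linear map $\tilde{B}(X) = \sum_{ij} M_{ij} X_{ij}$ are illustrative choices):

import numpy as np

rng = np.random.default_rng(1)
M = rng.random((3, 3))

B = lambda v, w: v @ M @ w              # bilinear map B: V x W -> F
B_tilde = lambda X: np.tensordot(M, X)  # linear map on V (x) W, i.e. on rank-2 arrays

v, w = rng.random(3), rng.random(3)
# B factors through the tensor product: B(v, w) = B~(v (x) w)
assert np.isclose(B(v, w), B_tilde(np.outer(v, w)))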

4. Tensor Components and (r, s) Notation

A tensor of type $(r, s)$ is an element of the tensor product of $r$ copies of $V$ and $s$ copies of the dual space $V^*$:

$$T \in \underbrace{V \otimes \cdots \otimes V}_{r \text{ copies}} \otimes \underbrace{V^* \otimes \cdots \otimes V^*}_{s \text{ copies}}$$

Using the Einstein summation convention, where repeated indices (one upper, one lower) imply summation, a tensor is expressed in components relative to a basis $\{e_i\}$ and its dual $\{e^j\}$:

$$T = T^{i_1 \cdots i_r}{}_{j_1 \cdots j_s}\; e_{i_1} \otimes \cdots \otimes e_{i_r} \otimes e^{j_1} \otimes \cdots \otimes e^{j_s}$$

  • Contravariant indices ($i_1, \dots, i_r$): Upper indices corresponding to the vector space $V$.
  • Covariant indices ($j_1, \dots, j_s$): Lower indices corresponding to the dual space $V^*$.
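
To make the component expansion concrete, the sketch below reassembles a $(1, 1)$ tensor from its components and the standard basis (all arrays are illustrative):

import numpy as np

T = np.arange(9.0).reshape(3, 3)   # components T^i_j in the standard basis
e = np.eye(3)                      # e[i] is the basis vector e_i (dual basis coincides here)

# T = T^i_j e_i (x) e^j -- summing the outer products reproduces the component array
T_rebuilt = np.einsum('ij,ia,jb->ab', T, e, e)
assert np.allclose(T_rebuilt, T)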

5. Change of Basis and Transformation Laws

How does the component array change when we pick a new basis? If the new basis is $\tilde{e}_j = A^i{}_j\, e_i$, where $A$ is the transition matrix (with inverse $A^{-1}$), then:

  1. Contravariant components (vectors) transform with the inverse matrix: $\tilde{V}^i = (A^{-1})^i{}_j\, V^j$
  2. Covariant components (covectors) transform with the matrix: $\tilde{\omega}_i = A^j{}_i\, \omega_j$

An $(r, s)$ tensor transforms as the product of $r$ inverse transformations and $s$ forward transformations.
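
The sketch below (with a hypothetical invertible transition matrix A) verifies that the scalar pairing $\omega_i V^i$ is invariant under a change of basis:

import numpy as np

rng = np.random.default_rng(2)
A = rng.random((3, 3)) + 3 * np.eye(3)  # transition matrix, shifted to guarantee invertibility
A_inv = np.linalg.inv(A)

V = rng.random(3)   # contravariant components V^j
w = rng.random(3)   # covariant components w_j

V_new = A_inv @ V   # V'^i = (A^{-1})^i_j V^j
w_new = A.T @ w     # w'_i = A^j_i w_j

# The pairing w_i V^i is basis-independent
assert np.isclose(w @ V, w_new @ V_new)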

6. The Metric Tensor: Index Gymnastics

On an inner product space, the metric tensor $g$ is a symmetric $(0, 2)$ tensor that defines distances and angles:

$$g(u, v) = \langle u, v \rangle = g_{ij}\, u^i v^j$$

The metric provides a canonical isomorphism between $V$ and $V^*$, allowing us to “raise” or “lower” indices.

  • Lowering: Given $V^i$, the covariant version is $V_j = g_{ji}\, V^i$.
  • Raising: Given $\omega_j$, the contravariant version is $\omega^i = g^{ij}\, \omega_j$, where $g^{ij}$ is the matrix inverse of $g_{ij}$.
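
As a sketch with the Minkowski metric (an illustrative non-Euclidean choice), lowering and then raising an index recovers the original components:

import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])         # metric components g_ij
g_inv = np.linalg.inv(g)                   # inverse metric g^ij

V = np.array([2.0, 1.0, 0.0, -1.0])        # contravariant components V^i

V_low = np.einsum('ji,i->j', g, V)         # lowering: V_j = g_ji V^i
V_up = np.einsum('ij,j->i', g_inv, V_low)  # raising:  V^i = g^ij V_j

assert np.allclose(V_up, V)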

7. Symmetry, Antisymmetry, and the Wedge Product

Tensors can possess symmetry properties under the permutation of their arguments.

  • Symmetric Tensors: unchanged under swapping any two arguments, $T(\dots, u, \dots, v, \dots) = T(\dots, v, \dots, u, \dots)$; in components, $T_{ij} = T_{ji}$. These form the symmetric algebra $\mathrm{Sym}(V)$.
  • Alternating Tensors: change sign under swapping any two arguments, $T(\dots, u, \dots, v, \dots) = -T(\dots, v, \dots, u, \dots)$; in components, $T_{ij} = -T_{ji}$. These form the exterior algebra $\Lambda(V)$.

The wedge product ($\wedge$) creates an alternating tensor from two vectors:

$$u \wedge v = u \otimes v - v \otimes u$$

This is the foundation of differential forms and integration on manifolds.
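
A minimal sketch of the wedge product of two vectors as an antisymmetrized outer product (using the convention above, with no factor of 1/2):

import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 1.0])

wedge = np.outer(u, v) - np.outer(v, u)    # u ^ v = u (x) v - v (x) u

assert np.allclose(wedge, -wedge.T)        # antisymmetric under index swap
assert np.allclose(np.diag(wedge), 0.0)    # diagonal vanishes, a consequence of antisymmetry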

8. Tensor Contraction

Contraction is the operation of summing over an upper and lower index pair. For a $(1, 1)$ tensor $T^i{}_j$, the contraction is simply the trace:

$$C = T^i{}_i = T^1{}_1 + T^2{}_2 + \cdots + T^n{}_n$$

This operation reduces a tensor of type $(r, s)$ to type $(r-1, s-1)$. Contraction is basis-independent and represents a “projection” of the tensor’s multilinear capacity.
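
The basis-independence of contraction can be checked directly: the trace of $T^i{}_j$ is unchanged under the $(1, 1)$ transformation law $T' = A^{-1} T A$ (a sketch with a hypothetical transition matrix):

import numpy as np

rng = np.random.default_rng(3)
T = rng.random((4, 4))                   # components T^i_j
A = rng.random((4, 4)) + 4 * np.eye(4)   # invertible transition matrix

T_new = np.linalg.inv(A) @ T @ A         # transformed components T'^i_j
assert np.isclose(np.trace(T), np.trace(T_new))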

Python: Efficient Tensor Operations with einsum

The numpy.einsum function allows for a readable and highly efficient implementation of the Einstein summation convention for tensor products and contractions.

import numpy as np

# 1. Setup: Metric tensor (Euclidean) and random tensors
g = np.eye(3) # g_ij
A = np.random.rand(3, 3) # A^ij (Contravariant rank 2)
B = np.random.rand(3, 3) # B_jk (Covariant rank 2)

# 2. Matrix Multiplication via Einstein Summation: C^i_k = A^ij B_jk
# 'ij,jk->ik' means we sum over 'j'
C = np.einsum('ij,jk->ik', A, B)

# 3. Contraction (Trace): T = A^ii
trace_A = np.einsum('ii', A)

# 4. Raising an index: B^ik = g^ij B_jk
g_inv = np.linalg.inv(g) # g^ij
B_raised = np.einsum('ij,jk->ik', g_inv, B)

# 5. Inner product: <u, v> = g_ij u^i v^j
u = np.array([1, 0, 1])
v = np.array([0, 2, 1])
inner_prod = np.einsum('ij,i,j', g, u, v)

print(f"Matrix Product (rank 2):\n{C}")
print(f"Trace: {trace_A:.4f}")
print(f"Inner Product: {inner_prod}")

Conceptual Check

Why is the tensor product constructed via a quotient of a free vector space?

Conceptual Check

What is the result of contracting the metric tensor components $g_{ij}$ with a contravariant vector $V^j$?

Conceptual Check

In the notation of an (r, s) tensor, what do the values r and s represent?