Last time we found the relationship

\[ \left( \vec \omega \times \right) \vec r = \left( -A(t)^{-1} \dot A(t) \right) \vec r \]

where $$A(t)$$ is the transformation (attitude) matrix for a time-varying transformation from a fixed inertial frame $$\{ \hat \imath, \hat \jmath, \hat k \}$$ to a rotating frame $$\{\hat I,\hat J, \hat K\}$$, and $$\vec \omega$$ is the instantaneous angular velocity of the rotating frame expressed in the fixed frame. As I argued earlier, the angular velocity is difficult to find directly, and I advocate solving the problem using the transformation matrix. In this entry, I discuss the bigger picture of why a vector cross product can be characterized by a matrix multiplication.
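As a quick numerical sanity check of this relationship, here is a minimal sketch, assuming Python with NumPy; the spin rate w0, the sample time t, and the test vector r are arbitrary choices for illustration. It builds $$A(t)$$ for a frame spinning about $$\hat k$$ and confirms that $$-A^{-1} \dot A$$ acting on $$\vec r$$ reproduces $$\vec \omega \times \vec r$$.

```python
# Minimal check of (w x) r = (-A^{-1} dA/dt) r for a frame spinning
# about k-hat at a constant rate w0 (all values are arbitrary picks).
import numpy as np

w0, t = 0.7, 1.3                 # spin rate and sample time
th = w0 * t                      # rotation angle theta(t) = w0 t

# Attitude matrix taking fixed-frame components to rotating-frame components.
A = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
# Its time derivative, differentiated analytically.
Adot = w0 * np.array([[-np.sin(th),  np.cos(th), 0.0],
                      [-np.cos(th), -np.sin(th), 0.0],
                      [ 0.0,         0.0,        0.0]])

w = np.array([0.0, 0.0, w0])     # angular velocity in the fixed frame
r = np.array([1.0, -2.0, 0.5])   # arbitrary test vector

print(np.allclose(np.cross(w, r), -np.linalg.inv(A) @ Adot @ r))  # True
```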

Understanding this connection starts with a general discussion of how to describe the motion of a rotating frame. The key observation can be summarized succinctly: since the basis vectors of the rotating frame span the space, their time derivatives can be expressed in terms of the basis vectors themselves. Mathematically, this observation is expressed as

\[ \frac{d \hat I }{dt} = \alpha \hat I + \beta \hat J + \gamma \hat K \; ,\]

with analogous expressions for $$\frac{d \hat J}{d t}$$ and $$\frac{d \hat K}{d t}$$.
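To make the expansion concrete, here is a small sketch, again assuming NumPy, that recovers $$\alpha$$, $$\beta$$, and $$\gamma$$ for the simple spin about $$\hat k$$ by finite-differencing $$\hat I(t)$$ and projecting onto the basis (the rate and times are arbitrary picks):

```python
# Extract alpha, beta, gamma for d(I-hat)/dt expanded in the rotating
# basis, for a frame spinning about k-hat at rate w0.
import numpy as np

w0 = 0.7

def basis(t):
    """Rotating basis vectors I-hat, J-hat, K-hat in fixed-frame components."""
    th = w0 * t
    I = np.array([ np.cos(th), np.sin(th), 0.0])
    J = np.array([-np.sin(th), np.cos(th), 0.0])
    K = np.array([0.0, 0.0, 1.0])
    return I, J, K

t, h = 1.3, 1e-6
I0, J0, K0 = basis(t)
I1, _, _ = basis(t + h)
dI = (I1 - I0) / h                 # finite-difference d(I-hat)/dt

# Project onto the basis to read off the expansion coefficients.
alpha, beta, gamma = dI @ I0, dI @ J0, dI @ K0
print(alpha, beta, gamma)          # ~0, ~w0, ~0
```

Note that $$\alpha$$ comes back (numerically) zero, foreshadowing the counting argument below.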

At first glance there is a tendency to assume that $$9$$ numbers are needed to express the time-rate-of-change of the frame in terms of the frame itself. The actual number needed is much smaller (only three, as we will see) and can be determined by using the orthogonality relation

\[ \hat e_i \cdot \hat e_j = \delta_{ij} \; , \quad i, j = 1,2,3 \; ,\]

where $$\hat e_1 = \hat I$$, $$\hat e_2 = \hat J$$, and $$\hat e_3 = \hat K$$.

Taking the time derivative of the orthogonality relation leads to the innocent-looking expression

\[ \frac{d }{dt} \left( \hat e_i \cdot \hat e_j \right) = 0 \; , \]

which is packed with a lot of simplifications. The first simplification comes by setting $$i = j$$, giving

\[ \frac{d \hat e_i}{dt} \cdot \hat e_i = 0 \; . \]

From that we immediately see that

\[ \frac{d \hat I }{dt} \cdot \hat I = 0 \; \; \Rightarrow \; \; \alpha = 0 \; . \]

Likewise, when $$i \neq j$$, we get

\[ \frac{d \hat e_i}{dt} \cdot \hat e_j = - \hat e_i \cdot \frac{d \hat e_j}{dt} \]

which gives, in turn,

\[ \frac{d \hat I }{dt} \cdot \hat J = - \frac{d \hat J }{dt} \cdot \hat I = \beta \]

and

\[ \frac{d \hat I }{dt} \cdot \hat K = - \frac{d \hat K }{dt} \cdot \hat I = \gamma \; .\]

This process can be carried out for $$\hat J$$, with its expansion being

\[ \frac{d \hat J }{dt} = -\beta \hat I + 0 \hat J + \delta \hat K \; , \]

where $$\delta$$ is defined through the equation

\[ \frac{d \hat J }{dt} \cdot \hat K = - \frac{d \hat K }{dt} \cdot \hat J = \delta \; .\]

At this point, there is no freedom left for $$\hat K$$: its expansion is forced to be $$d \hat K / dt = -\gamma \hat I - \delta \hat J$$, and the three functions $$\left\{ \beta, \gamma, \delta \right\}$$ completely specify how the rotating frame moves relative to itself. These three relationships disclose their content more clearly when written in matrix form as

\[ \left[ \begin{array}{c} d \hat I/dt \\ d \hat J / dt \\ d \hat K /dt \end{array} \right] = \underbrace{\left[ \begin{array}{ccc} 0 & \beta & \gamma \\ -\beta & 0 & \delta \\ -\gamma & -\delta & 0 \end{array} \right]}_{W(t)} \left[ \begin{array}{c} \hat I \\ \hat J \\ \hat K \end{array} \right] \; . \]
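The structure of $$W(t)$$ is easy to confirm numerically. The sketch below, assuming NumPy and an arbitrarily chosen tumbling motion, builds an attitude matrix from two simultaneous rotations and forms $$W = \dot A A^T$$ (the rows of $$A$$ being the rotating basis vectors expressed in the fixed frame), confirming the zero diagonal and the antisymmetry:

```python
# Form W(t) = dA/dt A^T for a tumbling frame and confirm it is
# antisymmetric, so only three independent numbers survive.
import numpy as np

def Rz(th):
    return np.array([[ np.cos(th), np.sin(th), 0.0],
                     [-np.sin(th), np.cos(th), 0.0],
                     [ 0.0,        0.0,        1.0]])

def Rx(th):
    return np.array([[1.0,  0.0,        0.0       ],
                     [0.0,  np.cos(th), np.sin(th)],
                     [0.0, -np.sin(th), np.cos(th)]])

def A(t):
    # Compose two time-varying rotations (rates are arbitrary picks).
    return Rx(0.3 * t) @ Rz(0.8 * t)

t, h = 1.3, 1e-6
Adot = (A(t + h) - A(t - h)) / (2 * h)   # central-difference dA/dt
W = Adot @ A(t).T                        # rows of A = rotating basis

print(np.round(W, 6))                    # zero diagonal
print(np.allclose(W, -W.T, atol=1e-5))   # True: antisymmetric
```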

There are two remaining steps. The first is to relate $$\left\{ \beta, \gamma, \delta \right\}$$ to $$\vec \omega_{rotating}$$. The second is to relate $$\vec \omega_{rotating}$$ to $$\vec \omega$$ by using the transformation matrix $$A(t)$$.

To relate $$\vec \omega_{rotating}$$ to $$\left\{ \beta, \gamma, \delta \right\}$$, let's look at $$\vec \omega_{rotating} \times \vec r$$, with all components taken in the rotating frame,

\[ \left| \begin{array}{ccc} \hat I & \hat J & \hat K \\ \omega_I & \omega_J & \omega_K \\ r_I & r_J & r_K \end{array} \right| = \left[ \begin{array}{c} \omega_J r_K - \omega_K r_J \\ \omega_K r_I - \omega_I r_K \\ \omega_I r_J - \omega_J r_I \end{array} \right] \]

and compare it to $$W(t) \vec r_{rotating}$$

\[ W(t) \vec r_{rotating} = \left[ \begin{array}{c} \beta r_J + \gamma r_K \\ -\beta r_I + \delta r_K \\ -\gamma r_I - \delta r_J \end{array} \right] \; ,\]

from which we see that the two expressions share the same antisymmetric structure. Matching the components requires fixing an overall sign, and that sign is pinned down physically by noting that each rotating basis vector is swept along by the rotation, $$d \hat e_i / dt = \vec \omega_{rotating} \times \hat e_i$$; for example, $$\beta = \frac{d \hat I}{dt} \cdot \hat J = \left( \vec \omega_{rotating} \times \hat I \right) \cdot \hat J = \vec \omega_{rotating} \cdot \left( \hat I \times \hat J \right) = \omega_K$$. Carrying this through for the other components gives $$\gamma = -\omega_J$$ and $$\delta = \omega_I$$, so that

\[ W(t) = \left[ \begin{array}{ccc} 0 & \beta & \gamma \\ -\beta & 0 & \delta \\ -\gamma & -\delta & 0 \end{array} \right] = \left[ \begin{array}{ccc} 0 & \omega_K & -\omega_J \\ -\omega_K & 0 & \omega_I \\ \omega_J & -\omega_I & 0 \end{array} \right] \; ,\]

which says that the action of $$W(t)$$ on $$\vec r_{rotating}$$ is $$-\vec \omega_{rotating} \times \vec r_{rotating}$$.

It is convenient to define

\[ \Omega(t) = - W(t) = \left[ \begin{array}{ccc} 0 & -\omega_K & \omega_J \\ \omega_K & 0 & -\omega_I \\ -\omega_J & \omega_I & 0 \end{array} \right] \]

since now the action of $$\Omega(t)$$ on $$\vec r_{rotating}$$ is the same as the action of $$\vec \omega_{rotating} \times$$.
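This equivalence takes one line to check numerically. The sketch below, assuming NumPy, uses a hypothetical helper Omega that packs the components of $$\vec \omega$$ into the antisymmetric matrix and compares its action with np.cross:

```python
# Check that Omega(w) @ r reproduces the cross product w x r.
import numpy as np

def Omega(w):
    """Antisymmetric matrix whose action duplicates (w x)."""
    wI, wJ, wK = w
    return np.array([[ 0.0, -wK,   wJ],
                     [  wK, 0.0,  -wI],
                     [ -wJ,  wI,  0.0]])

rng = np.random.default_rng(0)
w, r = rng.standard_normal(3), rng.standard_normal(3)
print(np.allclose(Omega(w) @ r, np.cross(w, r)))   # True
```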

Finally, to get back to the inertially fixed frame, we can combine the relationship quoted at the top of this entry with the chain of relations

\[ A(t) \left( \vec \omega \times \vec r \right) = \left( A(t) \vec \omega \right) \times \left( A(t) \vec r \right) = \left( \vec \omega_{rotating} \times \right) \left( A(t) \vec r \right) \\ = \Omega(t) A(t) \vec r = - \dot A(t) \vec r \]

to obtain

\[ \Omega(t) A(t) = - \dot A(t) \; .\]
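Here is a sketch, assuming NumPy and the same simple spin about $$\hat k$$ as above so that the angular velocity is known independently of the construction, confirming both $$\Omega(t) A(t) = -\dot A(t)$$ and, equivalently, $$W(t) A(t) = \dot A(t)$$:

```python
# Verify Omega(t) A(t) = -dA/dt for a frame spinning about k-hat,
# with the angular velocity specified up front (w0 is an arbitrary pick).
import numpy as np

w0, t = 0.7, 1.3
th = w0 * t

A = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
Adot = w0 * np.array([[-np.sin(th),  np.cos(th), 0.0],
                      [-np.cos(th), -np.sin(th), 0.0],
                      [ 0.0,         0.0,        0.0]])

wI, wJ, wK = A @ np.array([0.0, 0.0, w0])   # omega in the rotating frame
Omega = np.array([[ 0.0, -wK,   wJ],        # cross-product matrix of omega
                  [  wK, 0.0,  -wI],
                  [ -wJ,  wI,  0.0]])

print(np.allclose(Omega @ A, -Adot))        # True
print(np.allclose(-Omega @ A, Adot))        # True: W A = dA/dt
```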

Note that this relation also follows immediately from the $$d \hat e_i /dt = W_{ij} \hat e_j$$ equation by expanding $$\hat e_i $$ and $$d \hat e_i /dt$$ in terms of $$\{ \hat \imath, \hat \jmath, \hat k\}$$, which gives $$W(t) A(t) = \dot A(t)$$ directly.

Now a few remarks on why this works. First note that, from the time derivative of the orthogonality relation, the matrix relating $$\{\dot {\hat e}_i \}$$ to $$\{\hat e_j\}$$ must be antisymmetric. The number of free components of an $$N \times N$$ antisymmetric matrix is $$N(N-1)/2$$. Only for $$N=3$$ is $$N(N-1)/2$$ equal to $$N$$. So, quite by accident (or providence), only in three dimensions does an antisymmetric matrix have the same number of components as a vector. The cross product then mimics or prefigures the matrix product.

Geometrically, these observations can be summarized by saying that, in three dimensions, a two-dimensional plane is in one-to-one correspondence with the normal vector to the plane. In lower dimensions, there is simply not enough room for a normal direction to exist; in four or more dimensions, the normal space is larger than one-dimensional. This result also explains why there seems to be a ‘mismatch’ in the number of transformation matrices on the two sides of the expression

\[ A(t) \left( \vec \omega \times \vec r \right) = \left( A(t) \vec \omega \right) \times \left( A(t) \vec r \right) \; .\]

Two copies of $$A(t)$$ appear on the right but only one on the left because the object really being transformed is the two-index antisymmetric matrix $$\Omega$$, which rotates as $$\Omega \rightarrow A \Omega A^T$$; only in three dimensions does that two-sided transformation masquerade as the one-sided rotation of a single vector.
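The collapse from two-sided to one-sided can also be checked numerically. The sketch below, assuming NumPy and a random rotation matrix generated via a QR factorization, verifies both $$A \, \Omega(\vec \omega) \, A^T = \Omega(A \vec \omega)$$ and the displayed identity:

```python
# Check A Omega(w) A^T = Omega(A w) and A (w x r) = (A w) x (A r)
# for a random proper rotation A.
import numpy as np

def Omega(w):
    wI, wJ, wK = w
    return np.array([[ 0.0, -wK,   wJ],
                     [  wK, 0.0,  -wI],
                     [ -wJ,  wI,  0.0]])

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
if np.linalg.det(A) < 0:                          # force det(A) = +1
    A[:, 0] *= -1

w, r = rng.standard_normal(3), rng.standard_normal(3)
print(np.allclose(A @ Omega(w) @ A.T, Omega(A @ w)))            # True
print(np.allclose(A @ np.cross(w, r), np.cross(A @ w, A @ r)))  # True
```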

So, three dimensions is a special place to live.