
4.13 Matrices as Functions

1 min read • June 18, 2024

Jesse


Connecting Linear Transformations and Matrices

The linear transformation mapping a vector <x, y> to a vector <a₁₁x + a₁₂y, a₂₁x + a₂₂y> can be represented by a matrix [a₁₁ a₁₂; a₂₁ a₂₂] (a clearer image shown below), known as a 2 x 2 matrix. This matrix is called a transformation matrix, and it encodes all the information about the linear transformation, including the coefficients a₁₁, a₁₂, a₂₁, and a₂₂. 📜


Image Created by Jed Q

When the vector <x, y> is multiplied by the matrix [a₁₁ a₁₂; a₂₁ a₂₂], it results in a new vector <a₁₁x + a₁₂y, a₂₁x + a₂₂y>, which is the image of the original vector under the linear transformation. 🔦

This idea can be extended to n-dimensional space, where a linear transformation mapping a vector <x₁, x₂, ..., xₙ> to a vector <a₁₁x₁ + a₁₂x₂ + ... + a₁ₙxₙ, a₂₁x₁ + a₂₂x₂ + ... + a₂ₙxₙ, ..., aₙ₁x₁ + aₙ₂x₂ + ... + aₙₙxₙ> can be represented by an n x n matrix, where each element aᵢⱼ is the coefficient of xⱼ in the i-th component of the output vector.
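To make this concrete, here is a minimal NumPy sketch of the 2 x 2 case (the matrix entries and the vector are made-up example values): multiplying the matrix by a vector produces exactly the components described above.

```python
import numpy as np

# Made-up transformation matrix [a₁₁ a₁₂; a₂₁ a₂₂]
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

v = np.array([4.0, -2.0])  # the vector <x, y>

# Matrix-vector product gives <a₁₁x + a₁₂y, a₂₁x + a₂₂y>
image = A @ v
print(image)  # [ 6. -4.]  since 2(4) + 1(-2) = 6 and 0.5(4) + 3(-2) = -4
```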

The mapping of the unit vectors in a linear transformation can provide valuable information for determining the associated matrix. In a two-dimensional space, the unit vectors are the vectors <1, 0> and <0, 1>. These vectors are often referred to as the "standard basis" vectors. The linear transformation maps these unit vectors to new vectors in the transformed space, and the components of these new vectors are the columns of the transformation matrix. 🗺️

For example, if the linear transformation maps the unit vector <1, 0> to the vector <a₁₁, a₂₁> and the unit vector <0, 1> to the vector <a₁₂, a₂₂>, the transformation matrix is [a₁₁ a₁₂; a₂₁ a₂₂].

In higher dimensional spaces, the unit vectors are the vectors with a single component equal to 1 1️⃣ and all other components equal to 0. 0️⃣ The linear transformation maps these unit vectors to new vectors in the transformed space. The components of these new vectors are the columns of the transformation matrix. 🔢
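As a quick check of this fact, a short sketch (reusing the made-up matrix from the earlier example): applying the matrix to the standard basis vectors returns its columns.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])  # made-up transformation matrix

e1 = np.array([1.0, 0.0])  # standard basis vector <1, 0>
e2 = np.array([0.0, 1.0])  # standard basis vector <0, 1>

# The images of the basis vectors are exactly the columns of A
print(A @ e1)  # [2.  0.5] -> first column <a₁₁, a₂₁>
print(A @ e2)  # [1. 3.]   -> second column <a₁₂, a₂₂>
```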


Image Courtesy of Towards Data Science

Involving Angles

The matrix associated with the linear transformation that maps every vector to its image under a counterclockwise rotation by an angle θ about the origin is [cos θ −sin θ; sin θ cos θ] (see image below). This is known as the rotation matrix. 🌀


Image Created by Jed Q

When a vector <x, y> is multiplied by this matrix, the resulting vector <x′, y′> is the image of the original vector under the rotation transformation. The transformation can be described as:

x′ = x cos θ − y sin θ

y′ = x sin θ + y cos θ

This means that x′ and y′ are the coordinates of the vector after it has been rotated by an angle θ counterclockwise. The matrix [cos θ −sin θ; sin θ cos θ] encodes all the information about the rotation transformation, including the angle of rotation θ. 📐

It is also important to note that the matrix [cos(−θ) −sin(−θ); sin(−θ) cos(−θ)], which simplifies to [cos θ sin θ; −sin θ cos θ], gives a rotation by the same angle but in the clockwise direction. ⏰
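A small sketch of the rotation matrix in action (the 90° angle is chosen just for illustration): rotating <1, 0> counterclockwise, then undoing it with −θ.

```python
import numpy as np

def rotation_matrix(theta: float) -> np.ndarray:
    """Counterclockwise rotation by theta radians about the origin."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = np.pi / 2                 # 90 degrees counterclockwise
v = np.array([1.0, 0.0])

rotated = rotation_matrix(theta) @ v
print(np.round(rotated, 10))      # [0. 1.] -> <1, 0> lands on <0, 1>

# A rotation by -theta sends the image back, i.e. rotates clockwise
restored = rotation_matrix(-theta) @ rotated
print(np.round(restored, 10))     # [1. 0.]
```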


Image Courtesy of Math Stack Exchange

Absolute Values & Determinants

The absolute value of the determinant of a 2 x 2 transformation matrix gives the magnitude of the dilation of regions in R² under the transformation. The determinant of a matrix is a scalar value that can be calculated from the elements of the matrix. The determinant of a 2 x 2 matrix is given by the formula |a₁₁ a₁₂; a₂₁ a₂₂| = a₁₁a₂₂ − a₁₂a₂₁. ⏸

In the case of a linear transformation, the absolute value of the determinant represents the scaling factor of the transformation, while its sign describes what happens to orientation.

  • A positive determinant indicates that the transformation preserves the orientation of a region. ➕
  • A negative determinant indicates a reflection, meaning that it reverses the orientation of the region. ➖

For example, if the determinant of a 2 x 2 matrix is 2, it means that the transformation associated with that matrix is a dilation that increases the area of a region by a factor of 2. Similarly, if the determinant is −3, it means that the transformation scales the area of a region by a factor of 3 and also reverses the orientation of the region. ✂️
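A quick numerical check of these claims, using two arbitrary example matrices (one stretch, one reflection):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])  # stretches x by 2: det = 2, areas double

B = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # swaps x and y: det = -1, a reflection

print(np.linalg.det(A))  # 2.0  -> |2| = 2 is the area scaling factor
print(np.linalg.det(B))  # -1.0 -> areas unchanged, orientation reversed
```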

Compositions of Two Linear Transformations

The composition of two linear transformations is a linear transformation. Remember, a linear transformation is a function that takes a vector as an input and produces another vector as an output. When two linear transformations are composed, the output of the first transformation is used as the input for the second transformation. 👗

For example, if f is a linear transformation that maps a vector x to a vector y and g is another linear transformation that maps the vector y to a vector z, the composition of f and g is denoted as g(f(x)), and it maps the vector x to the vector z.

The composition of linear transformations is associative, meaning that grouping does not matter: f(g(h(x))) can be computed by first composing f with g or by first composing g with h. However, composition is generally not commutative: f(g(x)) and g(f(x)) are usually different transformations, so the order of the linear transformations does affect the result. 🙅


Image Courtesy of Ximera (Ohio State University)

The matrix associated with the composition of two linear transformations is the product of the matrices associated with each linear transformation, multiplied in right-to-left order of application. For example, if A is the matrix associated with the linear transformation f and B is the matrix associated with the linear transformation g, the matrix associated with the composition g(f(x)) is BA, since A acts on the vector first and B acts on the result. ⚡️

When a vector x is multiplied by the matrix BA, it results in a new vector z, which is the image of the original vector under the composition of the linear transformations f and g.
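A short sketch (with made-up matrices for f and g) confirming that applying A and then B matches a single multiplication by the product BA:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # made-up matrix for f (a shear)
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # made-up matrix for g (90° rotation)

x = np.array([3.0, 4.0])

step_by_step = B @ (A @ x)  # g(f(x)): apply A first, then B
combined = (B @ A) @ x      # one matrix, BA, for the whole composition

print(step_by_step)  # [-4. 11.]
print(combined)      # [-4. 11.] -- same result
```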

Inverses of Linear Transformations

Two linear transformations are said to be inverses if their composition maps any vector to itself. An inverse transformation is a transformation that "undoes" the effect of another transformation. In other words, if a linear transformation "f" maps a vector "x" to a vector "y" and another linear transformation "g" maps the vector "y" back to the vector "x", the transformation "g" is said to be the inverse of the transformation "f" and is denoted as "f^-1". 🔃

Formally, if f: V → W and g: W → V are linear transformations, they are inverses if and only if g(f(x)) = x for all x in V and f(g(y)) = y for all y in W.

Although composition is not commutative in general, a transformation and its inverse do commute: f(g(x)) = g(f(x)) = x, so either order of these two linear transformations results in the same outcome, the identity transformation. 🙋

If a linear transformation, L, is given by L(v) = Av, where A is a matrix and v is a vector, then its inverse transformation, denoted as L^-1, is given by L^-1(v) = A^-1 v, where A^-1 is the inverse of the matrix A.

This relationship between the linear transformation L, its matrix representation A, and its inverse transformation L^-1 is a direct consequence of the properties of matrix-vector multiplication. The matrix A encodes the linear transformation L, and the vector v is transformed by A to produce the output vector Av. 🤝

The inverse of a matrix A is a matrix A^-1 such that when it's multiplied by A, the result is the identity matrix I. This means that A^-1 A = I. It's important to note that not every matrix has an inverse. A matrix A is invertible if and only if its determinant is non-zero.

By applying the inverse matrix A^-1 to the output vector Av, we obtain the original vector v. This is the inverse transformation L^-1. In other words, L^-1(Av) = A^-1(Av) = A^-1 A v = Iv = v. 🤓
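A minimal sketch of this round trip, using an arbitrary invertible matrix and np.linalg.inv:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])  # arbitrary matrix with det = 1, so invertible

v = np.array([3.0, 5.0])

A_inv = np.linalg.inv(A)    # raises LinAlgError if A is not invertible

w = A @ v                   # forward transformation: L(v) = Av
print(A_inv @ w)            # [3. 5.] -> the inverse recovers v
print(np.round(A_inv @ A))  # the identity matrix I
```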


Image Courtesy of Ximera (Ohio State University)