## Single-variable functions

School mathematics teaches simple functions of one variable, while higher mathematics introduces more complex functions. In higher mathematics, we also learn some theoretical aspects of single-variable functions, such as limits, continuity, differentiability, and integrability. In this post, we review some definitions and terminology related to single-variable functions.

### Definition of single-variable functions

A function is a special…

# Author: Dr. Peyman Nasehpour

## The inverse of a matrix

The multiplicative inverse of a matrix is a matrix that, when multiplied by the original matrix from either side, yields the identity matrix. In other words, the multiplicative inverse of a matrix \(A\) is a matrix \(B\) such that $$AB = BA = I,$$ where \(I\) is the identity matrix. A matrix \(A\) is invertible…
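The defining identity \(AB = BA = I\) can be checked numerically. A minimal sketch with NumPy, where the matrix `A` below is a hypothetical example, not one from the post:

```python
import numpy as np

# A hypothetical invertible 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

B = np.linalg.inv(A)   # the multiplicative inverse of A
I = np.eye(2)

# AB = BA = I, up to floating-point rounding.
print(np.allclose(A @ B, I), np.allclose(B @ A, I))
```

Here `np.allclose` is used instead of exact equality because the inverse is computed in floating-point arithmetic.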

## Special matrices and vectors

There are certain kinds of matrices and vectors that are particularly useful in data science. In this post, we explain these special matrices and vectors with examples.

### Special matrices (diagonal and elementary)

In this section, we discuss special matrices. Some special matrices, such as orthogonal matrices, need a more detailed discussion; for this reason, we discuss them…

## Orthogonal linear maps

Orthogonal linear maps are generalizations of orthogonal matrices. By definition, a linear map \(f\) over an inner product real vector space \(V\) is an orthogonal linear map (function, or transformation) if it preserves the inner product, i.e. for all vectors \(u\) and \(v\) in \(V\), we have $$ f(u) \cdot f(v) = u \cdot v.$$…
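The defining property \(f(u) \cdot f(v) = u \cdot v\) can be illustrated with a rotation of the plane, which is a standard example of an orthogonal linear map. A small sketch with NumPy; the angle and the vectors below are arbitrary choices for illustration:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle (assumption for the example)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def f(x):
    # Rotation by theta: an orthogonal linear map on R^2.
    return R @ x

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

# The map preserves the inner product: f(u) . f(v) = u . v.
print(np.isclose(f(u) @ f(v), u @ v))
```

Since the inner product determines lengths and angles, this check also shows that the rotation preserves both.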

## Orthogonal matrices

Orthogonal matrices have applications in many fields of science and engineering, including data science and machine learning. For example, they are a key concept in numerical linear algebra, including the QR decomposition. In this post, we define orthogonal matrices and investigate their basic properties. Orthogonal matrices are those real matrices whose columns form an orthonormal…
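The connection to the QR decomposition can be seen directly in NumPy: the factor \(Q\) has orthonormal columns, so \(Q^{T} Q = I\). A minimal sketch, with a hypothetical matrix `A` chosen only for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# Reduced QR decomposition: A = Q R, where Q has orthonormal columns.
Q, R = np.linalg.qr(A)

print(np.allclose(Q.T @ Q, np.eye(2)))  # columns of Q are orthonormal
print(np.allclose(Q @ R, A))            # the factorization reproduces A
```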

## 1-to-1 and onto linear maps

Linear maps are essential in linear algebra, and 1-to-1 and onto linear maps are especially important. In this post, we investigate 1-to-1 and onto linear maps and discuss them with suitable examples.

### 1-to-1 linear maps

A function \(f\) is, by definition, 1-to-1 (also called one-to-one or injective) if \(f(x) = f(y)\) implies \(x=y\), for all \(x\)…
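For a linear map \(f(x) = Ax\), being 1-to-1 is equivalent to \(A\) having a trivial null space (rank equal to the number of columns), and being onto is equivalent to the columns spanning the codomain (rank equal to the number of rows). A small sketch of this rank criterion, with a hypothetical map from \(\mathbb{R}^2\) to \(\mathbb{R}^3\):

```python
import numpy as np

# A hypothetical map f(x) = A x from R^2 to R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

rank = np.linalg.matrix_rank(A)
one_to_one = (rank == A.shape[1])  # trivial null space: rank = 2 columns
onto = (rank == A.shape[0])        # columns would need to span R^3

print(one_to_one, onto)
```

This particular map is 1-to-1 but not onto: two columns can never span all of \(\mathbb{R}^3\).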

## Orthogonal projection

In the post on vector orthogonal projection, we discussed the concept of orthogonal projection of a vector onto a nonzero vector. In this post, we will discuss the concept of orthogonal projection of a vector onto a nonzero subspace of an inner product vector space.

### The orthogonal decomposition theorem

First, we prove the following result:…
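The orthogonal decomposition can be computed explicitly: if the columns of \(A\) span a subspace \(W\), the projection matrix \(P = A(A^{T}A)^{-1}A^{T}\) splits any vector \(v\) as \(v = w + z\) with \(w \in W\) and \(z\) orthogonal to \(W\). A sketch with NumPy, where `A` and `v` are hypothetical examples in \(\mathbb{R}^3\):

```python
import numpy as np

# Columns of A span a 2-dimensional subspace W of R^3 (hypothetical example).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 1.0, 2.0])

# Orthogonal projection onto W: P = A (A^T A)^{-1} A^T.
P = A @ np.linalg.inv(A.T @ A) @ A.T
w = P @ v   # the component of v lying in W
z = v - w   # the component orthogonal to W

# z is orthogonal to every column of A, and v = w + z.
print(np.allclose(A.T @ z, 0), np.allclose(w + z, v))
```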

## Orthogonal complement

The orthogonal complement of a subset \(W\) of an inner product real vector space \(V\) is the set of all vectors \(u\) in \(V\) such that \(u\) is orthogonal to each element of \(W\).

### An example of an orthogonal complement

As a starting point for our discussion, let us look at the following interesting example:…
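One way to compute an orthogonal complement numerically is via the singular value decomposition: the complement of the row space of a matrix is its null space, spanned by the trailing rows of \(V^{T}\). A hedged sketch, using the hypothetical subspace \(W = \operatorname{span}\{(1, 1, 0)\}\) in \(\mathbb{R}^3\):

```python
import numpy as np

# W = span{(1, 1, 0)} in R^3, written as a 1x3 matrix (hypothetical example).
w = np.array([[1.0, 1.0, 0.0]])

# The orthogonal complement of W is the null space of w; the rows of V^T
# beyond the rank span that null space.
_, _, Vt = np.linalg.svd(w)
complement_basis = Vt[1:]   # two vectors spanning the 2-dimensional complement

# Each basis vector of the complement is orthogonal to the generator of W.
print(np.allclose(w @ complement_basis.T, 0))
```

The dimensions add up as expected: \(\dim W + \dim W^{\perp} = 1 + 2 = 3 = \dim \mathbb{R}^3\).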

## Column space and rank

The column space and rank of a matrix have significant applications in data science. First, we introduce the column space. The column space of a matrix \(A\) in \(M_{m \times n} (\mathbb{R})\) is a subspace of \(\mathbb{R}^m\) spanned (generated) by the columns of the matrix \(A\). In other words, if $$A =\begin{bmatrix} C_1 & \dots…
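The rank of a matrix is the dimension of its column space, and NumPy can compute it directly. A minimal sketch with a hypothetical matrix whose third column is the sum of the first two, so that the column space is only 2-dimensional:

```python
import numpy as np

# Hypothetical example: the third column equals the sum of the first two,
# so the columns span a 2-dimensional subspace of R^3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(A)
print(rank)   # the dimension of the column space of A
```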

## Basis for vector spaces

In this post, we introduce the fundamental concept of the basis for vector spaces. A basis for a real vector space is a linearly independent subset of the vector space which also spans it. More precisely, by definition, a subset \(B\) of a real vector space \(V\) is said to be a basis if each…
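For \(\mathbb{R}^n\), a set of \(n\) vectors is a basis exactly when the matrix having those vectors as columns has full rank (equivalently, is invertible): full rank gives both linear independence and spanning. A small sketch with a hypothetical candidate basis for \(\mathbb{R}^3\):

```python
import numpy as np

# Candidate basis vectors for R^3, placed as the columns of B
# (a hypothetical example).
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

# Three vectors form a basis of R^3 exactly when this matrix has rank 3.
is_basis = (np.linalg.matrix_rank(B) == 3)
print(is_basis)
```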