In this post, we discuss the algebra of finite-dimensional vectors and some fundamental concepts in linear algebra. First, we define finite-dimensional vectors. Let \(n\) be a positive integer. An \(n\)-dimensional real vector \(u\) is an ordered \(n\)-tuple (i.e., an ordered sequence with \(n\) elements) of the following form: $$ u = (u_1, \dots, u_n),$$ where \(u_i\) is a real number for each \(i\). In this case, \(u_i\) is called the \(i\)-th component of the vector \(u\).

### Addition of finite-dimensional vectors

Let \(u\) and \(v\) be \(n\)-dimensional real vectors. If $$ u = (u_1, \dots, u_n) $$ and $$ v = (v_1, \dots, v_n),$$ then their sum, denoted by \(u+v\), is defined component-wise as the following vector: $$(u_1 + v_1 , \dots , u_n + v_n).$$

Subtraction of finite-dimensional vectors is also defined component-by-component.

**Example**. Let $$ u = (4,3,2,1)$$ and $$v = (1,3,5,7).$$ Then, their sum and difference are as follows: $$u+v = (5,6,7,8)$$ and $$u-v = (3,0,-3,-6).$$
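The component-wise definitions above can be sketched in plain Python; the function names `vec_add` and `vec_sub` are our own, not standard library functions.

```python
def vec_add(u, v):
    # Component-wise sum of two vectors of the same dimension.
    assert len(u) == len(v), "vectors must have the same dimension"
    return tuple(a + b for a, b in zip(u, v))

def vec_sub(u, v):
    # Component-wise difference of two vectors of the same dimension.
    assert len(u) == len(v), "vectors must have the same dimension"
    return tuple(a - b for a, b in zip(u, v))

u = (4, 3, 2, 1)
v = (1, 3, 5, 7)
print(vec_add(u, v))  # (5, 6, 7, 8)
print(vec_sub(u, v))  # (3, 0, -3, -6)
```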

Also, check the following posts:

- Two-dimensional vectors and their addition and subtraction
- Three-dimensional vectors and their addition and subtraction

### Multiple of finite-dimensional vectors by numbers

Let \(u = (u_1, \dots, u_n)\) be an \(n\)-dimensional vector with real components and let \(r\) be a real number. The multiple of \(u\) by \(r\), denoted by \(ru\), is the following vector: $$(ru_1, \dots, ru_n).$$

Recall that \(ru\) is the scalar multiplication of the scalar \(r\) and the vector \(u\). For some detailed discussion on this topic, see multiple of vectors by numbers.
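Scalar multiplication is again a component-wise operation, which a short Python sketch makes concrete (the name `scalar_mul` is our own):

```python
def scalar_mul(r, u):
    # Multiply each component of the vector u by the scalar r.
    return tuple(r * a for a in u)

print(scalar_mul(3, (1, 2, 3)))  # (3, 6, 9)
```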

Let \(i\) and \(n\) be positive integers with \(i \leq n\). The \(i\)-th standard unit vector, denoted by \(e_i\), is the \(n\)-dimensional vector whose components are all zero except the \(i\)-th one, which is 1, i.e., $$e_1 = (1,0,0,\dots,0)$$ and $$e_2 = (0,1,0,\dots,0),$$ and so on. It is then clear that if $$u = (a_1, \dots, a_n),$$ then $$ u = a_1 e_1 + \dots + a_n e_n.$$
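As a sanity check of the identity \(u = a_1 e_1 + \dots + a_n e_n\), the following sketch builds the standard unit vectors and recombines a vector from its components (the helper name is our own; indices are 1-based to match the notation above):

```python
def standard_unit_vector(i, n):
    # e_i in R^n: 1 in the i-th position (1-based), 0 elsewhere.
    return tuple(1 if j == i else 0 for j in range(1, n + 1))

u = (5, -2, 7)
n = len(u)
# Compute a_1 e_1 + ... + a_n e_n component by component.
recombined = tuple(
    sum(u[i] * standard_unit_vector(i + 1, n)[k] for i in range(n))
    for k in range(n)
)
print(recombined)  # (5, -2, 7), i.e. u itself
```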

Note that if \(u_i\) is an \(n\)-dimensional vector and \(r_i\) a real number for each \(1 \leq i \leq k\), then their linear combination is defined as follows: $$\sum_{i=1}^{k} r_i u_i.$$

**Example**. Suppose that $$u = (1,2,3,4)$$ and $$v = (4,3,2,1).$$ Then, \(3u+5v\), which is a linear combination of \(u\) and \(v\), is: $$(23,21,19,17).$$
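A linear combination \(\sum_{i=1}^{k} r_i u_i\) can be computed directly from the definition; the function name `linear_combination` below is our own:

```python
def linear_combination(coeffs, vectors):
    # Compute sum_i r_i * u_i for vectors of equal dimension.
    n = len(vectors[0])
    return tuple(
        sum(r * v[k] for r, v in zip(coeffs, vectors))
        for k in range(n)
    )

u = (1, 2, 3, 4)
v = (4, 3, 2, 1)
print(linear_combination([3, 5], [u, v]))  # (23, 21, 19, 17)
```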

### Linearly independent vectors

The vectors in the following finite set $$\{u_1, \dots, u_k\}$$ are linearly independent if \(\sum_{i=1}^{k} r_i u_i = 0\) implies that \(r_i = 0\), for each \(i\). For example, it is clear that the standard unit vectors \(e_i\)s are linearly independent.

By definition, a finite number of \(n\)-dimensional vectors are said to be linearly dependent if they are not linearly independent.

**Remark**. Two of the most fundamental results in linear algebra are as follows:

- Any \(n+1\) vectors in \( \mathbb{R}^n \) are linearly dependent.
- If \(u_1, \dots, u_n\) are \(n\) linearly independent vectors in \( \mathbb{R}^n \), then each vector \(v \in \mathbb{R}^n \) can be expressed uniquely as a linear combination of the \( u_i \)s, i.e. there are unique real numbers \(r_i\) such that $$ v = \sum_{i=1}^{n} r_i u_i.$$
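Both facts can be checked numerically. One common approach, sketched here with NumPy, is to place the vectors as columns of a matrix: they are linearly independent exactly when the matrix rank equals the number of columns, and the unique coefficients \(r_i\) come from solving a linear system. The specific matrices below are our own illustrative choices.

```python
import numpy as np

# Columns of A are two vectors in R^3.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])

# The columns are linearly independent iff rank(A) equals their count.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True

# Given n independent vectors in R^n (as columns of B), the unique
# coefficients r with v = sum_i r_i u_i solve the system B r = v.
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])
r = np.linalg.solve(B, v)
print(r)  # [3. 2.], since v = 3*(1,0) + 2*(0,2)
```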

### The dot product of finite-dimensional real vectors

The dot product of vectors is an essential topic in data science and data analysis. Let $$u = (u_1, \dots, u_n)$$ and $$v = (v_1, \dots, v_n)$$ be \(n\)-dimensional real vectors. Their dot product, denoted by \( u \cdot v\), is the following real number: $$u_1 v_1 + \dots + u_n v_n.$$

**Example**. Let $$ u = (4,3,2,1)$$ and $$v = (1,3,5,7).$$ Then, their dot product is as follows: $$u \cdot v = 30.$$
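The dot product is a single sum of component-wise products, as in this short sketch (the function name `dot` is our own):

```python
def dot(u, v):
    # Dot product: u_1*v_1 + ... + u_n*v_n.
    assert len(u) == len(v), "vectors must have the same dimension"
    return sum(a * b for a, b in zip(u, v))

print(dot((4, 3, 2, 1), (1, 3, 5, 7)))  # 30
```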

The set of all \(n\)-dimensional real vectors, denoted by \(\mathbb{R}^n\), together with the addition of vectors, the scalar multiplication, and the dot product of vectors is called the \(n\)-dimensional Euclidean vector space.

If \(u\) is an \(n\)-dimensional real vector, then its Euclidean norm (length), denoted by \(\Vert u \Vert\), is defined as follows: $$\Vert u \Vert = \sqrt{u \cdot u}.$$ Recall that if \(u\) and \(v\) are two \(n\)-dimensional vectors, their distance is $$d(u,v) = \Vert u - v \Vert.$$
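The norm and distance follow directly from the dot product, as this sketch shows (function names are our own):

```python
import math

def norm(u):
    # Euclidean norm: sqrt(u . u).
    return math.sqrt(sum(a * a for a in u))

def distance(u, v):
    # d(u, v) = ||u - v||, computed on the component-wise difference.
    return norm(tuple(a - b for a, b in zip(u, v)))

print(norm((3, 4)))              # 5.0
print(distance((1, 1), (4, 5)))  # 5.0
```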

Let \(u\) and \(v\) be arbitrary \(n\)-dimensional vectors. In the post on the dot product of vectors, we proved the Cauchy-Schwarz inequality: $$\vert u \cdot v \vert \leq \Vert u \Vert \Vert v \Vert.$$ Therefore, if \(u\) and \(v\) are nonzero vectors of the same dimension, then we have the following: $$ -1 \leq \frac{ u \cdot v}{\Vert u \Vert \Vert v \Vert} \leq 1.$$

### The angle between two finite-dimensional vectors

By definition, the angle between two nonzero \(n\)-dimensional vectors \(u\) and \(v\) is \(\theta\), if $$\cos \theta = \frac{u \cdot v}{\Vert u \Vert \Vert v \Vert}.$$
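Since the Cauchy-Schwarz inequality guarantees the quotient lies in \([-1, 1]\), the angle can be recovered with the inverse cosine. A minimal sketch (the function name is our own; the clamp guards against floating-point round-off pushing the quotient slightly outside \([-1,1]\)):

```python
import math

def angle(u, v):
    # The angle theta in [0, pi] with cos(theta) = (u.v)/(||u|| ||v||),
    # for nonzero vectors u and v.
    dot_uv = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    # Clamp to [-1, 1] to guard against round-off error.
    c = max(-1.0, min(1.0, dot_uv / (norm_u * norm_v)))
    return math.acos(c)

print(angle((1, 0), (0, 1)))  # pi/2, i.e. 1.5707963267948966
```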

By definition, two nonzero \(n\)-dimensional vectors \(u\) and \(v\) are orthogonal if they are perpendicular to each other, i.e., if their dot product is zero. Two \(n\)-dimensional vectors \(u\) and \(v\) are orthonormal if they are orthogonal unit vectors.

For example, if \(e_i\)s are the standard unit vectors, then \(e_i\) and \(e_j\) are orthonormal if \(i \neq j\).

**Exercise**. Show that the vectors $$(2,-3,4,-1,6)$$ and $$(4,5,3,-7,m)$$ are perpendicular if and only if \(m = -2\).

**Exercise**. Let \(u\) and \(v\) be orthogonal vectors. Show that \( u/\Vert u \Vert \) and \( v/\Vert v \Vert \) are orthonormal.

In the post on the dot product of vectors, we showed that if \(u\) and \(v\) are vectors, then $$\Vert u - v \Vert^2 = \Vert u \Vert^2 + \Vert v \Vert^2 - 2 u \cdot v = \Vert u \Vert^2 + \Vert v \Vert^2 - 2 \Vert u \Vert \Vert v\Vert \cos\theta,$$ where \(\theta\) is the angle between the vectors \(u\) and \(v\). This is called the law of cosines (in some references, Kashi's theorem).
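The first equality of the law of cosines can be verified numerically on a concrete pair of vectors (chosen here purely for illustration):

```python
u = (3, 0)
v = (1, 2)

# Left-hand side: ||u - v||^2, via the component-wise difference.
diff = tuple(a - b for a, b in zip(u, v))
lhs = sum(a * a for a in diff)

# Right-hand side: ||u||^2 + ||v||^2 - 2 (u . v).
dot_uv = sum(a * b for a, b in zip(u, v))
rhs = sum(a * a for a in u) + sum(b * b for b in v) - 2 * dot_uv

print(lhs, rhs)  # 8 8
```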

**Exercise**. Suppose that \(u\) and \(v\) are \(n\)-dimensional orthogonal vectors. Prove the Pythagorean theorem: $$\Vert u + v \Vert^2 = \Vert u \Vert^2 + \Vert v \Vert^2.$$

**Exercise**. Show that the columns of a real matrix \(A\) are orthonormal, i.e. they are pairwise orthogonal unit vectors, if and only if the product of the transpose of \(A\) with the matrix \(A\) is the identity matrix \(I\), i.e. \(A^T A = I\). For the definition of the transpose of a matrix, see matrix operations.
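The criterion \(A^T A = I\) in this exercise is easy to check numerically; the sketch below uses NumPy with an illustrative matrix of our own whose columns are the first two standard unit vectors of \(\mathbb{R}^3\):

```python
import numpy as np

# Columns of A are orthonormal vectors in R^3 (here, e_1 and e_2).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

# Orthonormal columns <=> A^T A is the k x k identity,
# where k is the number of columns.
print(np.allclose(A.T @ A, np.eye(A.shape[1])))  # True

# A matrix with non-orthonormal columns fails the test.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(np.allclose(B.T @ B, np.eye(B.shape[1])))  # False
```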