Square Matrices in Mathematics

Square matrices

A square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied. The entries a_{ii} form the main diagonal of a square matrix. They lie on the imaginary line which runs from the top left corner to the bottom right corner of the matrix.
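As a quick illustration, here is a minimal NumPy sketch (the matrix values are arbitrary) that checks squareness and reads off the main diagonal:

    import numpy as np

    # A 3-by-3 square matrix (order 3); the entries are arbitrary.
    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    print(A.shape[0] == A.shape[1])  # True: same number of rows and columns
    print(np.diag(A))                # main diagonal a_11, a_22, a_33 -> [1 5 9]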

Main types

Each example below is shown with n = 3.

Diagonal matrix:
      \begin{bmatrix}
           a_{11} & 0 & 0 \\
           0 & a_{22} & 0 \\
           0 & 0 & a_{33} \\
      \end{bmatrix}

Lower triangular matrix:
      \begin{bmatrix}
           a_{11} & 0 & 0 \\
           a_{21} & a_{22} & 0 \\
           a_{31} & a_{32} & a_{33} \\
      \end{bmatrix}

Upper triangular matrix:
      \begin{bmatrix}
           a_{11} & a_{12} & a_{13} \\
           0 & a_{22} & a_{23} \\
           0 & 0 & a_{33} \\
      \end{bmatrix}

Diagonal or triangular matrix

If all entries outside the main diagonal are zero, A is called a diagonal matrix. If all entries above (respectively below) the main diagonal are zero, A is called a lower (respectively upper) triangular matrix.
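All three types can be produced directly with NumPy; a minimal sketch, again with arbitrary entries:

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    D = np.diag(np.diag(A))  # diagonal matrix: keep a_ii, zero elsewhere
    L = np.tril(A)           # lower triangular: zero entries above the diagonal
    U = np.triu(A)           # upper triangular: zero entries below the diagonal
    print(D, L, U, sep="\n\n")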

Identity matrix

The identity matrix I_n of size n is the n-by-n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, e.g.

I_1 = \begin{bmatrix} 1 \end{bmatrix}
,\ 
I_2 = \begin{bmatrix}
         1 & 0 \\
         0 & 1 
      \end{bmatrix}
,\ \cdots ,\ 
I_n = \begin{bmatrix}
         1 & 0 & \cdots & 0 \\
         0 & 1 & \cdots & 0 \\
         \vdots & \vdots & \ddots & \vdots \\
         0 & 0 & \cdots & 1
      \end{bmatrix}
It is a square matrix of order n, and also a special kind of diagonal matrix. It is called the identity matrix because multiplication with it leaves a matrix unchanged:
A I_n = I_m A = A for any m-by-n matrix A.
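A small NumPy sketch of this property (the 2-by-3 matrix is arbitrary):

    import numpy as np

    A = np.arange(6).reshape(2, 3)  # an arbitrary 2-by-3 matrix
    I2, I3 = np.eye(2), np.eye(3)   # identity matrices of orders 2 and 3

    # A I_n = I_m A = A for an m-by-n matrix A (here m = 2, n = 3).
    print(np.array_equal(A @ I3, A))  # True
    print(np.array_equal(I2 @ A, A))  # True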

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, i.e., A = A^T, is a symmetric matrix. If instead A is equal to the negative of its transpose, i.e., A = −A^T, then A is a skew-symmetric matrix. For complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy A = A^*, where the star or asterisk denotes the conjugate transpose of the matrix, i.e., the transpose of the complex conjugate of A.
By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; i.e., every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns; see below.
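A minimal sketch of the spectral theorem in NumPy, using an arbitrary real symmetric matrix; eigh is NumPy's eigensolver for symmetric/Hermitian matrices:

    import numpy as np

    # An arbitrary real symmetric matrix (S equals its transpose).
    S = np.array([[2., 1.],
                  [1., 2.]])

    # By the spectral theorem the eigenvalues are real and the
    # eigenvectors form an orthonormal eigenbasis.
    eigenvalues, eigenvectors = np.linalg.eigh(S)
    print(eigenvalues)  # [1. 3.], both real
    print(np.allclose(eigenvectors.T @ eigenvectors, np.eye(2)))  # True: orthonormal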

Invertible matrix and its inverse

A square matrix A is called invertible or non-singular if there exists a matrix B such that
AB = BA = I_n.
If B exists, it is unique and is called the inverse matrix of A, denoted A^{-1}.
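A minimal NumPy sketch (the invertible matrix is arbitrary):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 1.]])  # determinant 1, so A is invertible

    B = np.linalg.inv(A)  # the (unique) inverse A^{-1}

    # AB = BA = I_n, up to floating-point rounding.
    print(np.allclose(A @ B, np.eye(2)))  # True
    print(np.allclose(B @ A, np.eye(2)))  # True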

Definite matrix

Positive definite matrix:
      \begin{bmatrix}
          1/4 & 0 \\
          0 & 1 \\
      \end{bmatrix}
Q(x, y) = \tfrac{1}{4}x^2 + y^2
Points such that Q(x, y) = 1 form an ellipse.

Indefinite matrix:
      \begin{bmatrix}
          1/4 & 0 \\
          0 & -1/4 \\
      \end{bmatrix}
Q(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2
Points such that Q(x, y) = 1 form a hyperbola.
A symmetric n×n matrix is called positive-definite (respectively negative-definite; indefinite) if for all nonzero vectors x ∈ R^n the associated quadratic form given by
Q(x) = x^T A x
takes only positive values (respectively only negative values; both some negative and some positive values). If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is called positive-semidefinite (respectively negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.
A symmetric matrix is positive-definite if and only if all its eigenvalues are positive. The table above shows two possibilities for 2-by-2 matrices.
Allowing as input two different vectors instead yields the bilinear form associated to A:
B_A(x, y) = x^T A y.
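The eigenvalue criterion and both forms can be checked numerically; a minimal NumPy sketch using the two 2-by-2 examples from the table above (the test vectors are arbitrary):

    import numpy as np

    P = np.array([[0.25, 0.0],
                  [0.0,  1.0]])   # the positive-definite example above
    J = np.array([[0.25, 0.0],
                  [0.0, -0.25]])  # the indefinite example above

    # A symmetric matrix is positive-definite iff all eigenvalues are positive.
    print(np.linalg.eigvalsh(P))  # [0.25 1.  ] -> all positive: positive-definite
    print(np.linalg.eigvalsh(J))  # [-0.25 0.25] -> mixed signs: indefinite

    # Quadratic form Q(x) = x^T A x and bilinear form B_A(x, y) = x^T A y.
    x, y = np.array([1.0, 2.0]), np.array([3.0, 1.0])
    print(x @ P @ x)  # Q(x) = 0.25*1 + 1*4 = 4.25 > 0
    print(x @ P @ y)  # B_A(x, y) = 2.75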

Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (i.e., orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:
A^\mathrm{T}=A^{-1}, \,
which entails
A^\mathrm{T} A = A A^\mathrm{T} = I, \,
where I is the identity matrix.
An orthogonal matrix A is necessarily invertible (with inverse A^{-1} = A^T), unitary (A^{-1} = A^*), and normal (A^*A = AA^*). The determinant of any orthogonal matrix is either +1 or −1. A special orthogonal matrix is an orthogonal matrix with determinant +1. As a linear transformation, every orthogonal matrix with determinant +1 is a pure rotation, while every orthogonal matrix with determinant −1 is either a pure reflection, or a composition of a reflection and a rotation.
The complex analogue of an orthogonal matrix is a unitary matrix.
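A minimal NumPy sketch verifying these properties for a rotation matrix (the angle is arbitrary):

    import numpy as np

    theta = np.pi / 4  # an arbitrary rotation angle
    # A rotation matrix is a special orthogonal matrix (determinant +1).
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(Q.T @ Q, np.eye(2)))    # True: Q^T Q = I, so Q^T = Q^{-1}
    print(np.allclose(Q @ Q.T, np.eye(2)))    # True
    print(np.isclose(np.linalg.det(Q), 1.0))  # True: det = +1 (a pure rotation)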
