Square Matrix

In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.

Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (a rotation matrix) and v is a column vector describing the position of a point in space, the product Rv is a column vector describing the position of that point after the rotation. If v is a row vector, the same transformation can be obtained using v R^{\mathsf{T}}, where R^{\mathsf{T}} is the transpose of R.
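
As an illustration, the following minimal NumPy sketch (the rotation angle and the point are arbitrary choices, not taken from the text above) builds a 2 × 2 rotation matrix R, applies it to a column vector v, and checks that the row-vector form v^T R^T gives the same coordinates:

    import numpy as np

    theta = np.pi / 4                       # arbitrary rotation angle (45 degrees)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # rotation matrix

    v = np.array([[1.0],
                  [0.0]])                   # column vector: a point on the x-axis

    rotated = R @ v                         # R v: the point after the rotation
    row_form = v.T @ R.T                    # v^T R^T: same result, as a row vector

    print(rotated.ravel())                  # [0.7071... 0.7071...]
    print(np.allclose(rotated.ravel(), row_form.ravel()))  # True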

Main diagonal

The entries a_{ii} (i = 1, …, n) form the main diagonal of a square matrix. They lie on the imaginary line that runs from the top left corner of the matrix to the bottom right corner. For instance, the main diagonal of a 4 × 4 matrix consists of the entries a_{11}, a_{22}, a_{33}, and a_{44}; in one example matrix these take the values a_{11} = 9, a_{22} = 11, a_{33} = 4, a_{44} = 10.

The diagonal from the top right corner to the bottom left corner of a square matrix is called the antidiagonal or counterdiagonal.

Special kinds

Diagonal or triangular matrix

If all entries outside the main diagonal are zero, A is called a diagonal matrix. If all entries above (or below) the main diagonal are zero, A is called a lower (or upper) triangular matrix.

Examples with n = 3:

diagonal matrix: \begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}

lower triangular matrix: \begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}

upper triangular matrix: \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}
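
A minimal NumPy sketch of how such matrices can be obtained (the sample 3 × 3 matrix is an arbitrary choice): np.diag keeps only the main diagonal, np.tril and np.triu give the lower and upper triangular parts, and np.eye builds an identity matrix.

    import numpy as np

    A = np.arange(1, 10).reshape(3, 3)   # arbitrary 3x3 example matrix

    D = np.diag(np.diag(A))              # diagonal matrix: off-diagonal entries zeroed
    L = np.tril(A)                       # lower triangular: entries above the diagonal zeroed
    U = np.triu(A)                       # upper triangular: entries below the diagonal zeroed
    I3 = np.eye(3)                       # 3x3 identity matrix

    print(D)
    print(L)
    print(U)
    print(np.allclose(A @ I3, A))        # multiplying by the identity leaves A unchanged: True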

Identity matrix

The identity matrix I_n of order n is the n × n matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example,

I_1 = \begin{bmatrix} 1 \end{bmatrix},\quad I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\quad \ldots,\quad I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.

It is a square matrix of order n, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged:

A I_n = I_m A = A for any m-by-n matrix A.

Invertible matrix and its inverse

A square matrix A is said to be invertible or non-singular if there exists a matrix B such that

AB = BA = I_n.

If B exists, it is unique and is called the inverse matrix of A, denoted A^{-1}.
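
For example, with NumPy (the 2 × 2 matrix below is an arbitrary invertible example) the inverse can be computed with np.linalg.inv, and the defining property AB = BA = I_n can be checked numerically:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [5.0, 3.0]])            # det = 1, so A is invertible
    B = np.linalg.inv(A)                  # candidate inverse

    I2 = np.eye(2)
    print(np.allclose(A @ B, I2))         # True: AB = I
    print(np.allclose(B @ A, I2))         # True: BA = I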

Symmetric or skew-symmetric matrix

A square matrix A that is equal to its transpose, that is, A^{\mathsf{T}} = A, is a symmetric matrix. If instead A^{\mathsf{T}} = -A, then A is said to be a skew-symmetric matrix.

For a complex square matrix A, the appropriate analogue of the transpose is often the conjugate transpose A^{*}, defined as the transpose of the complex conjugate of A. A complex square matrix A satisfying A^{*} = A is called a Hermitian matrix. If instead A^{*} = -A, then A is called a skew-Hermitian matrix.

By the spectral theorem, real symmetric (or complex Hermitian) matrices have an orthogonal (or unitary) eigenbasis; that is, every vector can be expressed as a linear combination of eigenvectors. In both cases, all eigenvalues are real.
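
A small NumPy sketch of this statement for a real symmetric matrix (the matrix is an arbitrary example): np.linalg.eigh returns real eigenvalues and an orthonormal matrix of eigenvectors Q, so that A = Q diag(λ) Q^T.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])          # arbitrary real symmetric matrix

    eigenvalues, Q = np.linalg.eigh(A)       # eigh is intended for symmetric/Hermitian input

    print(eigenvalues)                                       # all real
    print(np.allclose(Q.T @ Q, np.eye(3)))                   # True: eigenvectors are orthonormal
    print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))    # True: A = Q diag(lambda) Q^T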

Definite matrix

Positive definite: A = \begin{bmatrix} 1/4 & 0 \\ 0 & 1 \end{bmatrix}, with Q(x, y) = \tfrac{1}{4}x^2 + y^2. The points satisfying Q(x, y) = 1 form an ellipse.

Indefinite: A = \begin{bmatrix} 1/4 & 0 \\ 0 & -1/4 \end{bmatrix}, with Q(x, y) = \tfrac{1}{4}x^2 - \tfrac{1}{4}y^2. The points satisfying Q(x, y) = 1 form a hyperbola.

A symmetric n × n matrix A is said to be positive-definite (respectively negative-definite; indefinite) if for all nonzero vectors x \in \mathbb{R}^n the associated quadratic form given by

q(x) = x^{\mathsf{T}} A x

takes only positive values (respectively only negative values; both some negative and some positive values). [4] If the quadratic form takes only non-negative (respectively only non-positive) values, the symmetric matrix is said to be positive-semidefinite (respectively negative-semidefinite); hence, the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite.

A symmetric matrix is positive-definite if and only if all of its eigenvalues are positive. [5] The table above shows two possibilities for 2 × 2 matrices. Allowing two different vectors as input instead yields the bilinear form associated with A:

B_A(x, y) = x^{\mathsf{T}} A y.
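
The two example matrices from the table can be examined numerically; the sketch below (a rough check, not a full classification routine) tests the sign of the eigenvalues and evaluates the quadratic form q(x) = x^T A x at a few arbitrary sample points:

    import numpy as np

    A_pos = np.array([[0.25, 0.0],
                      [0.0,  1.0]])          # positive-definite example from the table
    A_ind = np.array([[0.25,  0.0],
                      [0.0,  -0.25]])        # indefinite example from the table

    def quadratic_form(A, x):
        return x @ A @ x                     # q(x) = x^T A x

    print(np.linalg.eigvalsh(A_pos))         # [0.25, 1.0]   -> all positive: positive definite
    print(np.linalg.eigvalsh(A_ind))         # [-0.25, 0.25] -> mixed signs: indefinite

    x = np.array([1.0, 2.0])                 # arbitrary nonzero test vector
    print(quadratic_form(A_pos, x))          # positive
    print(quadratic_form(A_ind, np.array([0.0, 1.0])))  # negative for this direction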

Orthogonal matrix

An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix A is orthogonal if its transpose is equal to its inverse:

A^{\mathsf{T}} = A^{-1},

which entails

A^{\mathsf{T}} A = A A^{\mathsf{T}} = I,

where I is the identity matrix.

An orthogonal matrix A is necessarily invertible (with inverse A^{-1} = A^{\mathsf{T}}), unitary (A^{-1} = A^{*}), and normal (A^{*}A = AA^{*}). The determinant of any orthogonal matrix is either +1 or -1. The n × n orthogonal matrices with determinant +1 form the special orthogonal group SO(n).

The complex analogue of an orthogonal matrix is a unitary matrix.
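
As an illustration (the reflection vector is an arbitrary choice), the Householder reflection H = I - 2vv^T/(v^T v) is a real orthogonal matrix; the sketch below checks that H^T H = I, that the transpose equals the inverse, and that the determinant is ±1:

    import numpy as np

    v = np.array([1.0, 2.0, 2.0])                    # arbitrary nonzero vector
    H = np.eye(3) - 2.0 * np.outer(v, v) / (v @ v)   # Householder reflection

    print(np.allclose(H.T @ H, np.eye(3)))           # True: columns are orthonormal
    print(np.allclose(H.T, np.linalg.inv(H)))        # True: transpose equals inverse
    print(np.linalg.det(H))                          # -1.0 (a reflection reverses orientation)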

Normal matrix

A real or complex square matrix A is said to be normal if

A^{*} A = A A^{*}.

If a real square matrix is symmetric, skew-symmetric, or orthogonal, then it is normal. If a complex square matrix is Hermitian, skew-Hermitian, or unitary, then it is normal. Normal matrices are of interest mainly because they include the types of matrices just listed and form the broadest class of matrices for which the spectral theorem holds. [7]
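
A quick numerical check of this definition (both matrices below are arbitrary examples): a real symmetric matrix passes the test A^{*}A = AA^{*}, while a generic non-symmetric one fails it.

    import numpy as np

    S = np.array([[2.0, 1.0],
                  [1.0, 3.0]])              # symmetric, hence normal
    N = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # a shear matrix, not normal

    def is_normal(A):
        return np.allclose(A.conj().T @ A, A @ A.conj().T)

    print(is_normal(S))   # True
    print(is_normal(N))   # False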

Operations

Trace

The trace, tr(A), of a square matrix A is the sum of its diagonal entries. While matrix multiplication is not commutative, the trace of the product of two matrices is independent of the order of the factors:

\operatorname{tr}(AB) = \operatorname{tr}(BA).

This follows immediately from the definition of matrix multiplication:

\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ji} = \operatorname{tr}(BA).

Also, the trace of a matrix is equal to that of its transpose, that is,

\operatorname{tr}(A) = \operatorname{tr}(A^{\mathsf{T}}).
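
A short NumPy check of these identities (the two rectangular matrices are arbitrary examples, with A of size 2 × 3 and B of size 3 × 2 so that both AB and BA are square):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])        # 2x3
    B = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0]])             # 3x2

    print(np.trace(A @ B))                 # trace of the 2x2 product
    print(np.trace(B @ A))                 # same value: tr(AB) = tr(BA)
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))    # True

    S = A @ B                              # a square matrix
    print(np.isclose(np.trace(S), np.trace(S.T)))           # True: tr(S) = tr(S^T)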

Determinant

The determinant \det(A) or |A| of a square matrix A is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in \mathbb{R}^2) or volume (in \mathbb{R}^3) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved.

The determinant of a 2 × 2 matrix is given by

\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.

The determinant of a 3 × 3 matrix involves 6 terms (rule of Sarrus). The longer Leibniz formula generalizes these two formulas to all dimensions. [8]

The determinant of the product of square matrices is equal to the product of their determinants: [9]

\det(AB) = \det(A) \cdot \det(B).

Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by -1. [10] Using these operations, any matrix can be transformed into a lower (or upper) triangular matrix, and for such a matrix the determinant equals the product of the entries on the main diagonal; this provides a method to compute the determinant of any matrix.

Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices. [11] This expansion can be used for a recursive definition of determinants (taking as the base case the determinant of a 1 × 1 matrix, which is its unique entry, or even the determinant of a 0 × 0 matrix, which is 1), which can be seen as equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the quotient of the determinants of two related square matrices equals the value of each of the system's variables. [12]
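
A compact recursive sketch of the Laplace expansion along the first row (fine for small matrices; for anything larger, a library routine such as np.linalg.det, which works via an LU-type factorization, is the practical choice):

    import numpy as np

    def det_laplace(A):
        """Determinant by Laplace expansion along the first row (recursive)."""
        n = A.shape[0]
        if n == 0:                       # determinant of a 0x0 matrix is 1 (empty product)
            return 1.0
        if n == 1:                       # base case: the single entry
            return A[0, 0]
        total = 0.0
        for j in range(n):
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # remove row 0, column j
            total += (-1) ** j * A[0, j] * det_laplace(minor)
        return total

    A = np.array([[2.0, 1.0, 3.0],
                  [0.0, -1.0, 4.0],
                  [5.0, 2.0, 1.0]])       # arbitrary 3x3 example
    print(det_laplace(A))
    print(np.linalg.det(A))               # agrees up to rounding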

Eigenvalues and eigenvectors

A number λ and a nonzero vector v satisfying

A\mathbf{v} = \lambda \mathbf{v}

are called an eigenvalue and an eigenvector of A, respectively. [13] [14] The number λ is an eigenvalue of an n × n matrix A if and only if A - \lambda I_n is not invertible, which is equivalent to

\det(A - \lambda I) = 0.

The polynomial p_A in an indeterminate X given by evaluation of the determinant \det(XI_n - A) is called the characteristic polynomial of A. It is a monic polynomial of degree n. Therefore the polynomial equation p_A(\lambda) = 0 has at most n different solutions, that is, eigenvalues of the matrix. [16] They may be complex even if the entries of A are real. According to the Cayley–Hamilton theorem, p_A(A) = 0, that is, the result of substituting the matrix itself into its own characteristic polynomial yields the zero matrix.
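
A brief NumPy illustration (the 2 × 2 matrix is an arbitrary example): np.linalg.eig returns the eigenvalues, which may be complex even for a real matrix, and substituting A into its own characteristic polynomial, which for the 2 × 2 case is λ² − tr(A)λ + det(A), gives the zero matrix, as the Cayley–Hamilton theorem states.

    import numpy as np

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])            # arbitrary real matrix (a 90-degree rotation)

    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)                      # [0.+1.j, 0.-1.j]: complex even though A is real

    # Cayley-Hamilton for a 2x2 matrix: p_A(X) = X^2 - tr(A) X + det(A) I
    p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
    print(np.allclose(p_of_A, np.zeros((2, 2))))   # True: p_A(A) is the zero matrix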
