In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. An example of a 2×2 diagonal matrix is shown below on the left, and of a 3×3 diagonal matrix on the right. An identity matrix of any size, or any scalar multiple of it (a scalar matrix), is a diagonal matrix.

{\displaystyle {\begin{bmatrix}3&0\\0&2\end{bmatrix}},\qquad {\begin{bmatrix}7&0&0\\0&6&0\\0&0&4\end{bmatrix}}}

A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values.

## Definition

As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix *D* = (*d*_{i,j}) with *n* columns and *n* rows is diagonal if

{\displaystyle \forall i,j\in \{1,2,\ldots ,n\},i\neq j\implies d_{i,j}=0.}

However, the main diagonal entries are unrestricted.

The term *diagonal matrix* may sometimes refer to a **rectangular diagonal matrix**, which is an *m*-by-*n* matrix in which all the entries not of the form *d*_{i,i} are zero. For example:

{\displaystyle {\begin{bmatrix}1&0&0\\0&4&0\\0&0&-3\\0&0&0\end{bmatrix}}\quad {\text{or}}\quad {\begin{bmatrix}1&0&0&0&0\\0&4&0&0&0\\0&0&-3&0&0\end{bmatrix}}}

More often, however, *diagonal matrix* refers to square matrices, which can be specified explicitly as a **square diagonal matrix**. A square diagonal matrix is a symmetric matrix, so it can also be called a **symmetric diagonal matrix**.

The following matrix is an example of a square diagonal matrix:

{\begin{bmatrix}1&0&0\\0&4&0\\0&0&-2\end{bmatrix}}

If the entries are real numbers or complex numbers , then it is also a normal matrix .

For the remainder of this article we will only consider square diagonal matrices, and refer to them simply as “diagonal matrices.”

## Scalar matrix

A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple *λI* of the identity matrix *I*. Its effect on a vector is scalar multiplication by *λ*. For example, a 3×3 scalar matrix has the form:

{\begin{bmatrix}\lambda &0&0\\0&\lambda &0\\0&0&\lambda \end{bmatrix}}\equiv \lambda {\boldsymbol {I}}_{3}

Scalar matrices are the center of the algebra of matrices: that is, they are exactly the matrices that commute with all other square matrices of the same size. [A] By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct commutes only with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if {\displaystyle D=\operatorname {diag} (a_{1},\dots ,a_{n})} is a diagonal matrix with {\displaystyle a_{i}\neq a_{j},} then given a matrix M with {\displaystyle m_{ij}\neq 0,} the (i, j) terms of the products are {\displaystyle (DM)_{ij}=a_{i}m_{ij}} and {\displaystyle (MD)_{ij}=m_{ij}a_{j},} and {\displaystyle a_{j}m_{ij}\neq a_{i}m_{ij}} (since one can divide by m_{ij}), so they do not commute unless the off-diagonal terms are zero. [B] Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only the diagonal matrices.
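This centralizer property can be checked numerically; a minimal NumPy sketch, where the matrices D, E, and M are arbitrary illustrative choices:

```python
import numpy as np

# Diagonal matrix with pairwise distinct entries.
D = np.diag([1.0, 2.0, 3.0])

# A diagonal matrix always commutes with D.
E = np.diag([5.0, -1.0, 4.0])
print(np.allclose(D @ E, E @ D))  # True

# A matrix with a nonzero off-diagonal entry does not:
# (DM)_ij = a_i * m_ij while (MD)_ij = m_ij * a_j.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(np.allclose(D @ M, M @ D))  # False
```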

For an abstract vector space V (rather than the concrete vector space K^{n}), the analogue of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra End(M) (the algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map

{\displaystyle R\to \operatorname {End} (M)}

(from a scalar λ to its corresponding scalar transformation, multiplication by λ), exhibiting End(M) as an R-algebra.

For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, the invertible scalar transforms are the center of the general linear group GL(V). The former is more generally true of free modules M\cong R^{n}, for which the endomorphism algebra is isomorphic to a matrix algebra.

## Vector operations

Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix {\displaystyle D=\operatorname {diag} (a_{1},\dots ,a_{n})} and a vector {\displaystyle \mathbf {v} ={\begin{bmatrix}x_{1}&\dotsm &x_{n}\end{bmatrix}}^{\textsf {T}}}, the product is:

{\displaystyle D\mathbf {v} =\operatorname {diag} (a_{1},\dots ,a_{n}){\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}\\&\ddots \\&&a_{n}\end{bmatrix}}{\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.}

This can be expressed more compactly by using a vector instead of a diagonal matrix, {\displaystyle \mathbf {d} ={\begin{bmatrix}a_{1}&\dotsm &a_{n}\end{bmatrix}}^{\textsf {T}}}, and taking the Hadamard product of the vectors (entrywise product), denoted {\displaystyle \mathbf {d} \circ \mathbf {v} }:

{\displaystyle D\mathbf {v} =\mathbf {d} \circ \mathbf {v} ={\begin{bmatrix}a_{1}\\\vdots \\a_{n}\end{bmatrix}}\circ {\begin{bmatrix}x_{1}\\\vdots \\x_{n}\end{bmatrix}}={\begin{bmatrix}a_{1}x_{1}\\\vdots \\a_{n}x_{n}\end{bmatrix}}.}

This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, [2] since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.
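As a quick NumPy sketch (the entries are arbitrary illustrative values), the dense diagonal product and the entrywise product agree, while the latter never materializes the zero entries:

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0])   # diagonal entries of D
v = np.array([1.0, -1.0, 4.0])  # the vector

dense = np.diag(a) @ v  # builds the full n-by-n matrix, mostly zeros
sparse = a * v          # Hadamard product: O(n) memory and time
print(np.allclose(dense, sparse))  # True
```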

## Matrix operations

The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write diag(*a*_{1}, …, *a*_{n}) for a diagonal matrix whose diagonal entries starting in the upper left corner are *a*_{1}, …, *a*_{n}. Then, for addition, we have

diag(*a*_{1}, …, *a*_{n}) + diag(*b*_{1}, …, *b*_{n}) = diag(*a*_{1} + *b*_{1}, …, *a*_{n} + *b*_{n}) and for matrix multiplication,

diag(*a*_{1}, …, *a*_{n}) diag(*b*_{1}, …, *b*_{n}) = diag(*a*_{1}*b*_{1}, …, *a*_{n}*b*_{n}).

The diagonal matrix diag(*a*_{1}, …, *a*_{n}) is invertible if and only if the entries *a*_{1}, …, *a*_{n} are all nonzero. In this case, we have

diag(*a*_{1}, …, *a*_{n})^{−1} = diag(*a*_{1}^{−1}, …, *a*_{n}^{−1}).

In particular, diagonal matrices form a subring of the ring of all n -by- n matrices .
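These identities are easy to verify numerically; a minimal NumPy sketch, with arbitrarily chosen nonzero entries:

```python
import numpy as np

a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])

# Addition and multiplication act entrywise on the diagonals.
assert np.allclose(np.diag(a) + np.diag(b), np.diag(a + b))
assert np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b))

# The inverse exists iff no diagonal entry is zero,
# and simply inverts each diagonal entry.
assert np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a))
```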

Multiplying an *n*-by-*n* matrix *A* from the left with diag(*a*_{1}, …, *a*_{n}) amounts to multiplying the *i*-th row of *A* by *a*_{i} for all *i*; multiplying *A* from the right with diag(*a*_{1}, …, *a*_{n}) amounts to multiplying the *i*-th column of *A* by *a*_{i} for all *i*.
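In NumPy this row/column scaling is usually done by broadcasting rather than by forming the diagonal matrix; a sketch with arbitrary entries:

```python
import numpy as np

d = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

left = np.diag(d) @ A   # scales the i-th row of A by d[i]
right = A @ np.diag(d)  # scales the i-th column of A by d[i]

# Equivalent broadcasting forms, without building diag(d):
assert np.allclose(left, d[:, None] * A)
assert np.allclose(right, A * d[None, :])
```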

## Operator matrix in eigenbasis

As explained in determining the coefficients of an operator matrix, there is a special basis, **e**_{1}, …, **e**_{n}, for which the matrix *A* takes the diagonal form. Hence, in the defining equation {\textstyle A\mathbf {e} _{j}=\sum a_{i,j}\mathbf {e} _{i}}, all coefficients a_{i,j} with i ≠ j are zero, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and designated with λ_{i} in the equation, which reduces to {\displaystyle A\mathbf {e} _{i}=\lambda _{i}\mathbf {e} _{i}}. The resulting equation is known as the eigenvalue equation [4] and is used to derive the characteristic polynomial and, further, the eigenvalues and eigenvectors. In other words, the eigenvalues of diag(λ_{1}, …, λ_{n}) are λ_{1}, …, λ_{n} with associated eigenvectors **e**_{1}, …, **e**_{n}.
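Numerically, `numpy.linalg.eig` recovers exactly this picture: for a diagonalizable matrix, the change of basis to the eigenvectors brings the matrix into diagonal form. A sketch with an arbitrary symmetric example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, E = np.linalg.eig(A)  # columns of E are the eigenvectors e_i

# In the eigenbasis, A is represented by diag(lambda_1, ..., lambda_n).
D = np.linalg.inv(E) @ A @ E
assert np.allclose(D, np.diag(eigvals))

# Each column satisfies the eigenvalue equation A e_i = lambda_i e_i.
for i in range(2):
    assert np.allclose(A @ E[:, i], eigvals[i] * E[:, i])
```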

## Properties

- The determinant of diag(*a*_{1}, …, *a*_{n}) is the product *a*_{1}⋯*a*_{n}.
- The adjugate of a diagonal matrix is again diagonal.
- Where all matrices are square,
  - A matrix is diagonal if and only if it is triangular and normal.
  - A matrix is diagonal if and only if it is both upper- and lower-triangular.
  - A diagonal matrix is symmetric.
- The identity matrix *I*_{n} and the zero matrix are diagonal.
- A 1×1 matrix is always diagonal.
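A couple of these properties checked numerically, with arbitrarily chosen entries:

```python
import numpy as np

a = np.array([1.0, 4.0, -2.0])
D = np.diag(a)

# The determinant is the product of the diagonal entries.
assert np.isclose(np.linalg.det(D), np.prod(a))  # 1 * 4 * (-2) = -8

# A diagonal matrix is symmetric, upper- and lower-triangular.
assert np.allclose(D, D.T)
assert np.allclose(D, np.triu(D)) and np.allclose(D, np.tril(D))
```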

## Applications

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operations and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix.

In fact, a given *n*-by-*n* matrix *A* is similar to a diagonal matrix (meaning that there is a matrix *X* such that *X*^{−1}*AX* is diagonal) if and only if it has *n* linearly independent eigenvectors. Such matrices are said to be diagonalizable.

Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if *AA*^{*} = *A*^{*}*A* then there exists a unitary matrix *U* such that *UAU*^{*} is diagonal). Furthermore, the singular value decomposition implies that for any matrix *A*, there exist unitary matrices *U* and *V* such that *U*^{*}*AV* is diagonal with nonnegative entries.
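Both decompositions are available in NumPy; a sketch on arbitrary small matrices:

```python
import numpy as np

# Singular value decomposition: A = U @ diag(s) @ Vh,
# so U^* A V = diag(s) with nonnegative entries.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
U, s, Vh = np.linalg.svd(A)
assert np.allclose(U.conj().T @ A @ Vh.conj().T, np.diag(s))
assert np.all(s >= 0)

# Spectral theorem for a normal (here symmetric) matrix:
# the unitary Q of eigenvectors diagonalizes S.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, Q = np.linalg.eigh(S)
assert np.allclose(Q.conj().T @ S @ Q, np.diag(w))
```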

## Operator theory

In operator theory, particularly the study of PDEs, operators are especially easy to understand, and PDEs easy to solve, if the operator is diagonal with respect to the basis one is working with; this corresponds to a separable partial differential equation. Therefore, a key technique for understanding operators is a change of coordinates (in the language of operators, an integral transform) which changes the basis to an eigenbasis of eigenfunctions, making the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally, translation invariant operators), such as the Laplacian operator in, say, the heat equation.
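The discrete analogue can be seen directly: a circulant matrix (a translation-invariant operator on a periodic grid, such as a discretized second derivative) is diagonalized by the discrete Fourier transform. A sketch, using an arbitrary periodic 1D Laplacian stencil:

```python
import numpy as np

n = 8
# Circulant second-difference operator (periodic 1D Laplacian):
# first column c, with column k equal to c cyclically shifted by k.
c = np.zeros(n)
c[0], c[1], c[-1] = -2.0, 1.0, 1.0
L = np.array([np.roll(c, k) for k in range(n)]).T

# The DFT matrix diagonalizes any circulant matrix;
# the eigenvalues are the DFT of the first column.
F = np.fft.fft(np.eye(n))  # DFT matrix
Lam = np.fft.fft(c)        # eigenvalues
assert np.allclose(F @ L, np.diag(Lam) @ F)
```

This is just the convolution theorem: applying a circulant matrix is circular convolution with its first column, which becomes entrywise (diagonal) multiplication in the Fourier basis.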

Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.