<aside> Eigenvalues and eigenvectors: For a linear operator $\hat A$ the eigenvalue equation is defined as
$$ \hat A |u \rang = \lambda | u \rang $$
where $\lambda$ and $|u\rang$ are an eigenvalue and its associated eigenvector of $\hat A$ respectively, with $|u\rang \ne |0\rang$
</aside>
This equation can be rearranged to get
$$ (\hat A-\lambda \hat{\bold 1}) \,|u\rang = | 0 \rang $$
This has a non-trivial solution if and only if
$$ \det(\hat A-\lambda \hat {\bold 1})=0 $$
which generates the characteristic polynomial of degree $N$, with $N$ solutions for $\lambda \in \mathbb C$
Note: each distinct eigenvalue gives at least one eigenvector. If two eigenvalues coincide (a repeated root of the characteristic polynomial), the eigenvalue is said to be degenerate
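As a quick numerical illustration of this condition, here is a minimal NumPy sketch (the $2\times 2$ matrix is an arbitrary example chosen for this note, not taken from the text): it builds the characteristic polynomial $\det(\hat A-\lambda\hat{\bold 1})$ for a $2\times 2$ matrix and compares its roots with the eigenvalues returned by the library routine.

```python
import numpy as np

# Arbitrary 2x2 example matrix (chosen only to illustrate the method)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# For a 2x2 matrix, det(A - lambda*1) = lambda^2 - tr(A)*lambda + det(A)
char_poly = [1.0, -np.trace(A), np.linalg.det(A)]
roots = np.sort(np.roots(char_poly))

# Compare with the eigenvalues found directly
eigvals = np.sort(np.linalg.eigvals(A))
print(roots, eigvals)               # the two sets of lambda agree
assert np.allclose(roots, eigvals)
```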
Theorem: Let $\hat A$ be a Hermitian operator ($\hat A^\dagger=\hat A$). Its eigenvalues are real and eigenvectors belonging to distinct eigenvalues are orthogonal
Proof: The eigenvalue equation for the operator is given by
$$ \hat A|u_i\rang =\lambda _i |u_i \rang $$
Consider the matrix element of $\hat A$ with respect to two eigenvectors
$$ \lang u_j |\hat A |u_k \rang = \lambda _k \lang u_j|u_k \rang $$
Using the Hermitian property and the definition of the adjoint,
$$ \lang u_j |\hat A|u_k\rang =\lang u_j |\hat A^\dagger |u_k \rang = \overline {\lang u_k |\hat A|u_j \rang} = \overline \lambda _j \lang u_j |u_k \rang $$
Equating the two expressions we obtain
$$ (\lambda _k-\overline \lambda_j)\lang u_j|u_k \rang =0 $$
- For $j=k$, we must have $\lambda_j=\overline \lambda_j \in \R$ since $\lang u_j|u_j\rang >0$ for all $|u_j\rang \ne |0\rang$
- For $j\ne k$ with distinct eigenvalues $\lambda_j \ne \lambda_k$, the factor $\lambda_k-\overline\lambda_j=\lambda_k-\lambda_j \ne 0$ (using the reality of the eigenvalues), thus $\lang u_j|u_k \rang =0$
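The theorem can be checked numerically. A minimal sketch, assuming NumPy and a randomly generated Hermitian matrix (not one from the text): `np.linalg.eigh` is the routine for Hermitian matrices and returns real eigenvalues and orthonormal eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Random Hermitian matrix: A = B + B^dagger
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = B + B.conj().T
assert np.allclose(A, A.conj().T)

eigvals, U = np.linalg.eigh(A)                   # columns of U are the eigenvectors |u_j>
print(eigvals)                                   # purely real
assert np.allclose(U.conj().T @ U, np.eye(N))    # <u_j|u_k> = delta_jk
```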
Theorem: A unitary operator $\hat U$, which satisfies $\hat U \hat U^\dagger = \hat U^\dagger \hat U = \hat {\bold 1}$, has the eigenvalue equation
$$ \hat U |u_j \rang = \lambda _j |u_j \rang $$
whose eigenvalues and eigenvectors satisfy the following properties
- $|\lambda_j|=1$ with $\lambda_j =e^{i\theta_j}$ where $\theta_j\in \R$
- If $\lambda_j \ne \lambda_k$ then $\lang u_k | u_j \rang =0$
- The eigenvectors of $\hat U$ can be chosen to be an orthonormal basis for $V^N$
Proof: the eigenvalue equation of $\hat U$ and its adjoint are
$$ \hat U|u_j \rang = \lambda _j |u_j \rang \qquad \lang u_k | \hat U^\dagger = \overline\lambda _k \lang u_k | $$
Combining these equations we can write
$$ \begin{aligned} \hat U |u_j \rang &= \lambda_j |u_j \rang \\ \lang u_k|\underbrace{\hat U^\dagger\hat U }_{\hat{\bold 1}}|u_j \rang &= \lambda_j \underbrace{\lang u_k| \hat U^\dagger}_{\overline\lambda _k \lang u_k |} |u_j \rang \\ \lang u_k|u_j \rang &=\lambda_j \overline\lambda_k\lang u_k|u_j \rang \end{aligned} $$
- For $j=k$, since $\lang u_j|u_j\rang >0$ we must have $\lambda_j\overline \lambda_j=1$, so $|\lambda_j|=({\lambda_j\overline \lambda_j})^{1/2}=\sqrt 1 =1$ and thus $\lambda_j=e^{i\theta_j}$ where $\theta_j\in \R$
- For $j\ne k$ with $\lambda_j \ne \lambda_k$, we have $\lambda_j\overline\lambda_k \ne 1$, thus $\lang u_k | u_j \rang =0$
- If $\lang u_k | u_j \rang =0$ then $|u_k\rang$ and $|u_j\rang$ are orthogonal, so the normalised eigenvectors can be chosen as an orthonormal basis (within a degenerate eigenspace they can always be orthogonalised, e.g. by Gram–Schmidt)
Note: statement 3 does not hold when $N\to \infin$ because some vectors in an infinite-dimensional space cannot be normalised
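A similar numerical check for the unitary case, as a sketch assuming NumPy; a random unitary matrix is generated from the QR decomposition of a random complex matrix (an illustrative choice, not from the text), and its eigenvalues land on the unit circle with orthonormal eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Random unitary matrix from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
assert np.allclose(Q.conj().T @ Q, np.eye(N))

eigvals, U = np.linalg.eig(Q)
print(np.abs(eigvals))                           # all equal to 1: lambda_j = e^{i theta_j}
assert np.allclose(np.abs(eigvals), 1.0)
# For distinct eigenvalues (the generic case here) the eigenvectors are orthonormal
assert np.allclose(U.conj().T @ U, np.eye(N))
```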
If $\hat A$ is Hermitian or Unitary, then the set of its eigenvectors $\{|u_j\rang \}^N_{j=1}$ are an orthonormal basis, therefore we can write a completeness relation $\hat {\bold 1}=\sum^N_{j=1} |u_j \rang \lang u_j |$.
From the eigenvalue equation we have
$$ \hat A|u_j \rang = \lambda_j |u_j \rang $$
Using the completeness relation we have
$$ \hat A= \hat A \hat{\bold 1}=\sum^N_{j=1} \hat A |u_j \rang \lang u_j|=\sum^N_{j=1}\lambda_j|u_j \rang \lang u_j| $$
Conclusion: this is known as the spectral representation of $\hat A$
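As a sketch of the spectral representation in code (assuming NumPy, again with a randomly generated Hermitian matrix), the operator is rebuilt from its eigenvalues and the projectors $|u_j\rang\lang u_j|$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = B + B.conj().T                                # Hermitian example

eigvals, U = np.linalg.eigh(A)                    # columns of U are the |u_j>

# Spectral representation: A = sum_j lambda_j |u_j><u_j|
A_rebuilt = sum(lam * np.outer(U[:, j], U[:, j].conj())
                for j, lam in enumerate(eigvals))
assert np.allclose(A_rebuilt, A)
```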
In matrix form, with respect to the eigenvector basis $\{|u_j\rang\}^N_{j=1}$, this corresponds to a diagonal matrix with matrix elements $\lang u_j |\hat A |u_k \rang =\lambda_k \lang u_j |u_k \rang = \lambda_k \delta _{jk}$
$$ \hat A \xrightarrow{\{|u_j \rang \} ^N_{j=1}}\bold A_{\text{diag}}=\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_N \\ \end{bmatrix} $$
We can write this transformation as
$$ \bold A_\text{diag}=\bold T \bold A \bold T^\dagger $$
where $(\bold T)_{jk}=\lang u_j|e_k \rang$, i.e. the $j^{\text {th}}$ row of $\bold T$ is the adjoint of the $j^\text{th}$ eigenvector of $\hat A$ (equivalently, the $j^\text{th}$ column of $\bold T^\dagger$ is the $j^\text{th}$ eigenvector)
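Numerically, this transformation can be sketched as follows (assuming NumPy; `eigh` returns the eigenvectors as the columns of `U`, so $\bold T = \bold U^\dagger$):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = B + B.conj().T                           # Hermitian example

eigvals, U = np.linalg.eigh(A)               # columns of U are the eigenvectors |u_j>
T = U.conj().T                               # j-th row of T is the adjoint of the j-th eigenvector

A_diag = T @ A @ T.conj().T                  # T A T^dagger
assert np.allclose(A_diag, np.diag(eigvals))
```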
Example: diagonalisation of the Pauli operators
We start with the Pauli operators
$$ \sigma_z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \qquad \sigma_x=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \qquad \sigma_y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} $$
These have the following normalised eigenvectors and eigenvalues
$$ \begin{aligned} \lambda_{z1}&=-1 \\ \lambda_{z2}&=1 \\ \vec u_{z1}&=(0,1) \\ \vec u_{z2}&= (1,0) \end{aligned} \qquad \begin{aligned} \lambda_{x1}&=-1 \\ \lambda_{x2}&=1 \\ \vec u_{x1}&=1/\sqrt{2}\,(-1,1) \\ \vec u_{x2}&=1/\sqrt{2}\, (1,1) \end{aligned} \qquad \begin{aligned} \lambda_{y1}&=-1 \\ \lambda_{y2}&=1 \\ \vec u_{y1}&=1/\sqrt{2}\,(i,1) \\ \vec u_{y2}&=1/\sqrt{2}\, (1,i) \end{aligned} $$
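These eigenpairs can be verified directly; a minimal NumPy sketch, with the vectors transcribed from the list above:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# (operator, eigenvalue, eigenvector) triples from the text
pairs = [
    (sz, -1, np.array([0, 1], dtype=complex)),
    (sz, +1, np.array([1, 0], dtype=complex)),
    (sx, -1, np.array([-1, 1], dtype=complex) / np.sqrt(2)),
    (sx, +1, np.array([1, 1], dtype=complex) / np.sqrt(2)),
    (sy, -1, np.array([1j, 1], dtype=complex) / np.sqrt(2)),
    (sy, +1, np.array([1, 1j], dtype=complex) / np.sqrt(2)),
]

for op, lam, u in pairs:
    assert np.allclose(op @ u, lam * u)     # A|u> = lambda|u>
    assert np.isclose(np.vdot(u, u), 1.0)   # normalised
```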
$\sigma_z$ is already diagonal in the standard basis, so its diagonal form is simply
$$ \sigma_z^\text{diag}=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} $$
For $\sigma_x$ we start by constructing the unitary matrix $\bold T_x$, whose rows are the adjoints of the eigenvectors, and its adjoint $\bold T^\dag _x$
$$ \bold T_x=\begin{bmatrix} \vec u_{x1}^\dagger \\ \vec u_{x2}^\dagger \end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} \qquad \bold T_x^\dag = \overline{ \bold T^\text{T}_x}=\frac{1}{\sqrt{2}}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix} $$
Now we calculate the diagonal matrix
$$ \sigma^\text{diag}_x=\bold T_x \sigma_x\bold T_x^\dag=\frac{1}{2}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} -1 & 1 \\ 1 & 1 \end{bmatrix}=\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} $$
For $\sigma_y$ we follow the same process
$$ \bold T_y=\begin{bmatrix} \vec u_{y1}^\dagger \\ \vec u_{y2}^\dagger \end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix} -i & 1 \\ 1 & -i \end{bmatrix} \qquad \bold T_y^\dag = \overline{ \bold T^\text{T}_y}=\frac{1}{\sqrt{2}}\begin{bmatrix} i & 1 \\ 1 & i \end{bmatrix} \\ \sigma^\text{diag}_y=\bold T_y \sigma_{y}\bold T_y^\dag=\frac{1}{2}\begin{bmatrix} -i & 1 \\ 1 & -i \end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\begin{bmatrix} i & 1 \\ 1 & i \end{bmatrix}=\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} $$
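A short NumPy check of the two worked transformations above (the eigenvector ordering and the row convention for $\bold T$ follow the text; nothing else is assumed):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Normalised eigenvectors from the text as columns (eigenvalues -1, +1 in that order)
u_x = np.array([[-1, 1], [1, 1]], dtype=complex) / np.sqrt(2)
u_y = np.array([[1j, 1], [1, 1j]], dtype=complex) / np.sqrt(2)

for sigma, u in [(sx, u_x), (sy, u_y)]:
    T = u.conj().T                            # rows of T are the adjoint eigenvectors
    diag = T @ sigma @ T.conj().T             # T sigma T^dagger
    print(np.round(diag.real))                # -> diag(-1, 1) in both cases
```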