\documentclass{article}%
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}%
\setcounter{MaxMatrixCols}{30}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\begin{document}
\title{Math 413/513 Chapter 5 (from Friedberg, Insel, \& Spence)}
\author{David Glickenstein}
\maketitle

\section{Eigenvalues and Eigenvectors}

\begin{definition}
If $V$ is a finite-dimensional vector space, a linear operator $T\in\mathcal{L}\left( V\right)$ is \emph{diagonalizable} if there is an ordered basis $\beta$ for $V$ such that $\left[ T\right] _{\beta}$ is a diagonal matrix. A square matrix $A$ is \emph{diagonalizable} if $L_{A}$ is diagonalizable.
\end{definition}

This definition merits some basic examples.
Notice that $T\left( x,y\right) =\left( x+y,4x+y\right)$ does not produce a diagonal matrix in the standard basis, since $\left[ T\right] _{E_{2}}=\left(
\begin{array}
[c]{cc}%
1 & 1\\
4 & 1
\end{array}
\right) ,$ but if we consider the basis $\beta=\left\{ \left( 1,2\right) ,\left( 1,-2\right) \right\} ,$ we see that
\begin{align*}
T\left( 1,2\right) & =\left( 3,6\right) =3\left( 1,2\right) \\
T\left( 1,-2\right) & =\left( -1,2\right) =-\left( 1,-2\right)
\end{align*}
so $\left[ T\right] _{\beta}=\left(
\begin{array}
[c]{cc}%
3 & 0\\
0 & -1
\end{array}
\right) .$ Hence $T$ is diagonalizable, and so is $\left(
\begin{array}
[c]{cc}%
1 & 1\\
4 & 1
\end{array}
\right) .$

Notice that the fact that $\left[ T\right] _{\beta}$ is diagonal is equivalent to the fact that $T\left( v_{i}\right) =\lambda_{i}v_{i}$ for each vector $v_{i}$ in the ordered basis $\left\{ v_{1},\ldots,v_{n}\right\} ,$ where $\lambda_{1},\ldots,\lambda_{n}$ are scalars. The choice of the $v_{i}$ is important here, since if we choose other vectors, $T$ does not have this simple form (in the above example, $T\left( 1,0\right) \neq\lambda\left( 1,0\right)$ for any choice of scalar $\lambda$). This leads to the notion of eigenvalue and eigenvector.

\begin{definition}
Let $T$ be a linear operator on a vector space $V$ (not necessarily finite dimensional). A nonzero vector $v\in V$ is called an \emph{eigenvector} of $T$ if there exists a scalar $\lambda$ such that $T\left( v\right) =\lambda v.$ The scalar $\lambda$ is called the \emph{eigenvalue} corresponding to the eigenvector $v.$ Let $A\in F^{n\times n}.$ A nonzero vector $v\in F^{n}$ is called an eigenvector of $A$ if $v$ is an eigenvector of $L_{A},$ i.e., $Av=\lambda v$ for some $\lambda.$ The scalar $\lambda$ is called the eigenvalue corresponding to the eigenvector $v.$
\end{definition}

\begin{remark}
Some older terms for eigenvalue/eigenvector are characteristic value/characteristic vector and proper value/proper vector.
\end{remark}

\begin{theorem}
A vector $v$ is an eigenvector for a linear operator $T\in\mathcal{L}\left( V\right)$ corresponding to eigenvalue $\lambda$ if and only if $v\neq0$ and $v\in N\left( T-\lambda I_{V}\right) .$
\end{theorem}

\begin{theorem}
A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if there exists an ordered basis $\beta$ for $V$ consisting of eigenvectors of $T.$ Furthermore, if $T$ is diagonalizable, $\beta$ is an ordered basis of eigenvectors of $T,$ and $D=\left[ T\right] _{\beta},$ then $D$ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues.
\end{theorem}

The idea of the proof is essentially the discussion above and is left as an exercise.

\begin{example}
Not all matrices have eigenvectors. We can describe rotation by angle $\theta$ as left multiplication by the matrix $R_{\theta}=\left(
\begin{array}
[c]{cc}%
\cos\theta & -\sin\theta\\
\sin\theta & \cos\theta
\end{array}
\right) .$ If $\theta$ is not an integer multiple of $\pi,$ then $R_{\theta}$ does not take any nonzero vector to a multiple of itself, and so these matrices have no eigenvectors over $\mathbb{R}.$
\end{example}

\begin{example}
Let $C^{\infty}\left( \mathbb{R}\right)$ be the vector space consisting of functions $\mathbb{R}\rightarrow\mathbb{R}$ with derivatives of all orders (check that this is a subspace of the vector space of functions $\mathbb{R}\rightarrow\mathbb{R}$).
Consider the map $T:C^{\infty}\left( \mathbb{R}\right) \rightarrow C^{\infty}\left( \mathbb{R}\right)$ given by $T\left( f\right) =f^{\prime}.$ For a function to be an eigenvector, we need that $f^{\prime}=\lambda f$ for some value of $\lambda.$ There are functions that satisfy this property: namely, $f_{\lambda}\left( x\right) =e^{\lambda x}$ is an eigenvector with eigenvalue $\lambda.$ Note that any nonzero constant function is an eigenvector with eigenvalue $0.$
\end{example}

Notice that if $v$ is an eigenvector for a linear operator $T$ with corresponding eigenvalue $\lambda,$ then $T\left( v\right) =\lambda v$ can be written $\left( T-\lambda I_{V}\right) v=0,$ where $I_{V}$ is the identity transformation on $V.$ Thus there is a nonzero element of the nullspace of the linear operator $T-\lambda I_{V}.$ For matrices, this allows us to use determinants to find eigenvalues.

\begin{theorem}
Let $A\in F^{n\times n}.$ The scalar $\lambda$ is an eigenvalue of $A$ if and only if $\det\left( A-\lambda I_{n}\right) =0.$
\end{theorem}

\begin{proof}
If $\lambda$ is an eigenvalue, a corresponding eigenvector $v$ satisfies $\left( A-\lambda I_{n}\right) v=0,$ so $A-\lambda I_{n}$ is not invertible, and hence $\det\left( A-\lambda I_{n}\right) =0.$ Conversely, if $\det\left( A-\lambda I_{n}\right) =0,$ then $A-\lambda I_{n}$ is not invertible, so there is a nonzero element $v$ in the nullspace of $A-\lambda I_{n},$ and then $Av=\lambda v.$
\end{proof}

One can replace $\lambda$ by a variable $t$ in the previous theorem to get a polynomial associated with a matrix $A.$

\begin{definition}
Let $A\in F^{n\times n}.$ The polynomial $f\left( t\right) =\det\left( A-tI_{n}\right)$ is called the \emph{characteristic polynomial} of $A.$
\end{definition}

Note that the theorem above says that any eigenvalue is a zero of the characteristic polynomial and any zero of the characteristic polynomial is an eigenvalue. We can use this to compute eigenvalues, then eigenvectors, and sometimes even diagonalize the matrix.
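To make this concrete, here is a small computational sketch (added for illustration; it is not part of the original notes, and Python is chosen only as a convenient calculator). For a $2\times2$ matrix, $\det\left( A-tI\right) =t^{2}-\left( \operatorname{tr}A\right) t+\det A,$ so we can form the characteristic polynomial of the matrix from the first example and check that its roots are the eigenvalues $3$ and $-1$ found there.

```python
import math

# Characteristic polynomial of a 2x2 matrix [[a, b], [c, d]]:
# det(A - t I) = t^2 - (tr A) t + det A.
def char_poly_2x2(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    return (1.0, -trace, det)  # coefficients of t^2, t^1, t^0

def roots_of_quadratic(p2, p1, p0):
    disc = p1 * p1 - 4 * p2 * p0
    assert disc >= 0, "no real roots"
    r = math.sqrt(disc)
    return ((-p1 + r) / (2 * p2), (-p1 - r) / (2 * p2))

# The matrix of T(x, y) = (x + y, 4x + y) in the standard basis.
coeffs = char_poly_2x2(1, 1, 4, 1)        # t^2 - 2t - 3
lam1, lam2 = roots_of_quadratic(*coeffs)  # eigenvalues 3 and -1

# Check Av = lambda*v for the eigenvectors (1, 2) and (1, -2).
assert (1 * 1 + 1 * 2, 4 * 1 + 1 * 2) == (lam1 * 1, lam1 * 2)
assert (1 * 1 + 1 * (-2), 4 * 1 + 1 * (-2)) == (lam2 * 1, lam2 * (-2))
print(lam1, lam2)  # 3.0 -1.0
```

The quadratic-formula shortcut is special to the $2\times2$ case; for larger matrices one expands $\det\left( A-tI_{n}\right)$ by cofactors, as in the example below.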
Suppose we have a basis of eigenvectors for a square matrix $A.$ If $Q$ has the elements of this basis as its columns, then $AQ=QD,$ where $D$ is the diagonal matrix with the eigenvalues on the diagonal, and hence $Q^{-1}AQ=D.$

\begin{example}
Consider the matrix $A=\left(
\begin{array}
[c]{ccc}%
1 & 4 & 5\\
0 & 2 & 6\\
0 & 0 & 3
\end{array}
\right) .$ The characteristic polynomial is $\left( 1-t\right) \left( 2-t\right) \left( 3-t\right) ,$ so the eigenvalues are $1,$ $2,$ and $3.$ Each eigenvalue must correspond to at least one eigenvector, so we calculate%
$A-I=\left(
\begin{array}
[c]{ccc}%
0 & 4 & 5\\
0 & 1 & 6\\
0 & 0 & 2
\end{array}
\right)$ and we see that $N\left( A-I\right) =\operatorname{span}\left\{ \left( 1,0,0\right) \right\} .$ We calculate $A-2I=\left(
\begin{array}
[c]{ccc}%
-1 & 4 & 5\\
0 & 0 & 6\\
0 & 0 & 1
\end{array}
\right)$ and so $N\left( A-2I\right) =\operatorname{span}\left\{ \left( 4,1,0\right) \right\} .$ We calculate $A-3I=\left(
\begin{array}
[c]{ccc}%
-2 & 4 & 5\\
0 & -1 & 6\\
0 & 0 & 0
\end{array}
\right)$ and so $N\left( A-3I\right) =\operatorname{span}\left\{ \left( 29,12,2\right) \right\} .$ Let's double-check that these are, in fact, eigenvectors:%
\begin{align*}
\left(
\begin{array}
[c]{ccc}%
1 & 4 & 5\\
0 & 2 & 6\\
0 & 0 & 3
\end{array}
\right) \left(
\begin{array}
[c]{c}%
1\\
0\\
0
\end{array}
\right) & =\left(
\begin{array}
[c]{c}%
1\\
0\\
0
\end{array}
\right) \\
\left(
\begin{array}
[c]{ccc}%
1 & 4 & 5\\
0 & 2 & 6\\
0 & 0 & 3
\end{array}
\right) \left(
\begin{array}
[c]{c}%
4\\
1\\
0
\end{array}
\right) & =\left(
\begin{array}
[c]{c}%
8\\
2\\
0
\end{array}
\right) =2\left(
\begin{array}
[c]{c}%
4\\
1\\
0
\end{array}
\right) \\
\left(
\begin{array}
[c]{ccc}%
1 & 4 & 5\\
0 & 2 & 6\\
0 & 0 & 3
\end{array}
\right) \left(
\begin{array}
[c]{c}%
29\\
12\\
2
\end{array}
\right) & =\left(
\begin{array}
[c]{c}%
87\\
36\\
6
\end{array}
\right) =3\left(
\begin{array}
[c]{c}%
29\\
12\\
2
\end{array}
\right) .
\end{align*}
One can check that $Q=\left(
\begin{array}
[c]{ccc}%
1 & 4 & 29\\
0 & 1 & 12\\
0 & 0 & 2
\end{array}
\right)$ has inverse $Q^{-1}=\left(
\begin{array}
[c]{ccc}%
1 & -4 & \frac{19}{2}\\
0 & 1 & -6\\
0 & 0 & \frac{1}{2}%
\end{array}
\right)$ and
$\left(
\begin{array}
[c]{ccc}%
1 & -4 & \frac{19}{2}\\
0 & 1 & -6\\
0 & 0 & \frac{1}{2}%
\end{array}
\right) \left(
\begin{array}
[c]{ccc}%
1 & 4 & 5\\
0 & 2 & 6\\
0 & 0 & 3
\end{array}
\right) \left(
\begin{array}
[c]{ccc}%
1 & 4 & 29\\
0 & 1 & 12\\
0 & 0 & 2
\end{array}
\right) =\left(
\begin{array}
[c]{ccc}%
1 & 0 & 0\\
0 & 2 & 0\\
0 & 0 & 3
\end{array}
\right) .$
\end{example}

We can see how similarity of matrices affects the characteristic polynomial.

\begin{proposition}
If $A$ is similar to $B,$ that is, there exists an invertible matrix $Q$ such that $A=Q^{-1}BQ,$ then the characteristic polynomials of $A$ and $B$ are the same.
\end{proposition}

\begin{proof}
Exercise.
\end{proof}

Recall that if $\beta,\beta^{\prime}$ are ordered bases for a finite-dimensional vector space $V,$ then there exists an invertible matrix $Q$ such that for any $T\in\mathcal{L}\left( V\right)$ we have $\left[ T\right] _{\beta}=Q^{-1}\left[ T\right] _{\beta^{\prime}}Q.$ Thus, we can define the characteristic polynomial of a linear transformation.

\begin{definition}
Let $V$ be a finite-dimensional vector space and let $T\in\mathcal{L}\left( V\right) .$ We define the characteristic polynomial of $T$ to be the characteristic polynomial of $\left[ T\right] _{\beta}$ for any basis $\beta$ of $V,$ i.e., the characteristic polynomial is $f\left( t\right) =\det\left( \left[ T\right] _{\beta}-tI_{n}\right) ,$ where $n=\dim V.$ (By the proposition above, this does not depend on the choice of $\beta.$)
\end{definition}

\begin{remark}
Note that $\left[ T\right] _{\beta}-tI_{n}=\left[ T-tI_{V}\right] _{\beta},$ so we could have defined it thus.
\end{remark}

It should be fairly clear from the cofactor expansion formula that the characteristic polynomial is, in fact, a polynomial.
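Before sharpening the polynomial statement, here is a quick exact-arithmetic check (an added sketch, not part of the original notes) of the diagonalization $Q^{-1}AQ=D$ claimed in the example above; using rational arithmetic avoids any floating-point doubt about the entries $\frac{19}{2}$ and $\frac{1}{2}.$

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 4, 5],
     [0, 2, 6],
     [0, 0, 3]]
Q = [[1, 4, 29],        # columns are the eigenvectors found above
     [0, 1, 12],
     [0, 0, 2]]
Qinv = [[F(1), F(-4), F(19, 2)],
        [F(0), F(1),  F(-6)],
        [F(0), F(0),  F(1, 2)]]

# Q^{-1} Q should be the identity, and Q^{-1} A Q the diagonal matrix D.
assert matmul(Qinv, Q) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
D = matmul(matmul(Qinv, A), Q)
assert D == [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
print(D)
```

The same two assertions, run on any candidate $Q,$ are a useful self-check when diagonalizing by hand.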
We can be a bit more precise.

\begin{theorem}
Let $A\in F^{n\times n}.$ The characteristic polynomial of $A$ is a polynomial of degree $n$ with leading coefficient $\left( -1\right) ^{n}.$ Also, $A$ has at most $n$ distinct eigenvalues.
\end{theorem}

\begin{proof}
Exercise.
\end{proof}

\section{Problems}

\begin{itemize}
\item Be sure to read more examples in the book on diagonalizing matrices by finding zeroes of the characteristic polynomial and then finding eigenvectors.

\item FIS Section 5.1 exercises 2, 3, 7, 8ab, 9, 12, 14, 15, 16, 19-22
\end{itemize}

\section{Diagonalizability}

\begin{theorem}
\label{thm:distinct eigenvalues}Let $T$ be a linear operator on a vector space $V$ and let $\lambda_{1},\lambda_{2},\ldots,\lambda_{k}$ be distinct eigenvalues of $T.$ If $v_{1},\ldots,v_{k}$ are eigenvectors of $T$ such that $T\left( v_{i}\right) =\lambda_{i}v_{i}$ for $i=1,\ldots,k,$ then $\left\{ v_{1},\ldots,v_{k}\right\}$ is linearly independent.
\end{theorem}

\begin{proof}
We induct on $k.$ If $k=1,$ then since $v_{1}\neq\vec{0}$ (eigenvectors are nonzero), $\left\{ v_{1}\right\}$ is linearly independent. Now suppose any set of fewer than $k$ eigenvectors corresponding to distinct eigenvalues is linearly independent, and consider a linear combination $\sum_{i=1}^{k}a_{i}v_{i}=\vec{0}.$ Applying $T-\lambda_{k}I_{V}$ to both sides kills the $k$th term and gives $\sum_{i=1}^{k-1}a_{i}\left( \lambda_{i}-\lambda_{k}\right) v_{i}=\vec{0}.$ Since $\left\{ v_{1},\ldots,v_{k-1}\right\}$ is linearly independent and $\lambda_{i}\neq\lambda_{k}$ for $i<k,$ we must have $a_{1}=\cdots=a_{k-1}=0.$ It then follows that $a_{k}v_{k}=\vec{0},$ and hence $a_{k}=0$ as well.
\end{proof}

\begin{corollary}
Let $V$ be an $n$-dimensional vector space and $T\in\mathcal{L}\left( V\right) .$ If $T$ has $n$ distinct eigenvalues, then $T$ is diagonalizable.
\end{corollary}

\begin{proof}
$T$ has at least one eigenvector for each eigenvalue, yielding a set $\left\{ v_{1},\ldots,v_{n}\right\} .$ By the theorem, this set is linearly independent, and thus a basis. By a previous theorem, if $V$ has a basis of eigenvectors of $T,$ then $T$ is diagonalizable.
\end{proof}

Note that the converse of the corollary is false; a diagonalizable linear operator can have repeated eigenvalues (for instance, the identity map).

\begin{definition}
A polynomial $f\left( t\right)$ in $P\left( F\right)$ \emph{splits over} $F$ if there are scalars $c,a_{1},\ldots,a_{n}$ (not necessarily distinct) in $F$ such that $f\left( t\right) =c\left( t-a_{1}\right) \left( t-a_{2}\right) \cdots\left( t-a_{n}\right) .$
\end{definition}

So basically a polynomial splits if it can be factored entirely into linear factors. Note that the field is important. For instance, $t^{2}+1$ does not split over $\mathbb{R}$ or $\mathbb{Q},$ but it does split over $\mathbb{C}$ since $t^{2}+1=\left( t+i\right) \left( t-i\right) .$ Also note that we did not preclude repeated factors: $\left( t-1\right) ^{2},$ for example, also splits.

\begin{theorem}
The characteristic polynomial of any diagonalizable linear operator splits.
\end{theorem}

\begin{proof}
It is clear that if $D$ is a diagonal matrix, then its characteristic polynomial splits. If $T$ is diagonalizable, then there is a basis $\beta$ such that $\left[ T\right] _{\beta}$ is diagonal, so the theorem follows.
\end{proof}

If we think a bit harder about the proof, we see that it may be possible to have an eigenvalue repeated when written in diagonal form, and this leads to a repeated zero in the characteristic polynomial.
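The field dependence of splitting can be illustrated numerically (an added sketch, not part of the original notes): $t^{2}+1$ is bounded below by $1$ on $\mathbb{R},$ so it has no real roots, while over $\mathbb{C}$ the roots $\pm i$ give the factorization $\left( t+i\right) \left( t-i\right) .$

```python
# The polynomial t^2 + 1, evaluated over the reals and over the complexes.
def f(t):
    return t * t + 1

# No real root: f(t) >= 1 for every real t (minimum at t = 0),
# sampled here on a grid as a sanity check.
assert all(f(t / 10) >= 1 for t in range(-100, 101))

# Over the complex numbers, i and -i are roots, so t^2 + 1 splits.
i = complex(0, 1)
assert f(i) == 0 and f(-i) == 0

# Check the factorization (t + i)(t - i) = t^2 + 1 at a few sample points.
for t in [complex(2, 3), complex(-1, 0.5), complex(0, -4)]:
    assert (t + i) * (t - i) == f(t)
```

Of course the grid check is not a proof; the point is only that complex arithmetic exhibits the linear factors that real arithmetic cannot.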
\begin{definition}
Let $\lambda$ be an eigenvalue of a linear operator or matrix with characteristic polynomial $f\left( t\right) .$ The \emph{(algebraic) multiplicity} of $\lambda$ is the largest positive integer $k$ such that $\left( t-\lambda\right) ^{k}$ is a factor of $f\left( t\right) .$
\end{definition}

We have shown that if a transformation is diagonalizable and a diagonal representation has a repeated eigenvalue, then there must be a repeated zero in the characteristic polynomial. It is natural to ask whether a transformation whose characteristic polynomial splits with a repeated zero must be diagonalizable, but this is false.

\begin{example}
Consider the matrix $A=\left(
\begin{array}
[c]{cc}%
0 & 1\\
0 & 0
\end{array}
\right) .$ Its characteristic polynomial is $t^{2},$ so it splits. We see that $N\left( A\right) =\operatorname{span}\left\{ \left( 1,0\right) \right\} ,$ which is one-dimensional. It is therefore impossible to have a basis of eigenvectors, and so $A$ is not diagonalizable.
\end{example}

The problem in the last example is that we are unable to find a basis of eigenvectors since there is only one eigenvalue and the nullspace of $A-\lambda I$ (where $\lambda=0$) is only one-dimensional. This leads to the following definition.
\begin{definition}
Let $T$ be a linear operator on a vector space $V$ and let $\lambda$ be an eigenvalue of $T.$ Define $E_{\lambda}=\left\{ x\in V:T\left( x\right) =\lambda x\right\} =N\left( T-\lambda I_{V}\right) .$ The set $E_{\lambda}$ is called the \emph{eigenspace} of $T$ corresponding to the eigenvalue $\lambda.$ Analogously, the eigenspace of a matrix $A$ is the eigenspace of $L_{A}.$
\end{definition}

\begin{theorem}
Let $T$ be a linear operator on a finite-dimensional vector space $V$ and let $\lambda$ be an eigenvalue of $T$ with multiplicity $m.$ Then $1\leq\dim E_{\lambda}\leq m.$
\end{theorem}

\begin{proof}
Choose an ordered basis $\left\{ v_{1},\ldots,v_{p}\right\}$ for $E_{\lambda}$ and extend it to an ordered basis $\beta$ for $V.$ Then $\left[ T\right] _{\beta}$ will have the block form%
$\left(
\begin{array}
[c]{cc}%
\lambda I_{p} & A\\
0 & B
\end{array}
\right) .$ It follows that the characteristic polynomial has the form $f\left( t\right) =\left( \lambda-t\right) ^{p}\det\left( B-tI\right) ,$ so the multiplicity $m$ is at least $p=\dim E_{\lambda}.$ Since $\lambda$ is an eigenvalue, it has at least one eigenvector, so $\dim E_{\lambda}\geq1.$
\end{proof}

\begin{theorem}
\label{thm:basis}Let $T$ be a linear operator on a vector space $V$ and let $\lambda_{1},\ldots,\lambda_{k}$ be distinct eigenvalues of $T.$ For each $i=1,2,\ldots,k$ let $S_{i}$ be a finite linearly independent subset of $E_{\lambda_{i}}.$ Then $S=S_{1}\cup S_{2}\cup\cdots\cup S_{k}$ is a linearly independent subset of $V.$ [Note: the union should really be more of a concatenation since it needs to preserve the ordering of the $S_{i}$.]
\end{theorem}

\begin{proof}
We need some notation to keep track of the vectors in $S_{i}$ and $S.$ Suppose $S_{i}=\left\{ v_{i1},\ldots,v_{in_{i}}\right\}$ for each $i$ and that $S=\left\{ v_{11},\ldots,v_{1n_{1}},v_{21},\ldots,v_{kn_{k}}\right\} .$ Consider scalars $a_{ij}$ such that $\sum_{i=1}^{k}\sum_{j=1}^{n_{i}}a_{ij}v_{ij}=\vec{0}.$ For each $i,$ let $w_{i}=\sum_{j=1}^{n_{i}}a_{ij}v_{ij}.$ Note that $w_{i}\in E_{\lambda_{i}}$ and $\sum_{i=1}^{k}w_{i}=\vec{0}.$ If we can show that this implies $w_{i}=\vec{0}$ for all $i=1,\ldots,k,$ then it follows that $a_{ij}=0$ for all $i,j$ (since $\left\{ v_{i1},\ldots,v_{in_{i}}\right\}$ is linearly independent for each $i$). This is the content of the following lemma.
\end{proof}

\begin{lemma}
Let $T$ be a linear operator and let $\lambda_{1},\ldots,\lambda_{p}$ be distinct eigenvalues of $T.$ For each $i=1,\ldots,p,$ let $v_{i}\in E_{\lambda_{i}}.$ If $v_{1}+\cdots+v_{p}=\vec{0},$ then $v_{i}=\vec{0}$ for all $i.$
\end{lemma}

\begin{proof}
By Theorem \ref{thm:distinct eigenvalues}, $N=\left\{ v_{i}:v_{i}\neq\vec{0}\right\}$ is a linearly independent set.
If $N$ were nonempty, then $v_{1}+\cdots+v_{p}$ would equal the sum of the elements of $N$ (the zero vectors contribute nothing), which is a nontrivial linear combination of a linearly independent set (all coefficients equal to $1$) and hence nonzero, a contradiction. Thus $N$ is empty and $v_{i}=\vec{0}$ for all $i.$
\end{proof}

\begin{theorem}
Let $T$ be a linear operator on a finite-dimensional vector space $V$ and let $\lambda_{1},\ldots,\lambda_{k}$ be the distinct eigenvalues of $T.$ Then

\begin{enumerate}
\item $T$ is diagonalizable if and only if the characteristic polynomial of $T$ splits and the multiplicity of $\lambda_{i}$ is equal to $\dim E_{\lambda_{i}}$ for all $i.$

\item If $T$ is diagonalizable and $\beta_{i}$ is an ordered basis for $E_{\lambda_{i}}$ for each $i,$ then $\beta=\beta_{1}\cup\beta_{2}\cup\cdots\cup\beta_{k}$ is an ordered basis for $V$ consisting of eigenvectors of $T.$
\end{enumerate}
\end{theorem}

\begin{proof}
If $T$ is diagonalizable, then there is a basis $\beta$ such that $\left[ T\right] _{\beta}$ takes the block form%
$\left[ T\right] _{\beta}=\left(
\begin{array}
[c]{cccc}%
\lambda_{1}I_{n_{1}} & 0 & \cdots & 0\\
0 & \lambda_{2}I_{n_{2}} & \ddots & \vdots\\
\vdots & \ddots & \ddots & 0\\
0 & \cdots & 0 & \lambda_{k}I_{n_{k}}%
\end{array}
\right) .$ (Technically, there is a basis such that the matrix is diagonal, but then we can reorder that basis so that equal eigenvalues are adjacent.) It follows that the characteristic polynomial of $T$ is $\left( \lambda_{1}-t\right) ^{n_{1}}\cdots\left( \lambda_{k}-t\right) ^{n_{k}},$ so it splits and the multiplicity of $\lambda_{i}$ is $n_{i}.$ Here $n_{i}\leq\dim E_{\lambda_{i}}$ since the corresponding basis vectors are linearly independent elements of $E_{\lambda_{i}},$ and $\dim E_{\lambda_{i}}\leq n_{i}$ by the multiplicity bound above, so $n_{i}=\dim E_{\lambda_{i}}.$

Now suppose the characteristic polynomial of $T$ splits and is equal to $\left( \lambda_{1}-t\right) ^{n_{1}}\cdots\left( \lambda_{k}-t\right) ^{n_{k}},$ where $n_{1},\ldots,n_{k}$ are the dimensions of $E_{\lambda_{1}},\ldots,E_{\lambda_{k}}.$ Then by Theorem \ref{thm:basis} we can take bases $\beta_{i}$ of $E_{\lambda_{i}},$ and the set $\beta=\beta_{1}\cup\cdots\cup\beta_{k}$ is linearly independent.
Since $\beta$ has $n_{1}+\cdots+n_{k}=\dim V$ elements (this sum is the degree of the characteristic polynomial), $\beta$ must be a basis, and in that basis $\left[ T\right] _{\beta}$ has the form above, so $T$ is diagonalizable. Note that this proves the second statement as well.
\end{proof}

\section{Problems}

\begin{itemize}
\item Be sure to read the examples in the book on diagonalizing.

\item FIS Section 5.2 exercises 2, 3, 6, 7 (diagonalize first: see Example 7), 8, 12, 13, 18, 19.
\end{itemize}

\end{document}