Friday, November 29, 2013

Stein 2.11

Prove that if $f\in{L^1(\mathbb{R}^d)}$ is real-valued and $\int_E f(x)dx \geq 0$ for every $E\in{\mathcal{M}}$, then $f(x) \geq 0$ for a.e. $x$. Similarly, if $\int_E f(x)dx=0$ for every $E\in{\mathcal{M}}$, then $f(x) = 0$ a.e.

$\textit{Proof}$ : This problem can be tackled in various ways using the tools so far. One way is as follows:

Define $A=\{x \mid f(x) < 0 \}$. Since $f$ is measurable, $\{f < a \} \in{\mathcal{M}}$ for every $a\in{\mathbb{R}}$; taking $a=0$ gives $A\in{\mathcal{M}}$. By assumption, $$\int_Af(x)dx \geq 0$$ On the other hand, by the way $A$ is defined, $$f(x)\chi_A(x) \leq 0$$ $$\int_Af(x)dx = \int_{\mathbb{R}^d}f(x) \chi_A(x)dx \leq \int_{\mathbb{R}^d}0=0$$ Therefore, $$ \int_Af(x)dx=0$$

We now have $f(x) < 0$ for all $x\in{A}$ and $\int_Af(x)dx=0$. Assume, to the contrary, that $m(A) > 0$. Let $Q_n$ be the closed cube centered at the origin with side length $n$. Then $A \cap Q_n$ is measurable and bounded, and $A\cap Q_n \nearrow A$ as $n\rightarrow \infty$, so by Corollary 3.3, $m(A)=\lim_{n\rightarrow \infty}m(A \cap Q_n)$; in particular, $m(A \cap Q_n) > 0$ for $n$ large enough. The conditions for Lusin's theorem are met on $A \cap Q_n$: for any $\epsilon > 0$ there is a closed set $F_\epsilon \subseteq A \cap Q_n$ with $m\big((A \cap Q_n) \setminus F_\epsilon\big) < \epsilon$ on which $f$ is continuous. Choosing $\epsilon < m(A \cap Q_n)$ guarantees that $F_\epsilon$ has positive measure. Finally, since $f < 0$ on $F_\epsilon$, the sets $E_k = \{x \in F_\epsilon \mid f(x) \leq -1/k\}$ increase to $F_\epsilon$, so $m(E_k) > 0$ for some $k$, and then $$\int_{F_\epsilon}f(x)dx \leq \int_{E_k}f(x)dx \leq -\frac{m(E_k)}{k} < 0$$ contradicting the hypothesis that $\int_E f(x)dx \geq 0$ for every $E\in{\mathcal{M}}$.
$\therefore \quad m(A)=0$ i.e. $f \geq 0$ a.e.
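The contradiction step can be illustrated numerically. Below is a minimal sketch (plain Python, with the hypothetical choice $f(x)=\sin x$ on $[0,2\pi]$, which is not from the exercise): approximating $m(A)$ and $\int_A f$ for $A=\{f<0\}$ by midpoint Riemann sums shows a set of positive measure on which the integral is strictly negative, exactly the situation the proof rules out.

```python
import math

# Hypothetical example (not a proof): f(x) = sin(x) on [0, 2*pi].
# The set A = {f < 0} has positive measure, and the integral of f
# over A is strictly negative -- the contradiction derived above.

n = 100_000
dx = 2 * math.pi / n
xs = [(k + 0.5) * dx for k in range(n)]      # midpoint grid

measure_A = sum(dx for x in xs if math.sin(x) < 0)                  # ~ m(A) = pi
integral_A = sum(math.sin(x) * dx for x in xs if math.sin(x) < 0)   # ~ -2 < 0

print(measure_A, integral_A)
```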

Alternatively, a very quick and direct proof uses one of the main results of Chapter 3: the Lebesgue differentiation theorem. It states that if $f$ is integrable (or even just locally integrable) then $$\lim_{x\in{B},\, m(B)\rightarrow 0} \frac{1}{m(B)} \int_B f(t)dt = f(x) \quad a.e.$$ Balls in $\mathbb{R}^d$ are measurable, so the hypothesis gives $\int_B f(t)dt \geq 0$ for every ball $B$. Since $f$ is integrable, the Lebesgue differentiation theorem applies: $$f(x) = \lim_{x\in{B},\, m(B)\rightarrow 0} \frac{1}{m(B)} \int_B f(t)dt \geq 0 \quad a.e.$$ i.e. $f \geq 0$ a.e. Done.

Similarly, if $\int_E f(x)dx=0$ for every $E\in{\mathcal{M}}$, then we get $$f(x) = \lim_{x\in{B},\, m(B)\rightarrow 0} \frac{1}{m(B)} \int_B f(t)dt=0 \quad a.e. \quad \quad \blacksquare$$
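As a numerical illustration of the differentiation theorem (a sketch, not a proof), one can average a function over shrinking intervals and watch the averages converge to the pointwise value. The example below uses a hypothetical $f(t)=t^2$ in one dimension; `ball_average` is an ad hoc helper, not anything from Stein.

```python
# Averages of f over shrinking balls B = (x - h, x + h) converge to f(x),
# illustrating the Lebesgue differentiation theorem in 1-d.

def ball_average(f, x, h, n=10_000):
    """Approximate (1/m(B)) * integral_B f via a midpoint Riemann sum."""
    width = 2 * h
    total = sum(f(x - h + (k + 0.5) * width / n) for k in range(n)) * width / n
    return total / width

f = lambda t: t ** 2                       # hypothetical integrand
x = 0.5
errors = [abs(ball_average(f, x, h) - f(x)) for h in (0.5, 0.05, 0.005)]
print(errors)  # errors shrink as the balls shrink
```

For this particular $f$, the exact average over $(x-h,x+h)$ is $x^2 + h^2/3$, so the error decays like $h^2/3$.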

Stein 2.9 (Chebyshev's inequality)


Let $f\in{L^1(\mathbb{R}^d)}$, $f \geq 0$, $\alpha > 0$ and $E_\alpha = \{x | f(x) > \alpha \}$.

Then, $$m(E_\alpha) \leq \frac{1}{\alpha} \int f $$

$\textit{Proof}$ : $$E_\alpha = \{x \mid f(x) > \alpha \} = \left\{ x \,\middle|\, \frac{f(x)}{\alpha} > 1 \right\}$$ Integration can be considered a way of measuring a set, so we can write $$m(E_\alpha) = \int \chi_{E_\alpha} = \int_{E_\alpha} 1 $$ By the monotonicity and linearity of the Lebesgue integral (using $f/\alpha > 1$ on $E_\alpha$ and $f \geq 0$ everywhere), $$m(E_\alpha) = \int_{E_\alpha} 1 \leq \int_{E_\alpha} \frac{f(x)}{\alpha} = \frac{1}{\alpha}\int_{E_\alpha}f \leq \frac{1}{\alpha} \int f \quad \quad \blacksquare $$
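A quick numerical sanity check of the inequality (again a sketch, not a proof), with the hypothetical choice $f(x)=e^{-x}$ on $[0,10]$ approximated on a midpoint grid:

```python
import math

# Check m({f > alpha}) <= (1/alpha) * integral(f) for f(x) = exp(-x)
# on [0, 10], with both sides approximated by midpoint Riemann sums.

n, length = 100_000, 10.0
dx = length / n
xs = [(k + 0.5) * dx for k in range(n)]
fs = [math.exp(-x) for x in xs]

alpha = 0.5
measure_E = sum(dx for v in fs if v > alpha)   # ~ m(E_alpha) = ln 2
integral_f = sum(fs) * dx                      # ~ integral of f = 1 - e^{-10}

print(measure_E, integral_f / alpha)           # ln 2 ~ 0.693 vs ~ 2
```

Here $E_\alpha = [0, \ln 2)$, so $m(E_\alpha)=\ln 2 \approx 0.693$, comfortably below $\frac{1}{\alpha}\int f \approx 2$.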

Friday, November 15, 2013

Commutativity condition forces diagonalization

$\textit{exercise}$ : Let $A,B,C\in{M_{n \times n}(\mathbb{R})}$ and suppose the characteristic polynomial of $A$ splits and separates in $\mathbb{C}$ (i.e. has no repeated roots). Suppose also that $AB=BA$ and $AC=CA$. Prove that $BC=CB$.

$\textit{proof}$: If the characteristic polynomial $p_A(t)$ splits and separates in $\mathbb{C}$, i.e. has no repeated roots in $\mathbb{C}$, then $A$ is diagonalizable over $\mathbb{C}$. We first show that if $D$ is diagonal with distinct diagonal entries and $DB=BD$, then $B$ must also be diagonal.

Let $D\in{M_{n \times n}(\mathbb{C})}$ be diagonal with $d_{ii} \neq d_{jj}$ for all $i \neq j$. If $DB=BD$ then $B$ is diagonal, since $$ [DB]_{ij}= \left\{ \begin{array}{lr} d_{ii}b_{ii} & i=j \\ d_{ii}b_{ij} & i \neq j \end{array} \right. \qquad [BD]_{ij}= \left\{ \begin{array}{lr} b_{ii}d_{ii} & i=j \\ b_{ij}d_{jj} & i \neq j \end{array} \right. $$ Since $DB=BD$, we have $d_{ii}b_{ij}=b_{ij}d_{jj}$, i.e. $(d_{ii}-d_{jj})b_{ij}=0$, for all $i \neq j$. But $d_{ii}-d_{jj} \neq 0$ and $\mathbb{C}$ is a field (so it has no zero divisors), which forces $b_{ij}=0$ for every $i \neq j$. All off-diagonal entries of $B$ are zero, i.e. $B$ is diagonal.
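The entrywise identity driving this lemma, $[DB-BD]_{ij}=(d_{ii}-d_{jj})b_{ij}$, can be checked concretely. A minimal sketch in plain Python, using a hypothetical $3\times 3$ example (the particular $D$ and $B$ are arbitrary choices):

```python
# Verify [DB - BD]_{ij} = (d_ii - d_jj) * b_ij for diagonal D, and that
# D commutes with any matrix that is itself diagonal.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

d = [1.0, 2.0, 3.0]                       # distinct diagonal entries
D = [[d[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
B = [[1.0, 4.0, -2.0], [0.5, 3.0, 7.0], [6.0, -1.0, 2.0]]  # arbitrary B

DB, BD = matmul(D, B), matmul(B, D)
for i in range(3):
    for j in range(3):
        # entrywise commutator identity from the proof
        assert abs((DB[i][j] - BD[i][j]) - (d[i] - d[j]) * B[i][j]) < 1e-12
print("identity holds entrywise")
```

So $DB=BD$ kills every factor $(d_{ii}-d_{jj})b_{ij}$, and with the $d_{ii}$ distinct that means every off-diagonal $b_{ij}$.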

We have shown that a diagonal matrix with distinct diagonal entries commutes only with diagonal matrices; this holds over any field, regardless of where the characteristic polynomial splits. This commutativity condition is one which can be employed in certain problems to achieve simultaneous diagonalization of linear operators, a technique sometimes used in solving PDEs.

Now, given the conditions above, show that $BC=CB$.
$A$ diagonalizes in $\mathbb{C}$, so there exists an invertible (change of basis) matrix $Q$ and a diagonal matrix $D$ s.t. $D=Q^{-1}AQ$. Since $p_A$ has no repeated roots, the diagonal entries of $D$ (the eigenvalues of $A$) are distinct. Since $AB=BA$ it follows that $$(Q^{-1}AQ)(Q^{-1}BQ)=Q^{-1}ABQ=Q^{-1}BAQ=(Q^{-1}BQ)(Q^{-1}AQ)$$ Thus, by the previous result, the matrix $Q^{-1}BQ$ is diagonal. Similarly, $Q^{-1}CQ$ must be diagonal. Using the fact that diagonal matrices commute with one another, we get, $$(Q^{-1}BQ)(Q^{-1}CQ)=(Q^{-1}CQ)(Q^{-1}BQ)$$ $$Q^{-1}BCQ=Q^{-1}CBQ \quad \therefore BC = CB$$
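The conclusion can be sanity-checked numerically. In the hypothetical sketch below, $A$ is a $2\times 2$ matrix with distinct eigenvalues, and $B$ and $C$ are taken to be polynomials in $A$ (a convenient way to manufacture matrices commuting with $A$); the result then forces $BC=CB$:

```python
# A has distinct eigenvalues (2 and 5); B = A^2 + A and C = A + I both
# commute with A, so by the result above they must commute with each other.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

I = [[1.0, 0.0], [0.0, 1.0]]
A = [[2.0, 1.0], [0.0, 5.0]]            # distinct eigenvalues 2, 5

B = add(matmul(A, A), A)                # B = A^2 + A  =>  AB = BA
C = add(A, I)                           # C = A + I    =>  AC = CA

BC, CB = matmul(B, C), matmul(C, B)
assert all(abs(BC[i][j] - CB[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("BC == CB")
```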