For Lectures 11-14

I
Our discussion of Hilbert space is relevant because of the following result.


\begin{theorem}
For any measure space $(X,\mathcal F,\mu)$\ (with $\mu$\ a countably additive
measure on the $\sigma$-ring $\mathcal F$\ of measurable sets on
$X)$\ the space $L^2(X,\mu)$\ is a Hilbert space.
\end{theorem}

Proof. We have shown that $ L^2(X,\mu)$ is a linear space and that the inner product $ \langle f,g\rangle =\int_Xf\bar gd\mu$ on $ L^2(X,\mu)$ is well-defined and satisfies the needed properties. Thus $ L^2(X,\mu)$ is a pre-Hilbert space and we only need to show that it is complete for the norm $ \Vert f\Vert _{L^2}=\left(\int_X\vert f\vert^2d\mu\right)^{\frac12}.$

We will use the completeness of $ L^1(X,\mu)$ proved last week. Let $ \{f_n\}$ be a Cauchy sequence in $ L^2(X,\mu).$ Consider first the set $ Z$ where at least one of the $ f_n$'s is non-zero. This is a countable union of measurable sets, hence measurable. It is in fact a countable union of sets of finite measure, for instance

$\displaystyle Z=\bigcup_{n,m}\{\vert f_n\vert\ge 1/m\},\ \mu\{\vert f_n\vert\ge 1/m\}\le m^2\int _X\vert f_n\vert^2d\mu.$ (2)
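
Explicitly, the bound on the measure here is just the integral form of Chebyshev's inequality: $1\le m^2\vert f_n\vert^2$ on the set $\{\vert f_n\vert\ge 1/m\},$ so

$\displaystyle \mu\{\vert f_n\vert\ge 1/m\}=\int_X\chi_{\{\vert f_n\vert\ge 1/m\}}d\mu\le m^2\int_X\vert f_n\vert^2d\mu.$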

Let us set $ Z_k=\bigcup_{n+m\le k}\{\vert f_n\vert\ge 1/m\}$ and so obtain the union as $ Z=\bigcup_kZ_k$ where each $ Z_k$ has finite measure and $ Z_{k+1}\supset
Z_k.$ Then $ g\in L^2(X,\mu)$ implies, by the Cauchy-Schwarz inequality

$\displaystyle \int_{Z_k}\vert g\vert d\mu\le \mu(Z_k)^{\frac12}\Vert g\Vert _{L^2}.$ (3)
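
Indeed, applying the Cauchy-Schwarz inequality to the pair $\chi_{Z_k},$ $\vert g\vert$ gives

$\displaystyle \int_{Z_k}\vert g\vert d\mu=\int_X\chi_{Z_k}\vert g\vert d\mu \le\Vert\chi_{Z_k}\Vert_{L^2}\Vert g\Vert_{L^2}=\mu(Z_k)^{\frac12}\Vert g\Vert_{L^2}.$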

Applying this to $ f_n-f_m$ we see that $ \{f_n\big\vert _{Z_k}\}$ is Cauchy in $ L^1(Z_k,\mu)$ for each fixed $ k.$ So, by completeness of $ L^1,$ it converges to $ g_k\in L^1(Z_k,\mu).$ Recall that, after possibly changing the values of the $ f_n$ on a set of measure zero in $ Z_k,$ there is a subsequence with $ f_{n_k(l)}(x)\longrightarrow g_k(x)$ pointwise on $ Z_k.$ Passing to the diagonal subsequence in $ k,$ the uniqueness of the limit (modulo values on sets of measure zero) means that there is one function $ g$ defined on the whole of $ Z$ such that $ f_{n(l)}\longrightarrow g$ almost everywhere. Taking $ g=0$ on $ X\setminus Z$ and again changing values (to $ 0)$ on a set of measure zero if necessary we can arrange that $ f_{n(l)}(x)\longrightarrow g(x)$ pointwise on $ X.$

Now that we have our putative limit we can use Fatou's Lemma. Applied to the sequence of measurable, non-negative functions $ \vert f_{n(l)}-f_n\vert^2$ with $ n$ fixed and $ l\longrightarrow \infty$ it states that

$\displaystyle \int_X\liminf_l\vert f_n(x)-f_{n(l)}(x)\vert^2d\mu\le \liminf_l\int_X\vert f_{n(l)}-f_n\vert^2d\mu.$ (4)

Given $ \epsilon >0$ the fact that $ \{f_n\}$ is Cauchy in $ L^2(X,\mu)$ allows us to choose $ N$ such that $ \Vert f_n-f_m\Vert _{L^2}<\epsilon$ whenever $ n,m\ge N.$ This implies that, for $ n\ge N,$ the right side of (4) is less than or equal to $ \epsilon ^2.$ On the other hand, since $ f_{n(l)}(x)\longrightarrow g(x)$ pointwise, the integrand on the left side is $ \vert f_n(x)-g(x)\vert^2,$ so

$\displaystyle \int_X\vert f_n(x)-g\vert^2d\mu\le\epsilon ^2\ \forall\ n\ge N.$ (5)

This shows that $ f_n\longrightarrow g$ in $ L^2(X,\mu)$ (and in particular that $ g\in L^2(X,\mu)).$ $ \qedsymbol$

II
Now, suppose we are in a general Hilbert space $ H.$ Suppose that $ \{\phi _i\}_{i\in I}$ is a countable - either finite or countably infinite - orthonormal set. That is, each element $ \phi _i\in H$ has norm one $ \Vert\phi_i\Vert^2=\langle \phi _i,\phi _i\rangle =1$ and they are pairwise orthogonal so $ \langle \phi _i,\phi _j\rangle =0$ if $ i\not=j.$ Then for each element $ f\in H$ we can consider the constants

$\displaystyle c_n(f)=\langle f,\phi _n\rangle ,\ n\in I.$ (6)


\begin{proposition}
For any orthonormal set and any $f\in
H$\ the series $\sum\limits_{i\in I}c_i(f)\phi _i$\ converges in $H$\ and
\begin{equation}
\sum\limits_{i\in I}\vert c_i(f)\vert^2\le \Vert f\Vert^2.
\end{equation}[This is Bessel's inequality.]
\end{proposition}

Proof. Take an ordering of $ I$ so that we can replace it by $ \{1,\dots\}$ and for any finite $ N$ consider the finite sum

$\displaystyle S_N(f)=\sum\limits_{n=1}^Nc_n(f)\phi _n\in H.$ (7)

Expanding out the inner product using the sesquilinearity gives

$\displaystyle \Vert S_N(f)\Vert^2= \langle S_N(f), S_N(f)\rangle =\sum\limits_{i,j=1}^N c_i(f)\overline{c_j(f)} \langle \phi _i,\phi _j\rangle =\sum\limits_{i=1}^N\vert c_i(f)\vert^2$ (8)

using the orthonormality of the $ \phi _i$'s. Doing the same thing for $ S_M(f)-S_N(f)$ where $ M\ge N$ we find

$\displaystyle \Vert S_M(f)-S_N(f)\Vert^2=\sum\limits_{n=N+1}^M\vert c_n(f)\vert^2.$ (9)

On the other hand if we write $ f=(f-S_N(f))+S_N(f)$ we see that $ \langle
f-S_N(f),\phi _n\rangle =0$ for $ n\le N.$ Since $ S_N(f)$ is a sum of these $ \phi _n$'s, $ \langle f-S_N(f),S_N(f)\rangle =0.$ This in turn means that

\begin{multline}
\Vert f\Vert^2=\langle (f-S_N(f))+S_N(f),(f-S_N(f))+S_N(f)\rangle\\
=\Vert f-S_N(f)\Vert^2+\Vert S_N(f)\Vert^2
\end{multline}

since the cross-terms vanish in the linear expansion. Thus

$\displaystyle \Vert S_N(f)\Vert^2\le\Vert f\Vert^2\ \forall\ N.$ (10)

Going back to (8) we conclude from (10) that the series $ \sum\limits_{n=1}^N\vert c_n(f)\vert^2$ has an upper bound independent of $ N.$ When $ I$ is infinite this means it converges (in $ \bbR).$ From (9) it follows that the sequence $ \{S_N(f)\}$ is Cauchy in $ H.$ Since $ H$ is complete, being a Hilbert space, it must converge and the limit must satisfy Bessel's inequality. $ \qedsymbol$
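
As a purely numerical sanity check (no part of the argument), the following short Python sketch builds an orthonormal set of $10$ random vectors in a $50$-dimensional complex inner product space by a QR factorization and verifies the identity (8) and the bound (10) for a random $f;$ the dimensions and the random seed are arbitrary choices.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, N = 50, 10                      # ambient dimension and size of the orthonormal set
# random orthonormal columns phi_1,...,phi_N via a QR factorization
Q, _ = np.linalg.qr(rng.standard_normal((d, N))
                    + 1j * rng.standard_normal((d, N)))
f = rng.standard_normal(d) + 1j * rng.standard_normal(d)

c = Q.conj().T @ f                 # coefficients c_n(f) = <f, phi_n>
S = Q @ c                          # the partial sum S_N(f)

print(np.allclose(np.linalg.norm(S)**2, np.sum(np.abs(c)**2)))   # the identity (8)
print(np.sum(np.abs(c)**2) <= np.linalg.norm(f)**2 + 1e-12)      # Bessel's inequality
\end{verbatim}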

See if you can show that the limit $ \sum\limits_{i\in I}c_i(f)\phi _i$ is independent of the order chosen for $ I.$

III
A (countable) orthonormal set is said to be complete if $ S(f)=\sum\limits_{i\in I}c_i(f)\phi _i=f$ for all $ f\in H.$ Notice that $ f-S(f)$ is the limit of $ f-S_N(f)$ as $ N\to\infty$ and $ \langle f-S_N(f),\phi _j\rangle =0$ whenever $ N\ge j.$ Taking the limit we see that

$\displaystyle \langle f-S(f),\phi _j\rangle =\lim_{N\to\infty}\langle f-S_N(f),\phi _j\rangle =0$ (11)

for all $ j,$ where we have used the fact that if $ v_j\to v$ in $ H$ then $ \langle v_j,w\rangle \to\langle v,w\rangle$ for each $ w\in H$ - this follows from Schwarz inequality since

$\displaystyle \vert\langle v,w\rangle -\langle v_j,w\rangle\vert\le \Vert v-v_j\Vert\Vert w\Vert.$ (12)

Thus another way of stating the completeness of the orthonormal set is that

$\displaystyle w\in H,\ \langle w,\phi _j\rangle =0\ \forall\ j\Longrightarrow w=0.
$
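
For example, in $ \ell^2$ the orthonormal set consisting of all the standard basis vectors except the first one is not complete: the first basis vector is a non-zero element orthogonal to every member of the set, and correspondingly $ S(f)$ recovers only the components of $ f$ after the first.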

IV
Existence of complete orthonormal bases.


\begin{theorem}
Any \emph{separable} Hilbert space
has a (countable) complete orthonormal basis.
\end{theorem}

Proof. Recall that a metric space is separable if it has a countable dense subset. So we can suppose there is a countable set $ E\subset H$ with $ \bar E=H.$ Let $ E=\{e_1,e_2,\dots\}$ be an enumeration of $ E.$ We extract a complete orthonormal basis from $ E$ by applying the Gram-Schmidt procedure. First consider $ e_1.$ If it is zero, pass to $ e_2.$ If it is non-zero, set $ \phi _1=e_1/\Vert e_1\Vert$ which gives an orthonormal set with one element; now pass to $ e_2.$ Proceeding by induction, suppose at stage $ n$ we have an orthonormal set $ \{\phi_1,\dots,\phi _N\}$ with $ N\le n$ elements such that each of the $ e_j,$ $ j\le n,$ is in the span of these $ N$ elements. Now consider $ e_{n+1}.$ If it is in the span of the $ \phi _j$ for $ j\le N,$ pass on to $ e_{n+2}.$ If not then

$\displaystyle \phi _{N+1}=g/\Vert g\Vert,\ g=e_{n+1}-\sum\limits_{j=1}^N\langle e_{n+1},\phi _j\rangle \phi _j$ (13)

is well-defined, such that $ \{\phi_1,\dots,\phi _N\}\cup\{\phi _{N+1}\}$ is orthonormal and such that the $ e_j$ for $ j\le n+1$ are in the span of this new orthonormal set.

Thus we can proceed by induction to define an orthonormal set, which will either be finite or countable (depending on $ H).$ In either case it is complete. To see this, suppose there is some element $ f\in H$ orthogonal to all the $ \phi_i.$ By the density of $ E,$ for any $ \epsilon >0$ there exists $ e\in E$ such that $ \Vert f-e\Vert<\epsilon.$ However, $ e$ is in the (finite) span of the $ \phi _i,$ so $ \langle f,e\rangle =0.$ This however implies, by Pythagoras' theorem, that $ \epsilon
^2\ge\Vert f-e\Vert^2=\Vert f\Vert^2+\Vert e\Vert^2.$ Thus $ \Vert f\Vert\le \epsilon$ so in fact $ \Vert f\Vert=0$ and hence $ f=0,$ proving the completeness. $ \qedsymbol$
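
Here is a minimal numerical sketch of the procedure just described, for vectors in $ \bbR^3$ rather than a general Hilbert space; the vectors and the tolerance used to decide (numerical) dependence are arbitrary choices, not part of the proof.

\begin{verbatim}
import numpy as np

def gram_schmidt(E, tol=1e-10):
    """Extract an orthonormal list from the vectors in E, as in the proof."""
    phis = []
    for e in E:
        g = e - sum(np.dot(e, phi) * phi for phi in phis)   # cf. (13)
        norm = np.linalg.norm(g)
        if norm > tol:          # skip e if it is (numerically) dependent on the phis
            phis.append(g / norm)
    return phis

E = [np.array(v, dtype=float) for v in [[1, 1, 0], [2, 2, 0], [1, 0, 1]]]
phis = gram_schmidt(E)
print(len(phis))                # 2: the second vector was dependent on the first
print(np.round([[np.dot(p, q) for q in phis]
                for p in phis], 10))          # 2 x 2 identity matrix
\end{verbatim}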

V
The basic result we will prove on Fourier series is that for the special case of $ L^2([-\pi,\pi]),$ computed with respect to Lebesgue measure, the exponentials

$\displaystyle \phi _n(x)=\frac1{\sqrt{2\pi}}\exp(inx),\ n\in\bbZ,$    form a complete orthonormal set$\displaystyle .$ (14)

First we want to check that it is indeed an orthonormal set. Since $ \vert\exp(inx)\vert=1$ the norm is easy enough to compute

$\displaystyle \Vert\phi _n\Vert^2_{L^2}=\int_{[-\pi,\pi]}\frac 1{2\pi}dx=1$ (15)

since we do know how to integrate constants. The orthogonality would seem almost as easy,

$\displaystyle \langle\phi_n,\phi _k\rangle=\frac1{2\pi}\int_{[-\pi,\pi]} e^{i(n-k)x}dx =\frac1{2\pi}\frac{e^{i(n-k)x}}{i(n-k)}\big\vert^{\pi}_{-\pi}=0,\ n\not=k.$ (16)

However, this is proof by abuse of notation since here we are using the Riemann integral (and Fundamental Theorem of Calculus) and we are supposed to be computing the Lebesgue integral.

So I need to go back and check some version of their equality. The following will do for present purposes.


\begin{proposition}
If $g$\ is a continuous function on a
finite interval $[a,b]\subset\bbR$\ then its Riemann and Lebesgue
integrals are equal.
\end{proposition}

Proof. We can split $ g$ into real and imaginary parts if it is complex valued and the result then follows from the real case; so assume that $ g$ is real. Since we do know how to integrate constants (being simple functions) we can add to $ g$ the constant $ -\inf_{[a,b]}g$ (or something larger) and so we can assume that $ g\ge0.$ Now, the Riemann integral is defined as the common value of the upper and lower integrals (look this up in Rudin [2], I am not going to remind you of all of it.) One result for continuous functions (in fact for general Riemann integrable functions) is that given $ \epsilon >0$ there exists a partition $ \mathcal P$ of $ [a,b]$ such that the difference of the upper and lower partial sums satisfies

$\displaystyle U(g,\mathcal P)-L(g,\mathcal P)<\epsilon.$ (17)

Notice here that the lower sum is actually $ I_{[a,b]}(s)$ for a simple function $ s$ which is smaller than $ g.$ So, directly from the definition of the integral we know that

$\displaystyle \int_a^b gdx=\sup _{\mathcal P}L(g,\mathcal P)\le \int_{[a,b]} gdx$ (18)

where the integral on the left is Riemann's and on the right is Lebesgue's. On the other hand if we divide $ [a,b]$ into $ 2^k$ equal intervals (so that each partition refines the previous one) and take the simple function equal, on each interval, to the infimum of $ g$ over that interval, we get a sequence of simple functions $ g_k$ increasing to $ g$ (in fact uniformly on $ [a,b]).$ The Riemann lower partial sum for this partition is $ I(g_k),$ bounded above by the Riemann integral, and by the monotone convergence theorem this sequence converges to the Lebesgue integral of $ g.$ This gives the opposite inequality to (18) so the two integrals are equal. $ \qedsymbol$
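
Explicitly, if $ I_{k,1},\dots,I_{k,2^k}$ are the intervals of the $ k$-th partition $ \mathcal P_k$ then the simple functions used in the last step are

$\displaystyle g_k=\sum\limits_{j=1}^{2^k}\Big(\inf_{I_{k,j}}g\Big)\chi_{I_{k,j}},\qquad 0\le g_k\le g_{k+1}\le g,\ I(g_k)=L(g,\mathcal P_k).$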

This argument only needs slight modification to show that every Riemann integrable function on $ [a,b]$ is Lebesgue integrable and that the integrals are equal; it is done in Adams and Guillemin.

Thus we know that the Fourier functions $ \phi _n(x)$ do indeed form a countable orthonormal set for $ L^2([-\pi,\pi]).$ We still need to know that it is complete. This involves some more work.
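
For what it is worth, the normalization can also be checked numerically; the following Python sketch approximates the inner products in (15) and (16) by Riemann sums on a uniform grid (the grid size is an arbitrary choice).

\begin{verbatim}
import numpy as np

M = 200000                                      # number of grid points (arbitrary)
dx = 2 * np.pi / M
x = -np.pi + (np.arange(M) + 0.5) * dx          # midpoint grid on [-pi,pi]

def phi(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

for n, k in [(0, 0), (3, 3), (3, 5), (-2, 7)]:
    ip = np.sum(phi(n) * np.conj(phi(k))) * dx  # approximates <phi_n, phi_k>
    print(n, k, np.round(ip, 8))                # ~1 if n == k and ~0 otherwise
\end{verbatim}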

VI
To prove the completeness of the Fourier functions $ \{\phi _n\}$ we need to show that any function $ f\in\mathcal L^2([-\pi,\pi])$ which satisfies

$\displaystyle c_n(f)=\frac1{\sqrt{2\pi}}\int_{[-\pi,\pi]} f(x)e^{-inx}dx=0\ \forall\ n\in\bbZ,$ (19)

itself vanishes almost everywhere, so is zero in $ L^2([-\pi,\pi]).$ In fact we will show something a little stronger,


\begin{proposition}
If $f\in \mathcal L^1([-\pi,\pi])$
satisfies (19) then $f=0$\ outside a set of measure zero in
$[-\pi,\pi].$
\end{proposition}

However this will take some work.

VII
First we make the following observation directly from our construction of the integral.


\begin{lemma}
If $f\in\mathcal{L}^1(X,\mu)$\ and
$A_j\subset X$\ is a sequence of measurable sets such that the measure of the
symmetric difference $\mu(S(A_j,A))\longrightarrow0$\ for a measurable set $A,$\ then
\begin{equation}
\int_{A_j}fd\mu\longrightarrow \int_{A}fd\mu.
\end{equation}\end{lemma}

Proof. This is really just a reminder of what we have done earlier. We might as well assume that $ f$ is non-negative, since we can work with the real and imaginary parts, and then with their positive and negative parts, separately. It suffices to show that every subsequence of the real sequence $ \int_{A_j}fd\mu$ has a convergent subsequence with limit the integral over $ A.$ Since we are not assuming anything much about the $ A_j$'s it is enough to show that there is a subsequence converging to the integral over $ A$ and then apply the argument to any subsequence. Since the measure of the symmetric difference $ \mu(S(A_j,A))\to0$ we can pass to a subsequence (which we then renumber) so that

$\displaystyle \mu(S(A_j,A))=\mu(A\setminus A_j)+\mu(A_j\setminus A)\le 2^{-j}.$ (20)

Since $ \int_Afd\mu-\int_{A_j}fd\mu=\int_{A\setminus A_j}fd\mu-\int_{A_j\setminus A}fd\mu$ it suffices to show that these sequences tend to zero. So, set $ B_j=A\setminus A_j;$ from (20) it follows that $ F=\bigcup_{j\ge 1} B_j$ has finite measure and $ F_N=\bigcup_{j\ge N}B_j$ has measure tending to zero, with $ B_N\subset F_N.$ Thus it suffices to show that $ \int_{F_N}fd\mu\to0$ since this sequence dominates $ \int_{B_N}fd\mu$ by monotonicity. Finally then we write $ F=\bigcup_{j\ge 1} G_j,$ $ G_j=F_j\setminus F_{j+1},$ which is a decomposition into a countable collection of disjoint measurable sets. By the countable additivity of the integral

$\displaystyle \int_{F}fd\mu=\sum\limits_{j=1}^\infty \int_{G_j}fd\mu.$ (21)

Thus the series of non-negative terms on the right converges which implies that the series `of remainders'

$\displaystyle \int_{F_N}fd\mu= \sum\limits_{j=N}^\infty \int_{G_j}fd\mu\to0$ as $\displaystyle N\to\infty.$ (22)

This shows that $ \int_{A\setminus A_j}fd\mu\to0$ as $ j\to\infty;$ the other half of (20) can be handled in the same way, so we have a subsequence with the correct convergence and hence have proved the Lemma. $ \qedsymbol$

VIII
So, let's apply Lemma 1 directly as follows.
\begin{lemma}
If $f\in L^1([a,b])$\ for a finite interval
$[a,b]\subset \bbR$\ then $g(s)=\int_{[a,s]}fdx$\ is continuous on $[a,b];$\ moreover if
$f_i\longrightarrow f$\ in $L^1([a,b])$\ then the
corresponding functions $g_i\to g$\ uniformly on $[a,b].$
\end{lemma}

Proof. By Lemma 1, $ \int_{[a,s+t]}fdx\longrightarrow\int_{[a,s]}fdx$ as $ t\to0,$ which gives the continuity of $ g.$ The uniform convergence follows from $ \sup_{s\in[a,b]}\vert g_i(s)-g(s)\vert\le\int_{[a,b]}\vert f_i-f\vert dx\longrightarrow0.$ $ \qedsymbol$

IX
Now, go back and consider $ f\in L^1([-\pi,\pi])$ which satisfies (19). We will show that there is a continuous function (not obviously zero) which also satisfies (19). Namely consider

$\displaystyle g(s)=\int_{[-\pi,s]}fdx-C,\ C$ chosen so$\displaystyle \int_{[-\pi,\pi]}gdx=0.$ (23)

The constant term here is added so that $ c_0(g)=0.$ We have to work harder to show that the other $ c_n(g)=0.$ To do so we use the integration by parts identity

$\displaystyle \int_{[a,b]} h_1(s)(\int_{[a,s]} h_2(x)dx)ds= \int_{[a,b]} (\int_{[x,b]}h_1(s)ds)h_2(x)dx.$ (24)


\begin{proposition}
The identity (24)
holds if $h_i\in L^1([a,b]),$\ $i=1,2.$
\end{proposition}

Before worrying about the proof of this, let us apply it to $ h_2=f$ and $ h_1=\exp(-inx)$ for some $ 0\not=n\in\bbZ.$ On the left in (24) the inner integral is our definition of $ g$ in (23) except for the missing constant. On the right in (24) we compute, using equality of Riemann and Lebesgue integrals for continuous functions, finding

$\displaystyle \int_{[x,b]}h_1(s)ds=-\frac1{in}\left(\exp(-inb)-\exp(-inx)\right).$ (25)

This is a linear combination of our Fourier exponentials, so

$\displaystyle \int_{[-\pi,\pi]} (\int_{[x,\pi]}\exp(-ins)ds)f(x)dx=0,\ 0\not=n\in\bbZ$ if $\displaystyle f$ satisfies (19)$\displaystyle .$ (26)

Since we already have arranged that $ c_0(g)=0$ and the constant does not change the $ c_n(g)$ for $ n\not=0,$ once we prove Proposition 4 we will know that

$\displaystyle g$ given by (23) satisfies (19) if $\displaystyle f$ does so. (27)
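
Spelled out, for $ 0\not=n\in\bbZ$ the constant $ C$ integrates to zero against $ e^{-ins}$ and (24), with $ a=-\pi,$ $ b=\pi,$ $ h_1=e^{-ins}$ and $ h_2=f,$ combined with (26) gives

$\displaystyle \sqrt{2\pi}\,c_n(g)=\int_{[-\pi,\pi]}e^{-ins}\left(\int_{[-\pi,s]}f(x)dx\right)ds =\int_{[-\pi,\pi]}\left(\int_{[x,\pi]}e^{-ins}ds\right)f(x)dx=0.$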

X
Proof of Proposition 4.

We have to prove (24). Again this is a return to basics. Notice first that both sides of (24) do make sense if $ h_i\in L^1([a,b])$ since then the integrated functions

$\displaystyle g_2(s)=\int_{[a,s]} h_2(x)dx$ and $\displaystyle g_1(x)=\int_{[x,b]}h_1(s)ds$ (28)

are both continuous. This means that the products $ h_1(s)g_2(s)$ and $ h_2(x)g_1(x)$ are both integrable (why exactly?).

Now, (24) is separately linear in $ h_1$ and $ h_2.$ So as usual we can assume these functions are non-negative, by first replacing the functions by their real and imaginary parts, and then by their positive and negative parts. On the left we can take a sequence of non-negative simple functions approaching $ h_1$ from below and we get convergence in $ L^1([a,b]).$ Similarly on the right the integrated functions approach $ g_1$ uniformly so the integrals converge. The same argument works for $ h_2,$ so it suffices to prove (24) for simple functions and hence, again using linearity, for the characteristic functions of two measurable sets, $ h_i=\chi_{A_i},$ $ A_i\subset[a,b]$ measurable. Recalling what it means to be measurable, we can approximate say $ A_1,$ in measure, by a sequence of elementary sets, each a finite union of disjoint intervals. Using Lemma 1 (several times) the resulting integrals converge. Applying the same argument to $ A_2$ it suffices to prove (24) when $ h_1$ and $ h_2$ are the characteristic functions of intervals. In this case our identity has become rather trivial, except that we have to worry about the various cases. So, suppose that $ h_1$ is the characteristic function of $ [a_1,b_1],$ a subinterval of $ [a,b]$ (open, half-open or closed does not matter of course). We are trying to prove

$\displaystyle \int_{[a_1,b_1]}(\int_{[a,s]}h_2(x)dx)ds\overset{?}= \int_{[a,b]}(\int_{[x,b]\cap[a_1,b_1]}ds)h_2(x)dx.$ (29)

Using the constraints on the domains (these are now Riemann integrals anyway) this is equivalent to

$\displaystyle \int_{[a_1,b_1]}(\int_{[a_1,s]}\chi_{[c,d]}(x)dx)ds\overset{?}= \int_{[a_1,b_1]}(\int_{[x,b_1]}ds)\chi_{[c,d]}(x)dx.$ (30)

We can replace $ a_1$ and $ c$ both by $ \max(a_1,c)$ from the support properties and similarly we can replace $ b_1$ and $ d$ by $ \min(b_1,d).$ Calling the new end points $ a$ and $ b$ again we are down to

$\displaystyle \int_{[a,b]}(\int_{[a,s]}dx)ds\overset{?}= \int_{[a,b]}(\int_{[x,b]}ds)dx$ (31)

which is just a very special case of integration by parts for Riemann integrals. Thus (24), and hence Proposition 4, is proved in general. $ \qedsymbol$
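
As an independent sanity check on (24), here is a small numerical experiment with two arbitrarily chosen integrable functions on $ [0,1];$ the discrete cumulative sums play the roles of the two inner integrals, and swapping the order of summation over the triangle is exact at the discrete level.

\begin{verbatim}
import numpy as np

a, b, M = 0.0, 1.0, 2000
ds = (b - a) / M
s = a + (np.arange(M) + 0.5) * ds           # midpoint grid on [a,b]

h1 = np.cos(3 * s)                          # two arbitrary test functions
h2 = np.exp(-s) * np.sign(s - 0.4)

g2 = np.cumsum(h2) * ds                     # discrete analogue of int_[a,s] h2(x) dx
g1 = np.cumsum(h1[::-1])[::-1] * ds         # discrete analogue of int_[x,b] h1(s) ds

lhs = np.sum(h1 * g2) * ds                  # left side of (24)
rhs = np.sum(g1 * h2) * ds                  # right side of (24)
print(lhs, rhs, np.allclose(lhs, rhs))
\end{verbatim}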

XI
Okay, so now we know that if $ f\in L^1([-\pi,\pi])$ has all the $ c_n(f)=0$ then $ g$ in (23) is continuous and satisfies the same thing. Of course we can continue this, and integrate again, replacing $ f$ by $ g$ to get a new function which satisfies

$\displaystyle h(s)=\int_{[-\pi,s]}gdx-C',\ h'=g, c_n(h)=0\ \forall\ n\in\bbZ.$ (32)

Thus $ h$ is continuously differentiable and still has $ c_n(h)=0.$ We will show that this implies $ h\equiv0$ by proving a convergence result for Fourier series, this is [1] Theorem 2 on p.140, more or less.

XII

\begin{theorem}
If $h$\ is differentiable on
$[-\pi,\pi]$\ (so only from the right at $-\pi$\ and from the left at $\pi)$\ then
\begin{equation}
S_N(h)(x)\longrightarrow h(x)\ \forall\ x\in[-\pi,\pi].
\end{equation}\end{theorem}

Proof. Pick a point $ x_0\in[-\pi,\pi]$ and consider the partial sum of the series we are interested in

$\displaystyle S_N(h)(x_0)=\frac1{\sqrt{2\pi}}\sum\limits_{k=-N}^N c_k(h)e^{ikx_0} =\frac1{2\pi}\int_{-\pi}^\pi h(x)\left(\sum\limits_{k=-N}^Ne^{ik(x_0-x)}\right)dx.$ (33)

Here we have just inserted the definition of the $ c_n(h).$ So consider the function

$\displaystyle D_N(x)=\frac1{2\pi}\sum\limits_{k=-N}^Ne^{ikx}= \frac1{2\pi}\frac{e^{i(N+1)x}-e^{-iNx}} {e^{ix}-1}.$ (34)

To see this, multiply the sum defining $ D_N(x)$ by $ e^{iNx}$ and observe that it becomes $ \sum\limits_{k=0}^{2N}T^k=(T^{2N+1}-1)/(T-1)$ where $ T=e^{ix}.$ Also integrating $ D_N(x)$ term by term we find that

$\displaystyle \int_{-\pi}^{\pi}D_N(x)dx=1,\ D_N(x+2\pi n)=D_N(x),\ x\in\bbR,\ n\in\bbZ.$ (35)
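
A quick numerical confirmation (not needed for the proof) that the closed form in (34) agrees with the defining sum and that the normalization in (35) holds; the value of $ N$ and the grid are arbitrary choices.

\begin{verbatim}
import numpy as np

N, M = 6, 4001
x = np.linspace(-np.pi, np.pi, M)
dx = x[1] - x[0]

D_sum = sum(np.exp(1j * k * x) for k in range(-N, N + 1)) / (2 * np.pi)
with np.errstate(invalid='ignore', divide='ignore'):
    D_closed = (np.exp(1j * (N + 1) * x) - np.exp(-1j * N * x)) \
               / (np.exp(1j * x) - 1) / (2 * np.pi)
D_closed[np.isclose(x, 0)] = (2 * N + 1) / (2 * np.pi)   # value at x = 0 by continuity

print(np.allclose(D_sum, D_closed))                      # the two formulas agree
print(np.round(np.sum(D_sum.real[:-1]) * dx, 8))         # integral over a period is 1
\end{verbatim}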

The reason for looking at $ D_N(x)$ is that, from (33),

$\displaystyle S_N(h)(x_0)=\int_{-\pi}^{\pi}h(x)D_N(x_0-x)dx.$ (36)

From (35) we can write the expected limit in a similar way

$\displaystyle h(x_0)=\int_{-\pi}^{\pi}h(x_0)D_N(x_0-x)dx$ (37)

where we use the second part of (35) to reorganize the integral. Combining these two we get

$\displaystyle [S_N(h)(x_0)-h(x_0)]=\int_{-\pi}^{\pi}[h(x)-h(x_0)]D_N(x_0-x)dx.$ (38)

Now, the function $ e^{ix}-1$ in the denominator of $ D_N(x)$ in (34) vanishes precisely at $ x=0$ in $ [-\pi,\pi]$ and does so simply. That is, $ x/(e^{ix}-1)$ is continuous if we define it to take the value $ -i$ (the reciprocal of the derivative of $ e^{ix}-1$ at $ 0)$ at $ x=0.$ This means that the quotient

$\displaystyle p(x)=\frac{h(x)-h(x_0)}{e^{i(x-x_0)}-1}=\frac{h(x)-h(x_0)}{x-x_0} \frac{x-x_0}{e^{i(x-x_0)}-1}$ (39)

is bounded and continuous on $ [-\pi,\pi]$ if we define it correctly at $ x=x_0.$ In particular this means it defines an element $ p\in L^2([-\pi,\pi]).$ Then (38) can be written

$\displaystyle [S_N(h)(x_0)-h(x_0)]=\frac1{\sqrt{2\pi}}\left(e^{-i(N+1)x_0}c_{-(N+1)}(p)-e^{iNx_0}c_{N}(p)\right),\ p\in L^2([-\pi,\pi]).$ (40)

We already know, by Bessel's inequality, that the series $ \sum\limits_{n\in\bbZ}\vert c_n(p)\vert^2$ converges, so $ c_n(p)\to0$ as $ \vert n\vert\to\infty$ and in particular $ c_{-(N+1)}(p),$ $ c_{N}(p)\to0$ as $ N\to\infty.$ Thus we have proved Theorem 3. $ \qedsymbol$

I should have commented a little on the case $ x_0=\pm\pi.$
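
To illustrate the theorem numerically, the sketch below computes $ S_N(h)(x_0)$ for the (arbitrarily chosen) smooth function $ h(x)=x^2$ by quadrature and watches the error at a fixed interior point shrink as $ N$ grows; this is only an illustration, not part of the argument.

\begin{verbatim}
import numpy as np

M = 20000
dx = 2 * np.pi / M
x = -np.pi + (np.arange(M) + 0.5) * dx        # midpoint grid on [-pi,pi]
h = x ** 2                                    # a smooth test function
x0 = 1.0                                      # an interior point

def S_N(N):
    ks = np.arange(-N, N + 1)
    c = np.array([np.sum(h * np.exp(-1j * k * x)) * dx
                  for k in ks]) / np.sqrt(2 * np.pi)
    return np.real(np.sum(c * np.exp(1j * ks * x0)) / np.sqrt(2 * np.pi))

for N in [1, 4, 16, 64]:
    print(N, abs(S_N(N) - x0 ** 2))           # the error at x0 shrinks as N grows
\end{verbatim}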

XIII
So, now we know that $ S_N(h)(x_0)\to h(x_0)$ for each $ x_0\in[-\pi,\pi]$ if $ h$ is differentiable. In fact the proof shows a little more than this, so let us record it:


\begin{lemma}
If $h\in L^2([-\pi,\pi])$\ is continuous and
differentiable from the right and left at $x_0$\ then
$S_N(h)(x_0)\longrightarrow h(x_0).$
\end{lemma}

Proof. Just check that this is all we really used. $ \qedsymbol$

XIV
Returning to our efforts to prove Proposition 3 we now know that if $ f$ has $ c_n(f)=0$ for all $ n$ then $ g$ given by (23) must vanish identically, since its indefinite integral $ h$ vanishes identically (by Theorem 3 applied to $ h)$ and $ g=h'.$ Thus the integrals $ \int_{[-\pi,s]}f(x)dx$ are independent of the end point $ s,$ so must all vanish (since we know that the limit as $ s\downarrow-\pi$ is zero). Thus we have a function $ f\in L^1([-\pi,\pi])$ with $ \int_{[-\pi,s]}f(x)dx=0$ for every $ s.$ The proof of Proposition 3 is therefore finished by


\begin{lemma}
If $f\in L^1([a,b])$\ and $\int_a^xf(s)ds=0$
for all $x\in [a,b]$\ then $f=0$\ in $L^1([a,b]).$
\end{lemma}

Proof. First, we can take the difference

$\displaystyle \int_{[c,d]}f(s)ds=\int_a^df(s)ds-\int_a^cf(s)ds=0$ (41)

to conclude that

$\displaystyle \int_{A}fdx=0\ \forall\ A\in\mathcal R_{\text{Leb}},$ (42)

the ring of subsets of $ [a,b]$ consisting of finite unions of disjoint intervals. But then, by Lemma 1, we see that the integral vanishes for all Lebesgue measurable subsets of $ [a,b],$ since they can be approximated in measure by such finite unions. Now, take $ A$ to be the measurable set on which $ f\ge0$ and then the set on which $ f\le0;$ since a non-negative (respectively non-positive) integrable function with vanishing integral vanishes almost everywhere, we conclude that $ f=0$ almost everywhere in $ [a,b]$ and hence $ f=0$ in $ L^1([a,b]).$ $ \qedsymbol$

XV
Finally then we have proved Proposition 3 which can be restated as

No two distinct elements of $\displaystyle L^1([-\pi,\pi])$ have the same Fourier coefficients, (43)

or, if you prefer, that the Fourier coefficients determine a function in $ L^1([-\pi,\pi]).$ For a function in $ L^2([-\pi,\pi])$ we have the stronger statement that

\begin{multline}
f(x)=\frac1{\sqrt{2\pi}}\sum\limits_{n=-\infty}^\infty c_n(f)e^{inx}\text{ with the series converging in }L^2([-\pi,\pi]),\\
\text{ where }c_n(f)=\frac1{\sqrt{2\pi}}\int_{-\pi}^\pi f(x)e^{-inx}dx.
\end{multline}

Richard B. Melrose 2004-05-24