Boundedness of a Continuous Function

Suppose f:I \rightarrow \Re is a continuous function (where I is an interval). Then for every x_{0} \in I and \varepsilon>0, we have a \delta >0 such that f(x_{0})-\varepsilon< f(x)<f(x_{0})+\varepsilon whenever x \in I and x_{0}-\delta<x<x_{0}+\delta. This tells us that f is bounded on the interval (x_{0}-\delta, x_{0}+\delta). Does this mean that the function is bounded on its entire domain? What we have shown is that given an x \in I, there is an open interval I_{x} containing x and two real numbers m_{x} and M_{x} such that

m_{x}<f(\xi)<M_{x} for all \xi \in I_{x}.

Surely \bigcup_{x \in I}I_{x} \supset I. But if we could choose finitely many intervals out of the collection \{ I_{x}\}_{x \in I}, say I_{x_{1}}, I_{x_{2}}, \ldots, I_{x_{n}}, such that I_{x_{1}} \bigcup I_{x_{2}} \bigcup \ldots \bigcup I_{x_{n}} \supset I, then we would get m < f(\xi) < M for all \xi \in I, where M=\max \{ M_{x_{1}}, M_{x_{2}}, \ldots, M_{x_{n}}\} and m=\min \{ m_{x_{1}}, m_{x_{2}}, \ldots, m_{x_{n}}\}. That we can indeed make such a choice is a property of a closed bounded interval I in \Re, and it is given by the following theorem, whose proof appears below:

Theorem (Heine-Borel):

Let a, b \in \Re and let \mathcal{I} be a family of open intervals covering [a,b], that is, for all x \in [a,b], there exists I \in \mathcal{I} such that x \in I. Then, we can find finitely many open intervals I_{1}, I_{2}, \ldots, I_{n} \in \mathcal{I} such that I_{1} \bigcup I_{2} \bigcup \ldots \bigcup I_{n} \supset [a,b].

Proof:

Suppose the conclusion is false, that is, no finite subfamily of \mathcal{I} covers [a,b]. Let us take the intervals [a,c] and [c,b], where c=\frac{a+b}{2}. Then the conclusion must fail for at least one of the intervals [a,c] or [c,b]. Otherwise, we could find I_{1}, I_{2}, \ldots, I_{m} \in \mathcal{I} and J_{1}, J_{2}, \ldots, J_{n} \in \mathcal{I} such that I_{1} \bigcup I_{2} \bigcup \ldots \bigcup I_{m} \supset [a,c] and J_{1} \bigcup J_{2} \bigcup \ldots \bigcup J_{n} \supset [c,b], and then \{ I_{1}, \ldots, I_{m}, J_{1}, \ldots, J_{n}\} would be a finite family of members of \mathcal{I} for which I_{1} \bigcup I_{2} \bigcup \ldots \bigcup I_{m} \bigcup J_{1} \bigcup \ldots \bigcup J_{n} \supset [a,b].

So pick one of the intervals [a,c] or [c,b] for which the conclusion of the theorem fails, and call it [a_{1},b_{1}]. Again let c_{1}=\frac{a_{1}+b_{1}}{2}. Since the conclusion fails for [a_{1},b_{1}], it must fail, by the above argument, for at least one of [a_{1},c_{1}] or [c_{1},b_{1}]; call that interval [a_{2},b_{2}]. We have a \leq a_{1} \leq a_{2} < b_{2} \leq b_{1} \leq b. We can continue this process to get a sequence of intervals [a_{1},b_{1}] \supset [a_{2},b_{2}] \supset [a_{3},b_{3}] \supset \ldots \supset [a_{n},b_{n}] \supset \ldots for each of which the conclusion fails. Observe further that b_{n}-a_{n}=\frac{b-a}{2^{n}} and that a \leq a_{1} \leq a_{2} \leq \ldots \leq a_{n} < b_{n} \leq b_{n-1} \leq \ldots \leq b_{1} \leq b.

This gives us a monotonically increasing sequence (a_{n})_{n=1}^{\infty} which is bounded above and a monotonically decreasing sequence (b_{n})_{n=1}^{\infty} which is bounded below. So (a_{n})_{n=1}^{\infty} and (b_{n})_{n=1}^{\infty} must converge, say to \alpha and \beta respectively. Then, \alpha=\beta because \beta - \alpha= \lim{(b_{n}-a_{n})}=\lim{\frac{(b-a)}{2^{n}}}=0. Since \mathcal{I} covers [a,b] and \alpha \in [a,b], \alpha must belong to J for some J \in \mathcal{I}. Since J is an open interval containing \alpha and \lim_{n \rightarrow \infty}{a_{n}}=\alpha, there exists an n_{1} such that a_{n} \in J for all n > n_{1}; similarly, since \lim_{n \rightarrow \infty}{b_{n}}=\beta=\alpha, there exists an n_{2} such that b_{n} \in J for all n > n_{2}. Now let n_{0}=\max \{ n_{1},n_{2}\}. Since J is an interval containing both a_{n} and b_{n}, we conclude that [a_{n},b_{n}] \subset J for all n > n_{0}. But this contradicts our assumption that [a_{n},b_{n}] cannot be covered by finitely many members of \mathcal{I}, since the single interval J already covers it. QED.
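The proof above tells us a finite subcover exists without constructing one. As a small aside (not part of the original argument), here is a Python sketch that, for one concrete cover of [a,b] of the special form I_{x}=(x-r(x), x+r(x)), with a radius function r chosen purely for illustration, extracts a finite subcover by marching from a towards b:

```python
# A minimal numerical sketch (not the proof): for one concrete open cover of
# [a, b], march from a to b and record a finite subcover.
# Here I_x = (x - r(x), x + r(x)) with a hypothetical radius function r.

def r(x):
    return 0.05 + 0.1 * x                       # radius of the interval centred at x

a, b = 0.0, 1.0
t = a
subcover = []
while True:
    subcover.append((t - r(t), t + r(t)))       # the interval I_t contains t
    if t + r(t) > b:                            # I_t already reaches past b, so stop
        break
    t = t + r(t) / 2                            # move right, staying inside I_t

print(len(subcover), "intervals suffice:")
for left, right in subcover:
    print(f"({left:.3f}, {right:.3f})")
```

Since r(x) \geq 0.05 on [0,1], each step moves the centre to the right by at least 0.025, so this march stops after finitely many intervals, and consecutive intervals overlap, so their union covers [a,b]. For an arbitrary cover such a greedy march need not terminate, which is why the bisection argument above is needed.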

Corollary.

A continuous function on a closed bounded interval is bounded.

The proof of the corollary is essentially the argument given just before the Heine-Borel theorem, combined with the theorem itself. So, if we have a continuous function f:[a,b] \rightarrow \Re and M=\sup{\{f(x): a \leq x \leq b \}} and m=\inf{\{ f(x) : a \leq x \leq b\}}, the above corollary says -\infty < m \leq M < \infty. Next, we ask the natural question: do there exist two points x_{0},y_{0} \in [a,b] such that f(x_{0})=M and f(y_{0})=m? In other words, does a continuous function on a closed interval attain its bounds? The answer is yes.

Theorem:

Suppose f:[a,b] \rightarrow \Re is continuous, and M=\sup{ \{f(x): a \leq x \leq b \}} and m=\inf{ \{ f(x): a \leq x \leq b\}}. Then, there are two points x_{0},y_{0} \in [a,b] such that f(x_{0})=M and f(y_{0})=m.

Note: these points x_{0} and y_{0} need not be unique.
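To illustrate the note with a concrete example of my own choosing (not from the text): f(x)=x^{2} on [-1,1] attains its maximum value M=1 at both endpoints. A quick check in Python:

```python
# Non-uniqueness of the maximizer (illustrative sketch): f(x) = x**2 on [-1, 1]
# attains its maximum M = 1 at both x0 = -1 and x0 = 1.

def f(x):
    return x ** 2

n = 1000
xs = [-1.0 + 2.0 * k / n for k in range(n + 1)]      # uniform grid on [-1, 1]
M = max(f(x) for x in xs)
maximizers = [x for x in xs if abs(f(x) - M) < 1e-12]
print(M, maximizers)                                 # 1.0 [-1.0, 1.0]
```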

Proof by contradiction:

Suppose there is no point x \in [a,b] such that f(x)=M. Then we would have f(x)<M, that is, M-f(x)>0, for all x \in [a,b]. Let us define g:[a,b] \rightarrow \Re by g(x)=\frac{1}{M-f(x)}

Since M-f(x) vanishes nowhere, g is also a continuous function. So, by the corollary above, it must be bounded above and below; say 0 <g(x)<M_{1} for all x \in [a,b]. On the other hand, by the defining property of the supremum, there exists an x \in [a,b] such that f(x)+\frac{1}{2M_{1}}>M, which implies that M-f(x)<\frac{1}{2M_{1}}, that is, g(x)=\frac{1}{M-f(x)}>2M_{1}, which is a contradiction. Therefore, f must attain the value M at some point x_{0} \in [a,b]. The proof of the other part is very similar. QED.
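As a numerical illustration of the theorem (a sketch only, with a function chosen for the purpose), we can sample f(x)=x(1-x) on [0,1] and watch the sampled maxima locate x_{0}=\frac{1}{2}, where the supremum M=\frac{1}{4} is attained:

```python
# Illustrative sketch: sample f(x) = x*(1 - x) on [0, 1] on finer and finer
# grids; the sampled maxima locate x0 = 1/2, where the supremum M = 1/4 is
# actually attained.

def f(x):
    return x * (1.0 - x)

for n in (10, 100, 1000):
    xs = [k / n for k in range(n + 1)]      # uniform grid on [0, 1]
    values = [f(x) for x in xs]
    M_n = max(values)
    x_n = xs[values.index(M_n)]
    print(n, x_n, M_n)                      # each line gives x0 = 0.5, M = 0.25
```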

The above theorem, together with the corollary, says that on a closed interval a continuous function is bounded and attains its bounds. Combined with the intermediate value theorem, this means that the function must attain every value between its infimum and its supremum. Thus, the image of a closed bounded interval under a continuous map is again a closed bounded interval, namely [m, M]. However, if f is a continuous map on an open interval, then the function need not be bounded.

Example. 

Let f: (0,1) \rightarrow \Re be defined by f(x)=1/x. This is surely continuous, but the limit

\lim_{x \rightarrow 0^{+}}f(x)=\infty, which means that given any M >0, we can always find x such that f(x)>M, viz., choose any x \in (0,1) with x<\frac{1}{M}.
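A short numerical check of this choice of x (purely illustrative): for each M we take x=\frac{1}{2M}, which lies in (0,1) once M>\frac{1}{2}, and find f(x)=2M>M.

```python
# Checking the example numerically: for any M > 1/2, the point x = 1/(2M)
# lies in (0, 1) and f(x) = 1/x = 2M exceeds M.

def f(x):
    return 1.0 / x

for M in (10.0, 100.0, 1000.0):
    x = 1.0 / (2.0 * M)
    print(M, x, f(x), f(x) > M)   # the last entry is True in every case
```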

If f is a continuous function, then given \varepsilon >0, for each fixed x_{0} in the domain of f, we can find \delta >0 such that

|f(x)-f(x_{0})|<\varepsilon whenever |x-x_{0}|<\delta

Here \delta depends upon x_{0} (and, of course, on \varepsilon).

Can we choose a single \delta_{0}>0 that works for all x_{0}? The answer, in general, is no.

Example.

Let f: \Re \rightarrow \Re be defined by f(x)=x^{2}. For x>0 and \theta>0, f(x+\theta)-f(x)=2\theta x + \theta^{2} \geq 2\theta x, and hence, for every fixed \theta>0, the difference between f(x+\theta) and f(x) becomes large as x becomes large. So for, say, \varepsilon=1, we cannot choose a \delta>0 such that |f(x+\theta)-f(x)|<\varepsilon for all 0<\theta<\delta and all x. We thus have the following definition:

Definition:

Let f: D \rightarrow \Re be a continuous function where D=\Re or [a,b] or (a,b). Then, f is said to be uniformly continuous if for all \varepsilon>0, there exists a \delta>0 such that

|f(x)-f(y)|<\varepsilon for all x, y \in D with |x-y|<\delta
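Returning to the example f(x)=x^{2} above, a short numerical sketch (with sample points chosen arbitrarily) shows how this definition fails there: fix \theta=10^{-3}, smaller than any candidate \delta, and watch the gap f(x+\theta)-f(x) exceed \varepsilon=1 once x is large enough.

```python
# Why f(x) = x**2 is not uniformly continuous on R (numerical sketch):
# for a fixed theta = 0.001, the gap f(x + theta) - f(x) = 2*theta*x + theta**2
# grows without bound as x grows, so it eventually exceeds epsilon = 1.

theta = 1e-3
for x in (1.0, 10.0, 100.0, 1000.0):
    gap = (x + theta) ** 2 - x ** 2
    print(x, gap, gap >= 1.0)   # the gap passes epsilon = 1 around x = 500
```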

We have seen above that a continuous function need not be uniformly continuous. When D=[a,b], however, every continuous function is uniformly continuous, as the next result shows.

Theorem:

Let f:[a,b] \rightarrow \Re be continuous. Then, f is uniformly continuous.

Proof.

Fix \varepsilon > 0. The continuity of f implies that for every x \in [a,b], we can choose \delta_{x}>0 such that

|f(x)-f(y)|<\frac{\varepsilon}{2} whenever |y-x|<\delta_{x} and y \in [a,b]

Now, let I_{x}=(x-\frac{1}{2}\delta_{x}, x +\frac{1}{2}\delta_{x})

Then, clearly \{I_{x}: x \in [a,b] \} covers [a,b], as x \in I_{x}. By the Heine-Borel theorem, we can get finitely many intervals out of this family, I_{x_{1}}, I_{x_{2}}, \ldots, I_{x_{m}}, such that

I_{x_{1}} \bigcup I_{x_{2}} \bigcup \ldots \bigcup I_{x_{m}} \supset [a,b].

Let \delta = \min \{\frac{1}{2}\delta_{x_{1}}, \frac{1}{2}\delta_{x_{2}}, \ldots, \frac{1}{2}\delta_{x_{m}} \}

Then, \delta>0 (note that the minimum of finitely many positive numbers is always positive). Next, we claim that if x, y \in [a,b] and |x-y|<\delta, then |f(x)-f(y)|<\varepsilon.

Since x \in [a,b] \subseteq I_{x_{1}} \bigcup \ldots \bigcup I_{x_{m}}, we can find k \leq m such that x \in I_{x_{k}}, that is, |x-x_{k}|<\frac{1}{2}\delta_{x_{k}}. Now, |y-x_{k}| \leq |x-y|+|x-x_{k}| < \delta +\frac{1}{2}\delta_{x_{k}} \leq \delta_{x_{k}}.

Hence, |f(y)-f(x_{k})| < \frac{\varepsilon}{2} and |f(x)-f(x_{k})| < \frac{\varepsilon}{2}, and therefore, by the triangle inequality, |f(y)-f(x)|<\varepsilon. QED.
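To complement the proof, here is a concrete instance checked numerically. The choice of f, of the interval, and of \delta=\frac{\varepsilon}{4} is mine, made for this example; it is not the \delta produced by the Heine-Borel construction. For f(x)=x^{2} on [0,2] we have |f(x)-f(y)|=|x+y| |x-y| \leq 4|x-y|, so \delta=\frac{\varepsilon}{4} works uniformly.

```python
# A concrete instance of the theorem (illustrative sketch): for f(x) = x**2 on
# [0, 2], the single choice delta = epsilon / 4 works at every point, because
# |f(x) - f(y)| = |x + y| * |x - y| <= 4 * |x - y|.

import random

def f(x):
    return x ** 2

epsilon = 0.01
delta = epsilon / 4.0

random.seed(0)
for _ in range(100_000):
    x = random.uniform(0.0, 2.0)
    y = min(2.0, max(0.0, x + random.uniform(-delta, delta)))  # keep y in [0, 2]; |x - y| < delta
    assert abs(f(x) - f(y)) < epsilon                          # the same delta works at every x
print("no violation found")
```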

More later,

Nalin Pithwa
