## Category Archives: pure mathematics

### Number theory has numerous uses

One of the fun ways to get started in mathematics at an early age is via number theory. It does not require deep, esoteric knowledge of mathematical concepts to get started, but as you explore and experiment, you will learn a lot, and you will also have a ball of a time writing programs in basic number theory. One of the best references I have come across is “A Friendly Introduction to Number Theory” by Dr. Joseph Silverman. It is available on Amazon India.

Well, number theory is not just pure math; as we all know, it is the very core of cryptography and security in a world transforming itself, amongst other rapid changes, into one of totally digital commerce. Witness, for example, the recent intense debate about unlocking an iPhone (Apple vs. FBI), and, some time back, the controversy over AES-encrypted BlackBerry messaging services in India.

Number theory is also used in Digital Signal Processing, the way to filter out unwanted “noise” from an information signal or “communications signal.” Digital Signal Processing is at the heart of modem technology without which we would not be able to have any real computer networks.

There was a time when G. H. Hardy could claim that number theory is the purest of all sciences, as it is untouched by human desire. Not any more!

Can you imagine a world without numbers? That reminds me of a famous quote: “God created the natural numbers; all the rest is man-made.” (Kronecker)

More later,

Nalin Pithwa

### Careers in Mathematics

Most people believe that the only career possible with a degree in Mathematics is that of a teacher, lecturer, or professor. Thanks to the co-founders of Google, whose search engine ranking is based on the Perron-Frobenius theorem, this notion is changing.

In particular, you might want to have a detailed look at this website about careers in mathematics, maintained by Australian mathematicians:

http://www.mathscareers.org.au/

I will cull more such stuff and post in this blog later…

Regards,

Nalin Pithwa

### Differentiation

We have seen how the concept of continuity is naturally associated with attempts to model gradual changes. For example, consider the function $f: \Re \rightarrow \Re$ given by $f(x)=ax+b$, where the change in $f(x)$ is proportional to the change in x. This simple looking function is often used to model many practical problems. One such case is given below:

Suppose 30 men working for 7 hours a day can complete a piece of work in 16 days. In how many days can 28 men working for 6 hours a day complete the work? It must be evident to most of the readers that the answer is $\frac{16 \times 7 \times 30}{28 \times 6}=20$ days.

(While solving this we have tacitly assumed that the amount of work done is proportional to the number of men working, to the number of hours each man works per day, and also to the number of days each man works. Similarly, Charles’s law for ideal gases states that, pressure remaining constant, the increase in volume of a mass of gas is proportional to the increase in its temperature.)
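The man-hours reasoning above can be checked with a short script; this is just a sketch of the proportionality argument, and the function name is ours.

```python
# Work done is proportional to (men) x (hours/day) x (days),
# so the total job is a fixed number of "man-hours" and the
# number of days scales inversely with men * hours.

def days_needed(men, hours_per_day, total_man_hours):
    """Days required, assuming work is proportional to man-hours."""
    return total_man_hours / (men * hours_per_day)

total = 30 * 7 * 16               # man-hours for the whole job
print(days_needed(28, 6, total))  # -> 20.0
```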

But, there are exceptions to this as well. Galileo discovered that the distance covered by a body, falling from rest, is proportional to the square of the time for which it has fallen, and the velocity is proportional to the square root of the distance through which it has fallen. Similarly, Kepler’s law tells us that the square of the period of the planet going round the sun is proportional to the cube of the mean distance from the sun.

These and many other problems involve functions that are not linear. If, for example, we plot the graph of the distance covered by a particle versus time, it is a straight line only when the motion is uniform. But we are seldom lucky enough to encounter only uniform motion. (Besides, uniform motion would be so monotonous. Perhaps there would be no life at all if all motions were uniform. Imagine a situation in which each body is in uniform motion. A body at rest would be eternally at rest and those once in motion would never stop.) So the simple method of proportionality becomes quite inadequate to tackle such non-linear problems. The genius of Newton lay in looking at those problems which are next best to linear, the ones that are nearly linear.

We know that the graph of a linear function is a straight line. What Newton suggested was to look at functions, small portions of whose graphs look almost like a straight line (see Fig 1).

In Fig 1, the graph certainly is not a straight line. But a small portion of it looks like a straight line. To formalize this idea, we need the concept of differentiability.

Definition.

Let I be an open interval and $f: I \rightarrow \Re$ be a function. We say that f is locally linear or differentiable at $x_{0} \in I$ if there is a constant m such that

$f(x)-f(x_{0})=m(x-x_{0})+r(x_{0},x)(x-x_{0})$

or equivalently, for x in a punctured interval around $x_{0}$,

$\frac{f(x)-f(x_{0})}{x-x_{0}}=m+r(x_{0},x)$

where $r(x_{0},x) \rightarrow 0$ as $x \rightarrow x_{0}$

What this means is that for small enough $x-x_{0}$, $\frac{f(x)-f(x_{0})}{x-x_{0}}$ is nearly a constant or, equivalently, $f(x)-f(x_{0})$ is nearly proportional to the increment $x-x_{0}$. This is what is called the principle of proportional parts, used very often in calculations with tables when the number we are looking up is not found there.

Thus, if a function f is differentiable at $x_{0}$, then $\lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}$

exists and is called the derivative of f at $x_{0}$ and denoted by $f^{'}(x_{0})$. So we write

$\lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}=f^{'}(x_{0})$.
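To make the definition concrete, here is a small numerical sketch (the function and step sizes are our own choice): the difference quotients of $f(x)=x^{2}$ at $x_{0}=3$ settle down to the derivative 6.

```python
# Difference quotients (f(x0 + h) - f(x0)) / h for f(x) = x**2 at x0 = 3.
# As h -> 0 the quotients approach the derivative f'(3) = 6.

def f(x):
    return x * x

x0 = 3.0
for h in (0.1, 0.01, 0.001, 0.0001):
    q = (f(x0 + h) - f(x0)) / h
    print(h, q)  # q approaches 6 as h shrinks
```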

To fix our ideas, we need to look at functions which are not differentiable at some point. For example, consider the function $f: \Re \rightarrow \Re$ defined by $f(x)=|x|$.

This function, though continuous at every point, is not differentiable at $x=0$. In fact, $\lim_{x \rightarrow 0_{+}}\frac{|x|}{x}=1$, whereas $\lim_{x \rightarrow 0_{-}}\frac{|x|}{x}=-1$. What all this means is that if one looks at the graph of $f(x)=|x|$, it has a sharp corner at the origin.

No matter how small a part of the graph containing the point $(0,0)$ is taken, it never looks like a line segment. The reader can test for the non-differentiability of $f(x)=|\sin{x}|$ at $x=n\pi$.
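A numerical look at the same corner (the sample points are ours): the quotient $|x|/x$ refuses to settle to a single value as we approach 0 from the two sides.

```python
# For f(x) = |x| at 0, the difference quotient is |x|/x: it is +1 for
# every x > 0 and -1 for every x < 0, so no single limit (and hence
# no derivative) exists at 0.

def quotient(x):
    return abs(x) / x

right = [quotient(t) for t in (0.1, 0.01, 0.001)]    # from the right
left = [quotient(-t) for t in (0.1, 0.01, 0.001)]    # from the left
print(right)  # [1.0, 1.0, 1.0]
print(left)   # [-1.0, -1.0, -1.0]
```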

This leads us to the notion of the direction of the graph at a point: Suppose $f: I \rightarrow \Re$ is a function differentiable at $x_{0} \in I$, and let P and Q be the points $(x_{0},f(x_{0}))$ and $(x, f(x))$ respectively in the graph of f. (see Fig 2).

The chord PQ has the slope $\frac{f(x)-f(x_{0})}{x-x_{0}}$. As x comes close to $x_{0}$, the chord tends to the tangent to the curve at $(x_{0}, f(x_{0}))$. So, $\lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}}$ really represents the slope of the tangent at $(x_{0},f(x_{0}))$ (see Fig 3).

Similarly, if $x(t)$ is the position of a moving point in a straight line at time t, then $\frac{x(t)-x(t_{0})}{t-t_{0}}$ is its average velocity in the interval of time $[t_{0},t]$. Its limit as t goes to $t_{0}$, if it exists, will be its instantaneous velocity at the instant of time $t_{0}$. We have

$x^{'}(t_{0})=\lim_{t \rightarrow t_{0}}\frac{x(t)-x(t_{0})}{t-t_{0}}$ is its instantaneous velocity at $t_{0}$.

If the limit of $\frac{f(x)-f(x_{0})}{x-x_{0}}$ does not exist as x tends to $x_{0}$, the curve $(x, f(x))$ cannot have a tangent at $(x_{0},f(x_{0}))$, as we saw in the case of $f(x)=|x|$ at $(0,0)$; the graph abruptly changes its direction. If we look at the motion of a particle which is moving with uniform velocity till time $t_{0}$ and is abruptly brought to rest at that instant, then its graph would look as in Fig 4a.

This is also what we think happens when a perfectly elastic ball impinges on another ball of the same mass at rest, or  when a perfectly elastic ball moving at a constant speed impinges on a hard surface (see fig 4b). We see that there is a sharp turn in the space time graph of such a motion at time $t=t_{0}$. Recalling the interpretation of

$x^{'}(t)=\lim_{t \rightarrow t_{0}} \frac{x(t)-x(t_{0})}{t-t_{0}}$ as its instantaneous velocity at $t=t_{0}$, we see that in the situation described above, instantaneous velocity at $t=t_{0}$ is not a meaningful concept.

We have already seen that continuous functions need not be differentiable at some points of their domain. Actually there are continuous functions which are not differentiable anywhere also. On the other hand, as the following result shows, every differentiable function is always continuous.

Theorem:

If a function is differentiable at $x_{0}$, then it is continuous there.

Proof:

If f is differentiable at $x_{0}$, then let $\lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}}=l$. Setting

$r(x,x_{0})=\frac{f(x)-f(x_{0})}{x-x_{0}}-l$, we see that $\lim_{x \rightarrow x_{0}}r(x, x_{0})=0$. Thus, we have

$f(x)-f(x_{0})=(x-x_{0})l + (x-x_{0})r(x,x_{0})$

Now, $\lim_{x \rightarrow x_{0}} (f(x)-f(x_{0}))=\lim_{x \rightarrow x_{0}}(x-x_{0})l + \lim_{x \rightarrow x_{0}} (x-x_{0})r(x, x_{0})=0$

This shows that f is continuous at $x_{0}$.

QED.

Continuity of f at $x_{0}$ tells us that $f(x)-f(x_{0})$ tends to zero as $x - x_{0}$ tends to zero. But, in the case of differentiability, $f(x)-f(x_{0})$ tends to zero at least as fast as $x-x_{0}$. The portion $l(x-x_{0})$ goes to zero no doubt, but the remainder $|f(x)-f(x_{0})-l(x-x_{0})|$ goes to zero at a rate faster than that of $|x-x_{0}|$. This is how differentiation was conceived by Newton and Leibniz. They introduced a concept called an infinitesimal. Their idea was that when $x-x_{0}$ is an infinitesimal, then so is $f(x)-f(x_{0})$, and it is an infinitesimal of the same order as $x-x_{0}$. The idea of infinitesimals served them well, but there was a little problem in its definition: the way infinitesimals were introduced seemed to run against the Archimedean property. The definition of infinitesimals can be made rigorous, but we do not go into that here. However, we can still usefully deal with concepts and notation like:

(a) $f(x)=\mathcal{O}(g(x))$ as $x \rightarrow x_{0}$ if there exists a K such that $|f(x)| \leq K|g(x)|$ for x sufficiently near $x_{0}$.

(b) $f(x)=o(g(x))$ as $x \rightarrow x_{0}$ if $\lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=0$.

Informally, $f(x)=o(g(x))$ means $f(x)$ is of smaller order than $g(x)$ as $x \rightarrow x_{0}$. In this notation, f is differentiable at $x_{0}$ if there is an l such that

$|f(x)-f(x_{0})-l(x-x_{0})|=o(|x-x_{0}|)$.

We shall return to this point again. Let us first give examples of derivatives of some functions.

Examples.

(The proofs are left as exercises.)

(a) $f(x)=x^{n}$, $f^{'}(x_{0})=\lim_{x \rightarrow x_{0}}\frac{x^{n}-{x_{0}}^{n}}{x-x_{0}}=n{x_{0}}^{n-1}$, n a positive integer.

(b) $f(x)=x^{n}$ ($x \neq 0$, where n is a negative integer), $f^{'}(x)=nx^{n-1}$

(c) $f(x)=e^{x}$, $f^{'}(x)=e^{x}$

(d) $f(x)=a^{x}$, $f^{'}(x)=a^{x}\log_{e}{a}$
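These formulas can be spot-checked numerically. The sketch below (the helper name, step size, and test points are our own choices) compares a symmetric difference quotient with the stated derivatives.

```python
import math

def numderiv(f, x, h=1e-6):
    """Symmetric difference quotient approximating f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# (a) d/dx x**5 = 5 * x**4; at x = 2 this is 80.
print(numderiv(lambda x: x ** 5, 2.0))            # close to 80
# (c) d/dx e**x = e**x; at x = 1 this is e.
print(numderiv(math.exp, 1.0), math.exp(1.0))     # nearly equal
# (d) d/dx a**x = a**x * log_e(a); a = 2, x = 3 gives 8 * ln 2.
print(numderiv(lambda x: 2 ** x, 3.0), 8 * math.log(2))
```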

### Boundedness of a Continuous Function

Suppose $f:I \rightarrow \Re$ is a continuous function (where I is an interval). Now, for every $x_{0} \in I$ and $\varepsilon>0$, we have a $\delta >0$ such that $f(x_{0})-\varepsilon < f(x) < f(x_{0})+\varepsilon$ for $x_{0}-\delta < x < x_{0}+\delta$. This tells us that f is bounded in the interval $(x_{0}-\delta, x_{0}+\delta)$. Does it mean that the function is bounded in its entire domain? What we have shown is that given an $x \in I$, there is an interval $I_{x}$ and two real numbers $m_{x}$ and $M_{x}$ such that

$m_{x} < f(\xi) < M_{x}$ for all $\xi \in I_{x}$.

Surely $\bigcup_{x \in I}I_{x} \supset I$. But, if we could choose finitely many intervals out of the collection $\{ I_{x}\}_{x \in I}$, say, $I_{x_{1}}, I_{x_{2}}, \ldots, I_{x_{n}}$ such that $I_{x_{1}} \bigcup I_{x_{2}} \bigcup \ldots \bigcup I_{x_{n}} \supset I$, then we would get $m < f(\xi) < M$ for all $\xi \in I$, where $M=\max \{ M_{x_{1}}, \ldots, M_{x_{n}}\}$ and $m=\min \{ m_{x_{1}}, m_{x_{2}}, \ldots, m_{x_{n}}\}$. That we can indeed make such a choice is a property of a closed bounded interval I in $\Re$, given by the following theorem.

Theorem (Heine-Borel):

Let $a, b \in \Re$ and let $\mathcal{I}$ be a family of open intervals covering $[a,b]$, that is, for all $x \in [a,b]$, there exists $I \in \mathcal{I}$ such that $x \in I$. Then, we can find finitely many open intervals $I_{1}, I_{2}, \ldots, I_{n} \in \mathcal{I}$ such that $I_{1} \bigcup I_{2} \bigcup I_{3} \bigcup \ldots \bigcup I_{n} \supset [a,b]$.

Proof:

Suppose our contention is false. Let us take the intervals $[a,c]$ and $[c,b]$, where $c=\frac{a+b}{2}$. If the assertion is false for $[a,b]$, then it must be false for at least one of the intervals $[a,c]$ or $[c,b]$. Otherwise, we could find $I_{1},I_{2}, \ldots, I_{m} \in \mathcal{I}$ and $J_{1}, J_{2}, \ldots, J_{n} \in \mathcal{I}$ such that $I_{1} \bigcup I_{2} \bigcup \ldots \bigcup I_{m} \supset [a,c]$ and $J_{1} \bigcup J_{2} \bigcup \ldots \bigcup J_{n} \supset [c,b]$, and then $\{I_{1}, \ldots, I_{m}, J_{1}, \ldots, J_{n}\}$ would be a finite family of intervals for which $I_{1} \bigcup I_{2} \bigcup \ldots \bigcup I_{m} \bigcup J_{1} \bigcup \ldots \bigcup J_{n} \supset [a,b]$.

So let us assume that for at least one of the intervals $[a,c]$ or $[c,b]$ the assertion of the theorem is false. Call it $[a_{1},b_{1}]$. Again, let $c_{1}=\frac{a_{1}+b_{1}}{2}$. Since the assertion is false for $[a_{1},b_{1}]$, it must be false for at least one of $[a_{1},c_{1}]$ or $[c_{1},b_{1}]$, by the above argument. Call it $[a_{2},b_{2}]$. We have $a \leq a_{1} \leq a_{2} < b_{2} \leq b_{1} \leq b$. We can continue this process to get a sequence of intervals $[a_{1},b_{1}] \supset [a_{2},b_{2}] \supset [a_{3},b_{3}] \supset \ldots \supset [a_{n},b_{n}] \supset \ldots$ for which the assertion is false. Observe further that $b_{n}-a_{n}=\frac{b-a}{2^{n}}$ and that $a \leq a_{1} \leq a_{2} \leq \ldots \leq a_{n} < b_{n} \leq b_{n-1} \leq \ldots \leq b$.

This gives us a monotonically increasing sequence $(a_{n})_{n=1}^{\infty}$ which is bounded above and a monotonically decreasing sequence $(b_{n})_{n=1}^{\infty}$ bounded below. So $(a_{n})_{n=1}^{\infty}$ and $(b_{n})_{n=1}^{\infty}$ must converge, to say $\alpha$ and $\beta$ respectively. Then, $\alpha=\beta$ because $\beta - \alpha= \lim{(b_{n}-a_{n})}=\lim{\frac{(b-a)}{2^{n}}}=0$. Since $\mathcal{I}$ covers $[a,b]$, $\alpha$ must belong to J for some $J \in \mathcal{I}$. Also, since $\lim_{n \rightarrow \infty}{a_{n}}=\alpha$, there exists an $n_{1}$ such that $a_{n} \in J$ for all $n > n_{1}$; similarly, there exists an $n_{2}$ such that $b_{n} \in J$ for all $n > n_{2}$. Now let $n_{0}=\max \{ n_{1},n_{2}\}$. Then $[a_{n},b_{n}] \subset J$ for all $n > n_{0}$. But this violates our assumption that no finitely many members of $\mathcal{I}$ (here, the single interval J) have a union containing $[a_{n},b_{n}]$. QED.

Corollary.

A continuous function on a closed interval is bounded.

The proof of the corollary was already given just before the Heine-Borel theorem. So, if we have a continuous function $f:[a,b] \rightarrow \Re$ and $M=\sup{\{f(x): a \leq x \leq b \}}$ and $m=\inf{\{ f(x) : a \leq x \leq b\}}$, the above corollary says $-\infty < m \leq M < \infty$. Next, we ask the natural question: do there exist two points $x_{0},y_{0} \in [a,b]$ such that $f(x_{0})=M$ and $f(y_{0})=m$? In other words, does a continuous function on a closed interval attain its bounds? The answer is yes.

Theorem:

Suppose $f:[a,b] \rightarrow \Re$ is continuous, and $M=\sup{ \{f(x): a \leq x \leq b \}}$ and $m=\inf{ \{ f(x): a \leq x \leq b\}}$. Then, there are two points $x_{0},y_{0} \in [a,b]$ such that $f(x_{0})=M$ and $f(y_{0})=m$.

Note: these points $x_{0}$ and $y_{0}$ need not be unique.

Proof:

Suppose there is no point $x \in [a,b]$ such that $f(x)=M$. Then we would have $f(x) < M$, that is, $M-f(x)>0$ for all $x \in [a,b]$. Let us define $g:[a,b] \rightarrow \Re$ by $g(x)=\frac{1}{M-f(x)}$

Since $M-f(x)$ vanishes nowhere, g is also a continuous function. So, by the corollary above, it ought to be bounded above and below. Let $0 < g(x) \leq M_{1}$ for all $x \in [a,b]$. On the other hand, by the property of a supremum, we note that there exists an $x \in [a,b]$ such that $f(x)+\frac{1}{2M_{1}}>M$, which implies that $M-f(x)<\frac{1}{2M_{1}}$, or $g(x)=\frac{1}{M-f(x)}>2M_{1}$, which is a contradiction. Therefore, $f(x)$ must attain the value M at some point $x_{0} \in [a,b]$. The proof of the other part is very similar. QED.

The above theorem together with the corollary says that on a closed interval, a continuous function is bounded and attains its bounds. This, again by the intermediate value theorem, means that the function must attain all the values between its supremum and infimum. Thus, the image of a closed interval under a continuous map is a closed interval. However, if f is a continuous map on an open interval, then the function need not be bounded.

Example.

Let $f: (0,1) \rightarrow \Re$ be defined by $f(x)=1/x$. This is surely continuous but the limit,

$\lim_{x \rightarrow 0_{+}}f(x)=\infty$, which means that given any $M >0$, we can always find x such that $f(x)>M$, viz., choose $0 < x < \frac{1}{M}$.

If f is a continuous function, then given $\varepsilon >0$, for each $x_{0}$ fixed, we can find $\delta >0$ such that

$|f(x)-f(x_{0})|<\varepsilon$ whenever $|x-x_{0}|<\delta$

Here $\delta$ depends upon $x_{0}$.

Can we choose $\delta_{0}>0$ such that it works for all $x_{0}$? The answer in general is no.

Example.

Let $f: \Re \rightarrow \Re$ be defined by $f(x)=x^{2}$. If we fix any $\theta >0$, then for $x>0$, $f(x+\theta)-f(x)=2\theta x + \theta^{2} \geq 2\theta x$, and hence, as x becomes large, the difference between $f(x+\theta)$ and $f(x)$ also becomes large for every fixed $\theta>0$. So, for say $\varepsilon=1$, we cannot choose a $\delta>0$ such that $|f(x+\theta)-f(x)|<\varepsilon$ for all $\theta<\delta$ and all x. We thus have the following definition:

Definition:

Let $f: D \rightarrow \Re$ be a continuous function, where $D=\Re$ or $[a,b]$ or $(a,b)$. Then, f is said to be uniformly continuous if for all $\varepsilon>0$, there exists a $\delta>0$ such that

$|f(x)-f(y)|<\varepsilon$ for all $x, y \in D$ with $|x-y|<\delta$

We have seen above that a continuous function need not be uniformly continuous. When $D=[a,b]$, however, every continuous function is uniformly continuous, as the next result shows.
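The failure of uniform continuity for $f(x)=x^{2}$ can also be seen numerically; a small sketch (the gap $\theta$ and the sample points are our own choice):

```python
# f(x) = x**2: for a fixed gap theta, f(x + theta) - f(x) = 2*theta*x + theta**2,
# which grows without bound as x grows, so no single delta serves every x.

theta = 0.001
gaps = [(x + theta) ** 2 - x ** 2 for x in (1.0, 100.0, 10000.0)]
print(gaps)  # the differences grow roughly like 2 * theta * x
```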

Theorem:

Let $f:[a,b] \rightarrow \Re$ be continuous. Then, f is uniformly continuous.

Proof.

Fix $\varepsilon > 0$. The continuity of f implies that for every $x \in [a,b]$, we can choose $\delta_{x}>0$ such that

$|f(x)-f(y)|<\frac{\varepsilon}{2}$ whenever $|y-x|<\delta_{x}$ and $y \in [a,b]$

Now, let $I_{x}=(x-\frac{1}{2}\delta_{x}, x +\frac{1}{2}\delta_{x})$

Then, clearly $\{I_{x}: x \in [a,b] \}$ covers $[a,b]$ as $x \in I_{x}$. By the Heine Borel theorem, we can get finitely many intervals out of this family, $I_{x_{1}}$, $I_{x_{2}}$, …, $I_{x_{m}}$ such that

$I_{x_{1}} \bigcup I_{x_{2}} \bigcup \ldots \bigcup I_{x_{m}} \supset [a,b]$.

Let $\delta = \min \{\frac{1}{2}\delta_{x_{1}}, \frac{1}{2}\delta_{x_{2}}, \ldots, \frac{1}{2}\delta_{x_{m}} \}$

Then, $\delta>0$ (note that minimum of finitely many positive numbers is always positive). Next we claim that if $x, y \in [a,b]$, $|x-y|<\delta$ then $|f(x)-f(y)|<\varepsilon$

Since $x \in [a,b] \subseteq I_{x_{1}} \bigcup \ldots \bigcup I_{x_{m}}$, we can find $k \leq m$ such that $x \in I_{x_{k}}$, that is, $|x-x_{k}|<\frac{1}{2}\delta_{x_{k}}$. Now, $|y-x_{k}| \leq |x-y|+|x-x_{k}| \leq \delta +\frac{1}{2}\delta_{x_{k}} \leq \delta_{x_{k}}$.

Hence, $|f(y)-f(x_{k})| < \frac{\varepsilon}{2}$ and $|f(x)-f(x_{k})| < \frac{\varepsilon}{2}$ and therefore, $|f(y)-f(x)|<\varepsilon$. QED.

More later,

Nalin Pithwa

### Exponentials and logarithms

We continue this topic after the intermediate value theorem posted earlier.

For $a>1$, define $f: \Re \rightarrow \Re$ by $f(x)=a^{x}$. It is easily seen that $f(x) > f(y)$ if $x>y$. This shows that f is one-to-one. Further, $\lim_{x \rightarrow \infty}f(x)=\infty$, whereas $\lim_{x \rightarrow -\infty}f(x)=0$. That f is onto $\Re_{+}$ follows from the intermediate value theorem. Thus, $f:\Re \rightarrow \Re_{+}$ defined by $f(x)=a^{x}$ is bijective. So there is a unique map

$g: \Re_{+} \rightarrow \Re$

such that $f(g(y))=y$ for every y in $\Re_{+}$ and $g(f(x))=x$ for every x in $\Re$.

This function g is what we call the logarithm function of y to the base a, written as $\log_{a}{y}$. In fact, the logarithm is a continuous function.

For $y_{0} \in \Re_{+}$, $\varepsilon>0$, let $\delta=min\{ a^{x_{0}+\varepsilon}-a^{x_{0}}, a^{x_{0}}-a^{x_{0}-\varepsilon}\}$, where $x_{0}=\log_{a}{y_{0}}$. Then, we have for

$|y_{0}-y| < \delta$, $a^{x_{0}-\varepsilon} \leq y_{0}-\delta < y < y_{0}+\delta \leq a^{x_{0}+\varepsilon}$, or

$x_{0}-\varepsilon < \log_{a}{y} < x_{0}+\varepsilon$, or

$g(y_{0})-\varepsilon < g(y) < g(y_{0})+\varepsilon$.

Exercise.

If $f:\Re \rightarrow \Re$ is an increasing continuous function, show that it is bijective onto its range and its inverse is also continuous.

With the help of the logarithm function, we can evaluate $\lim_{x \rightarrow 0}\frac{a^{x}-1}{x}$.

Let $a^{x}=1+y$ so that $y \rightarrow 0$ as $x \rightarrow 0$. Also, $x=\log_{a}{(1+y)}$. So, we have

$\lim_{x \rightarrow 0}\frac{a^{x}-1}{x}=\lim_{y \rightarrow 0}\frac{y}{\log_{a}{(1+y)}}=\lim_{y \rightarrow 0}\frac{1}{\frac{1}{y}\log_{a}{(1+y)}}=\lim_{y \rightarrow 0}\frac{1}{\log_{a}{(1+y)^{\frac{1}{y}}}}$, that is,

$\frac{1}{\log_{a}{e}}=\log_{e}{a}$.

In the step before last, we have used the fact that the logarithm is a continuous function and that $\lim_{y \rightarrow 0}{(1+y)^{1/y}}=e$, while in the last step we have observed that $(\log_{a}{e})^{-1}=\log_{e}{a}$ (Exercise).
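The limit just computed can be watched numerically; a small sketch with $a=2$ (the sample values of x are ours):

```python
import math

# Difference quotients (a**x - 1)/x for a = 2 approach log_e(2) as x -> 0.
a = 2.0
for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, (a ** x - 1.0) / x)
print(math.log(a))  # the limit, log_e(2), roughly 0.6931
```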

More later,

Nalin Pithwa

### Intermediate value theorem

Intermediate value theorem.

Let $f:[a,b] \rightarrow \Re$ be continuous. Suppose for $x_{1},x_{2} \in [a,b]$, $f(x_{1}) \neq f(x_{2})$. If c is a real number between $f(x_{1})$ and $f(x_{2})$, then there is an $x_{0}$ between $x_{1}$ and $x_{2}$ such that $f(x_{0})=c$.

Proof.

Define $g:[a,b] \rightarrow \Re$ by $g(x)=f(x)-c$. Then, g is a continuous function such that $g(x_{1})$ and $g(x_{2})$ are of opposite signs. The assertion of the theorem amounts to saying that there is a point $x_{0}$ between $x_{1}$ and $x_{2}$ such that $g(x_{0})=0$. Without loss of generality, we may take $g(x_{1})>0$ and $g(x_{2})<0$ (otherwise, replace g by $-g$). If $g(\frac{x_{1}+x_{2}}{2})=0$, we are done: take $x_{0}=\frac{x_{1}+x_{2}}{2}$. If $g(\frac{x_{1}+x_{2}}{2})>0$, we write $\frac{x_{1}+x_{2}}{2}=a_{1}$ and $x_{2}=b_{1}$; otherwise, write $a_{1}=x_{1}$ and $b_{1}=\frac{x_{1}+x_{2}}{2}$, so that we have $g(a_{1})>0$ and $g(b_{1})<0$.

If $g(\frac{a_{1}+b_{1}}{2})>0$, write $\frac{a_{1}+b_{1}}{2}=a_{2}$ and $b_{1}=b_{2}$, otherwise write $a_{2}=a_{1}$ and $b_{2}=\frac{a_{1}+b_{1}}{2}$, so that we have $g(a_{2})>0$ and $g(b_{2})<0$. We could continue this process and find sequences $(a_{n})_{n=1}^{\infty}$,

$(b_{n})_{n=1}^{\infty}$ with $g(a_{n})>0$ and $g(b_{n})<0$ and

$a_{1} \leq a_{2} \leq \ldots \leq a_{n} \leq a_{n+1} \leq \ldots \leq x_{2}$,

$b_{1} \geq b_{2} \geq \ldots \geq b_{n} \geq b_{n+1} \geq \ldots \geq x_{1}$.

Since $(a_{n})_{n=1}^{\infty}$ is a monotonically non-decreasing sequence bounded above, it must converge. Suppose it converges to $\alpha$. Similarly, $(b_{n})_{n=1}^{\infty}$ is monotonically non-increasing, bounded below and therefore converges to, say, $\beta$. We further note that $b_{n}-a_{n}=\frac{x_{2}-x_{1}}{2^{n}} \rightarrow 0$ as $n \rightarrow \infty$ implying $\alpha=\beta$. Let us call this $x_{0}$. By the continuity of g, we have $\lim_{n \rightarrow \infty}g(a_{n})=g(x_{0})=\lim_{n \rightarrow \infty}g(b_{n})$, and since $g(a_{n})>0$ for all n, we must have $g(x_{0}) \geq 0$ and at the same time since $g(b_{n})<0$ for all n, we must also have $g(x_{0}) \leq 0$. This implies $g(x_{0})=0$. QED.

Corollary.

If f is a continuous function in an interval I and $f(a)f(b)<0$ for some $a,b \in I$, then there is a point c between a and b for which $f(c)=0$. (Exercise).

The above result is often used to locate the roots of equations of the form $f(x)=0$.

For example, consider the equation: $f(x) \equiv x^{3}+x-1=0$.

Note that $f(0)=-1$ whereas $f(1)=1$. This shows that the above equation has a root between 0 and 1. Now try $x=0.5$: $f(0.5)=-0.375$. So there must be a root of the equation between 0.5 and 1. Try $0.75$: $f(0.75)>0$, which means that the root is between 0.5 and 0.75. So, we may try $0.625$: $f(0.625)<0$. So the root is between 0.625 and 0.75. Now, if we take the approximate root to be 0.6875, then we are away from the exact root by at most a distance of 0.0625. If we continue this process further, we shall get better and better approximations to the root of the equation.
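The halving procedure just described is exactly the bisection method. Here is a minimal sketch (the function name and tolerance are our own choices):

```python
# Bisection search for a root of f(x) = x**3 + x - 1 in [0, 1],
# following the interval-halving argument in the text.

def bisect(f, lo, hi, tol=1e-6):
    """Halve the bracketing interval until it is shorter than tol.
    Assumes f(lo) and f(hi) have opposite signs."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0:   # root lies in the left half
            hi = mid
        else:                     # root lies in the right half
            lo = mid
    return (lo + hi) / 2.0

f = lambda x: x ** 3 + x - 1.0
root = bisect(f, 0.0, 1.0)
print(root)  # roughly 0.6823
```

After n halvings the error is at most $(b-a)/2^{n}$, matching the error bound of 0.0625 after four steps above.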

Exercise.

Find the cube root of 10 using the above method correct to 4 places of decimal.

More later,

Nalin Pithwa

### Some elementary properties of limits

Some elementary properties of limits.

1) Suppose for functions f and g, $\lim_{x \rightarrow a}f(x)$ and $\lim_{x \rightarrow a}g(x)$ exist. Then, we have

(a) $\lim_{x \rightarrow a}(f(x)+g(x))=\lim_{x \rightarrow a}f(x)+\lim_{x \rightarrow a}g(x)$

(b) $\lim_{x \rightarrow a}(f(x)g(x))=\lim_{x \rightarrow a}f(x) \cdot \lim_{x \rightarrow a}g(x)$

(c) $\lim_{x \rightarrow a}\frac{f(x)}{g(x)}=\frac{\lim_{x \rightarrow a} f(x)}{\lim_{x \rightarrow a}g(x)}$, provided $\lim_{x \rightarrow a}g(x) \neq 0$

(2) Suppose f is continuous at a, then $\lim_{x \rightarrow a}f(x)$ is simply $f(a)$.

(3) Suppose f is defined in a deleted neighbourhood of $x_{0}$, and g is defined in a neighbourhood of $x_{0}$ and is continuous there. If $f(x)=g(x)$ for x in the deleted neighbourhood of $x_{0}$, then $\lim_{x \rightarrow x_{0}}f(x)=g(x_{0})$.

From the above, it is easy to see that

(a) $\lim_{x \rightarrow x_{0}}p(x)=p(x_{0})$ for a polynomial p.

(b) $\lim_{x \rightarrow x_{0}}\sin{x}=\sin{x_{0}}$

(c) $\lim_{x \rightarrow x_{0}}\frac{p(x)}{q(x)}=\frac{p(x_{0})}{q(x_{0})}$, for polynomials p, q if $q(x_{0}) \neq 0$.
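As a sanity check, the limit rules above can be verified numerically for a pair of polynomials (the particular p, q, and point a are our own choices):

```python
# Limit rules at a = 2 for p(x) = x**2 + 1 and q(x) = x - 3 (q(2) != 0):
# since both are polynomials, each limit is just the value at 2.
p = lambda x: x ** 2 + 1.0
q = lambda x: x - 3.0
a = 2.0
vals = [(p(x) + q(x), p(x) * q(x), p(x) / q(x)) for x in (2.1, 2.01, 2.001)]
print(vals)  # each triple approaches (p(2)+q(2), p(2)*q(2), p(2)/q(2))
print(p(a) + q(a), p(a) * q(a), p(a) / q(a))  # -> 4.0 -5.0 -5.0
```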

More later,

Nalin Pithwa

### Continuity: continued

Consider the theorem proved in the previous blog:

Theorem:

$f: \Re \rightarrow \Re$ is continuous at $x \in \Re$ if and only if $(f(x_{n}))_{n=1}^{\infty}$ converges to $f(x)$ whenever $(x_{n})_{n=1}^{\infty}$ converges to x, that is, $\lim_{n \rightarrow \infty}f(x_{n})=f(\lim_{n \rightarrow \infty} x_{n})$.

Observe that the above theorem states that not only does $(f(x_{n}))_{n=1}^{\infty}$ converge whenever $(x_{n})_{n=1}^{\infty}$ does, but also that $\lim f(x_{n})=f(\lim x_{n})$ for a continuous function. On the other hand, if f is not continuous it may happen that $\lim f(x_{n})$ exists, but does not equal $f(\lim x_{n})$. This leads to another notion called limit of a function.

Definition. Suppose $f: \Re \rightarrow \Re$. We say that $l$ is the limit of a function $f(x)$ as x tends to $x_{0}$ if for every $\varepsilon > 0$, there is a $\delta > 0$ such that

$|f(x)-l|< \varepsilon$ for $0 < |x-x_{0}|< \delta$.

In this case, we write $\lim_{x \rightarrow x_{0}} f(x)=l$.

This is the same thing as saying that $f(x)$ can be brought as close to $l$ as we please by bringing x sufficiently close to $x_{0}$. But we do not require the function to have the value $l$ at $x_{0}$. If it does have the value $l$ at $x_{0}$, then it is continuous at $x_{0}$.

From the above discussion, we see that for $\lim_{x \rightarrow x_{0}} f(x)$ to exist, it is not necessary for us to assume that f is defined at $x_{0}$. We only need to know whether or not $f(x)$ is coming close to a definite real number when x is coming close to $x_{0}$. Thus, for the limit of $f(x)$ (as x tends to $x_{0}$) to exist or not to exist, we need the function f to be defined on $\{ x: 0 < |x-x_{0}|< \delta\}$ for some $\delta >0$. This is the set of points in the interval $(x_{0}-\delta, x_{0}+\delta)$ from which $x_{0}$ has been deleted, that is, $\{ x:0 < |x-x_{0}|<\delta\}=(x_{0}-\delta, x_{0}) \bigcup (x_{0}, x_{0}+\delta)$. When we look at a function over such a punctured neighbourhood of a point, we try to see what happens to the function as we come close to the point, without actually reaching it. This is not really artificial at all, as many physical and mathematical exigencies force us to look at such situations.

Take the case of a particle moving in a straight line. With reference to a fixed point on the straight line, let $x(t)$ and $x(t_{0})$ be the positions of the particle at time t and $t_{0}$ respectively ($t > t_{0}$). So its average velocity in the interval of time $[t_{0},t]$ is given by $\frac{x(t)-x(t_{0})}{t-t_{0}}=f(t)$. Now the function f is defined for every t save $t_{0}$. But the instantaneous velocity should indeed be $\lim_{t \rightarrow t_{0}}f(t)$. In other words, as the interval of time $[t_{0},t]$ decreases, the average velocity should eventually stabilize to a certain number $v(t_{0})$, called the instantaneous velocity of the particle at the instant of time $t_{0}$. Only if this happens is it meaningful to talk of the instantaneous velocity of the particle. See what we are doing: we have a function f defined for every real number except a particular real number $t_{0}$. Then, we are trying to find out what happens to $f(t)$ as t comes closer and closer to $t_{0}$. We shall see later many such situations of finding the limit of $f(t)$ as $t \rightarrow t_{0}$, where $f(t)$ is defined for every t except $t=t_{0}$.

Digress. The post office function.

When we want to mail a letter enclosed in an envelope, we usually go to the post master with it to tell us the denomination of the stamp to be affixed. The post master weighs the letter and tells us the postage according to the weight. Here, we have a definite postage for a definite weight, we don’t have different rates for the same weight (for same kind of mail like registered or ordinary or speed post). The rate chart with the post master, for a particular kind of mail, truly is a function whose domain is the set of positive real numbers representing the weight of the mail, and the range again consists of positive real numbers representing the postage. We write the chart as a function f such that $p=f(w)$ meaning p is the postage to mail a letter of weight w.

Let us look again at this post office function.

This function is clearly defined for every x. But what happens to $f(x)$, for instance, when x comes close to 15? When we take $x=14.9, x=14.99, x=14.999$, we are, at each successive stage, coming closer to 15. Similarly, as we go through $x=15.1, x=15.01, x=15.001$, we are coming closer to 15 as well. But in the former case, we were approaching 15 through values of x less than 15, which is the same thing as approaching 15 from the left. In the latter case, we were approaching 15 through values of x larger than 15, that is, approaching from the right. It is clear that $\lim f(x)=2$ as x approaches 15 from the left, while $\lim f(x)=3$ when x approaches 15 from the right. So we have the following definition:

Definition.

Let $f:\Re - \{ a\} \rightarrow \Re$. We say that the left hand limit of $f(x)$ exists as x tends to a if there is a number $l$ such that for every $\varepsilon > 0$, we can find a $\delta > 0$ such that

$|f(x)-l|< \varepsilon$ for $a-\delta < x < a$.

In such a case, we call $l$ the left hand limit of $f(x)$ at a, and write $\lim_{x \rightarrow a_{-}} f(x)=l$. Similarly, we say that $f(x)$ has a right hand limit $r$ as x tends to a if for all

$\varepsilon >0$ there exists $\delta >0$ such that

$|f(x)-r|< \varepsilon$ for $a < x < a+\delta$.

In such a case, we call $r$ the right hand limit of $f(x)$ as x tends to a, and write $\lim_{x \rightarrow a_{+}}f(x)=r$. It is clear that when the left hand limit exists, it is unique. So is the case with the right hand limit. Thus, we are led to the conclusion:

The limit of $f(x)$ exists as x tends to a if and only if both the left hand and right hand limits of $f(x)$ exist as x approaches a, and they are equal. Moreover, if the common limit is equal to the value of the function at the point, then the function is continuous at that point.

In the case of the post office function, the left hand and right hand limits exist at 15 but are not equal. So there is no question of the limit existing at 15, much less of the function being continuous there.
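The two one-sided limits can be seen numerically. Here is a small Python sketch; the cut-off weight 15 and the postage values 2 and 3 are taken from the discussion above (an actual rate chart would, of course, have more steps):

```python
# Numerical sketch of the post office function near the cut-off weight 15.
# The step values 2 and 3 are those used in the discussion above.

def postage(w):
    """Postage for a letter of weight w (a step function)."""
    return 2 if w <= 15 else 3

# approach 15 from the left ...
left = [postage(15 - 10**(-k)) for k in range(1, 6)]
# ... and from the right
right = [postage(15 + 10**(-k)) for k in range(1, 6)]

print(left)   # [2, 2, 2, 2, 2] -- the left hand limit is 2
print(right)  # [3, 3, 3, 3, 3] -- the right hand limit is 3
```

The two sequences of values never agree, which is exactly why the (two-sided) limit at 15 fails to exist.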

More later,

Nalin Pithwa

### Some properties of continuous functions : continued

We now turn to the question whether $f(x_{n})$ approximates $f(x)$ when $x_{n}$ approximates x. This is the same as asking: if $(x_{n})_{n=1}^{\infty}$ is a sequence of real numbers converging to x, does $(f(x_{n}))_{n=1}^{\infty}$ converge to $f(x)$?

Theorem.

A function $f: \Re \rightarrow \Re$ is continuous at $x \in \Re$ if and only if $(f(x_{n}))_{n=1}^{\infty}$ converges to $f(x)$ whenever $(x_{n})_{n=1}^{\infty}$ converges to x, that is,

$\lim_{n \rightarrow \infty}f(x_{n})=f(\lim_{n \rightarrow \infty} x_{n})$.

Proof.

Suppose f is continuous at x and $(x_{n})_{n=1}^{\infty}$ converges to x. By continuity, for every $\varepsilon > 0$, there exists $\delta >0$ such that $|f(x)-f(y)|<\varepsilon$ whenever $|x-y|<\delta$. Since $(x_{n})_{n=1}^{\infty}$ converges to x, for this $\delta >0$, we can find an $n_{0}$ such that $|x-x_{n}|<\delta$ for $n > n_{0}$.

So $|f(x)-f(x_{n})|<\varepsilon$ for $n > n_{0}$ as $|x_{n}-x|<\delta$.

Conversely, suppose $(f(x_{n}))_{n=1}^{\infty}$ converges to $f(x)$ whenever $(x_{n})_{n=1}^{\infty}$ converges to x. We have to show that f is continuous at x. Suppose f is not continuous at x. That is to say, there is an $\varepsilon >0$ such that however small a $\delta$ we may choose, there will be a y satisfying $|x-y|<\delta$ and yet $|f(x)-f(y)|\geq \varepsilon$. So, for every n, let $x_{n}$ be such a number for which $|x-x_{n}|<(1/n)$ and $|f(x)-f(x_{n})| \geq \varepsilon$. Now the sequence $(x_{n})_{n=1}^{\infty}$ converges to x, but $(f(x_{n}))_{n=1}^{\infty}$ does not converge to $f(x)$, contradicting our hypothesis. So f must be continuous at x. QED.
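The theorem can be illustrated numerically. A minimal Python sketch, with the (continuous) function $f(x)=x^{2}$ and the sequence $x_{n}=1+1/n$, which converges to 1 (the particular f and sequence are illustrative choices, not part of the proof):

```python
# Sequential criterion for continuity: if x_n -> x, then f(x_n) -> f(x).
# Here f(x) = x**2 (continuous everywhere) and x_n = 1 + 1/n -> x = 1.

def f(x):
    return x * x

x = 1.0
x_n = 1.0 + 1.0 / 10000          # a term of the sequence far out
gap = abs(f(x_n) - f(x))         # |f(x_n) - f(x)|

print(gap)  # about 2e-4: f(x_n) is already close to f(1) = 1
```

Taking n larger drives the gap to 0, in line with $\lim_{n \rightarrow \infty} f(x_{n}) = f(\lim_{n \rightarrow \infty} x_{n})$.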

More later,

Nalin Pithwa

### Some properties of continuous functions

Theorem:

If $g,f : [a,b] \rightarrow \Re$ are continuous functions and c is a constant, then

a) $f+g$ is a continuous function.

b) $f-g$ is a continuous function.

c) cf is a continuous function.

d) fg is a continuous function.

Proof:

We shall only prove statement (d). Let $x_{0} \in [a,b]$, and choose and fix any $\varepsilon >0$. Since f is continuous at $x_{0}$, for the positive number $\frac{\varepsilon}{2(|g(x_{0})|+1)}$ there exists a $\delta_{1}>0$ such that

$|f(x)-f(x_{0})|< \frac{\varepsilon}{2(|g(x_{0})|+1)}$ whenever $|x-x_{0}|<\delta_{1}$

Since $||f(x)|-|f(x_{0})|| \leq |f(x)-f(x_{0})|$, we conclude that

$|f(x)|<|f(x_{0})|+\frac{\varepsilon}{2(|g(x_{0})|+1)}$, whenever $|x-x_{0}|<\delta_{1}$. Let $|f(x_{0})|+\frac{\varepsilon}{2(|g(x_{0})|+1)}=M$. Also, since g is continuous at $x_{0}$, for the positive number $\frac{\varepsilon}{2M}$, there is a $\delta_{2}>0$ such that $|g(x)-g(x_{0})|< \frac{\varepsilon}{2M}$ whenever $|x-x_{0}|<\delta_{2}$. Put $\delta=\min(\delta_{1},\delta_{2})$. Then, whenever $|x-x_{0}|<\delta$, we have

$|f(x)g(x)-f(x_{0})g(x_{0})|$ equals

$|f(x)g(x)-f(x)g(x_{0})+f(x)g(x_{0})-f(x_{0})g(x_{0})|$

$\leq |f(x)||g(x)-g(x_{0})|+|g(x_{0})(f(x)-f(x_{0}))|$

which equals

$|f(x)||g(x)-g(x_{0})|+|g(x_{0})||f(x)-f(x_{0})|$

$< M \cdot \frac{\varepsilon}{2M}+|g(x_{0})| \cdot \frac{\varepsilon}{2(|g(x_{0})|+1)}$

which is less than $\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon$. Hence fg is continuous at $x_{0}$. QED.

Observe that we have not claimed that the quotient of two continuous functions is continuous. The problem is obvious: $\frac{f(x)}{g(x)}$ cannot have any meaning at x for which $g(x)=0$. So, the question would be, if $g(x) \neq 0$ for every $x \in [a,b]$, is the function $h:[a,b] \rightarrow \Re$, defined by $h(x)=\frac{f(x)}{g(x)}$, continuous? The answer is yes. For a proof, we need a preliminary result.
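As a numerical illustration of the quotient case, here is a quick Python sketch; the particular choices $f(x)=x$ and $g(x)=x^{2}+1$ (which never vanishes on $\Re$) are mine, purely for illustration:

```python
# Quotient of continuous functions: h = f/g is continuous wherever g != 0.
# Here f(x) = x and g(x) = x**2 + 1; g never vanishes, so h is defined on all of R.

def h(x):
    return x / (x * x + 1.0)

# Small changes in the argument give correspondingly small changes in h:
x0 = 2.0
print(abs(h(x0 + 1e-8) - h(x0)))  # a very small number (on the order of 1e-9)
```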

Lemma.

If $g:[a,b] \rightarrow \Re$ is continuous and $g(x_{0}) \neq 0$, then there is an $m > 0$ and a $\delta >0$ such that if $x_{0}-\delta < x < x_{0}+\delta$, then $|g(x)|>m$.

Proof.

Let $|g(x_{0})|=2m$. Now, $m>0$. By continuity of g, there is a $\delta>0$ such that

$|g(x)-g(x_{0})|<m$ for $x_{0}-\delta < x < x_{0}+\delta$

But, $|g(x)-g(x_{0})| \geq ||g(x)|-|g(x_{0})||$ and hence, $-m<|g(x)|-|g(x_{0})|<m$, giving us

$m=|g(x_{0})|-m<|g(x)|$ for $x_{0}-\delta < x < x_{0}+\delta$. Hence the proof. QED.

The lemma says that if a continuous function does not vanish at a point, then there is an interval containing it in which it does not vanish at any point.

Theorem.

If $f,g :[a,b] \rightarrow \Re$ are continuous and $g(x) \neq 0$ for all x, then $h:[a,b] \rightarrow \Re$ defined by $h(x)=\frac{f(x)}{g(x)}$ is continuous.

The proof of the above theorem using the lemma above is left as an exercise.

Examples.

a) $f:\Re \rightarrow \Re$ defined by $f(x)=a_{0}$ for all $x \in \Re$, where $a_{0}$ is a constant, is continuous.

b) $f:\Re \rightarrow \Re$ defined by $f(x)=x$ is continuous.

c) $g:\Re \rightarrow \Re$ defined by $g(x)=x^{2}$ is a continuous function because $g(x)=f(x)f(x)$, where $f(x)=x$. Since f is continuous by (b), g must be continuous.

d) $h:\Re \rightarrow \Re$ by $h(x)=x^{n}$, n being a positive integer, is continuous by repeated application of the above reasoning.

e) $p: \Re \rightarrow \Re$ defined by $p(x)=a_{0}+a_{1}x+\ldots +a_{n}x^{n}$, where $a_{0}, a_{1}, \ldots , a_{n} \in \Re$, is also continuous. This is because of the fact that if

$f_{1}, f_{2}, f_{3}, \ldots, f_{n}:\Re \rightarrow \Re$ are defined by $f_{1}(x)=x$, $f_{2}(x)=x^{2}$, …, $f_{n}(x)=x^{n}$, then $a_{1}f_{1}, a_{2}f_{2}, \ldots, a_{n}f_{n}$ are also continuous functions. Hence,

$a_{0}+a_{1}f_{1}+ \ldots +a_{n}f_{n}=p$ is also a continuous function as the sum of continuous functions is a continuous function. Thus, we have shown that a polynomial is a continuous function.
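The argument above can be mirrored in code: a polynomial is built from nothing but constants, the identity function, sums, and products, each of which preserves continuity. A small Python sketch (the use of Horner's rule here is my own convenience, not part of the text):

```python
# Evaluate the polynomial a_0 + a_1 x + ... + a_n x^n.
# Only constants, x itself, sums, and products appear -- exactly the
# operations shown above to preserve continuity.

def poly(coeffs, x):
    """Evaluate a polynomial with coefficients [a_0, a_1, ..., a_n] at x
    using Horner's rule."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# p(x) = 1 - 3x + 2x^2 evaluated at x = 2:
print(poly([1.0, -3.0, 2.0], 2.0))  # 3.0, since 1 - 6 + 8 = 3
```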

f) Let p and q be polynomials. Let $\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} \in \Re$ be such that $q(\alpha_{1})=q(\alpha_{2})=\ldots=q(\alpha_{n})=0$ and $q(\alpha) \neq 0$ for $\alpha \neq \alpha_{1}, \alpha \neq \alpha_{2}, \ldots, \alpha \neq \alpha_{n}$.

Now, let $D = \Re - \{ \alpha_{1}, \alpha_{2}, \ldots , \alpha_{n}\}$.

Then, $h:D \rightarrow \Re$ defined by $h(x)=\frac{p(x)}{q(x)}$ is a continuous function. What we have said is that a rational function which is defined everywhere except on the finite set of zeroes of the denominator is continuous.

g) $f:\Re \rightarrow \Re$ defined by $f(x)=\sin{x}$ is continuous everywhere. Indeed, $f(x)-f(x_{0})=\sin{x}-\sin{x_{0}}=2\sin{\frac{x-x_{0}}{2}}\cos{\frac{x+x_{0}}{2}}$. Therefore,

$|f(x)-f(x_{0})|=2|\sin{\frac{(x-x_{0})}{2}}| |\cos{\frac{(x+x_{0})}{2}}|\leq |x-x_{0}|$ (because $|\sin{x}| \leq |x|$, where x is measured in radians, and $|\cos{x}| \leq 1$). This shows f is continuous at every $x_{0}$.

h) $f:\Re \rightarrow \Re$ defined by $f(x)=\cos{x}$ is continuous since

$|f(x)-f(x_{0})|=|\cos{x}-\cos{x_{0}}|=2|\sin{\frac{(x_{0}-x)}{2}}\sin{\frac{x+x_{0}}{2}}| \leq |x-x_{0}|$
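The two inequalities used in (g) and (h) can be spot-checked numerically; a quick Python sketch (the sample range and count are arbitrary choices):

```python
import math
import random

# Spot-check |sin x - sin x0| <= |x - x0| and |cos x - cos x0| <= |x - x0|,
# the inequalities used above to prove continuity of sin and cos.

random.seed(0)
TOL = 1e-12  # small allowance for floating-point rounding

for _ in range(1000):
    x = random.uniform(-10, 10)
    x0 = random.uniform(-10, 10)
    assert abs(math.sin(x) - math.sin(x0)) <= abs(x - x0) + TOL
    assert abs(math.cos(x) - math.cos(x0)) <= abs(x - x0) + TOL

print("both inequalities hold on all samples")
```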

i) $f:\Re - \{ (2n+1)\frac{\pi}{2}: n \in \mathbf{Z}\} \rightarrow \Re$ defined by $f(x)=\tan{x}$ is continuous. We had to omit numbers like $\ldots, \frac{-3\pi}{2}, \frac{-\pi}{2}, \frac{\pi}{2}, \frac{3\pi}{2}, \ldots$ from the domain of f as $\tan{x}$ cannot be defined for these values of x.

j) $f:\Re_{+} \rightarrow \Re$ defined by $f(x)=x^{1/n}$, n being a fixed positive integer, is a continuous function. Indeed,

$f(x)-f(a)=x^{1/n}-a^{1/n}$ which equals

$\frac{x-a}{x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}}a^{\frac{1}{n}}+\ldots +a^{\frac{n-1}{n}}}$

Choose $|x-a|<|a/2|$ to start with, so that $|a/2|<|x|<(3/2)|a|$. Thus,

$|x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}}a^{1/n}+\ldots+a^{\frac{n-1}{n}}|>|a|^{\frac{n-1}{n}} \times ((1/2)^{\frac{n-1}{n}}+(1/2)^{\frac{n-2}{n}}+\ldots+1)$

Given an $\varepsilon >0$, let

$\delta=min\{\frac{|a|}{2}, \varepsilon \times |a|^{\frac{n-1}{n}} \times \left( (1/2)^{\frac{n-1}{n}}+\ldots+1 \right)\}$.

Then, for $|x-a|<\delta$, we have

$|f(x)-f(a)|=\frac{|x-a|}{|x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}} \times a^{1/n}+\ldots+a^{\frac{n-1}{n}}|}< \varepsilon$.

It can be shown that f defined by $f(x)=x^{r}$, for $x>0$, is also a continuous function for every real number r.

k) Consider the function $f:\Re \rightarrow \Re$ defined by $f(x)=a^{x}$, where $a>0$. Is f a continuous function? This is left as an exercise. (Hint: it will suffice to prove continuity at $x=0$. This would follow from $\lim_{m \rightarrow \infty}a^{1/m}=1$.)
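The hint can be explored numerically; a quick Python sketch with the arbitrary choice $a=5$:

```python
# The hint above: continuity of a^x at x = 0 reduces to a^{1/m} -> 1
# as m -> infinity. A look at the sequence for a = 5:

a = 5.0
for m in (1, 10, 100, 1000, 10000):
    print(m, a ** (1.0 / m))
# the printed values approach 1, consistent with lim a^{1/m} = 1
```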

l) Suppose $f:\Re \rightarrow \Re$ is defined by $f(x)=1/x$ if $x \neq 0$, and $f(0)=0$. We can see that f is not continuous at 0, as $f(x)$ changes abruptly when x goes over from negative to positive values.

More later,

Nalin Pithwa