Monthly Archives: June 2015

Some properties of continuous functions : continued

We now turn to the question whether f(x_{n}) approximates f(x) when x_{n} approximates x. This is the same as asking: if (x_{n})_{n=1}^{\infty} is a sequence of real numbers converging to x, does (f(x_{n}))_{n=1}^{\infty} converge to f(x)?

Theorem.

f: \Re \rightarrow \Re is continuous at x \in \Re if and only if (f(x_{n}))_{n=1}^{\infty} converges to f(x) whenever (x_{n})_{n=1}^{\infty} converges to x, that is,

\lim_{n \rightarrow \infty}f(x_{n})=f(\lim_{n \rightarrow \infty} x_{n}).

Proof.

Suppose f is continuous at x and (x_{n})_{n=1}^{\infty} converges to x. By continuity, for every \varepsilon > 0, there exists \delta >0 such that |f(x)-f(y)|<\varepsilon whenever |x-y|<\delta. Since (x_{n})_{n=1}^{\infty} converges to x, for this \delta >0, we can find an n_{0} such that |x-x_{n}|<\delta for n > n_{0}.

So |f(x)-f(x_{n})|<\varepsilon for n > n_{0} as |x_{n}-x|<\delta.

Conversely, suppose (f(x_{n}))_{n=1}^{\infty} converges to f(x) whenever (x_{n})_{n=1}^{\infty} converges to x. We have to show that f is continuous at x. Suppose f is not continuous at x. That is to say, there is an \varepsilon >0 such that however small a \delta we may choose, there will be a y satisfying |x-y|<\delta yet |f(x)-f(y)|\geq \varepsilon. So for every n, let x_{n} be such a number for which |x-x_{n}|<(1/n) and |f(x)-f(x_{n})| \geq \varepsilon. Now we see that the sequence (x_{n})_{n=1}^{\infty} converges to x, but (f(x_{n}))_{n=1}^{\infty} does not converge to f(x), violating our hypothesis. So f must be continuous at x. QED.
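As a quick numerical sketch (not a proof), one can watch the sequential criterion at work in Python: for a continuous function, f(x_{n}) tracks f(x) along any x_{n} \rightarrow x, while a jump function fails. The functions and the sequence below are illustrative choices of our own, not from the theorem.

```python
# Numerical illustration (not a proof) of the sequential criterion:
# for continuous f, x_n -> x forces f(x_n) -> f(x); a jump breaks this.

def f_cont(x):
    return x * x + 1.0            # continuous everywhere

def f_jump(x):
    return 0.0 if x < 0 else 1.0  # jump at x = 0

xs = [(-1.0) ** n / n for n in range(1, 200)]  # x_n -> 0 from both sides

# continuous case: f(x_n) approaches f(0) = 1
assert abs(f_cont(xs[-1]) - f_cont(0.0)) < 1e-4

# jump case: f(x_n) keeps oscillating between 0 and 1
values = {f_jump(t) for t in xs[-10:]}
assert values == {0.0, 1.0}
```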

More later,

Nalin Pithwa

 

 

Some properties of continuous functions

Theorem: 

If g,f : [a,b] \rightarrow \Re are continuous functions and c is a constant, then

a) f+g is a continuous function.

b) f-g is a continuous function.

c) cf is a continuous function.

d) fg is a continuous function.

Proof:

We shall only prove statement (d). Choose and fix any \varepsilon >0. Since f is continuous at x_{0}, for the positive number \frac{\varepsilon}{2(|g(x_{0})|+1)} there exists a \delta_{1}>0 such that

|f(x)-f(x_{0})|< \frac{\varepsilon}{2(|g(x_{0})|+1)} whenever |x-x_{0}|<\delta_{1}.

Since ||f(x)|-|f(x_{0})|| \leq |f(x)-f(x_{0})|, we conclude that

|f(x)|<|f(x_{0})|+\frac{\varepsilon}{2(|g(x_{0})|+1)}, whenever |x-x_{0}|<\delta_{1}. Let M=|f(x_{0})|+\frac{\varepsilon}{2(|g(x_{0})|+1)}. Also, since g is continuous at x_{0}, for the positive number \frac{\varepsilon}{2M}, there is a \delta_{2}>0 such that |g(x)-g(x_{0})|< \frac{\varepsilon}{2M} whenever |x-x_{0}|<\delta_{2}. Put \delta=\min(\delta_{1},\delta_{2}). Then, whenever |x-x_{0}|<\delta, we have

|f(x)g(x)-f(x_{0})g(x_{0})| equals

|f(x)g(x)-f(x)g(x_{0})+f(x)g(x_{0})-f(x_{0})g(x_{0})|

\leq |f(x)||g(x)-g(x_{0})|+|g(x_{0})(f(x)-f(x_{0}))|

which equals

|f(x)||g(x)-g(x_{0})|+|g(x_{0})||f(x)-f(x_{0})|

< M \cdot \frac{\varepsilon}{2M}+|g(x_{0})| \cdot \frac{\varepsilon}{2(|g(x_{0})|+1)}

which is less than \frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. QED.
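The \delta constructed in this proof can be checked numerically. In the Python sketch below we pick f(x)=x, g(x)=x^{2}, x_{0}=1 and \varepsilon=10^{-3} (sample choices of our own), and use the elementary bound |g(x)-g(x_{0})| \leq 3|x-x_{0}| near x_{0}=1 to realize \delta_{2}.

```python
# Sketch: numerically check the delta constructed in the proof of (d)
# for f(x) = x, g(x) = x^2 at x0 = 1 (illustrative choices of our own).

x0, eps = 1.0, 1e-3
f = lambda x: x
g = lambda x: x * x

# delta_1: |f(x)-f(x0)| = |x-x0| < eps / (2(|g(x0)|+1))
delta1 = eps / (2 * (abs(g(x0)) + 1))
M = abs(f(x0)) + eps / (2 * (abs(g(x0)) + 1))
# delta_2: need |g(x)-g(x0)| < eps/(2M); near x0 = 1, |g(x)-g(x0)| <= 3|x-x0|
delta2 = eps / (2 * M) / 3
delta = min(delta1, delta2)

# sample points within (x0-delta, x0+delta) and verify the conclusion
for k in range(-100, 101):
    x = x0 + delta * k / 101.0
    assert abs(f(x) * g(x) - f(x0) * g(x0)) < eps
```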

Observe that we have not claimed that the quotient of two continuous functions is continuous. The problem is obvious: \frac{f(x)}{g(x)} cannot have any meaning at x for which g(x)=0. So, the question would be, if g(x) \neq 0 for every x \in [a,b], is the function h:[a,b] \rightarrow \Re, defined by h(x)=\frac{f(x)}{g(x)}, continuous? The answer is yes. For a proof, we need a preliminary result.

Lemma.

If g:[a,b] \rightarrow \Re is continuous and g(x_{0}) \neq 0, then there exist m > 0 and \delta >0 such that |g(x)|>m whenever x_{0}-\delta<x<x_{0}+\delta.

Proof.

Let |g(x_{0})|=2m. Now, m>0. By continuity of g, there is a \delta>0 such that

|g(x)-g(x_{0})|<m for x_{0}-\delta<x<x_{0}+\delta

But |g(x)-g(x_{0})| \geq ||g(x)|-|g(x_{0})||, and hence -m<|g(x)|-|g(x_{0})|<m, giving us

m=|g(x_{0})|-m<|g(x)| for x_{0}-\delta<x<x_{0}+\delta. Hence, the proof.

The lemma says that if a continuous function does not vanish at a point, then there is an interval containing it in which it does not vanish at any point.
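A small numerical illustration of the lemma, with g = \cos and x_{0} = 0 (sample choices of our own): here m = |g(x_{0})|/2 = 1/2, and \delta = 1 works because |\cos{x} - 1| \leq x^{2}/2 < 1/2 for |x| < 1.

```python
# Sketch of the lemma: if g is continuous and g(x0) != 0, then |g| stays
# above m = |g(x0)|/2 on some interval around x0.

import math

g = math.cos          # continuous, g(0) = 1 != 0
x0 = 0.0
m = abs(g(x0)) / 2.0  # m = 0.5

# for cos, delta = 1.0 works: |cos x - 1| <= x^2/2 < 0.5 for |x| < 1
delta = 1.0
for k in range(-99, 100):
    x = x0 + delta * k / 100.0
    assert abs(g(x)) > m
```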

Theorem.

If f,g :[a,b] \rightarrow \Re are continuous and g(x) \neq 0 for all x, then h:[a,b] \rightarrow \Re defined by h(x)=\frac{f(x)}{g(x)} is continuous.

The proof of the above theorem using the lemma above is left as an exercise.

Examples.

a) f:\Re \rightarrow \Re defined by f(x)=a_{0} for all x \in \Re, where a_{0} is a fixed real number, is continuous.

b) f:\Re \rightarrow \Re defined by f(x)=x is continuous.

c) g:\Re \rightarrow \Re defined by g(x)=x^{2} is a continuous function because g(x)=f(x)f(x), where f(x)=x. Since f is continuous by (b), g must be continuous.

d) h:\Re \rightarrow \Re by h(x)=x^{n}, n being a positive integer, is continuous by repeated application of the above reasoning.

e) p: \Re \rightarrow \Re defined by p(x)=a_{0}+a_{1}x+\ldots +a_{n}x^{n}, where a_{0}, a_{1}, \ldots , a_{n} are real constants, is also continuous. This is because if

f_{1}, f_{2}, f_{3}, \ldots, f_{n}:\Re \rightarrow \Re are defined by f_{1}(x)=x, f_{2}(x)=x^{2}, \ldots, f_{n}(x)=x^{n}, then a_{1}f_{1}, a_{2}f_{2}, \ldots, a_{n}f_{n} are also continuous functions. Hence,

a_{0}+a_{1}f_{1}+ \ldots +a_{n}f_{n}=p is also a continuous function as the sum of continuous functions is a continuous function. Thus, we have shown that a polynomial is a continuous function.

f) Let p and q be polynomials. Let \alpha_{1}, \alpha_{2}, \ldots, \alpha_{n} \in \Re be such that q(\alpha_{1})=q(\alpha_{2})=\ldots=q(\alpha_{n})=0 and q(\alpha) \neq 0 for \alpha \neq \alpha_{1}, \alpha \neq \alpha_{2}, \ldots, \alpha \neq \alpha_{n}.

Now, let D = \Re - \{ \alpha_{1}, \alpha_{2}, \ldots , \alpha_{n}\}.

Then, h:D \rightarrow \Re defined by h(x)=\frac{p(x)}{q(x)} is a continuous function. What we have said is that a rational function which is defined everywhere except on the finite set of zeroes of the denominator is continuous.

g) f:\Re \rightarrow \Re defined by f(x)=\sin{x} is continuous everywhere. Indeed, f(x)-f(x_{0})=\sin{x}-\sin{x_{0}}=2\sin{\frac{x-x_{0}}{2}}\cos{\frac{x+x_{0}}{2}}. Therefore,

|f(x)-f(x_{0})|=2|\sin{\frac{(x-x_{0})}{2}}| |\cos{\frac{(x+x_{0})}{2}}|\leq |x-x_{0}| (because |\sin{x}| \leq |x|, where x is measured in radians)

h) f:\Re \rightarrow \Re defined by f(x)=\cos{x} is continuous since

|f(x)-f(x_{0})|=|\cos{x}-\cos{x_{0}}|=2|\sin{\frac{(x_{0}-x)}{2}}\sin{\frac{x+x_{0}}{2}}| \leq |x-x_{0}|
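The Lipschitz bounds used in (g) and (h) can be spot-checked numerically; this Python sketch samples random pairs (x, x_{0}) (with a small rounding allowance):

```python
# Numerical check of the bounds |sin x - sin x0| <= |x - x0| and
# |cos x - cos x0| <= |x - x0| on random samples.

import math
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(-10, 10)
    x0 = random.uniform(-10, 10)
    assert abs(math.sin(x) - math.sin(x0)) <= abs(x - x0) + 1e-12
    assert abs(math.cos(x) - math.cos(x0)) <= abs(x - x0) + 1e-12
```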

i) f:\Re - \{ (2n+1)\frac{\pi}{2}: n \in \mathbf{Z}\} \rightarrow \Re defined by f(x)=\tan{x} is continuous. We had to omit numbers like \ldots, \frac{-3\pi}{2}, \frac{-\pi}{2}, \frac{\pi}{2}, \frac{3\pi}{2}, \ldots from the domain of f as \tan{x} cannot be defined for these values of x.

j) f:\Re_{+} \rightarrow \Re defined by f(x)=x^{1/n} is a continuous function. Indeed,

f(x)-f(a)=x^{1/n}-a^{1/n}, which equals

\frac{x-a}{x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}}a^{\frac{1}{n}}+\ldots +a^{\frac{n-1}{n}}}

Choose |x-a|<|a/2| to start with, so that |a/2|<|x|<(3/2)|a|. Thus,

|x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}}a^{1/n}+\ldots+a^{\frac{n-1}{n}}|>|a|^{\frac{n-1}{n}} \times ((1/2)^{\frac{n-1}{n}}+(1/2)^{\frac{n-2}{n}}+\ldots+1)

Given an \varepsilon >0, let

\delta=\min\{\frac{|a|}{2}, \varepsilon \times |a|^{\frac{n-1}{n}} \times \left( (1/2)^{\frac{n-1}{n}}+\ldots+1 \right)\}.

Then, for |x-a|<\delta, we have

|f(x)-f(a)|=\frac{|x-a|}{|x^{\frac{n-1}{n}}+x^{\frac{n-2}{n}} \times a^{1/n}+\ldots+a^{\frac{n-1}{n}}|}< \varepsilon.

It can be shown that f defined by f(x)=x^{r} is also a continuous function for every real number r.
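The explicit \delta from example (j) can be tested numerically; here n = 3, a = 8 and \varepsilon = 10^{-3} are illustrative values of our own choosing.

```python
# Check of the explicit delta from example (j) for f(x) = x**(1/n),
# with n = 3, a = 8.0, eps = 1e-3 (sample values).

n, a, eps = 3, 8.0, 1e-3
f = lambda x: x ** (1.0 / n)

# lower bound on the denominator: a^((n-1)/n) * ((1/2)^((n-1)/n) + ... + 1)
bound = a ** ((n - 1) / n) * sum(0.5 ** ((n - 1 - j) / n) for j in range(n))
delta = min(a / 2, eps * bound)

# sample points within (a - delta, a + delta) and verify |f(x)-f(a)| < eps
for k in range(-100, 101):
    x = a + delta * k / 101.0
    assert abs(f(x) - f(a)) < eps
```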

k) Consider the function f:\Re \rightarrow \Re defined by f(x)=a^{x}, where a>0. Is f a continuous function? This is left as an exercise. (Hint: It will suffice to prove continuity at x=0. This would follow from \lim_{m \rightarrow \infty}a^{1/m}=1.)

l) Suppose f:\Re \rightarrow \Re is defined by f(x)=1/x if x \neq 0, and f(0)=0. We can see that f is not continuous at 0, as f(x) changes abruptly when x goes over from negative to positive values.
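The sequential criterion from the earlier post makes this discontinuity concrete: along x_{n} = (-1)^{n}/n \rightarrow 0, the values f(x_{n}) = 1/x_{n} oscillate unboundedly instead of approaching f(0) = 0. A small Python sketch:

```python
# The sequential criterion exposes the discontinuity of f(x) = 1/x at 0:
# x_n = (-1)**n / n converges to 0, but f(x_n) blows up in both signs.

def f(x):
    return 1.0 / x if x != 0 else 0.0

xs = [(-1.0) ** k / k for k in range(1, 50)]
ys = [f(t) for t in xs]

assert abs(xs[-1]) < 0.03                # x_n -> 0
assert max(ys) > 40 and min(ys) < -40    # f(x_n) does not converge to f(0) = 0
```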

More later,

Nalin Pithwa

 

 

Limits and Continuity — reblogging

We have seen many examples of functions earlier. Let us also consider the example of the range r of a gun pointed at an angle \theta to the horizon. Gun, shell and other conditions remaining the same, we know that for every angle \theta we have a definite range r(\theta), giving rise to the function r. Suppose we know that the target is at a distance d from the gun and that r(\theta_{0})=d for some \theta_{0}. Then, we point our gun at an angle \theta_{0} to the horizon to hit the target. A little deviation in \theta is likely to cause an error in our hit. But we also know that if our hit is within a certain distance from the target, then our objective is achieved (depending on the shell). Now, if we want the hit to be within a distance \varepsilon > 0 from the target, can we adjust the deviation \delta in \theta around \theta_{0} accordingly? Another way of putting the same question is to ask: for a small change in the angle \theta, do we get a small change in the range r(\theta)? If yes, then we are allowed a little play in aiming our gun. If not, a small play might result in a wide miss. What we are asking is: for every \varepsilon >0, can we find a \delta >0 such that |r(\theta)-r(\theta_{0})|<\varepsilon whenever |\theta -\theta_{0}|<\delta? We shall come across similar questions in other situations too. For example, you might know how to obtain the value of \pi through the series called the Madhava-Gregory series. That method takes a long time to evaluate \pi correctly to, say, 4 places of decimal. There is also the formula:

\frac{\pi^{4}}{90}=\frac{1}{1^{4}}+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\ldots+\frac{1}{n^{4}}+\ldots.

To find the sum of the above series correct to, say, 4 places of decimal, it is enough to sum the first 25 terms. But the sum gives an approximate value of \pi^{4}/90. To get the value of \pi, we need to multiply the sum by 90 and then extract its fourth root. Now the question is: how would the error committed in evaluating \pi^{4}/90 be propagated in the subsequent calculations? Put in a different way, let us write x=\frac{\pi^{4}}{90} and f(x)=(90x)^{1/4}, and let x_{n} be the approximate value of x obtained by summing the first n terms of the series. So, the approximate value of \pi would be f(x_{n}). This leads naturally to the question: how is the error |f(x)-f(x_{n})| in the value of f(x) related to the error |x-x_{n}| in the value of x? Can we calculate f(x) correct to the desired accuracy by calculating x sufficiently accurately? If not, then this method of calculation is not very useful. If a small perturbation in x causes an abrupt change in f(x), then we should perhaps think of some other method of calculation. It may be noted that, philosophically, continuity forms the basis of large parts of the experimental sciences, where it is tacitly assumed that small errors in measurement will not lead to drastic changes in conclusions.
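The computation described above is easy to carry out; this Python sketch sums the first 25 terms of the series, multiplies by 90 and extracts the fourth root, and indeed recovers \pi to 4 decimal places:

```python
# Worked version of the pi^4/90 computation described above:
# sum the first 25 terms of sum 1/k^4, multiply by 90, take the fourth root.

import math

x25 = sum(1.0 / k ** 4 for k in range(1, 26))   # approximates pi^4/90
pi_approx = (90.0 * x25) ** 0.25                # f(x) = (90 x)^(1/4)

assert abs(pi_approx - math.pi) < 1e-4          # correct to 4 decimal places
```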

These ideas lead to  the definition of continuity of functions.

Definition. Let f: \Re \rightarrow \Re be a function. We say that f is continuous at x_{0} \in \Re if for every \varepsilon >0, we can find a \delta>0 such that |f(x)-f(x_{0})| < \varepsilon whenever |x-x_{0}|<\delta. This is to say that for a continuous function f, the value of f(x) can be restricted within the interval (f(x_{0})-\varepsilon, f(x_{0})+\varepsilon) by restricting the value of x within (x_{0}-\delta, x_{0}+\delta). Try to draw a figure based on this!

Note that for continuity at x_{0}, we need an interval containing x_{0} to be contained in the domain, and hence it is enough that the domain of f is an interval such as [a,b]. So we can define continuity of a function f:[a,b] \rightarrow \Re in the same way as above. At the end points a and b, we can only talk of right and left continuity, respectively.
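One can turn the definition into a crude numerical "checker" that samples the interval (x_{0}-\delta, x_{0}+\delta) for violations; this is a sampler, not a proof. For f(x)=3x at x_{0}=2 (an example of our own), any \delta \leq \varepsilon/3 works, while \delta = \varepsilon does not:

```python
# A crude epsilon-delta "checker" (a sampler, not a proof): given f, x0,
# eps and a candidate delta, sample the interval and look for violations.

def respects(f, x0, eps, delta, samples=1000):
    for k in range(-samples, samples + 1):
        x = x0 + delta * k / (samples + 1)
        if abs(f(x) - f(x0)) >= eps:
            return False
    return True

# f(x) = 3x at x0 = 2 with eps = 0.3: delta = eps/3 works, delta = eps fails
f = lambda x: 3.0 * x
assert respects(f, 2.0, 0.3, 0.3 / 3)
assert not respects(f, 2.0, 0.3, 0.3)
```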

More later,

Nalin Pithwa

 

 

Injections, surjections and bijections

A function f: A \rightarrow B is called an injection if f(a)=f(a^{'}) implies a=a^{'}. There is another name for this kind of function. It is also called a one-to-one function or an injective map. A map f:A \rightarrow B is one-one if two distinct elements of A have distinct images under f.

A function f: A \rightarrow B is called a surjection if for every b \in B there is an element a \in A such that f(a)=b. In other words, f(A)=B. That is to say, every element of B is an image of some element of A under f. A surjective map is also called an onto map.

A map f:A \rightarrow B which is both one-to-one and onto is called a bijection or a bijective map.

Examples.

1) Suppose f: \Re \rightarrow \Re is defined by f(x) =\cos{x}. It is clear that this is neither one-to-one nor onto. Indeed because f(x)=f(x+2\pi), it cannot be one-to-one. Since f(x) never takes a value below -1 or above 1, it cannot be onto.

2) f: \Re \rightarrow \Re defined by

f(x)= x, if x \geq 0 and f(x)=x-1, if x<0 is one-to-one but not onto as -1/2 is never attained by the function.

3) f:\Re \rightarrow \Re defined by f(x)=\frac{x}{1+|x|} is one-to-one but not onto.

Warning.

Trigonometric functions like sine and cosine are neither one-to-one nor onto. So, how does one define \sin^{-1}{x} or \cos^{-1}{x}? Actually, there is ambiguity in defining these. If we write \sin^{-1}{x}=\theta, it means that \sin{\theta}=x. It is easily seen that there is no such \theta if x is more than 1 or less than -1. Thus, the domain of \sin^{-1}{x} or \cos^{-1}{x} must be [-1,1]. Then, again

\sin{\theta}=x has many solutions \theta for the same x. For example, \sin{(\pi/6)}=\sin{(5\pi/6)}=1/2. So, which of \pi/6 or 5\pi/6 should claim to be the value of \sin^{-1}{(1/2)}? In such a case, we agree to take only one value in a definite way. For 0 \leq x \leq 1, we choose 0 \leq \theta \leq \pi/2 such that \sin{\theta}=x. It is obvious that there is only one such \theta. Similarly, for -1 \leq x <0, we choose -\pi/2 \leq \theta < 0 such that \sin{\theta}=x. Thus, this way of choosing \theta such that \sin{\theta}=x for -1 \leq x \leq 1 has no ambiguity. Such a value of the inverse circular function is called its principal value, though we could choose another set of values with equal ease.
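Python's standard library follows exactly this principal-value convention: math.asin maps [-1,1] into [-\pi/2, \pi/2] and math.acos maps [-1,1] into [0, \pi].

```python
# Python's math.asin / math.acos return principal values:
# asin maps [-1,1] to [-pi/2, pi/2], acos maps [-1,1] to [0, pi].

import math

assert abs(math.asin(0.5) - math.pi / 6) < 1e-12   # pi/6, not 5*pi/6
assert abs(math.acos(0.5) - math.pi / 3) < 1e-12
assert -math.pi / 2 <= math.asin(-0.3) < 0         # negative x gives theta in [-pi/2, 0)
```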

Similar problems arise in the context of a function f:\Re \rightarrow \Re defined by f(x)=x^{2}. The function f is neither one-to-one nor onto. But, if we take f: \Re \rightarrow \Re_{+} as f(x)=x^{2}, then f is onto. We would like to define f^{-1}:\Re_{+} \rightarrow \Re as a function. In order that we are able to define f^{-1} as a function, we must agree, once and for all, on the sign of f^{-1}{(x)}. Indeed, since f(-1)=f(1)=1, which of -1 and 1 would we call f^{-1}{(1)}? In fact, f^{-1}{(x)} is what we would like to denote by \sqrt{x}. But, we must decide if we are taking the positive value or the negative value. Once we decide that, f^{-1} becomes a function.

More later,

Nalin Pithwa

 

 

 

Composition of functions

Suppose A, B and C are sets and f:A \rightarrow B and g:B \rightarrow C are functions. We define a function

h: A \rightarrow C by

h(a)=g(f(a)) for every a \in A.

It is easily seen that h is a well-defined function, as f(a) \in B for every a \in A and g(b) \in C for every b \in B. The function h is called the composition of the functions g and f and is denoted by g \circ f.
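The definition h(a) = g(f(a)) translates directly into code; this Python sketch (with illustrative f and g of our own) also shows that composition is order-sensitive:

```python
# The definition h = g o f, h(a) = g(f(a)), written directly.

def compose(g, f):
    # returns the function a -> g(f(a))
    return lambda a: g(f(a))

f = lambda x: x + 1      # f: A -> B
g = lambda y: y * y      # g: B -> C
h = compose(g, f)        # h = g o f

assert h(3) == 16        # g(f(3)) = g(4) = 16
assert h(3) != f(g(3))   # order matters: f(g(3)) = f(9) = 10
```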

Examples.

a) Suppose that in a forest, carnivorous animals sustain themselves by feeding only on herbivorous animals, and the nutrition level of a herbivorous animal depends on the vegetation around the animal. So the nutrition level of a carnivorous animal ultimately depends on the vegetation around the population of herbivorous animals it feeds on. Thus, if V is the set of densities of vegetation around the herbivorous animals and n is the level of nutrition of a herbivorous animal (for simplicity measured by its weight, though there are often more parameters depicting the level of nutrition of an animal), then n: V \rightarrow \Re is a function. Similarly, if c is the level of nutrition of a carnivorous animal, then c is a function depending on the level of nutrition of the herbivorous animals it feeds on:

c: \Re \rightarrow \Re

Thus, c \circ n: V \rightarrow \Re

is the level of nutrition of the carnivorous animal ultimately depending on the density of vegetation.

b) Take for example the force experienced by a moving charged particle in a magnetic field which is varying in time. We know that the force on the charged particle depends on the strength of the magnetic field in which it moves. Again, as the magnetic field strength varies with time, the force experienced is ultimately a function of time and position.

c) Suppose there is a lamp in the room. The intensity of illumination at a point in the room depends on the illuminating power of the lamp. The illuminating power of the lamp again depends on the voltage of the electricity supply which makes the lamp glow. So, ultimately, the intensity of illumination depends on the voltage of the power supply.

Exercise.

Give five more examples of composition of functions.

More later,

Nalin Pithwa

 

 

Vector Valued Functions

Recall that for a particle moving in a straight line, for every time t we have a real number x(t) representing the distance of the particle measured from a definite point at the time t. But what about a particle moving in a plane or in space? We know that at every time t, the particle has a position in the plane or the space. But a point in the plane (or the space) is represented by a pair (or a triple) of real numbers. Thus, for a particle moving in a plane, its position at time t is represented by a pair (x(t),y(t)). We may say that the pair is the value of a function whose range is the set of points in the plane \Re^{2} \equiv \Re \times \Re. Thus, the function representing the position of the particle at different times is a function

\overline{\gamma}: \Re \rightarrow \Re^{2}

such that \overline{\gamma} (t)=(x(t),y(t)) \in \Re^{2}, or if we write \overline{i} and \overline{j} for the unit vectors along the x and y axes respectively, then

\overline{\gamma}(t)=\overline{i}x(t)+\overline{j}y(t).

We can similarly write the position of a particle in space as a function \overline{\gamma}: \Re \rightarrow \Re^{3} such that

\overline{\gamma}(t)=\overline{i}x(t)+\overline{j}y(t)+\overline{k}z(t)
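A vector-valued function can be modelled in code as a function returning a tuple of coordinates; here \overline{\gamma}(t) traces a helix, an illustrative choice not taken from the text:

```python
# A vector-valued function gamma(t) = (x(t), y(t), z(t)) as a
# tuple-returning Python function; here a helix.

import math

def gamma(t):
    # position at time t: circle in the x-y plane, rising linearly in z
    return (math.cos(t), math.sin(t), t)

x, y, z = gamma(math.pi)
assert abs(x - (-1.0)) < 1e-12   # cos(pi) = -1
assert abs(y) < 1e-12            # sin(pi) = 0 (up to rounding)
assert z == math.pi
```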

Function of many variables

We have discussed before that the temperature at a point on Earth, at any instant, is a unique real number. Now every point on Earth is represented by a pair of real numbers depicting its latitude and longitude respectively (one ought to be careful in making this statement when it comes to a point on the date line. Indeed there is a little ambiguity in representing the longitude of a point on the date line. Besides, the poles have unique latitude, but what about their longitudes? Barring such ambiguity, every point on Earth can be represented uniquely by a pair of real numbers.) Thus, for a point with latitude \theta and longitude \phi, we have a definite temperature

T(\theta, \phi) at any instant. Thus,

T: (-\pi/2,\pi/2) \times (0,2\pi) \rightarrow \Re represents the temperature at a point.

Similarly, if we take any point in the atmosphere, the atmospheric pressure at the point depends on the latitude, the longitude and the altitude of the point. Indeed, for a point with latitude \theta, longitude \phi and altitude h, at a given instant, we have a unique real number p(\theta, \phi, h) called the atmospheric pressure at that point. Thus, p can be deemed as a function whose domain is a part of \Re^{3} and whose range is \Re.

Vector Fields

Electric field strength at a point is defined as the force experienced by a unit electric charge at that point. This means that with every point (x,y,z) \in \Re^{3} is associated a vector \overline{E}(x,y,z) \in \Re^{3}, which is called the electric field strength at the point (x,y,z). Thus, we may think of electric field strength as a map \overline{E}:\Re^{3} \rightarrow \Re^{3}. Similarly, magnetic field strength is a function

\overline{H}: \Re^{3} \rightarrow \Re^{3} and the velocity of a fluid is again a function (or map)  \overline{q}: \Re^{3} \rightarrow \Re^{3}.

Exercises:

a) Give five more examples of vector valued functions.

b) Give five more examples of functions of many variables.

More later,

Nalin Pithwa