Monthly Archives: September 2015

Differentiation — continued

Differentiation of the Sum, Difference, Product and Quotient of Two Functions

Theorem:

If f, g: I \rightarrow \Re are differentiable at x_{0} \in I, where I is an open interval in \Re, then so are f \pm g, fg at x_{0}. Furthermore, \frac{f}{g} is differentiable at x_{0} if g(x_{0}) \neq 0. We have

(a) derivative of f \pm g at x_{0} is f^{'}(x_{0}) \pm g^{'}(x_{0})

(b) derivative of f.g at x_{0} is f^{'}(x_{0})g(x_{0})+f(x_{0})g^{'}(x_{0})

(c) derivative of \frac{f}{g} at x_{0} is \frac{f^{'}(x_{0})g(x_{0})-f(x_{0})g^{'}(x_{0})}{{g(x_{0})}^{2}} if g(x_{0}) \neq 0

The proofs are straightforward and therefore omitted.
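Although the proofs are omitted, the rules are easy to check numerically. Here is a minimal Python sketch comparing a symmetric difference quotient with the formulas (a), (b) and (c); the choices f = \sin, g = e^{x} and the point x_{0}=0.7 are arbitrary illustrations, not part of the text.

```python
import math

def num_deriv(F, x0, h=1e-6):
    # symmetric difference quotient as a numerical stand-in for F'(x0)
    return (F(x0 + h) - F(x0 - h)) / (2 * h)

f, fp = math.sin, math.cos      # f and its known derivative
g, gp = math.exp, math.exp      # g and its known derivative
x0 = 0.7                        # an arbitrary point with g(x0) != 0

# (a) sum rule: (f + g)' = f' + g'
print(num_deriv(lambda x: f(x) + g(x), x0), fp(x0) + gp(x0))

# (b) product rule: (fg)' = f'g + fg'
print(num_deriv(lambda x: f(x) * g(x), x0),
      fp(x0) * g(x0) + f(x0) * gp(x0))

# (c) quotient rule: (f/g)' = (f'g - fg') / g^2
print(num_deriv(lambda x: f(x) / g(x), x0),
      (fp(x0) * g(x0) - f(x0) * gp(x0)) / g(x0) ** 2)
```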

We also have

Theorem (Chain Rule):

Let I and J be two intervals in \Re and f: I \rightarrow J, and g: J \rightarrow \Re be differentiable at x_{0} \in I and f(x_{0}) \in J respectively. Then, h \equiv g \circ f : I \rightarrow \Re is differentiable at x_{0} and

h^{'}(x_{0})=g^{'}(f(x_{0}))f^{'}(x_{0})

Note that h = g \circ f is defined as h(x)=g(f(x)).

Proof.

Let us write y=f(x) so that by the continuity of f at x_{0}, we have that as x \rightarrow x_{0}, y \rightarrow y_{0}=f(x_{0}). Since g is differentiable at y_{0}, we have

g(y)-g(y_{0})=(g^{'}(y_{0})+r_{1}(y,y_{0}))(y-y_{0})

Here r_{1}(y,y_{0}) \rightarrow 0 as y \rightarrow y_{0}. Again, since f is differentiable at x_{0}, we have

f(x)-f(x_{0})=(f^{'}(x_{0})+r_{2}(x,x_{0}))(x-x_{0}).

Here, r_{2}(x,x_{0}) \rightarrow 0 as x \rightarrow x_{0}. Thus, we have

g(f(x))-g(f(x_{0})) = (g^{'}(f(x_{0}))+r_{1}(y,y_{0}))(f(x)-f(x_{0})), which equals

(g^{'}(y_{0})+r_{1}(y,y_{0}))(f^{'}(x_{0})+r_{2}(x,x_{0}))(x-x_{0}), which in turn, equals

g^{'}(f(x_{0}))f^{'}(x_{0})(x-x_{0})+(x-x_{0})r_{3} where

r_{3}=g^{'}(f(x_{0}))r_{2}(x,x_{0})+f^{'}(x_{0})r_{1}(y,y_{0})+r_{1}(y,y_{0})r_{2}(x,x_{0}).

Since y \rightarrow y_{0} as x \rightarrow x_{0} (by the continuity of f), we have r_{1}(y,y_{0}) \rightarrow 0 as well, so r_{3} \rightarrow 0 as x \rightarrow x_{0} and hence,

h^{'}(x_{0})=g^{'}(f(x_{0}))f^{'}(x_{0}).

The formula above for the derivative of a composite function is what is usually called the Chain Rule.
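As a numerical sanity check, not a proof, here is a short Python sketch; the inner and outer functions below are arbitrary illustrative choices.

```python
import math

def num_deriv(F, x0, h=1e-6):
    # symmetric difference quotient approximating F'(x0)
    return (F(x0 + h) - F(x0 - h)) / (2 * h)

f = lambda x: x ** 2 + 1        # inner function f
fp = lambda x: 2 * x            # its derivative f'
g, gp = math.sin, math.cos      # outer function g and its derivative g'
x0 = 0.3

h_comp = lambda x: g(f(x))      # h = g o f
print(num_deriv(h_comp, x0))    # numerical h'(x0)
print(gp(f(x0)) * fp(x0))       # chain rule value g'(f(x0)) f'(x0)
```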

Differential Notation of Leibniz

For a differentiable function f, if we write y=f(x) and y+\triangle y=f(x+\triangle x), then we get

\lim_{\triangle x \rightarrow 0} \frac{\triangle y}{\triangle x}=\lim_{\triangle x \rightarrow 0}\frac{f(x+\triangle x)-f(x)}{\triangle x}=f^{'}(x)

The expression

\lim_{\triangle x \rightarrow 0}\frac{\triangle y}{\triangle x} is often written as \frac{dy}{dx}.

It is NOT true that \frac{dy}{dx} is the quotient of the limits of \triangle y and \triangle x, because both of them tend to zero. It should rather be thought of in terms of an operator (or operation): \frac{d}{dx} is the operation of differentiation, operating on the variable y, so that we have

\frac{d}{dx}(y)=\frac{dy}{dx}.

The operator \frac{d}{dx} has the property \frac{d}{dx}(y+u)=\frac{dy}{dx}+\frac{du}{dx} and

\frac{d}{dx}(cy)=c\frac{dy}{dx}, and for f and g, two differentiable functions with the domain of g containing the range of f, if we write y=f(x) and u=g(y), so that u=g(f(x)), then the chain rule gives \frac{du}{dx}=\frac{du}{dy} . \frac{dy}{dx}. In the case when y=f(x), so that \frac{dy}{dx}=f^{'}(x), we write, following the German mathematician Leibniz,

dy=f^{'}(x)dx,

and dy and dx are called the differentials of y and x respectively.

Remark:

Let f: \Re^{2} \rightarrow \Re be a function. We say that f is differentiable at (x_{0},y_{0}) \in \Re^{2} if we can find numbers A, B depending on (x_{0},y_{0}) only, so that

f(x,y) - f(x_{0},y_{0})=A(x-x_{0})+B(y-y_{0})+R(x,y,x_{0},y_{0}) such that

\lim \frac{R(x,y,x_{0},y_{0})}{\sqrt{(x-x_{0})^{2}+(y-y_{0})^{2}}}=0 as (x,y) \rightarrow (x_{0},y_{0})

Observe that

A=\lim_{x \rightarrow x_{0}}\frac{f(x,y_{0})-f(x_{0},y_{0})}{x-x_{0}} and B=\lim_{y \rightarrow y_{0}}\frac{f(x_{0},y)-f(x_{0},y_{0})}{y-y_{0}}

We call A and B the partial derivatives of f with respect to x and y respectively at (x_{0},y_{0}), and we write

A=\frac{\partial f}{\partial x}(x_{0},y_{0}), B=\frac{\partial f}{\partial y}(x_{0},y_{0})

Sometimes, we also write \frac{\partial f}{\partial x}(x_{0},y_{0}) = f_{x}(x_{0},y_{0}) and \frac{\partial f}{\partial y}(x_{0},y_{0})=f_{y}(x_{0},y_{0}). Again, as before, \frac{\partial}{\partial x} and \frac{\partial}{\partial y} may be thought of as operators which, operating on a function, give its partial derivatives with respect to x and y respectively.

Suppose that \phi: \Re^{2} \rightarrow \Re is differentiable, and that x and y: \Re \rightarrow \Re are differentiable functions. Furthermore, let u: \Re \rightarrow \Re be defined by

u(t)=\phi (x(t),y(t))

It is not difficult to show that (Exercise!)

u^{'}(t)=\frac{\partial \phi}{\partial x} x^{'}(t)+\frac{\partial \phi}{\partial y} y^{'}(t).

In Leibniz’s notation, it reads du=\frac{\partial u}{\partial x}dx + \frac{\partial u}{\partial y} dy.
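It is easy to test the formula u^{'}(t)=\frac{\partial \phi}{\partial x} x^{'}(t)+\frac{\partial \phi}{\partial y} y^{'}(t) numerically. A minimal Python sketch, where the particular \phi, x(t) and y(t) are illustrative assumptions:

```python
import math

phi = lambda x, y: x ** 2 * y + math.sin(y)   # an arbitrary smooth phi
phi_x = lambda x, y: 2 * x * y                # partial of phi w.r.t. x
phi_y = lambda x, y: x ** 2 + math.cos(y)     # partial of phi w.r.t. y

x = math.cos                    # x(t) = cos t, so x'(t) = -sin t
y = lambda t: t ** 2            # y(t) = t^2, so y'(t) = 2t
t0, h = 0.5, 1e-6

u = lambda t: phi(x(t), y(t))
numerical = (u(t0 + h) - u(t0 - h)) / (2 * h)   # difference quotient for u'(t0)
formula = (phi_x(x(t0), y(t0)) * (-math.sin(t0))
           + phi_y(x(t0), y(t0)) * (2 * t0))    # chain rule value
print(numerical, formula)       # the two values agree closely
```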

As an application of this idea, consider the notion of an equipotential surface in electrostatics. An electrically charged (infinite) cylindrical conductor is one such surface, and because of the symmetry it is enough to look at only a horizontal section of the cylinder, which is a closed curve; that is, the problem gets reduced to a 2-dimensional one. This equipotential curve is given by the equation \phi (x,y)=constant, say c, where \phi is the real valued potential function. What is the electric field outside the curve? Using Leibniz’s notation as described above, we can write d\phi=\frac{\partial \phi}{\partial x}dx + \frac{\partial \phi}{\partial y}dy.

Since \phi is constant along the curve, parametrized as (x(t),y(t)), the relation above reads as

\frac{\partial \phi}{\partial x} x^{'}(t) + \frac{\partial \phi}{\partial y} y^{'}(t)=0,

that is, the vector (\frac{\partial \phi}{\partial x}, \frac{\partial \phi}{\partial y}) is orthogonal to the tangent vector (x^{'}(t),y^{'}(t)).

Recalling that the electric field E at a point (x,y) outside has components E_{1}=-\frac{\partial \phi}{\partial x} and E_{2}=-\frac{\partial \phi}{\partial y}, we conclude that the electric field E is along the outward normal. For example, for a right circular cylinder, the electric field is along the radial direction of every section.
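For a concrete illustration of this orthogonality, take \phi(x,y)=x^{2}+y^{2}, whose equipotential curves are circles centred at the origin; this \phi is chosen only for the sketch below, not as a physically accurate electrostatic potential.

```python
import math

# illustrative potential phi(x,y) = x^2 + y^2; equipotentials are circles
phi_x = lambda x, y: 2 * x      # partial derivative of phi w.r.t. x
phi_y = lambda x, y: 2 * y      # partial derivative of phi w.r.t. y

t = 1.1                                  # parameter on the unit circle
x, y = math.cos(t), math.sin(t)          # point on the equipotential
xp, yp = -math.sin(t), math.cos(t)       # tangent vector (x'(t), y'(t))

E1, E2 = -phi_x(x, y), -phi_y(x, y)      # field components
print(E1 * xp + E2 * yp)                 # 0: E is orthogonal to the tangent
print(E1 / x, E2 / y)                    # equal ratios: E points radially
```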

We have seen that if f is differentiable at x_{0}, then f(x)-f(x_{0})-f^{'}(x_{0})(x-x_{0}) is of smaller order than (x-x_{0}) as x \rightarrow x_{0}.

Writing \triangle y=f(x)-f(x_{0}) and \triangle x = x-x_{0}, we get

\triangle y = f^{'}(x_{0}) \triangle x + r\triangle x

where r\triangle x is of smaller order than \triangle x. In other words, we are claiming that the increment in y is nearly proportional to the increment in x when it is *small*, which is the principle of proportional parts. We have put the word small within asterisks as it is a relative term. Let us consider an example.

Let f(x) = \sqrt {x} with x \geq 0. Then, f^{'}(x)=\frac{1}{2\sqrt{x}} for x > 0. So, f(16)=4, f(17)=f(16)+f^{'}(16).1+r.1, since \triangle x = 1. This gives us \sqrt{17}=4+\frac {1}{8} + r = 4.125+r, whereas computation on a hand calculator will give \sqrt{17}=4.1231 \ldots This shows that |r|<0.002 and thus the differential approximates the increment in f(x) correct at least to the second place of decimal.
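Here is the same computation done on a machine, a minimal Python sketch of the differential approximation above with f(x)=\sqrt{x} and x_{0}=16 as in the example:

```python
import math

x0, dx = 16.0, 1.0
fp = 1 / (2 * math.sqrt(x0))            # f'(16) = 1/8 for f(x) = sqrt(x)

approx = math.sqrt(x0) + fp * dx        # 4 + 1/8 = 4.125
exact = math.sqrt(x0 + dx)              # sqrt(17) = 4.1231...
print(approx, exact, abs(exact - approx))   # the error |r| is about 0.0019
```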

More later,

Nalin Pithwa


A bit challenging problem on complex numbers

Problem:

If \omega and \omega^{2} satisfy the equation

\frac{1}{a+x}+\frac{1}{b+x}+\frac{1}{c+x}+\frac{1}{d+x} = \frac{2}{x}

then find the value of \frac{1}{a+1} + \frac{1}{b+1} + \frac{1}{c+1} + \frac{1}{d+1}.

Solution:

We can write the given equation as

\sum {x(b+x)(c+x)(d+x)}=2(a+x)(b+x)(c+x)(d+x)

\Longrightarrow \sum {x(x^{3}+(b+c+d)x^{2}+(bc+cd+bd)x+bcd)}

= 2(x^{4}+(a+b+c+d)x^{3}+(ab+ac+ad+bc+bd+cd)x^{2}+(abc+abd+acd+bcd)x+abcd)

\Longrightarrow 2x^{4}+(a+b+c+d)x^{3}+0x^{2}-(abc+abd+acd+bcd)x-2abcd=0

This is a fourth degree equation whose two roots are \omega, \omega^{2}. Let \alpha, \beta be the other two roots. Then,

(\alpha + \beta)(\omega + \omega^{2}) + \alpha \beta + \omega . \omega^{2} = 0 (the sum of the products of the roots taken two at a time equals the coefficient of x^{2} divided by that of x^{4}, which is 0)

\Longrightarrow (\alpha + \beta)(-1)+\alpha \beta +1 =0

\Longrightarrow (1 - \alpha)(1 - \beta) = 0

\Longrightarrow \alpha = 1 or \beta = 1

Hence, x=1 is a root of the given equation, and substituting x=1 in it we get \frac{1}{a+1} + \frac{1}{b+1} + \frac{1}{c+1} + \frac{1}{d+1}=\frac{2}{1}=2.

Note that in deriving the fourth degree equation we used a basic technique of expansion or multiplication of several binomials.
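If you wish to double-check the expansion by machine, the following sketch (assuming the sympy library is available) confirms the coefficients of the quartic, in particular that the x^{2} term vanishes:

```python
from sympy import symbols, expand

a, b, c, d, x = symbols('a b c d x')

# left side times x*(a+x)(b+x)(c+x)(d+x), minus the right side so treated
lhs = sum(x * (p + x) * (q + x) * (r + x)
          for (p, q, r) in [(b, c, d), (a, c, d), (a, b, d), (a, b, c)])
rhs = 2 * (a + x) * (b + x) * (c + x) * (d + x)

quartic = expand(lhs - rhs)
print(quartic.coeff(x, 4))   # 2
print(quartic.coeff(x, 2))   # 0, as claimed
print(quartic.coeff(x, 0))   # -2*a*b*c*d
```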

More later,

Nalin Pithwa

Who was Pythagoras?

We recognize the name “Pythagoras” because it is attached to a theorem, one that most of us have grappled with at school. The square on the hypotenuse of a right-angled triangle is equal to the sum of the squares on the other two sides. That is, if you take any right-angled triangle, then the square of the longest side is equal to the sum of the squares of the other two sides. Well known as his theorem may be, the actual person has proved rather elusive, although we know more about him as a historical figure than we do about, say, Euclid. What we don’t know is whether he proved his eponymous theorem, and there are good reasons to suppose that even if he did, he wasn’t the first one to do so.

But more of that story later.

Pythagoras was Greek, born around 569 BC on the island of Samos in the north-eastern Aegean. (The exact date is disputed, but this one is wrong by at most 20 years.) His father, Mnesarchus, was a merchant from Tyre; his mother, Pythais, was from Samos. They may have met when Mnesarchus brought corn to Samos during a famine, and was publicly thanked by being made a citizen.

Pythagoras studied philosophy under Pherekydes. He probably visited another philosopher, Thales of Miletus. He attended lectures given by Anaximander, a pupil of Thales, and absorbed many of his ideas on cosmology and geometry. He visited Egypt, was captured by Cambyses II, the King of Persia, and taken to Babylon as a prisoner. There he learned Babylonian mathematics and musical theory. Later he founded the school of the Pythagoreans in the Italian city of Croton (now Crotone), and it is for this that he is best remembered. The Pythagoreans were a mystical cult. They believed that the universe is mathematical, and that various symbols and numbers have a deep spiritual meaning.

Various ancient writers attributed various mathematical theorems to the Pythagoreans, and by extension to Pythagoras — notably, his famous theorem about right-angled triangles. But we have no idea what mathematics Pythagoras himself originated. We don’t know whether the Pythagoreans could prove his theorem, or just believed it to be true. And there is evidence from the inscribed clay tablet known as Plimpton 322 that the ancient Babylonians may have understood the theorem 1200 years earlier — though they probably didn’t possess a proof, because Babylonians didn’t go much for proofs anyway.

More later,

Nalin Pithwa

An ode to Geometry

Euclid alone

Has looked on Beauty bare. Fortunate they

Who, though once only and then but far away,

Have heard her massive sandal set on stone.

— Edna St. Vincent Millay (1923)

Pre RMO type practice questions

  1. Let x_{1}, x_{2}, \ldots, x_{100} be positive integers such that x_{i}+x_{i+1}=k for all i, where k is a constant. If x_{10}=1, find the value of x_{1}.
  2. If a_{0}=1, a_{1}=1 and a_{n}=a_{n-1}a_{n-2}+1 for n > 1, then find out if a_{465}, a_{466} are even or odd.
  3. Two trains of equal length L, travelling at speeds V_{1} and V_{2} miles per hour in opposite directions, take T seconds to cross each other. Then, find L in feet (1 mile = 5280 feet).
  4. A salesman sold two pipes at Rs. 12 each. His profit on one was 20% and the loss on the other was 20%. Then, on the whole, what amount did he gain or lose or did he break even?
  5. What is the digit in the units position of the integer 1! +2! +3! + \ldots +99!?
  6. Find the value of the following expression:

(1+q)(1+q^{2})(1+q^{4})(1+q^{8})(1+q^{16})(1+q^{32})(1+q^{64}) where q \neq 1.

Good luck for the ensuing Oct Pre-RMO 🙂

Nalin Pithwa

How to Remember a Round Number

A traditional French rhyme goes like this*:

Que j’aime à faire apprendre

Un nombre utile aux sages!

Glorieux Archimède, artiste ingénieux,

Toi, de qui Syracuse loue encore le mérite!

( * A loose translation is:

How I like to make

The sages learn a useful number!

Glorious Archimedes, ingenious artist,

You whose merit Syracuse still praises.)

But to which ‘number useful to the sages’ does it refer? Counting the letters in each word, treating ‘j’ as a word with one letter and placing a decimal point after the first digit, we get

3.141 592 653 589 793 238 4626

which is \pi to the first 22 decimal places. Many similar mnemonics for \pi exist in many languages. In English, one of the best known is

How I want a drink, alcoholic, of course, after the heavy 

chapters involving quantum mechanics. One is, yes,

adequate even enough to induce some fun and pleasure

for an instant, miserably brief.

It probably stopped there because the next digit is a 0, and it’s not entirely clear how best to represent a word with no letters. Another is

Sir, I bear a rhyme excelling

In mystic force, and magic spelling

Celestial sprites elucidate

All my own striving can’t relate.
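Since these mnemonics simply encode digits as word lengths, they are easy to decode by machine. A small Python sketch (the helper digits_of is mine, and it handles neither the digit 0 nor two-digit counts):

```python
import re

def digits_of(mnemonic):
    # letter counts give the digits; apostrophes are ignored, so "can't"
    # counts as 4 letters
    words = re.findall(r"[A-Za-z']+", mnemonic)
    return [len(w.replace("'", "")) for w in words]

rhyme = ("Sir I bear a rhyme excelling in mystic force and magic spelling "
         "celestial sprites elucidate all my own striving can't relate")
print(digits_of(rhyme))
# [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6]
```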

An ambitious \pi-mnemonic featured in “The Mathematical Intelligencer” in 1986 (volume 8, page 56). This is an informal “house journal” for professional mathematicians. The mnemonic is a self-referential story encoding the first 402 decimals of \pi. It uses punctuation marks (ignoring full stops) to represent the digit zero, and words with more than 9 letters represent two consecutive digits — for instance, a word with 13 letters represents the digits 13 in that order. Oh, and any actual digit represents itself. The story begins like this:

For a time I stood pondering on circle sizes. The large

computer mainframe quietly processed all of its assembly

code. Inside my entire hope lay for figuring out an elusive

expansion. Value: pi. Decimals expected soon. I nervously

entered a format procedure. The mainframe processed the

request. Error. I, again entering it, carefully retyped. This

iteration gave zero error printouts in all — success.

To find out more about \pi-related mnemonics in various languages, please use the internet.

More later,

Nalin Pithwa

Differentiation

We have seen how the concept of continuity is naturally associated with attempts to model gradual changes. For example, consider the function f: \Re \rightarrow \Re given by f(x)=ax+b, in which the change in f(x) is proportional to the change in x. This simple looking function is often used to model many practical problems. One such case is given below:

Suppose 30 men working for 7 hours a day can complete a piece of work in 16 days. In how many days can 28 men working for 6 hours a day complete the work? It must be evident to most of the readers that the answer is \frac{16 \times 7 \times 30}{28 \times 6}=20 days.

(While solving this we have tacitly assumed that the amount of work done is proportional to the number of men working, to the number of hours each man works per day, and also to the number of days each man works. Similarly, Charles’s law for ideal gases states that, pressure remaining constant, the increase in volume of a mass of gas is proportional to the increase in temperature of the gas.)

But, there are exceptions to this as well. Galileo discovered that the distance covered by a body, falling from rest, is proportional to the square of the time for which it has fallen, and the velocity is proportional to the square root of the distance through which it has fallen. Similarly, Kepler’s third law tells us that the square of the period of a planet going round the sun is proportional to the cube of its mean distance from the sun.

These and many other problems involve functions that are not linear. If, for example, we plot the graph of the distance covered by a particle versus time, it is a straight line only when the motion is uniform. But we are seldom lucky enough to encounter only uniform motion. (Besides, uniform motion would be so monotonous. Perhaps there would be no life at all if all motions were uniform. Imagine a situation in which each body is in uniform motion. A body at rest would be eternally at rest, and those once in motion would never stop.) So the simple method of proportionality becomes quite inadequate to tackle such non-linear problems. The genius of Newton lay in looking at those problems which are next best to linear, the ones that are nearly linear.

We know that the graph of a linear function is a straight line. What Newton suggested was to look at functions, small portions of whose graphs look almost like a straight line (see Fig 1).

In Fig 1, the graph certainly is not a straight line. But a small portion of it looks almost like a straight line. To formalize this idea, we need the concept of differentiability.

Definition.

Let I be an open interval and f: I \rightarrow \Re be a function. We say that f is locally linear or differentiable at x_{0} \in I if there is a constant m such that

f(x)-f(x_{0})=m(x-x_{0})+r(x_{0},x)(x-x_{0})

or equivalently, for x in a punctured interval around x_{0},

\frac{f(x)-f(x_{0})}{x-x_{0}}=m+r(x_{0},x)

where r(x_{0},x) \rightarrow 0 as x \rightarrow x_{0}

What this means is that for small enough x-x_{0}, \frac{f(x)-f(x_{0})}{x-x_{0}} is nearly a constant or, equivalently, f(x)-f(x_{0}) is nearly proportional to the increment x-x_{0}. This is what is called the principle of proportional parts and is used very often in calculations with tables, when the number for which we are looking up the table is not found there.

Thus, if a function f is differentiable at x_{0}, then \lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}

exists and is called the derivative of f at x_{0} and denoted by f^{'}(x_{0}). So we write

\lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}=f^{'}(x_{0}).

To fix our ideas, we need to look at functions which are not differentiable at some point. For example, consider the function f: \Re \rightarrow \Re defined by f(x)=|x|.

This function, though continuous at every point, is not differentiable at x=0. In fact, \lim_{x \rightarrow 0_{+}}\frac{|x|}{x}=1, whereas \lim_{x \rightarrow 0_{-}}\frac{|x|}{x}=-1, so the difference quotient has no limit at 0. What all this means is that if one looks at the graph of f(x)=|x|, it has a sharp corner at the origin.

No matter how small a part of the graph containing the point (0,0) is taken, it never looks like a line segment. The reader can test for the non-differentiability of f(x)=|\sin{x}| at x=n\pi.
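To see the sharp corner numerically, here is a small Python sketch computing the one-sided difference quotients of |x| at 0 (the step sizes are arbitrary):

```python
f = abs                     # f(x) = |x|
for h in [0.1, 0.01, 0.001]:
    right = (f(0 + h) - f(0)) / h        # difference quotient from the right
    left = (f(0 - h) - f(0)) / (-h)      # difference quotient from the left
    print(h, right, left)                # always +1 and -1: no common limit
```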

This leads us to the notion of the direction of the graph at a point: Suppose f: I \rightarrow \Re is a function differentiable at x_{0} \in I, and let P and Q be the points (x_{0},f(x_{0})) and (x, f(x)) respectively in the graph of f. (see Fig 2).

The chord PQ has the slope \frac{f(x)-f(x_{0})}{x-x_{0}}. As x comes close to x_{0}, the chord tends to the tangent to the curve at (x_{0}, f(x_{0})). So, \lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}} really represents the slope of the tangent at (x_{0},f(x_{0})) (see Fig 3).

Similarly, if x(t) is the position of a moving point in a straight line at time t, then \frac{x(t)-x(t_{0})}{t-t_{0}} is its average velocity in the interval of time [t_{0},t]. Its limit as t goes to t_{0}, if it exists, will be its instantaneous velocity at the instant of time t_{0}. We have

x^{'}(t_{0})=\lim_{t \rightarrow t_{0}}\frac{x(t)-x(t_{0})}{t-t_{0}}, the instantaneous velocity at t_{0}.

If the limit of \frac{f(x)-f(x_{0})}{x-x_{0}} does not exist as x tends to x_{0}, the curve (x, f(x)) cannot have a tangent at (x_{0},f(x_{0})), as we saw in the case of f(x)=|x| at (0,0); the graph abruptly changes its direction. If we look at the motion of a particle which is moving with uniform velocity till time t_{0} and is abruptly brought to rest at that instant, then its graph would look as in Fig 4a.

This is also what we think happens when a perfectly elastic ball impinges on another ball of the same mass at rest, or when a perfectly elastic ball moving at a constant speed impinges on a hard surface (see Fig 4b). We see that there is a sharp turn in the space time graph of such a motion at time t=t_{0}. Recalling the interpretation of

x^{'}(t_{0})=\lim_{t \rightarrow t_{0}} \frac{x(t)-x(t_{0})}{t-t_{0}} as its instantaneous velocity at t=t_{0}, we see that in the situation described above, instantaneous velocity at t=t_{0} is not a meaningful concept.

We have already seen that continuous functions need not be differentiable at some points of their domain. Actually, there are continuous functions which are not differentiable anywhere. On the other hand, as the following result shows, every differentiable function is always continuous.

Theorem:

If a function is differentiable at x_{0}, then it is continuous there.

Proof:

If f is differentiable at x_{0}, then let \lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}}=l. Setting

r(x,x_{0})=\frac{f(x)-f(x_{0})}{x-x_{0}}-l, we see that \lim_{x \rightarrow x_{0}}r(x, x_{0})=0. Thus, we have

f(x)-f(x_{0})=(x-x_{0})l + (x-x_{0})r(x,x_{0})

Now, \lim_{x \rightarrow x_{0}} (f(x)-f(x_{0}))=\lim_{x \rightarrow x_{0}}(x-x_{0})l + \lim_{x \rightarrow x_{0}} (x-x_{0})r(x, x_{0})=0

This shows that f is continuous at x_{0}.

QED.

Continuity of f at x_{0} tells us that f(x)-f(x_{0}) tends to zero as x - x_{0} tends to zero. But, in the case of differentiability, f(x)-f(x_{0}) tends to zero at least as fast as x-x_{0}: the portion l(x-x_{0}) goes to zero no doubt, but the remainder |f(x)-f(x_{0})-l(x-x_{0})| goes to zero at a rate faster than that of |x-x_{0}|. This is how differentiation was conceived by Newton and Leibniz. They introduced a concept called an infinitesimal. Their idea was that when x-x_{0} is an infinitesimal, then so is f(x)-f(x_{0}), which is of the same order of infinitesimal as x-x_{0}. The idea of infinitesimals served them well, but had a little problem in its definition: the way they were introduced seemed to run against the Archimedean property. The definition of infinitesimals can be made rigorous, but we do not go into it here. However, we can still usefully deal with concepts and notation like:

(a) f(x)=\mathcal{O}(g(x)) as x \rightarrow x_{0} if there exists a K such that |f(x)| \leq K|g(x)| for x sufficiently near x_{0}.

(b) f(x)=\mathcal{o}(g(x)) as x \rightarrow x_{0} if \lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=0.

Informally, f(x)=\mathcal{o}(g(x)) means that f(x) is of smaller order than g(x) as x \rightarrow x_{0}. In this notation, f is differentiable at x_{0} if there is an l such that

|f(x)-f(x_{0})-l(x-x_{0})|=\mathcal{o}(|x-x_{0}|).
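A quick numerical illustration of this condition, with the arbitrary choices f(x)=x^{2}, x_{0}=1 (so that l=2):

```python
f = lambda x: x ** 2        # an illustrative differentiable function
x0, l = 1.0, 2.0            # l = f'(x0)

for dx in [0.1, 0.01, 0.001, 0.0001]:
    x = x0 + dx
    remainder = abs(f(x) - f(x0) - l * (x - x0))
    # the ratio tends to 0, i.e. the remainder is o(|x - x0|)
    print(dx, remainder / abs(x - x0))
```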

We shall return to this point again. Let us first give examples of derivatives of some functions.

Examples.

(The proofs are left as exercises.)

(a) f(x)=x^{n}, f^{'}(x_{0})=\lim_{x \rightarrow x_{0}}\frac{x^{n}-{x_{0}}^{n}}{x-x_{0}}=n{x_{0}}^{n-1}, n a positive integer.

(b) f(x)=x^{n} (x \neq 0, where n is a negative integer), f^{'}(x)=nx^{n-1}

(c) f(x)=e^{x}, f^{'}(x)=e^{x}

(d) f(x)=a^{x}, f^{'}(x)=a^{x}\log_{e}{a}
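While working the exercises, you can check each formula numerically against a difference quotient. A minimal Python sketch, with arbitrary choices of the point x_{0}, the power n and the base a:

```python
import math

def num_deriv(F, x0, h=1e-6):
    # symmetric difference quotient approximating F'(x0)
    return (F(x0 + h) - F(x0 - h)) / (2 * h)

x0, n, a = 1.3, 5, 2.0      # arbitrary point, power and base

print(num_deriv(lambda x: x ** n, x0), n * x0 ** (n - 1))       # (a)
print(num_deriv(lambda x: x ** (-3), x0), -3 * x0 ** (-4))      # (b)
print(num_deriv(math.exp, x0), math.exp(x0))                    # (c)
print(num_deriv(lambda x: a ** x, x0), a ** x0 * math.log(a))   # (d)
```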