
Can anyone have fun with infinite series?

Below is a list of finitely many puzzles on infinite series to keep you a bit busy !! 🙂 Note that these puzzles do have an academic flavour, especially the concepts of convergence and divergence of an infinite series.

Puzzle 1: A grandmother’s vrat (fast) requires her to keep an odd number of lamps of finite capacity lit in a temple at all times from 6pm to 6am the next morning. Each oil-filled lamp lasts 1 hour and burns oil at a constant rate. She is not allowed to light any lamp after 6pm, but she can light any number of lamps before 6pm and transfer oil from some to the others throughout the night while keeping an odd number of lamps lit all the time. How many fully-filled oil lamps does she need to complete her vrat?

Puzzle 2: Two number theorists, bored in a chemistry lab, played a game with a large flask containing 2 litres of a colourful chemical solution and an ultra-accurate pipette. The game was that they would take turns recalling a prime number p such that p+2 is also a prime number. Then, the first number theorist would pipette out \frac{1}{p} litres of the chemical and the second \frac{1}{p+2} litres. How many times do they have to play this game to empty the flask completely?

Puzzle 3: How far from the edge of a table can a deck of playing cards stably overhang if the cards are stacked on top of one another? And how many of them can overhang completely, clear of the edge of the table?

Puzzle 4: Imagine a tank that can be filled with infinite taps and can be emptied with infinite drains. The taps, turned on alone, can fill the empty tank to its full capacity in 1 hour, 3 hours, 5 hours, 7 hours and so on. Likewise, the drains opened alone, can drain a full tank in 2 hours, 4 hours, 6 hours, and so on. Assume that the taps and drains are sequentially arranged in the ascending order of their filling and emptying durations.

Now, starting with an empty tank, plumber A alternately turns on a tap for 1 hour and opens the drain for 1 hour, all operations done one at a time in a sequence. His sequence, by using t_{i} for i^{th} tap and d_{j} for j^{th} drain, can be written as follows: \{ t_{1}, d_{1}, t_{2}, d_{2}, \ldots\}_{A}.

When he finishes his operation, mathematically, after using all the infinite taps and drains, he notes that the tank is filled to a certain fraction, say, n_{A}<1.

Then, plumber B turns one tap on for 1 hour and then opens two drains for 1 hour each and repeats his sequence: \{ (t_{1},d_{1},d_{2}), (t_{2},d_{3},d_{4}), (t_{3},d_{5},d_{6}), \ldots \}_{B}.

At the end of his (B’s) operation, he finds that the tank is filled to a fraction that is exactly half of what plumber A had filled, that is, 0.5n_{A}.

How is this possible even though both have turned on all taps for 1 hour and opened all drains for 1 hour, although in different sequences?
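If you want to experiment before guessing, here is a quick numerical sketch. It assumes, as the stated durations imply, that tap i fills 1/(2i-1) of the tank per hour and drain j empties 1/(2j) per hour; the names level_A and level_B are mine, not the puzzle's.

```python
from math import log

# Hourly rates implied by the puzzle: tap i fills 1/(2i-1) of the tank,
# drain j empties 1/(2j). Each is run for exactly one hour.
N = 200000  # how many taps we simulate (drains follow each plumber's pattern)

# Plumber A: t1, d1, t2, d2, ... (one tap, then one drain)
level_A = sum(1/(2*i - 1) - 1/(2*i) for i in range(1, N + 1))

# Plumber B: (t1, d1, d2), (t2, d3, d4), ... (one tap, then two drains)
level_B = sum(1/(2*i - 1) - 1/(4*i - 2) - 1/(4*i) for i in range(1, N + 1))

print(level_A)   # close to ln 2
print(level_B)   # close to (1/2) ln 2
```

Both partial sums use exactly the same taps and drains, only in a different order; the two printed values hint at why the order matters.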

I hope you do have fun!!

-Nalin Pithwa.

Huygens’ Clock

Ref: Calculus and Analytic Geometry, G B Thomas and Finney, 9th edition.

The problem with a pendulum clock whose bob swings in a circular arc is that the frequency of the swing depends on the amplitude of the swing. The wider the swing, the longer it takes the bob to return to centre.

This does not happen if the bob can be made to swing in a cycloid. In 1673, Christiaan Huygens (1629-1695), the Dutch mathematician, physicist and astronomer who discovered the rings of Saturn, driven by a need to make accurate determinations of longitude at sea, designed a pendulum clock whose bob would swing in a cycloid. He hung the bob from a fine wire constrained by guards that caused it to draw up as it swung away from the centre. How were the guards shaped? They were cycloids, too.

Aufwiedersehen,

Nalin Pithwa.

Limits that arise frequently

We continue our presentation of basic stuff from Calculus and Analytic Geometry, G B Thomas and Finney, Ninth Edition. My express purpose in presenting these few proofs is to emphasize that Calculus is not just a recipe of calculation techniques. Or, to go a bit further, math is not just about calculation. I have a feeling that such thinking, nurtured/developed at a young age (while preparing for IITJEE Math, for example), makes one razor sharp.

We verify a few famous limits.

Formula 1:

If |x|<1, then \lim_{n \rightarrow \infty}x^{n}=0.

We need to show that to each \epsilon>0 there corresponds an integer N so large that |x^{n}|<\epsilon for all n greater than N. Since \epsilon^{1/n}\rightarrow 1 while |x|<1, there exists an integer N for which \epsilon^{1/N}>|x|. In other words,

|x^{N}|=|x|^{N}<\epsilon. Call this (I).

This is the integer we seek because, if |x|<1, then

|x^{n}|<|x^{N}| for all n>N. Call this (II).

Combining I and II produces |x^{n}|<\epsilon for all n>N, concluding the proof.
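As a sanity check, the N demanded by the proof can be computed directly by solving |x|^{N}<\epsilon for N; the helper name first_N below is my own.

```python
from math import ceil, log

# For |x| < 1 and a tolerance eps, the proof guarantees an N with |x|**N < eps.
# Taking logarithms: N > log(eps)/log(|x|) (both logs are negative).
def first_N(x, eps):
    return ceil(log(eps) / log(abs(x))) + 1

x, eps = 0.9, 1e-6
N = first_N(x, eps)
print(N, abs(x)**N)   # abs(x)**N is below eps, and stays below for larger n
```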

Formula 2:

For any number x, \lim_{n \rightarrow \infty}(1+\frac{x}{n})^{n}=e^{x}.

Let a_{n}=(1+\frac{x}{n})^{n}. Then, \ln {a_{n}}=\ln{(1+\frac{x}{n})^{n}}=n\ln{(1+\frac{x}{n})}\rightarrow x,

as we can see by the following application of l’Hopital’s rule, in which we differentiate with respect to n:

\lim_{n \rightarrow \infty}n\ln{(1+\frac{x}{n})}=\lim_{n \rightarrow \infty}\frac{\ln{(1+x/n)}}{1/n}, which in turn equals

\lim_{n \rightarrow \infty}\frac{(\frac{1}{1+x/n}).(-\frac{x}{n^{2}})}{-1/n^{2}}=\lim_{n \rightarrow \infty}\frac{x}{1+x/n}=x.

Now, let us apply the following theorem with f(x)=e^{x} to the above:

(a theorem for calculating limits of sequences) the continuous function theorem for sequences:

Let \{a_{n}\} be a sequence of real numbers. If a_{n} \rightarrow L and if f is a function that is continuous at L and defined at all a_{n}, then f(a_{n}) \rightarrow f(L).

So, in this particular proof, we get the following:

(1+\frac{x}{n})^{n}=a_{n}=e^{\ln{a_{n}}}\rightarrow e^{x}.
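A quick numerical check of Formula 2 (the dictionary name approximations is my own):

```python
from math import exp

# Numerical check of Formula 2: (1 + x/n)**n -> e**x as n grows.
x = 2.0
approximations = {n: (1 + x/n)**n for n in (10, 1000, 100000)}
for n, a_n in approximations.items():
    print(n, a_n)
print("e**2 =", exp(x))
```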

Formula 3:

For any number x, \lim_{n \rightarrow \infty}\frac{x^{n}}{n!}=0

Since -\frac{|x|^{n}}{n!} \leq \frac{x^{n}}{n!} \leq \frac{|x|^{n}}{n!},

all we need to show is that \frac{|x|^{n}}{n!} \rightarrow 0. We can then apply the Sandwich Theorem for Sequences (let \{a_{n}\}, \{b_{n}\} and \{c_{n}\} be sequences of real numbers; if a_{n}\leq b_{n}\leq c_{n} holds for all n beyond some index N, and if \lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}c_{n}=L, then \lim_{n\rightarrow \infty}b_{n}=L also) to conclude that \frac{x^{n}}{n!} \rightarrow 0.

The first step in showing that |x|^{n}/n! \rightarrow 0 is to choose an integer M>|x|, so that (|x|/M)<1. Now, let us use the rule (Formula 1, mentioned above) to conclude that (|x|/M)^{n}\rightarrow 0. We then restrict our attention to values of n>M. For these values of n, we can write:

\frac{|x|^{n}}{n!}=\frac{|x|^{n}}{1 \cdot 2 \cdots M \cdot (M+1)(M+2)\ldots n}, where there are (n-M) factors in the expression (M+1)(M+2)\ldots n, and

the RHS in the above expression is \leq \frac{|x|^{n}}{M!M^{n-M}}=\frac{|x|^{n}M^{M}}{M!M^{n}}=\frac{M^{M}}{M!}(\frac{|x|}{M})^{n}. Thus,

0\leq \frac{|x|^{n}}{n!}\leq \frac{M^{M}}{M!}(\frac{|x|}{M})^{n}. Now, the constant \frac{M^{M}}{M!} does not change as n increases. Thus, the Sandwich theorem tells us that \frac{|x|^{n}}{n!} \rightarrow 0 because (\frac{|x|}{M})^{n}\rightarrow 0.
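Here is a numerical illustration of Formula 3: even for a large |x|, the terms grow only until n passes |x|, then collapse toward zero.

```python
from math import factorial

# Numerical check of Formula 3: x**n / n! -> 0 even when |x| is large.
x = 20.0
terms = {n: x**n / factorial(n) for n in (10, 20, 50, 100)}
for n, t in terms.items():
    print(n, t)   # the terms rise until n is near x = 20, then fall steeply
```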

That’s all, folks !!

Aufwiedersehen,

Nalin Pithwa.

Cauchy’s Mean Value Theorem and the Stronger Form of l’Hopital’s Rule

Reference: Thomas, Finney, 9th edition, Calculus and Analytic Geometry.

Continuing our previous discussion of “theoretical” calculus or “rigorous” calculus, I am reproducing below the proof of the finite limit case of the stronger form of l’Hopital’s Rule :

L’Hopital’s Rule (Stronger Form):

Suppose that

f(x_{0})=g(x_{0})=0

and that the functions f and g are both differentiable on an open interval (a,b) that contains the point x_{0}. Suppose also that g^{'} \neq 0 at every point in (a,b) except possibly at x_{0}. Then,

\lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}}\frac{f^{'}(x)}{g^{'}(x)} ….call this equation I,

provided the limit on the right exists.

The proof of the stronger form of l’Hopital’s Rule is based on Cauchy’s Mean Value Theorem, a mean value theorem that involves two functions instead of one. We prove Cauchy’s theorem first and then show how it leads to l’Hopital’s Rule. 

Cauchy’s Mean Value Theorem:

Suppose that the functions f and g are continuous on [a,b] and differentiable throughout (a,b) and suppose also that g^{'} \neq 0 throughout (a,b). Then there exists a number c in (a,b) at which

\frac{f^{'}(c)}{g^{'}(c)} = \frac{f(b)-f(a)}{g(b)-g(a)}…call this II.

The ordinary Mean Value Theorem is the case where g(x)=x.

Proof of Cauchy’s Mean Value Theorem:

We apply the Mean Value Theorem twice. First we use it to show that g(a) \neq g(b). For if g(b) did equal g(a), then the Mean Value Theorem would give:

g^{'}(c)=\frac{g(b)-g(a)}{b-a}=0 for some c between a and b. This cannot happen because g^{'}(x) \neq 0 in (a,b).

We next apply the Mean Value Theorem to the function:

F(x) = f(x)-f(a)-\frac{f(b)-f(a)}{g(b)-g(a)}[g(x)-g(a)].

This function is continuous and differentiable where f and g are, and F(b) = F(a)=0. Therefore, there is a number c between a and b for which F^{'}(c)=0. In terms of f and g, this says:

F^{'}(c) = f^{'}(c)-\frac{f(b)-f(a)}{g(b)-g(a)}[g^{'}(c)]=0, or

\frac{f^{'}(c)}{g^{'}(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}, which is II above. QED.
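To see the theorem in action numerically, here is a sketch that finds the promised c by bisection for the sample pair f(x)=e^{x}, g(x)=x^{2} on [1,2] (my own choice of functions, not the text's):

```python
from math import exp

# Cauchy's MVT for f(x) = e**x, g(x) = x**2 on [a, b] = [1, 2]:
# solve f'(c)/g'(c) = (f(b)-f(a))/(g(b)-g(a)) for c by bisection.
a, b = 1.0, 2.0
target = (exp(b) - exp(a)) / (b**2 - a**2)   # (f(b)-f(a))/(g(b)-g(a))

def h(c):
    return exp(c) / (2*c) - target           # f'(c)/g'(c) - target

# h(1) < 0 < h(2) and e**c/(2c) is increasing on [1, 2], so the root is unique.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
print(c, exp(c)/(2*c), target)
```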

Proof of the Stronger Form of l’Hopital’s Rule:

We first prove I for the case x \rightarrow x_{0}^{+}. The method needs no change to apply to x \rightarrow x_{0}^{-}, and the combination of these two cases establishes the result.

Suppose that x lies to the right of x_{0}. Then, g^{'}(x) \neq 0 and we can apply Cauchy’s Mean Value Theorem to the closed interval from x_{0} to x. This produces a number c between x_{0} and x such that \frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)-f(x_{0})}{g(x)-g(x_{0})}.

But, f(x_{0})=g(x_{0})=0 so that \frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)}{g(x)}.

As x approaches x_{0}, c approaches x_{0} because it lies between x and x_{0}. Therefore, \lim_{x \rightarrow x_{0}^{+}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(c)}{g^{'}(c)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(x)}{g^{'}(x)}.

This establishes l’Hopital’s Rule for the case where x approaches x_{0} from above. The case where x approaches x_{0} from below is proved by applying Cauchy’s Mean Value Theorem to the closed interval [x,x_{0}], where x<x_{0}. QED.
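A numerical illustration of the rule on a standard 0/0 form (the limit value 1/2 follows from the rule itself):

```python
from math import cos, sin

# l'Hopital check on the 0/0 form f(x)/g(x) = (1 - cos x)/x**2 as x -> 0+:
# both f/g and f'/g' = sin(x)/(2x) should approach the same limit, 1/2.
ratios = [((1 - cos(x)) / x**2, sin(x) / (2*x)) for x in (0.1, 0.01, 0.001)]
for fg, dfdg in ratios:
    print(fg, dfdg)
```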

Derivatives of different orders — Leibniz’ Rule: IITJEE Maths training

Let a function y=f(x) be differentiable on some interval [a,b]. Generally speaking, the values of the derivative f^{'}(x) depend on x, which is to say that the derivative f^{'}(x) is also a function of x. Differentiating this function, we obtain the so-called second derivative of the function f(x).

The derivative of a first derivative is called a derivative of the second order or the second derivative of the original function and is denoted by the symbol y^{''} or f^{''}(x): y^{''}=(y^{'})^{'} = f^{''}(x)

For example, if y=x^{5}, then y^{'}=5x^{4} and y^{''} = (5x^{4})^{'}=20x^{3}

The derivative of the second derivative is called a derivative of the third order or the third derivative and is denoted by y^{'''} or f^{'''}(x).

Generally, a derivative of the nth order of a function f(x) is called the derivative (first order) of the derivative of the (n-1)th order and is denoted by the symbol y^{(n)} or f^{(n)}(x):

y^{(n)} = (y^{(n-1)})^{'}=f^{(n)}(x)

(Note: the order of the derivative is taken in parentheses so as to avoid confusion with the exponent of a power.)

Derivatives of the fourth, fifth and higher orders are also denoted by Roman numerals: y^{IV}, y^{V}, y^{VI}, \ldots. Here, the order of the derivative may be written without brackets. For instance, if y=x^{5}, then y^{'}=5x^{4}, y^{''}=20x^{3}, y^{'''}=60x^{2}, y^{IV}=y^{(4)}=120x, y^{V}=y^{(5)}=120, y^{(6)}=y^{(7)}=\ldots = 0.

Example 1:

Given a function y=e^{kx}, where k is a constant, find the expression of its derivative of any order n.

Solution 1:

y^{'}=ke^{kx}, y^{''}=k^{2}e^{kx}, y^{(n)}=k^{n}e^{kx}

Example 2:

y=\sin{x}. Find y^{(n)}.

Solution 2:

y^{'}=\cos{x}=\sin(x+\frac{\pi}{2})

y^{''}=-\sin{x}=\sin(x+2\frac{\pi}{2})

y^{'''}=-\cos{x}=\sin(x+3\frac{\pi}{2})

y^{IV}=\sin{x}=\sin(x+4\frac{\pi}{2})

\vdots

y^{(n)}=\sin(x+n\frac{\pi}{2})

In similar fashion, we can also derive the formulae for the derivatives of any order of certain other elementary functions. You can find for yourself the formulae for derivatives of the n^{th} order of the functions y=x^{k}, y=\cos{x}, y=\ln (x).
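The pattern for y=\sin{x} can be checked numerically against the closed form \sin(x+n\frac{\pi}{2}) derived above:

```python
from math import cos, isclose, pi, sin

# The n-th derivative of sin x cycles through sin, cos, -sin, -cos;
# check the cycle against the closed form sin(x + n*pi/2).
cycle = [sin, cos, lambda t: -sin(t), lambda t: -cos(t)]
x = 0.7
for n in range(12):
    assert isclose(cycle[n % 4](x), sin(x + n*pi/2), abs_tol=1e-12)
print("closed form verified for n = 0..11")
```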

Let us derive a formula called the Leibniz rule that will enable us to calculate the n^{th} derivative of the product of two functions u(x), v(x). To obtain this formula, let us first find several derivatives and then establish the general rule for finding the derivative of any order:

y=uv

y^{'}=u^{'}v+uv^{'}

y^{''}=u^{''}v+u^{'}v^{'}+u^{'}v^{'}+uv^{''}=u^{''}v+2u^{'}v^{'}+uv^{''}

y^{'''}=u^{'''}v+u^{''}v^{'}+2u^{''}v^{'}+2u^{'}v^{''}+u^{'}v^{''}+uv^{'''}

which in turn equals u^{'''}v+3u^{''}v^{'}+3u^{'}v^{''}+uv^{'''}

y^{IV}=u^{IV}v+4u^{'''}v^{'}+6u^{''}v^{''}+4u^{'}v^{'''}+uv^{IV}

The rule for forming derivatives holds for the derivative of any order and obviously consists in the following:

The expression (u+v)^{n} is expanded by the binomial theorem, and in the expansion obtained the exponents of the powers of u and v are replaced by indices that are the orders of the derivatives, and the zero powers (u^{0}=v^{0}=1) in the end terms of the expansion are replaced by the functions themselves (that is, “derivatives of zero order”):

y^{(n)}=(uv)^{(n)}=u^{(n)}v+nu^{(n-1)}v^{'}+\frac{n(n-1)}{1 \cdot 2}u^{(n-2)}v^{''}+\ldots + uv^{(n)}

This is the Leibniz Rule.

A rigorous proof of this formula may be carried out by the method of complete mathematical induction (in other words, to prove that if this formula holds for the nth order, it will also hold for the order n+1).

Example:

y=e^{ax}x^{2}. Find the derivative y^{(n)}.

Solution:

u=e^{ax}, v=x^{2}

u^{'}=ae^{ax}, v^{'}=2x

u^{''}=a^{2}e^{ax}, v^{''}=2

\vdots, \vdots

u^{(n)}=a^{n}e^{ax}, v^{'''}=v^{IV}=\ldots=0

y^{(n)}=a^{n}e^{ax}x^{2}+na^{n-1}e^{ax}\cdot 2x+\frac{n(n-1)}{1 \cdot 2}a^{n-2}e^{ax}\cdot 2, or

y^{(n)}=e^{ax}[a^{n}x^{2}+2na^{n-1}x+n(n-1)a^{n-2}]
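The closed form can be checked against the Leibniz sum directly (the variable names and the sample values of a, x, n are mine):

```python
from math import comb, exp

# Verify y^(n) = e^(ax) (a^n x^2 + 2n a^(n-1) x + n(n-1) a^(n-2))
# against the Leibniz sum with u = e^(ax), v = x^2:
# u^(k) = a^k e^(ax); v, v', v'' = x^2, 2x, 2; higher v-derivatives vanish.
a, x, n = 1.5, 0.8, 7
v_derivs = [x**2, 2*x, 2.0]   # v^(0), v^(1), v^(2); the rest are 0

leibniz = sum(comb(n, k) * a**(n - k) * exp(a*x) * v_derivs[k]
              for k in range(3))

closed = exp(a*x) * (a**n * x**2 + 2*n * a**(n-1) * x + n*(n-1) * a**(n-2))
print(leibniz, closed)   # the two values agree
```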

More calculus in the pipeline…there is no limit to it 🙂

Nalin Pithwa


Lagrange’s Mean Value Theorem and Cauchy’s Generalized Mean Value Theorem

Lagrange’s Mean Value Theorem:

If a function f(x) is continuous on the interval [a,b] and differentiable at all interior points of the interval, there will be, within [a,b], at least one point c, a<c<b, such that f(b)-f(a)=f^{'}(c)(b-a).

Cauchy’s Generalized Mean Value Theorem:

If f(x) and \phi(x) are two functions continuous on an interval [a,b] and differentiable within it, and \phi^{'}(x) does not vanish anywhere inside the interval, there will be, in [a,b], a point x=c, a<c<b, such that \frac{f(b)-f(a)}{\phi(b)-\phi(a)} = \frac{f^{'}(c)}{\phi^{'}(c)}.

Some questions based on the above:

Problem 1:

Form Lagrange’s formula for the function y=\sin(x) on the interval [x_{1},x_{2}].

Problem 2:

Verify the truth of Lagrange’s formula for the function y=2x-x^{2} on the interval [0,1].

Problem 3:

Applying Lagrange’s theorem, prove the inequalities: (i) e^{x} \geq 1+x (ii) \ln(1+x) < x, for x>0. (iii) b^{n}-a^{n}<nb^{n-1}(b-a) for b>a. (iv) \arctan(x) < x.

Problem 4:

Write the Cauchy formula for the functions f(x)=x^{2}, \phi(x)=x^{3} on the interval [1,2] and find c.

More churnings with calculus later!

Nalin Pithwa.


Some questions based on Rolle’s theorem

Problem 1:

Verify the truth of Rolle’s theorem for the following functions:

(a) y=x^{2}-3x+2 on the interval [1,2].

(b) y=x^{3}+5x^{2}-6x on the interval [0,1].

(c) y=(x-1)(x-2)(x-3) on the interval [1,3].

(d) y=\sin^{2}(x) on the interval [0,\pi].

Problem 2:

The function f(x)=4x^{3}+x^{2}-4x-1 has roots 1 and -1. Find the root of the derivative f^{'}(x) mentioned in Rolle’s theorem.

Problem 3:

Verify that between the roots of the function y=\sqrt[3]{x^{2}-5x+6} lies the root of its derivative.

Problem 4:

Verify the truth of Rolle’s theorem for the function y=\cos^{2}(x) on the interval [-\frac{\pi}{4},+\frac{\pi}{4}].

Problem 5:

The function y=1-\sqrt[5]{x^{4}} becomes zero at the end points of the interval [-1,1]. Make it clear that the derivative of the function does not vanish anywhere in the interval (-1,1). Explain why Rolle’s theorem is NOT applicable here.

Calculus is the fountainhead of many many ideas in mathematics and hence, technology. Expect more beautiful questions on Calculus !

-Nalin Pithwa

Applications of sinc function

The occurrence of the function \frac{\sin {x}}{x} in calculus is not an isolated event. The function arises in diverse fields such as quantum physics (where it appears in solutions of the wave equation) and electrical engineering (in signal analysis and DSP filter design), as well as in the mathematical fields of differential equations and probability theory.

-Nalin Pithwa.

Some Applications of Derivatives — Part II

Derivatives in Economics.

Engineers use the terms velocity and acceleration to refer to the derivatives of functions describing motion. Economists, too, have a specialized vocabulary for rates of change and derivatives. They call them marginals.

In a manufacturing operation, the cost of production c(x) is a function of x, the number of units produced. The marginal cost of production is the rate of change of cost (c) with respect to a level of production (x), so it is dc/dx.

For example, let c(x) represent the dollars needed to produce x tons of steel in one week. It costs more to produce x+h units, and the cost difference, divided by h, is the average increase in cost per ton per week:

\frac{c(x+h)-c(x)}{h}= average increase in cost/ton/wk to produce the next h tons of steel

The limit of this ratio as h \rightarrow 0 is the marginal cost of producing more steel when the current production level is x tons.

\frac{dc}{dx}=\lim_{h \rightarrow 0} \frac{c(x+h)-c(x)}{h}= marginal cost of production

Sometimes, the marginal cost of production is loosely defined to be the extra cost of producing one unit:

\frac{\triangle {c}}{\triangle {x}}=\frac{c(x+1)-c(x)}{1}

which is approximately the value of dc/dx at x. To see why this is an acceptable approximation, observe that if the slope of c does not change quickly near x, then the difference quotient will be close to its limit, the derivative dc/dx, even if \triangle {x}=1. In practice, the approximation works best for large values of x.

Example: Marginal Cost

Suppose it costs c(x)=x^{3}-6x^{2}+15x dollars to produce x radiators when 8 to 30 radiators are produced. Your shop currently produces 10 radiators a day. About how much extra will it cost to produce one more radiator a day?
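A sketch of the computation, using the marginal-cost definition above (c'(x) estimates the cost of the next radiator; the one-unit difference gives the exact figure for comparison):

```python
# Marginal cost for c(x) = x**3 - 6x**2 + 15x at the current level x = 10.
def c(x):
    return x**3 - 6*x**2 + 15*x

def dc(x):
    return 3*x**2 - 12*x + 15   # c'(x)

print(dc(10))          # marginal cost: 195 dollars per radiator
print(c(11) - c(10))   # exact cost of the 11th radiator: 220 dollars
```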

Example : Marginal tax rate

To get some feel for the language of marginal rates, consider marginal tax rates. If your marginal income tax rate is 28% and your income increases by USD 1000, you can expect to have to pay an extra USD 280 in income taxes. This does not mean that you pay 28 percent of your entire income in taxes. It just means that at your current income level I, the rate of increase of taxes T with respect to income is dT/dI = 0.28. You will pay USD 0.28 in taxes out of every extra dollar you earn. Of course, if you earn a lot more, you may land in a higher tax bracket and your marginal rate will increase.

Example: Marginal revenue:

If r(x) = x^{3}-3x^{2}+12x gives the dollar revenue from selling x thousand candy bars, 5 \leq x \leq 20, the marginal revenue when x thousand are sold is

r^{'}(x) = \frac{d}{dx}(x^{3}-3x^{2}+12x)=3x^{2}-6x+12.

As with marginal cost, the marginal revenue function estimates the increase in revenue that will result from selling one additional unit. If you currently sell 10 thousand candy bars a week, you can expect your revenue to increase by about r^{'}(10) = 3(100) -6(10) +12=252 USD, if you increase sales to 11 thousand bars a week.
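The same comparison, done numerically (the function names are mine):

```python
# Marginal revenue for r(x) = x**3 - 3x**2 + 12x (x in thousands of bars).
def r(x):
    return x**3 - 3*x**2 + 12*x

def dr(x):
    return 3*x**2 - 6*x + 12   # r'(x)

print(dr(10))           # 252: estimated revenue from the 11th thousand bars
print(r(11) - r(10))    # exact increase, for comparison
```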

Choosing functions to illustrate economics.

In case you are wondering why economists use polynomials of low degree to illustrate complicated phenomena like cost and revenue, here is the rationale: while formulae for real phenomena are rarely available in any given instance, the theory of economics can still provide valuable guidance. The functions about which theory speaks can often be illustrated with low-degree polynomials on relevant intervals. Cubic polynomials provide a good balance between being easy to work with and being complicated enough to illustrate important points.

Ref: Calculus and Analytic Geometry by G B Thomas.

More later,

Nalin Pithwa


Some Applications of Derivatives — Part I

Sensitivity to Change.

When a small change in x produces a large change in the value of a function f(x), we say that the function is relatively sensitive to changes in x. The derivative f^{'}(x) is a measure of the sensitivity to change at x.

Example:

The Austrian monk Gregor Johann Mendel (1822-1884), working with garden peas and other plants, provided the first scientific explanation of hybridization. His careful records show that if p (a number between 0 and 1) is the frequency of the gene for smooth skin in peas (dominant) and (1-p) is the frequency of the gene for wrinkled skin in peas, then the proportion of smooth-skinned peas in the population at large is

y = 2p(1-p)+p^{2}=2p-p^{2}

The graph of y versus p is an inverted parabola. If you plot it, the graph suggests that the value of y is more sensitive to a change in p when p is small than when p is large. This is borne out by the derivative graph, which shows that \frac{dy}{dp} is close to 2 when p is near zero, and close to zero when p is near 1.
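A quick check of the slopes at the two ends (dy/dp = 2 - 2p follows from differentiating y = 2p - p^{2}):

```python
# Sensitivity of y = 2p - p**2 to p, measured by the derivative dy/dp = 2 - 2p.
def dydp(p):
    return 2 - 2*p

print(dydp(0.01))   # near p = 0: slope close to 2, so y is sensitive to p
print(dydp(0.99))   # near p = 1: slope close to 0, so y is insensitive to p
```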

More later,

Nalin Pithwa

PS: why peas wrinkle

British geneticists had discovered that the wrinkling trait comes from an extra piece of DNA that prevents the gene that directs starch synthesis from functioning properly. With the plant’s starch conversion impaired, sucrose and water build up in the young seeds. As the seeds mature, they lose much of this water, and the shrinkage leaves them wrinkled.