Category Archives: Chennai Math Institute Entrance Exam

Logicalympics — 100 meters!!!

Just as you go to the gym daily and increase your physical stamina, so also, you should go to the “mental gym” of solving hard math or logical puzzles daily to increase your mental stamina. You should start with a laser-like focus (or, concentrate like Shiva’s third eye, as is famous in Hindu mythology/scriptures!!) for 15-30 min daily and sustain that pace for a month at least. Give yourself a chance. Start with the following:

The logicalympics take place every year in a very quiet setting so that the competitors can concentrate on their events — not so much the events themselves, but the results. At the logicalympics every event ends in a tie so that no one goes home disappointed 🙂 There were five entries in the race, so they held five races in order that each competitor could win, and so that each competitor could also take his/her turn in 2nd, 3rd, 4th, and 5th place. The final results showed that each competitor had duly taken their turn in finishing in each of the five positions. Given the following information, what were the results of each of the five races?

The five competitors were A, B, C, D and E. C didn’t win the fourth race. In the first race A finished before C who in turn finished after B. A finished in a better position in the fourth race than in the second race. E didn’t win the second race. E finished two places behind C in the first race. D lost the fourth race. A finished ahead of B in the fourth race, but B finished before A and C in the third race. A had already finished before C in the second race who in turn finished after B again. B was not first in the first race and D was not last. D finished in a better position in the second race than in the first race and finished before B. A wasn’t second in the second race and also finished before B.

So, is your brain racing now to finish this puzzle?
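(If you also enjoy programming, here is a minimal Python sketch of one way to encode the search: the five results together form a 5 x 5 Latin square, since each runner takes each place exactly once, and every clue becomes a simple test on such a square. Only two of the clues are filled in below, quite deliberately; encoding the rest, and interpreting what the program tells you, is left to you.)

from itertools import permutations

# rows[r][i] is the place of runner "ABCDE"[i] in race r+1.
A, B, C, D, E = range(5)
PERMS = list(permutations(range(1, 6)))

def latin_squares(rows=()):
    # Build the table race by race, never repeating a place for any runner.
    if len(rows) == 5:
        yield rows
        return
    for p in PERMS:
        if all(p[i] != row[i] for row in rows for i in range(5)):
            yield from latin_squares(rows + (p,))

def satisfies(rows):
    if rows[3][C] == 1:              # clue: C didn't win the fourth race
        return False
    if rows[0][A] >= rows[0][C]:     # clue: in the first race A finished before C
        return False
    # ... encode the remaining clues here in the same style ...
    return True

# Exhaustive search (it takes a little while in plain Python); with only two clues
# encoded it will of course still report many candidates.
matches = sum(1 for rows in latin_squares() if satisfies(rows))
print(matches, "candidate result tables satisfy the clues encoded so far")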

Cheers,

Nalin Pithwa.

PS: Many of the puzzles on my blog(s) are from famous literature/books/sources, but I would not like to reveal the sources, as I feel that students gain the most when they really try these questions on their own rather than quickly give up and ask for help or look up solutions. Students ultimately have to stand on their own feet! (I do not claim to have created these questions or puzzles; I am only a math tutor and, sometimes, a tutor on the web.) I feel that even a “wrong” attempt is a “partial” attempt; if you can see where your own reasoning has failed, that too is partial success!

Pick’s theorem to pick your brains!!

Pick’s theorem:

Consider a square lattice of unit side. A simple polygon (one with non-intersecting sides) of any shape is drawn with its vertices at the lattice points. The area of the polygon can be obtained simply as (B/2)+I-1 square units, where B is the number of lattice points on the boundary and I is the number of lattice points in the interior of the polygon. Prove this theorem!
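If you enjoy checking such statements on a computer before proving them, here is a small Python sketch that tests the formula on one sample lattice quadrilateral of my own choosing. It is only a numerical check on one example, of course, not a proof.

from math import gcd

# A sample lattice polygon, vertices listed in order around the boundary.
verts = [(0, 0), (5, 0), (4, 3), (1, 4)]

def shoelace_area(vs):
    # Standard shoelace formula for the area of a simple polygon.
    n = len(vs)
    s = sum(vs[i][0] * vs[(i + 1) % n][1] - vs[(i + 1) % n][0] * vs[i][1] for i in range(n))
    return abs(s) / 2

def boundary_count(vs):
    # Lattice points on the boundary: gcd(|dx|, |dy|) per edge, summed over the edges.
    n = len(vs)
    return sum(gcd(abs(vs[(i + 1) % n][0] - vs[i][0]), abs(vs[(i + 1) % n][1] - vs[i][1]))
               for i in range(n))

def on_boundary(p, vs):
    # True if the lattice point p lies on some edge of the polygon.
    n = len(vs)
    for i in range(n):
        (x1, y1), (x2, y2) = vs[i], vs[(i + 1) % n]
        cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
        if cross == 0 and min(x1, x2) <= p[0] <= max(x1, x2) and min(y1, y2) <= p[1] <= max(y1, y2):
            return True
    return False

def strictly_inside(p, vs):
    # Ray-casting test; boundary points are excluded first.
    if on_boundary(p, vs):
        return False
    px, py = p
    inside, n = False, len(vs)
    for i in range(n):
        (x1, y1), (x2, y2) = vs[i], vs[(i + 1) % n]
        if (y1 > py) != (y2 > py) and px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

xs = [v[0] for v in verts]
ys = [v[1] for v in verts]
I = sum(strictly_inside((x, y), verts)
        for x in range(min(xs), max(xs) + 1) for y in range(min(ys), max(ys) + 1))
B = boundary_count(verts)
print(shoelace_area(verts), B / 2 + I - 1)   # the two numbers agree, as Pick's theorem predicts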

Do you like this challenge?

Nalin Pithwa.

Limits that arise frequently

We continue our presentation of basic material from Calculus and Analytic Geometry, G. B. Thomas and Finney, Ninth Edition. My express purpose in presenting these few proofs is to emphasize that Calculus is not just a recipe of calculation techniques. Or, to go a bit further, math is not just about calculation. I have a feeling that such thinking, nurtured/developed at a young age (while preparing for IITJEE Math, for example), makes one razor sharp.

We verify a few famous limits.

Formula 1:

If |x|<1, \lim_{n \rightarrow \infty}x^{n}=0

We need to show that to each \epsilon >0 there corresponds an integer N so large that |x^{n}|<\epsilon for all n greater than N. Since \epsilon^{1/n}\rightarrow 1 while |x|<1, there exists an integer N for which \epsilon^{1/n}>|x|. In other words,

|x^{N}|=|x|^{N}<\epsilon. Call this (I).

This is the integer we seek because, if |x|<1, then

|x^{n}|<|x^{N}| for all n>N. Call this (II).

Combining I and II produces |x^{n}|<\epsilon for all n>N, concluding the proof.
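A quick numerical illustration of Formula 1, with x = 0.9 and \epsilon = 10^{-3} chosen by me just for concreteness:

import math

x, eps = 0.9, 1e-3
N = math.ceil(math.log(eps) / math.log(abs(x)))   # candidate N with |x|**N < eps
while abs(x) ** N >= eps:                         # guard against the boundary case
    N += 1
print(N, abs(x) ** N, all(abs(x) ** n < eps for n in range(N, N + 200)))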

Formula 2:

For any number x, \lim_{n \rightarrow \infty}(1+\frac{x}{n})^{n}=e^{x}.

Let a_{n}=(1+\frac{x}{n})^{n}. Then, \ln {a_{n}}=\ln{(1+\frac{x}{n})^{n}}=n\ln{(1+\frac{x}{n})}\rightarrow x,

as we can see by the following application of l’Hopital’s rule, in which we differentiate with respect to n:

\lim_{n \rightarrow \infty}n\ln{(1+\frac{x}{n})}=\lim_{n \rightarrow \infty}\frac{\ln{(1+x/n)}}{1/n}, which in turn equals

\lim_{n \rightarrow \infty}\frac{(\frac{1}{1+x/n}).(-\frac{x}{n^{2}})}{-1/n^{2}}=\lim_{n \rightarrow \infty}\frac{x}{1+x/n}=x.

Now, let us apply the following theorem with f(x)=e^{x} to the above:

(a theorem for calculating limits of sequences) the continuous function theorem for sequences:

Let \{a_{n}\} be a sequence of real numbers. If a_{n} \rightarrow L and if f is a function that is continuous at L and defined at all a_{n}, then f(a_{n}) \rightarrow f(L).

So, in this particular proof, we get the following:

(1+\frac{x}{n})^{n}=a_{n}=e^{\ln{a_{n}}}\rightarrow e^{x}.
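If you want to watch this limit numerically as well, here is a tiny sketch; the value x = 1.7 is my own choice:

import math

x = 1.7
for n in (10, 1000, 100000):
    print(n, (1 + x / n) ** n, math.exp(x))   # the middle column approaches e**x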

Formula 3:

For any number x, \lim_{n \rightarrow \infty}\frac{x^{n}}{n!}=0

Since -\frac{|x|^{n}}{n!} \leq \frac{x^{n}}{n!} \leq \frac{|x|^{n}}{n!},

all we need to show is that \frac{|x|^{n}}{n!} \rightarrow 0. We can then apply the Sandwich Theorem for Sequences (let \{a_{n}\}, \{b_{n}\} and \{c_{n}\} be sequences of real numbers; if a_{n}\leq b_{n}\leq c_{n} holds for all n beyond some index N, and if \lim_{n\rightarrow \infty}a_{n}=\lim_{n\rightarrow \infty}c_{n}=L, then \lim_{n\rightarrow \infty}b_{n}=L also) to conclude that \frac{x^{n}}{n!} \rightarrow 0.

The first step in showing that |x|^{n}/n! \rightarrow 0 is to choose an integer M>|x|, so that (|x|/M)<1. Now let us use the rule above (Formula 1), so we conclude that (|x|/M)^{n}\rightarrow 0. We then restrict our attention to values of n>M. For these values of n, we can write:

\frac{|x|^{n}}{n!}=\frac{|x|^{n}}{1 \cdot 2 \cdots M \cdot (M+1)(M+2)\cdots n}, where there are (n-M) factors in the expression (M+1)(M+2)\cdots n, and

the RHS in the above expression is \leq \frac{|x|^{n}}{M!M^{n-M}}=\frac{|x|^{n}M^{M}}{M!M^{n}}=\frac{M^{M}}{M!}(\frac{|x|}{M})^{n}. Thus,

0\leq \frac{|x|^{n}}{n!}\leq \frac{M^{M}}{M!}(\frac{|x|}{M})^{n}. Now, the constant \frac{M^{M}}{M!} does not change as n increases. Thus, the Sandwich theorem tells us that \frac{|x|^{n}}{n!} \rightarrow 0 because (\frac{|x|}{M})^{n}\rightarrow 0.
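Here, too, a short numerical look can be reassuring; the values x = 10 and M = 11 are my own choice, and the second column is the bound \frac{M^{M}}{M!}(\frac{|x|}{M})^{n} from the proof:

from math import factorial

x, M = 10.0, 11
for n in (20, 60, 150):
    term = abs(x) ** n / factorial(n)
    bound = (M ** M / factorial(M)) * (abs(x) / M) ** n
    print(n, term, bound)   # term stays below bound, and both decrease toward 0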

That’s all, folks !!

Aufwiedersehen,

Nalin Pithwa.

Cauchy’s Mean Value Theorem and the Stronger Form of l’Hopital’s Rule

Reference: Thomas, Finney, 9th edition, Calculus and Analytic Geometry.

Continuing our previous discussion of “theoretical” calculus or “rigorous” calculus, I am reproducing below the proof of the finite limit case of the stronger form of l’Hopital’s Rule :

L’Hopital’s Rule (Stronger Form):

Suppose that

f(x_{0})=g(x_{0})=0

and that the functions f and g are both differentiable on an open interval (a,b) that contains the point x_{0}. Suppose also that g^{'} \neq 0 at every point in (a,b) except possibly at x_{0}. Then,

\lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}}\frac{f^{'}(x)}{g^{'}(x)} ….call this equation I,

provided the limit on the right exists.

The proof of the stronger form of l’Hopital’s Rule is based on Cauchy’s Mean Value Theorem, a mean value theorem that involves two functions instead of one. We prove Cauchy’s theorem first and then show how it leads to l’Hopital’s Rule.

Cauchy’s Mean Value Theorem:

Suppose that the functions f and g are continuous on [a,b] and differentiable throughout (a,b) and suppose also that g^{'} \neq 0 throughout (a,b). Then there exists a number c in (a,b) at which

\frac{f^{'}(c)}{g^{'}(c)} = \frac{f(b)-f(a)}{g(b)-g(a)}…call this II.

The ordinary Mean Value Theorem is the case where g(x)=x.

Proof of Cauchy’s Mean Value Theorem:

We apply the Mean Value Theorem twice. First we use it to show that g(a) \neq g(b). For if g(b) did equal g(a), then the Mean Value Theorem would give:

g^{'}(c)=\frac{g(b)-g(a)}{b-a}=0 for some c between a and b. This cannot happen because g^{'}(x) \neq 0 in (a,b).

We next apply the Mean Value Theorem to the function:

F(x) = f(x)-f(a)-\frac{f(b)-f(a)}{g(b)-g(a)}[g(x)-g(a)].

This function is continuous and differentiable where f and g are, and F(b) = F(a)=0. Therefore, there is a number c between a and b for which F^{'}(c)=0. In terms of f and g, this says:

F^{'}(c) = f^{'}(c)-\frac{f(b)-f(a)}{g(b)-g(a)}[g^{'}(c)]=0, or

\frac{f^{'}(c)}{g^{'}(c)}=\frac{f(b)-f(a)}{g(b)-g(a)}, which is II above. QED.
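Here is a small numerical check of Cauchy’s theorem, if you like to experiment on a computer. The pair f(x) = sin x, g(x) = cos x on [a, b] = [0.2, 1.2] is my own choice (it is not from the text); on this interval g'(x) = -sin x never vanishes, and plain bisection locates the c whose existence the theorem guarantees.

import math

f, fp = math.sin, math.cos
g, gp = math.cos, lambda x: -math.sin(x)
a, b = 0.2, 1.2

def h(c):
    # Proportional to F'(c) from the proof: zero exactly when f'(c)/g'(c) equals the Cauchy ratio.
    return fp(c) * (g(b) - g(a)) - gp(c) * (f(b) - f(a))

lo, hi = a, b                       # h changes sign across (a, b)
for _ in range(60):
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
c = (lo + hi) / 2
print(c, fp(c) / gp(c), (f(b) - f(a)) / (g(b) - g(a)))   # the last two numbers agree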

Proof of the Stronger Form of l’Hopital’s Rule:

We first prove I for the case x \rightarrow x_{0}^{+}. The method needs no change to apply to x \rightarrow x_{0}^{-}, and the combination of these two cases establishes the result.

Suppose that x lies to the right of x_{0}. Then, g^{'}(x) \neq 0 and we can apply Cauchy’s Mean Value Theorem to the closed interval from x_{0} to x. This produces a number c between x_{0} and x such that \frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)-f(x_{0})}{g(x)-g(x_{0})}.

But, f(x_{0})=g(x_{0})=0 so that \frac{f^{'}(c)}{g^{'}(c)}=\frac{f(x)}{g(x)}.

As x approaches x_{0}, c approaches x_{0} because it lies between x and x_{0}. Therefore, \lim_{x \rightarrow x_{0}^{+}}\frac{f(x)}{g(x)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(c)}{g^{'}(c)}=\lim_{x \rightarrow x_{0}^{+}}\frac{f^{'}(x)}{g^{'}(x)}.

This establishes l’Hopital’s Rule for the case where x approaches x_{0} from above. The case where x approaches x_{0} from below is proved by applying Cauchy’s Mean Value Theorem to the closed interval [x,x_{0}], where x< x_{0}. QED.
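If you want to see the rule in action numerically, here is a tiny sketch; the pair f(x) = 1 - cos x, g(x) = x^2 at x_0 = 0 is an example of my own choosing, not from the text (both vanish at 0, and g'(x) = 2x is nonzero for x \neq 0):

import math

f, g = lambda x: 1 - math.cos(x), lambda x: x ** 2
fp, gp = math.sin, lambda x: 2 * x
for x in (0.1, 0.01, 0.001):
    print(x, f(x) / g(x), fp(x) / gp(x))   # both ratios approach 1/2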

The Sandwich Theorem or Squeeze Play Theorem

It helps to think about the core concepts of Calculus from a young age if you want to develop your expertise or talents further in math, pure or applied, engineering or the mathematical sciences. At a tangible level, it helps you attack many more questions in IIT JEE Advanced Mathematics. Let us see if you like the following proof, or can absorb/digest it:

Reference: Calculus and Analytic Geometry by Thomas and Finney, 9th edition.

The Sandwich Theorem:

Suppose that g(x) \leq f(x) \leq h(x) for all x in some open interval containing c, except possibly at x=c itself. Suppose also that \lim_{x \rightarrow c}g(x)= \lim_{x \rightarrow c}h(x)=L. Then, \lim_{x \rightarrow c}f(x)=L.

Proof for Right Hand Limits:

Suppose \lim_{x \rightarrow c^{+}}g(x)=\lim_{x \rightarrow c^{+}}h(x)=L. Then, for any \epsilon >0, there exists a \delta >0 such that for all x, the inequality c<x<c+\delta implies L-\epsilon<g(x)<L+\epsilon and L-\epsilon<h(x)<L+\epsilon ….call this (I)

These inequalities combine with the inequality g(x) \leq f(x) \leq h(x) to give

L-\epsilon <g(x) \leq f(x) \leq h(x)<L+\epsilon

L-\epsilon <f(x)<L+\epsilon

-\epsilon <f(x)-L<\epsilon….call this (II)

Therefore, for all x, the inequality c<x<c+\delta implies |f(x)-L|<\epsilon. …call this (III)

Proof for Left Hand Limits:

Suppose \lim_{x \rightarrow c^{-}} g(x)=\lim_{x \rightarrow c^{-}}h(x)=L. Then, for \epsilon >0 there exists a \delta >0 such that for all x, the inequality c-\delta <x<c implies L-\epsilon<g(x)<L+\epsilon and L-\epsilon<h(x)<L+\epsilon …call this (IV).

We conclude as before that for all x, c-\delta <x<c implies |f(x)-L|<\epsilon.

Proof for Two Sided Limits:

If \lim_{x \rightarrow c}g(x) = \lim_{x \rightarrow c}h(x)=L, then g(x) and h(x) both approach L as x \rightarrow c^{+} and as x \rightarrow c^{-} so \lim_{x \rightarrow c^{+}}f(x)=L and \lim_{x \rightarrow c^{-}}f(x)=L. Hence, \lim_{x \rightarrow c}f(x)=L. QED.
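A quick numerical illustration, with the classic squeeze example (my own choice, not from the text): -x^2 \leq x^2 \sin(1/x) \leq x^2 near c = 0, so the middle function is forced to the common limit 0.

import math

for x in (0.1, 0.01, 0.001):
    mid = x ** 2 * math.sin(1 / x)
    print(x, -x ** 2, mid, x ** 2)   # the middle column is squeezed toward 0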

Let me know your feedback on such stuff,

Nalin Pithwa

Lagrange’s Mean Value Theorem and Cauchy’s Generalized Mean Value Theorem

Lagrange’s Mean Value Theorem:

If a function f(x) is continuous on the interval [a,b] and differentiable at all interior points of the interval, there will be, within [a,b], at least one point c, a<c<b, such that f(b)-f(a)=f^{'}(c)(b-a).
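If you would like to see the formula numerically before attempting the problems below, here is a tiny sketch with f(x) = x^3 on [0,2], an example of my own (deliberately different from the problems that follow):

a, b = 0.0, 2.0
f = lambda x: x ** 3
slope = (f(b) - f(a)) / (b - a)      # (8 - 0)/2 = 4
c = (slope / 3) ** 0.5               # solve f'(c) = 3c**2 = slope; c lies inside (0, 2)
print(c, 3 * c ** 2, slope)          # the last two numbers agree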

Cauchy’s Generalized Mean Value Theorem:

If f(x) and \phi(x) are two functions continuous on an interval [a,b] and differentiable within it, and \phi^{'}(x) does not vanish anywhere inside the interval, there will be, in [a,b], a point x=c, a<c<b, such that \frac{f(b)-f(a)}{\phi(b)-\phi(a)} = \frac{f^{'}(c)}{\phi^{'}(c)}.

Some questions based on the above:

Problem 1:

Form Lagrange’s formula for the function y=\sin(x) on the interval [x_{1},x_{2}].

Problem 2:

Verify the truth of Lagrange’s formula for the function y=2x-x^{2} on the interval [0,1].

Problem 3:

Applying Lagrange’s theorem, prove the inequalities: (i) e^{x} \geq 1+x (ii) \ln{(1+x)} <x, for x>0. (iii) b^{n}-a^{n}<nb^{n-1}(b-a) for b>a. (iv) \arctan{(x)} <x for x>0.

Problem 4:

Write the Cauchy formula for the functions f(x)=x^{2}, \phi(x)=x^{3} on the interval [1,2] and find c.

More churnings with calculus later!

Nalin Pithwa.

 

 

Could a one-sided limit not exist?

Here is the basic concept of a limit:

Pappus’s theorem

Problem:

Given a point on the circumcircle of a cyclic quadrilateral, prove that the products of the distances from the point to each pair of opposite sides and to the diagonals are all equal.

Proof:

Let a, b, c, d be the coordinates of the vertices A, B, C, D of the quadrilateral and consider the complex plane with origin at the circumcenter of ABCD. Without loss of generality, assume that the circumradius equals 1.

The equation of line AB is

\left | \begin{array}{ccc}    a & \overline{a} & 1 \\    b & \overline{b} & 1 \\    z & \overline{z} & 1 \end{array} \right | = 0.

This is equivalent to z(\overline{a}-\overline{b})-\overline{z}(a-b)=\overline{a}b-a\overline{b}, that is,

z+ab\overline{z}=a+b

Let point M_{1} be the foot of the perpendicular from a point M on the circumcircle to the line AB. If m is the coordinate of the point M, then

z_{M_{1}}=\frac{m-ab\overline{m}+a+b}{2}

and

d(M, AB)=|m-z_{M_{1}}|=|m-\frac{m-ab\overline{m}+a+b}{2}|=|\frac{(m-a)(m-b)}{2m}| since m \overline{m} = 1.

Likewise,

d(M, BC)=|\frac{(m-b)(m-c)}{2m}|, d(M, CD)=|\frac{(m-c)(m-d)}{2m}|

d(M, DA)=|\frac{(m-d)(m-a)}{2m}|, d(M, AC)=|\frac{(m-a)(m-c)}{2m}|

and d(M, BD)=|\frac{(m-b)(m-d)}{2m}|

Thus,

d(M, AB)\cdot d(M, CD)=d(M, BC)\cdot d(M, DA)=d(M, AC)\cdot d(M, BD), as claimed.

QED.
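If you like to double-check such results numerically as well, here is a short Python sketch; the four vertices and the point M sit on the unit circle at angles I chose arbitrarily, and the distances are computed from the usual point-to-line formula rather than from the derivation above.

import cmath

def dist(m, p, q):
    # Euclidean distance from the point m to the line through p and q (complex coordinates).
    return abs(((m - p) * (q - p).conjugate()).imag) / abs(q - p)

a, b, c, d = (cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0))
m = cmath.exp(1j * 5.3)
print(dist(m, a, b) * dist(m, c, d))
print(dist(m, b, c) * dist(m, d, a))
print(dist(m, a, c) * dist(m, b, d))   # all three products coincide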

More later,

Nalin Pithwa

PS: The above example indicates how easy it is to prove many fascinating theorems of pure plane geometry using the tools and techniques of complex numbers.

 

Maxima and Minima using calculus

Problem:

The vertices of an (n+1)-gon lie on the sides of a regular n-gon and divide its perimeter into parts of equal length. How should one construct the (n+1)-gon so that its area is:

(a) maximum

(b) minimum

Hint only:

[One of the golden rules of solving problems in math/physics is to draw diagrams, as has been emphasized by the maverick American physics Nobel Laureate, Richard Feynman. He expounded this technique even in software development. So, in the present problem, first draw several diagrams.]

There exists a side B_{1}B_{2} of the (n+1)-gon that lies entirely on a side A_{1}A_{2} of the n-gon. Let b=B_{1}B_{2} and a=A_{1}A_{2}. Show that b=\frac{n}{n+1}a. Then, for x=A_{1}B_{1}, we have 0 \leq x \leq \frac{a}{n+1} and the area S of the (n+1)-gon is given by

S(x)=\frac{\sin{\phi}}{2}\sum_{i=1}^{n}(\frac{i-1}{n+1}a+x)(\frac{n-i+1}{n+1}a-x)

where \phi=\angle{A_{1}A_{2}A_{3}}. Thus, S(x) is a quadratic function of x. Show that S(x) is minimal when x=0 or x=\frac{a}{n+1} and S(x) is maximal when x=\frac{a}{2(n+1)}.
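After you have attempted the problem, you may like this small sanity check of the hint: it simply evaluates the quadratic S(x) above on a grid for one sample case (n = 5 and a = 1, my own choice) and locates its extreme values on [0, a/(n+1)].

import math

n, a = 5, 1.0
phi = math.pi * (n - 2) / n            # interior angle of the regular n-gon
def S(x):
    return 0.5 * math.sin(phi) * sum(((i - 1) / (n + 1) * a + x) * ((n - i + 1) / (n + 1) * a - x)
                                     for i in range(1, n + 1))

xs = [a / (n + 1) * k / 1000 for k in range(1001)]
best = max(xs, key=S)
print(best, a / (2 * (n + 1)))         # the maximizer sits at a/(2(n+1))
print(S(0.0), S(a / (n + 1)))          # the two endpoint (minimal) values agree, up to rounding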

Let me know if you have any trouble when you attempt it,

-Nalin Pithwa

 

Inclusion Exclusion Principle theorem and examples

Reference: Combinatorial Techniques by Sharad Sane, Hindustan Book Agency.

Theorem: 

The inclusion-exclusion principle: Let X be a finite set and let P_{i}, i = 1, 2, \ldots, n, be a set of n properties satisfied by (some of) the elements of X. Let A_{i} denote the set of those elements of X that satisfy the property P_{i}. Then, the size of the set \overline{A_{1}} \bigcap \overline{A_{2}} \bigcap \ldots \bigcap \overline{A_{n}} of all those elements that do not satisfy any one of these properties is given by

|\overline{A_{1}} \bigcap \overline{A_{2}} \bigcap \ldots \bigcap \overline{A_{n}}| = |X| - \sum_{i=1}^{n}|A_{i}|+ \sum_{1 \leq i <j \leq n}|A_{i} \bigcap A_{j}|- \ldots + (-1)^{k} \sum_{1 \leq i_{1} < i_{2}< \ldots < i_{k} \leq n}|A_{i_{1}} \bigcap A_{i_{2}} \bigcap \ldots \bigcap A_{i_{k}}|+ \ldots+ (-1)^{n}|A_{1} \bigcap A_{2} \bigcap \ldots \bigcap A_{n}|.

Proof:

The proof will show that every object in the set X is counted the same number of times on both sides. Suppose x \in X and assume that x is an element of the set on the left hand side of the above equation. Then, x has none of the properties P_{i}. We need to show that in this case, x is counted only once on the right hand side. This is obvious since x is not in any of the A_{i} and x \in X. Thus, x is counted only once in the first summand and is not counted in any other summand since x \notin A_{i} for all i. Now let x have k properties, say P_{i_{1}}, P_{i_{2}}, \ldots, P_{i_{k}} (and no others). Then x is counted once in |X|, {k \choose 1} times in \sum_{i}|A_{i}|, {k \choose 2} times in the next sum, and so on. Thus, on the right hand side, x is counted precisely,

{k \choose 0}-{k \choose 1}+{k \choose 2}- \ldots + (-1)^{k}{k \choose k}

times. Using the binomial theorem, this sum is (1-1)^{k} which is 0 and hence, x is not counted on the right hand side. This completes the proof. QED.
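Here is a small computational illustration of the principle, with an example of my own (not from Sane’s book): counting the integers from 1 to N that are divisible by none of a given set of primes, once by brute force and once by the formula, where A_i is the set of multiples of the i-th prime.

from itertools import combinations
from math import prod

N, primes = 1000, [2, 3, 5, 7]

# Brute force: test every integer directly.
brute = sum(1 for m in range(1, N + 1) if all(m % p for p in primes))

# Inclusion-exclusion: |A_{i_1} intersect ... intersect A_{i_k}| = N // (p_{i_1} * ... * p_{i_k}).
formula = 0
for k in range(len(primes) + 1):
    for subset in combinations(primes, k):
        formula += (-1) ** k * (N // prod(subset))

print(brute, formula)   # both give the same count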

More later,

Nalin Pithwa