Category Archives: IITJEE Advanced

Some good trig practice problems for IITJEE Mains Mathematics

Problem:

Find the value of \cos^{4}\frac{\pi}{8} + \cos^{4}\frac{3\pi}{8}+\cos^{4}\frac{5\pi}{8}+\cos^{4}\frac{7\pi}{8}.

Problem:

Find the value of \cos{\theta}\cos{2\theta}\cos{4\theta}\ldots\cos{2^{n-1}\theta}.

Problem:

Find the value of \cos{(2\pi/15)}\cos{(4\pi/15)}\cos{(8\pi/15)}\cos{(14\pi/15)}.

Problem:

Prove that \left(\frac{\cos{A}+\cos{B}}{\sin{A}-\sin{B}}\right)^{n}+\left(\frac{\sin{A}+\sin{B}}{\cos{A}-\cos{B}}\right)^{n} = 2\cot^{n}\left(\frac{A-B}{2}\right) if n is even, and equals zero if n is odd.
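For readers who want to check their closed-form answers, here is a minimal numerical sketch in Python; the sample values of \theta, n, A, B are my own choices and not part of the problems.

import math

# Problem 1: evaluate the sum of fourth powers numerically.
print(sum(math.cos(k * math.pi / 8) ** 4 for k in (1, 3, 5, 7)))

# Problem 2: sample the product at theta = 0.3, n = 5 (arbitrary choices).
theta, n = 0.3, 5
p = 1.0
for k in range(n):
    p *= math.cos(2 ** k * theta)
print(p)

# Problem 3: the product of the four cosines.
q = 1.0
for k in (2, 4, 8, 14):
    q *= math.cos(k * math.pi / 15)
print(q)

# Problem 4: test the stated identity at A = 1.0, B = 0.4 for even and odd n.
A, B = 1.0, 0.4
for n in (4, 3):
    lhs = ((math.cos(A) + math.cos(B)) / (math.sin(A) - math.sin(B))) ** n \
        + ((math.sin(A) + math.sin(B)) / (math.cos(A) - math.cos(B))) ** n
    rhs = 2 / math.tan((A - B) / 2) ** n if n % 2 == 0 else 0.0
    print(n, lhs, rhs)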

More later,

Nalin Pithwa

Bus Conductor’s son from Khar slum makes it to IIT-Bombay

Never Say Die!

(Reproduced from DNA Newspaper, Mumbai, Tuesday 12.07.2016, page 2, print edition for the purpose of motivating my own students and readers of my blogs)

(Author: Bilal Khan; E-mail: correspondent@dnaindia.net)

“Beating the odds”

They say, “if you can dream it, then you can achieve it,” and how true this has turned out to be for 18-year-old Arbaz Shaikh: the son of a bus conductor from a Khar (a suburb in Mumbai) slum has been selected for IIT-B’s Bachelor of Technology course.

Arbaz stays with his parents and younger sister in a 10 ft x 12 ft room. His father Rajvalli (43) has been working as a BEST conductor for five years. “I was a technician before I joined the Undertaking. I did not have a shop; I used to go door to door to repair TVs and other appliances. Those were tough days — trying to earn a livelihood enough to feed the family and pay the rent,” he said.

Arbaz has been good in his studies right from childhood. He secured 95.64 % and 85.85 % in standards X and XII, respectively. In JEE Advanced, he secured an AIR (All India Rank) 2262 in the second attempt.

The teenager, who used to escape to the college library to study, said, “I stay in a slum area, where it’s always noisy, and I did not have a room to myself for studying. The library was my refuge.”

He added that he never stressed over his studies and always managed to find the right balance.

This year was Arbaz’s second attempt to get admission to IIT-B. “My rank was not good enough last year. My father motivated me to try once more,” he said.

When asked why he pushed his son to give it another go, Rajvalli said Arbaz had been taking scholarship exams and other tests apart from those conducted in school and college. “He had got scholarship for Std XI and XII too without tuition classes. Because of that I was sure that he would get through in the second try,” added Rajvalli.

Arbaz’s father has big dreams for his son. “I want my children to be well-educated so that they can provide a good education and a good life to their children.”

Arbaz is trying to get a scholarship for his IIT-B course too. At the moment, he has an assurance of assistance from an NGO, the Association of Muslim Professionals.

(With thanks to Bilal Khan and DNA newspaper),

(Nothing is impossible, thus spake the famous French general Napoleon Bonaparte!)

You are requested to share more motivational stories about the pursuit of excellence in Math, IITJEE, RMO, etc., here on my blog 🙂

-Nalin Pithwa

Differentiation

We have seen how the concept of continuity is naturally associated with attempts to model gradual changes. For example, consider the function f: \Re \rightarrow \Re given by f(x)=ax+b, where the change in f(x) is proportional to the change in x. This simple-looking function is often used to model many practical problems. One such case is given below:

Suppose 30 men working for 7 hours a day can complete a piece of work in 16 days. In how many days can 28 men working for 6 hours a day complete the work? It must be evident to most of the readers that the answer is \frac{16 \times 7 \times 30}{28 \times 6}=20 days.

(While solving this we have tacitly assumed that the amount of work done is proportional to the number of men working, to the number of hours each man works per day, and also to the number of days each man works. Similarly, Charles’s law for ideal gases states that, pressure remaining constant, the increase in volume of a mass of gas is proportional to the increase in temperature of the gas).
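Written out as a tiny sketch in code, the tacit assumption is simply that the total work, measured in man-hours, stays constant:

work = 30 * 7 * 16        # total man-hours needed for the job
days = work / (28 * 6)    # days needed by 28 men working 6 hours a day
print(days)               # 20.0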

But, there are exceptions to this as well. Galileo discovered that the distance covered by a body, falling from rest, is proportional to the square of the time for which it has fallen, and the velocity is proportional to the square root of the distance through which it has fallen. Similarly, Kepler’s third law tells us that the square of the period of a planet going round the sun is proportional to the cube of its mean distance from the sun.

These and many other problems involve functions that are not linear. If, for example, we plot the graph of the distance covered by a particle versus time, it is a straight line only when the motion is uniform. But we are seldom lucky enough to encounter only uniform motion. (Besides, uniform motion would be so monotonous. Perhaps there would be no life at all if all motions were uniform. Imagine a situation in which each body is in uniform motion: a body at rest would be eternally at rest, and those once in motion would never stop.) So the simple method of proportionality becomes quite inadequate to tackle such non-linear problems. The genius of Newton lay in looking at those problems which are next best to linear, the ones that are nearly linear.

We know that the graph of a linear function is a straight line. What Newton suggested was to look at functions, small portions of whose graphs look almost like a straight line (see Fig 1).

In Fig 1, the graph certainly is not a straight line. But a small portion of it looks almost like a straight line. To formalize this idea, we need the concept of differentiability.

Definition.

Let I be an open interval and f: I \rightarrow \Re be a function. We say that f is locally linear or differentiable at x_{0} \in I if there is a constant m such that

f(x)-f(x_{0})=m(x-x_{0})+r(x_{0},x)(x-x_{0})

or equivalently, for x in a punctured interval around x_{0},

\frac{f(x)-f(x_{0})}{x-x_{0}}=m+r(x_{0},x)

where r(x_{0},x) \rightarrow 0 as x \rightarrow x_{0}.

What this means is that for small enough x-x_{0}, \frac{f(x)-f(x_{0})}{x-x_{0}} is nearly a constant or, equivalently, f(x)-f(x_{0}) is nearly proportional to the increment x-x_{0}. This is what is called the principle of proportional parts and is used very often in calculations with tables, when the number for which we are looking up the table is not found there.
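Here is a small numerical sketch of the definition for f(x)=x^{2} at x_{0}=1 (an example of my own choosing), where m=2 and r(x_{0},x)=x-x_{0}:

def f(x):
    return x * x

x0, m = 1.0, 2.0
for h in (0.1, 0.01, 0.001, 0.0001):
    q = (f(x0 + h) - f(x0)) / h   # the difference quotient
    print(h, q, q - m)            # q - m plays the role of r(x0, x) and shrinks to 0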

Thus, if a function f is differentiable at x_{0}, then \lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}

exists and is called the derivative of f at x_{0} and denoted by f^{'}(x_{0}). So we write

\lim_{x \rightarrow x_{0}}\frac{f(x)-f(x_{0})}{x-x_{0}}=f^{'}(x_{0}).

To fix our ideas, we need to look at functions which are not differentiable at some point. For example, consider the function f: \Re \rightarrow \Re defined by f(x)=|x|.

This function, though continuous at every point, is not differentiable at x=0. In fact, \lim_{x \rightarrow 0+}\frac{|x|}{x}=1, whereas \lim_{x \rightarrow 0-}\frac{|x|}{x}=-1, so no single constant m will do. What all this means is that if one looks at the graph of f(x)=|x|, it has a sharp corner at the origin.

No matter how small a part of the graph containing the point (0,0) is taken, it never looks like a line segment. The reader can test for the non-differentiability of f(x)=|\sin{x}| at x=n\pi.
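A two-line check of the sharp corner (the sample step sizes below are arbitrary):

for h in (0.1, 0.01, 0.001):
    print(abs(h) / h, abs(-h) / (-h))   # +1 from the right, -1 from the left: no single slope fits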

This leads us to the notion of the direction of the graph at a point: Suppose f: I \rightarrow \Re is a function differentiable at x_{0} \in I, and let P and Q be the points (x_{0},f(x_{0})) and (x, f(x)) respectively in the graph of f (see Fig 2).

The chord PQ has the slope \frac{f(x)-f(x_{0})}{x-x_{0}}. As x comes close to x_{0}, the chord tends to the tangent to the curve at (x_{0}, f(x_{0})). So \lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}} really represents the slope of the tangent at (x_{0},f(x_{0})) (see Fig 3).

Similarly, if x(t) is the position of a moving point in a straight line at time t, then \frac{x(t)-x(t_{0})}{t-t_{0}} is its average velocity in the interval of time [t_{0},t]. Its limit as t goes to t_{0}, if it exists, will be its instantaneous velocity at the instant of time t_{0}. We have

x^{'}(t_{0})=\lim_{t \rightarrow t_{0}}\frac{x(t)-x(t_{0})}{t-t_{0}} is the instantaneous velocity at t_{0}.

If the limit of \frac{f(x)-f(x_{0})}{x-x_{0}} does not exist as x tends to x_{0}, the curve (x, f(x)) cannot have a tangent at (x_{0},f(x_{0})), as we saw in the case of f(x)=|x| at (0,0); the graph abruptly changes its direction. If we look at the motion of a particle which is moving with uniform velocity till time t_{0} and is abruptly brought to rest at that instant, then its graph would look as in Fig 4a.

This is also what we think happens when a perfectly elastic ball impinges on another ball of the same mass at rest, or when a perfectly elastic ball moving at a constant speed impinges on a hard surface (see Fig 4b). We see that there is a sharp turn in the space-time graph of such a motion at time t=t_{0}. Recalling the interpretation of

x^{'}(t_{0})=\lim_{t \rightarrow t_{0}} \frac{x(t)-x(t_{0})}{t-t_{0}} as its instantaneous velocity at t=t_{0}, we see that in the situation described above, the instantaneous velocity at t=t_{0} is not a meaningful concept.

We have already seen that continuous functions need not be differentiable at some points of their domain. In fact, there are continuous functions which are differentiable nowhere. On the other hand, as the following result shows, every differentiable function is always continuous.

Theorem:

If a function is differentiable at x_{0}, then it is continuous there.

Proof:

If f is differentiable at x_{0}, then let \lim_{x \rightarrow x_{0}} \frac{f(x)-f(x_{0})}{x-x_{0}}=l. Setting

r(x,x_{0})=\frac{f(x)-f(x_{0})}{x-x_{0}}-l, we see that \lim_{x \rightarrow x_{0}}r(x, x_{0})=0. Thus, we have

f(x)-f(x_{0})=(x-x_{0})l + (x-x_{0})r(x,x_{0})

Now, \lim_{x \rightarrow x_{0}} (f(x)-f(x_{0}))=\lim_{x \rightarrow x_{0}}(x-x_{0})l + \lim_{x \rightarrow x_{0}} (x-x_{0})r(x, x_{0})=0

This shows that f is continuous at x_{0}.

QED.

Continuity of f at x_{0} tells us that f(x)-f(x_{0}) tends to zero as x - x_{0} tends to zero. But in the case of differentiability, f(x)-f(x_{0}) tends to zero at least as fast as x-x_{0}. The portion l(x-x_{0}) goes to zero, no doubt, but the remainder |f(x)-f(x_{0})-l(x-x_{0})| goes to zero at a rate faster than that of |x-x_{0}|. This is how differentiation was conceived by Newton and Leibniz. They introduced a concept called an infinitesimal. Their idea was that when x-x_{0} is an infinitesimal, then so is f(x)-f(x_{0}), and it is of the same order of infinitesimal as x-x_{0}. The idea of infinitesimals served them well, but there was a little problem with their definition: the way they were introduced seemed to run against the Archimedean property. The definition of infinitesimals can be made rigorous, but we do not go into that here. However, we can still usefully deal with concepts and notation like:

(a) f(x)=\mathcal{O}(g(x)) as x \rightarrow x_{0} if there exists a K such that |f(x)| \leq K|g(x)| for x sufficiently near x_{0}.

(b) f(x)=o(g(x)) as x \rightarrow x_{0} if \lim_{x \rightarrow x_{0}}\frac{f(x)}{g(x)}=0.

Informally, f(x)=o(g(x)) means f(x) is of smaller order than g(x) as

x \rightarrow x_{0}. In this notation, f is differentiable at x_{0} if there is an l such that

|f(x)-f(x_{0})-l(x-x_{0})|=o(|x-x_{0}|).
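As an illustration (with f(x)=x^{3}, x_{0}=1, l=3 chosen for the example), the remainder divided by |x-x_{0}| should itself tend to zero:

x0, l = 1.0, 3.0
for h in (0.1, 0.01, 0.001):
    rem = abs((x0 + h) ** 3 - x0 ** 3 - l * h)   # |f(x) - f(x0) - l(x - x0)|
    print(h, rem / h)                            # the ratio tends to 0, so the remainder is o(|x - x0|)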

We shall return to this point again. Let us first give examples of derivatives of some functions.

Examples.

(The proofs are left as exercises.)

(a) f(x)=x^{n}, f^{'}(x_{0})=\lim_{x \rightarrow x_{0}}\frac{x^{n}-{x_{0}}^{n}}{x-x_{0}}=n{x_{0}}^{n-1}, n a positive integer.

(b) f(x)=x^{n} (x \neq 0, where n is a negative integer), f^{'}(x)=nx^{n-1}

(c) f(x)=e^{x}, f^{'}(x)=e^{x}

(d) f(x)=a^{x}, f^{'}(x)=a^{x}\log_{e}{a}
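These four formulas can be spot-checked numerically with a central difference quotient. This is only a sanity check, not a proof, and the sample values x_{0}=1.2, n=5, a=3 are arbitrary choices:

import math

def dnum(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)   # central difference approximation to f'(x)

x0, n, a = 1.2, 5, 3.0
print(dnum(lambda x: x ** n, x0), n * x0 ** (n - 1))        # (a)
print(dnum(lambda x: x ** (-n), x0), -n * x0 ** (-n - 1))   # (b)
print(dnum(math.exp, x0), math.exp(x0))                     # (c)
print(dnum(lambda x: a ** x, x0), a ** x0 * math.log(a))    # (d)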

Exponentials and logarithms

We continue this topic after the intermediate value theorem posted earlier.

For a>1, define f: \Re \rightarrow \Re by f(x)=a^{x}. It is easily seen that f(x)<f(y) if x<y. This shows that f is one to one. Further, \lim_{x \rightarrow \infty}f(x)=\infty, whereas \lim_{x \rightarrow -\infty}f(x)=0. That f maps \Re onto \Re_{+} follows from the intermediate value theorem. Thus, f:\Re \rightarrow \Re_{+} defined by f(x)=a^{x} is bijective. So there is a unique map

g: \Re_{+} \rightarrow \Re

such that f(g(y))=y for every y in \Re_{+} and g(f(x))=x for every x in \Re.

This function g is what we call the logarithm function of y to the base a, written as \log_{a}{y}. In fact, the logarithm is a continuous function.

For y_{0} \in \Re_{+} and \varepsilon>0, let \delta=\min \{ a^{x_{0}+\varepsilon}-a^{x_{0}}, a^{x_{0}}-a^{x_{0}-\varepsilon}\}, where x_{0}=\log_{a}{y_{0}}. Then, we have for

|y_{0}-y|<\delta, a^{x_{0}-\varepsilon} \leq y_{0}-\delta <y < y_{0}+\delta \leq a^{x_{0}+\varepsilon}, or

x_{0}-\varepsilon<\log_{a}{y}<x_{0}+\varepsilon or

g(y_{0})-\varepsilon<g(y)<g(y_{0})+\varepsilon,

which proves that g is continuous at y_{0}.

Exercise.

If f:\Re \rightarrow \Re is an increasing continuous function, show that it is bijective onto its range and its inverse is also continuous.

With the help of the logarithm function, we can evaluate \lim_{x \rightarrow 0}\frac{a^{x}-1}{x}.

Let a^{x}=1+y so that y \rightarrow 0 as x \rightarrow 0. Also, x=\log_{a}{(1+y)}. So, we have

\lim_{x \rightarrow 0}\frac{a^{x}-1}{x}=\lim_{y \rightarrow 0}\frac{y}{\log_{a}{(1+y)}}=\lim_{y \rightarrow 0}\frac{1}{\frac{1}{y}\log_{a}{(1+y)}}=\lim_{y \rightarrow 0}\frac{1}{\log_{a}{(1+y)^{\frac{1}{y}}}}, that is,

\frac{1}{\log_{a}{e}}=\log_{e}{a}.

In the step before last, we have used the fact that the logarithm is a continuous function and that \lim_{y \rightarrow 0}{(1+y)^{1/y}}=e, while in the last step we have observed that (\log_{a}{e})^{-1}=\log_{e}{a} (Exercise).
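A quick numerical sketch of this limit with a=2 (my own sample value, so the answer should be \log_{e}{2} \approx 0.6931):

import math

a = 2.0
for x in (0.1, 0.01, 0.001, 0.0001):
    print(x, (a ** x - 1) / x)   # approaches log_e(a)
print(math.log(a))               # 0.6931...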

More later,

Nalin Pithwa


Intermediate value theorem

Intermediate value theorem.

Let f:[a,b] \rightarrow \Re be continuous. Suppose for x_{1},x_{2} \in [a,b], f(x_{1}) \neq f(x_{2}). If c is a real number between f(x_{1}) and f(x_{2}), then there is an x_{0} between x_{1} and x_{2} such that f(x_{0})=c.

Proof.

Define g:[a,b] \rightarrow \Re by g(x)=f(x)-c. Then, g is a continuous function such that g(x_{1}) and g(x_{2}) are of opposite signs. The assertion of the theorem amounts to saying that there is a point x_{0} between x_{1} and x_{2} such that g(x_{0})=0. Without loss of generality, we may take g(x_{1})>0 and g(x_{2})<0 (otherwise replace g by -g). If g(\frac{x_{1}+x_{2}}{2})=0, we are done: take x_{0}=\frac{x_{1}+x_{2}}{2}. If g(\frac{x_{1}+x_{2}}{2})>0, we write \frac{x_{1}+x_{2}}{2}=a_{1} and x_{2}=b_{1}; otherwise, write a_{1}=x_{1} and b_{1}=\frac{x_{1}+x_{2}}{2}, so that in either case we have g(a_{1})>0, g(b_{1})<0.

Similarly, if g(\frac{a_{1}+b_{1}}{2})=0 we are done; if g(\frac{a_{1}+b_{1}}{2})>0, write \frac{a_{1}+b_{1}}{2}=a_{2} and b_{1}=b_{2}; otherwise write a_{2}=a_{1} and b_{2}=\frac{a_{1}+b_{1}}{2}, so that we have g(a_{2})>0 and g(b_{2})<0. Continuing this process (stopping if we ever land on a point where g vanishes), we find sequences (a_{n})_{n=1}^{\infty},

(b_{n})_{n=1}^{\infty} with g(a_{n})>0 and g(b_{n})<0 and

a_{1} \leq a_{2} \leq \ldots \leq a_{n} \leq a_{n+1} \leq \ldots \leq x_{2},

b_{1} \geq b_{2} \geq \ldots \geq b_{n} \geq b_{n+1} \geq \ldots \geq x_{1}.

Since (a_{n})_{n=1}^{\infty} is a monotonically non-decreasing sequence bounded above, it must converge. Suppose it converges to \alpha. Similarly, (b_{n})_{n=1}^{\infty} is monotonically non-increasing, bounded below and therefore converges to, say, \beta. We further note that b_{n}-a_{n}=\frac{x_{2}-x_{1}}{2^{n}} \rightarrow 0 as n \rightarrow \infty implying \alpha=\beta. Let us call this x_{0}. By the continuity of g, we have \lim_{n \rightarrow \infty}g(a_{n})=g(x_{0})=\lim_{n \rightarrow \infty}g(b_{n}), and since g(a_{n})>0 for all n, we must have g(x_{0}) \geq 0 and at the same time since g(b_{n})<0 for all n, we must also have g(x_{0}) \leq 0. This implies g(x_{0})=0. QED.

Corollary. 

If f is a continuous function in an interval I and f(a)f(b)<0 for some a,b \in I, then there is a point c between a and b for which f(c)=0. (Exercise).

The above result is often used to locate the roots of equations of the form f(x)=0.

For example, consider the equation: f(x) \equiv x^{3}+x-1=0.

Note that f(0)=-1 whereas f(1)=1. This shows that the above equation has a root between 0 and 1. Now try 0.5: f(0.5)=-0.375, so there must be a root of the equation between 0.5 and 1. Try 0.75: f(0.75)>0, which means that the root is between 0.5 and 0.75. So we may try 0.625: f(0.625)<0, so the root is between 0.625 and 0.75. Now, if we take the approximate root to be 0.6875, then we are away from the exact root by at most a distance of 0.0625. If we continue this process further, we shall get better and better approximations to the root of the equation.
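The halving procedure above is just the bisection method. Here is a minimal sketch of it in Python (the function name and tolerance are my own choices), applied to x^{3}+x-1 on [0,1] as in the text:

def bisect(f, lo, hi, tol=1e-4):
    # assumes f(lo) and f(hi) have opposite signs, as the corollary requires
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(bisect(lambda x: x ** 3 + x - 1, 0.0, 1.0))   # about 0.6823

The same sketch, applied to f(x)=x^{3}-10 on [2,3] with a smaller tolerance, can be used for the exercise below.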

Exercise. 

Find the cube root of 10 using the above method, correct to four decimal places.

More later,

Nalin Pithwa

A differentiation identity — Thanks to Prof Terence Tao

https://terrytao.wordpress.com/2015/05/30/a-differentiation-identity

Regards,

Nalin Pithwa

Announcement: A Full Scholarship Program

We are Mathematics Hothouse, Bangalore (http://www.mathothouse.com). We are pleased to announce that henceforth, every academic year, we will be admitting 5 students, from any part of India, who are talented, deserving, or needy, to our program of RMO and INMO coaching with a full scholarship (100% discount). The coaching will be delivered through live, interactive online video (Skype) sessions mimicking a traditional classroom, through regular classroom teaching, or through a correspondence course.

If you wish to apply, please write to mathhothouse01@gmail.com

Regards,

Nalin Pithwa

Some properties of continuous functions: continued

We now turn to the question whether f(x_{n}) approximates f(x) when x_{n} approximates x. This is the same as asking: if (x_{n})_{n=1}^{\infty} is a sequence of real numbers converging to x, does (f(x_{n}))_{n=1}^{\infty} converge to f(x)?

Theorem.

A function f: \Re \rightarrow \Re is continuous at x \in \Re if and only if (f(x_{n}))_{n=1}^{\infty} converges to f(x) whenever (x_{n})_{n=1}^{\infty} converges to x, that is,

\lim_{n \rightarrow \infty}f(x_{n})=f(\lim_{n \rightarrow \infty} x_{n}).

Proof.

Suppose f is continuous at x and (x_{n})_{n=1}^{\infty} converges to x. By continuity, for every \varepsilon > 0, there exists \delta >0 such that |f(x)-f(y)|<\varepsilon whenever |x-y|<\delta. Since (x_{n})_{n=1}^{\infty} converges to x, for this \delta >0 we can find an n_{0} such that |x-x_{n}|<\delta for n > n_{0}.

So |f(x)-f(x_{n})|<\varepsilon for n > n_{0} as |x_{n}-x|<\delta.

Conversely, suppose (f(x_{n}))_{n=1}^{\infty} converges to f(x) whenever (x_{n})_{n=1}^{\infty} converges to x. We have to show that f is continuous at x. Suppose f is not continuous at x. That is to say, there is an \varepsilon >0 such that, however small a \delta we may choose, there will be a y satisfying |x-y|<\delta yet |f(x)-f(y)|\geq \varepsilon. So, for every n, let x_{n} be such a number for which |x-x_{n}|<(1/n) and |f(x)-f(x_{n})| \geq \varepsilon. Now we see that the sequence (x_{n})_{n=1}^{\infty} converges to x, but (f(x_{n}))_{n=1}^{\infty} does not converge to f(x), contradicting our hypothesis. So f must be continuous at x. QED.
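A small numerical illustration of the theorem, with f(x)=x^{2} and x_{n}=1+1/n as my sample choices:

def f(x):
    return x * x

for n in (10, 100, 1000, 10000):
    xn = 1 + 1 / n
    print(n, f(xn))   # tends to f(1) = 1, as the theorem predicts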

More later,

Nalin Pithwa


The Sieve — elementary combinatorial applications

One powerful tool in the theory of enumeration, as well as in prime number theory, is the inclusion-exclusion principle (sieve of Eratosthenes). This relates the cardinality of the union of certain sets to the cardinalities of the intersections of some of them, these latter cardinalities often being easier to handle. However, the formula does have some handicaps: it contains terms alternating in sign, and in general it has too many of them!

A natural setting for the sieve is in the language of probability theory. Of course, this only means a division by the cardinality of the underlying set, but it has the advantage that independence of occurring events can be defined. Situations in which events are almost independent are extremely important in number theory and also arise in certain combinatorial applications. Number theorists have developed ingenious methods to estimate the formula when the events (usually divisibility by certain primes) are almost independent. We give here the combinatorial background of some of these methods. Their actual use, however, rests upon complicated number theoretic considerations which are here illustrated only by two problems.

It should be emphasized that the sieve formula has many applications in quite different situations.

A beautiful general theory of inclusion-exclusion, usually referred to as the theory of the Möbius function, is due to L. Weisner, P. Hall, and G. C. Rota.

Question 1: In a high school class of 30 pupils, 12 pupils like mathematics, 14 like physics, and 13 like chemistry; 5 pupils like both mathematics and physics, 7 like both physics and chemistry, and 4 like both mathematics and chemistry. There are 3 who like all three subjects. How many pupils do not like any of them?

Question 2: (a) The Sieve Formula: 

Let A_{1}, \ldots, A_{n} be arbitrary events of a probability space (\Omega, P). For each

I \subseteq \{ 1, \ldots , n\}, let

A_{I}= \prod_{i\in I}A_{i}, A_{\phi}=\Omega

and let \sigma_{k}=\sum_{|I|=k}P(A_{I}), \sigma_{0}=1

Then, P(A_{1}+\ldots + A_{n})=\sum_{j=1}^{n}(-1)^{j-1}\sigma_{j}

Question 2: (b) (Inclusion-Exclusion Formula)

Let A_{1}, \ldots, A_{n} \subseteq S, where S is a finite set, and let

A_{I}=\bigcap_{i \in I}A_{i}, A_{\phi}=S. Then,

|S-(A_{1}\cup \ldots \cup A_{n})|=\sum_{I \subseteq \{ 1, \ldots, n\}}(-1)^{|I|}|A_{I}|

Hints:

1) The number of pupils who like mathematics or physics is not 12+14. By how much is 26 too large?

2) Determine the contribution of any atom of the Boolean algebra generated by A_{1}, \ldots, A_{n} on each side.

Solutions.

1) Let us subtract from 30 the number of pupils who like mathematics, physics, chemistry, respectively:

30-12-14-13.

This way, however, a pupil who likes both mathematics and physics is subtracted twice; so we have to add such pupils back, and similarly for the other pairs of subjects:

30-12-14-13+5+7+4.

There is still trouble with those who like all three subjects. They were subtracted 3 times, then added back 3 times, so we have to subtract them once more to get the result:

30-12-14-13+5+7+4-3=4.
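The same count, written out as code (a sketch, with the class sizes from Question 1 hard-coded):

total, m, p, c = 30, 12, 14, 13
mp, pc, mc, mpc = 5, 7, 4, 3
none = total - (m + p + c - mp - pc - mc + mpc)   # inclusion-exclusion
print(none)   # 4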

2) (a) Let B=A_{1}A_{2}\ldots A_{k}\overline{A_{k+1}}\ldots \overline{A_{n}}

be any atom of the Boolean algebra generated by A_{1}, A_{2}, \ldots, A_{n} (with an appropriate choice of indices, every atom has such a form). Every event in the formula is the union of certain (disjoint) atoms; let us express each P(A_{I}) and P(A_{1}+A_{2}+\ldots +A_{n}) as the sum of the probabilities of the corresponding atoms. We show that the probability of any given atom has the same coefficient on both sides.

The coefficient of P(B) on the left hand side is

1 if k \neq 0, and 0 if k=0.

B occurs in A_{I} if and only if I \subseteq \{ 1, \ldots, k\}, so its coefficient on the right hand side is

\sum_{j=1}^{k}\binom{k}{j}(-1)^{j-1}=1-(1-1)^{k}, which equals 1 if k \neq 0, and 0 if k=0.

Thus, P(B) has the same coefficient on both sides, which proves part (a).

Solution (b):

Choose an element x of S uniformly at random. Then, A_{i} can be identified with the event that

x \in A_{i}, and we have

P(A_{i})=\frac{|A_{i}|}{|S|}

So, we have, by the above,

P(A_{1}+A_{2}+\ldots + A_{n})=\sum_{j=1}^{n}(-1)^{j-1}\sum_{|I|=j}\frac{|A_{I}|}{|S|}=\sum_{\phi \neq I \subseteq \{ 1, \ldots, n\} }(-1)^{|I|-1}\frac{|A_{I}|}{|S|}, or equivalently,

P(\overline{A_{1}}\ldots \overline{A_{n}})=1-P(A_{1}+\ldots + A_{n}), which in turn equals

1- \sum_{\phi \neq I \subseteq \{ 1, \ldots , n\}}(-1)^{|I|-1}\frac{|A_{I}|}{|S|}.

The assertion (b) follows on multiplying by |S|.

More later,

Nalin Pithwa

Jensen’s inequality and trigonometry

The problem of maximizing \cos{A}+\cos{B}+\cos{C} subject to the constraints A \geq 0, B \geq 0, C \geq 0, and A+B+C=\pi can be done if, instead of the AM-GM inequality, we use a stronger inequality, called Jensen's inequality. It is stated as follows:

Theorem. 

Suppose h(x) is a twice differentiable, real-valued function on an interval [a,b] and that h^{''}(x)>0 for all a<x<b. Then, for every positive integer m and for all points x_{1}, x_{2}, \ldots x_{m} in [a,b], we have

h(\frac{x_{1}+x_{2}+\ldots+x_{m}}{m}) \leq \frac{h(x_{1})+h(x_{2})+h(x_{3})+\ldots+h(x_{m})}{m}

Moreover, equality holds if and only if x_{1}=x_{2}=\ldots=x_{m}. A similar result holds if

h^{''}(x)<0 for all a<x<b except that the inequality sign is reversed.

What this means is that the value assumed by the function h at the arithmetic mean of a given set of points in the interval [a,b] cannot exceed the arithmetic mean of the values assumed by h at these points. More compactly, the value at a mean is at most the mean of values if h^{''} is positive in the open interval (a,b), and the value at a mean is at least the mean of values if h^{''} is negative on it. (Note that h^{''} is allowed to vanish at one or both of the end-points of the interval [a,b].)

A special case of Jensen's inequality is the AM-GM inequality (take h(x)=-\log{x}, for which h^{''}(x)=1/x^{2}>0).

Jensen’s inequality can also be used to give easier proofs of certain other trigonometric inequalities whose direct proofs are either difficult or clumsy. For example, applying Jensen’s inequality to the function h(x)=\sin{x} on the interval [0,\pi] one gets the following result. (IITJEE 1997)

If n is a positive integer and 0<A_{i}<\pi for i=1,2,\ldots, n, then

\sin{A_{1}}+\sin{A_{2}}+\ldots+\sin{A_{n}} \leq n \sin{(\frac{A_{1}+A_{2}+\ldots+A_{n}}{n})}.
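A randomized numerical check of this inequality (the seed and sample size below are arbitrary choices); since h(x)=\sin{x} has h^{''}(x)=-\sin{x}<0 on (0,\pi), the inequality sign is reversed relative to the convex case:

import math, random

random.seed(1)
n = 6
A = [random.uniform(0.01, math.pi - 0.01) for _ in range(n)]
lhs = sum(math.sin(a) for a in A)
rhs = n * math.sin(sum(A) / n)
print(lhs, rhs, lhs <= rhs)   # expect True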

More later,

Nalin Pithwa