Tag Archives: trigonometry

Graphs of trig raised to trig

Question: Consider the function

y=f(x)=x^{x}. Can you graph it? It is a variable raised to a variable. Send me your observations.
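Before attempting the graph, you may like to tabulate the function numerically. The sketch below (the sample points and range are arbitrary choices of mine) uses the fact that x^{x}=e^{x\ln{x}} is real-valued for x>0:

```python
import math

# y = x^x = exp(x * ln x): real-valued for x > 0
xs = [0.1 * k for k in range(1, 31)]   # sample points in (0, 3]
ys = [x ** x for x in xs]
for x, y in zip(xs, ys):
    print(f"x = {x:4.1f}   x^x = {y:10.4f}")

# Calculus gives a minimum at x = 1/e, with value e^(-1/e), roughly 0.69
x_min = 1 / math.e
print("minimum at x =", x_min, "value =", x_min ** x_min)
```

Plotting these values should already suggest the characteristic dip of x^{x} just to the left of x=1.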

Now, consider the functions:

(\tan \theta)^{\tan \theta}, (\tan \theta)^{\cot \theta},

(\cot \theta)^{\tan {\theta}}, (\cot \theta)^{\cot \theta}.

Can you graph these? What is the difference between these and the earlier generalized case?

Now, consider the function:

Let 0^{\circ} < \theta < 45^{\circ}.

Arrange t_{1}=(\tan \theta)^{\tan \theta}, t_{2}=(\tan \theta)^{\cot \theta}

t_{3}=(\cot \theta)^{\tan \theta} and t_{4}=(\cot \theta)^{\cot \theta}

in decreasing order.
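If you would like to check your ordering numerically after working it out, here is a short sketch (the angle 30^{\circ} is just a sample choice within the given interval):

```python
import math

theta = math.radians(30)   # any sample angle with 0 < theta < 45 degrees
t = math.tan(theta)        # tan(theta) lies in (0, 1) on this interval
c = 1 / t                  # cot(theta) is greater than 1

t1 = t ** t   # (tan theta)^(tan theta)
t2 = t ** c   # (tan theta)^(cot theta)
t3 = c ** t   # (cot theta)^(tan theta)
t4 = c ** c   # (cot theta)^(cot theta)

ordered = sorted([("t1", t1), ("t2", t2), ("t3", t3), ("t4", t4)],
                 key=lambda p: p[1], reverse=True)
print(ordered)
```

Try a few other angles in the interval and see whether the ordering changes.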

Kindly send your comments/observations.

More later,

Nalin Pithwa

De Moivre’s Theorem application


If f_{r}(\alpha)=(\cos{\frac{\alpha}{r^{2}}}+i\sin{\frac{\alpha}{r^{2}}}) \times (\cos{\frac{2\alpha}{r^{2}}}+i\sin{\frac{2\alpha}{r^{2}}}) \ldots (\cos{\frac{\alpha}{r}}+i\sin{\frac{\alpha}{r}}), then

\lim_{n \rightarrow \infty}f_{n}(\pi) equals

(a) -1

(b) 1

(c) -i

(d) i


Using De Moivre’s theorem,

f_{r}(\alpha)=e^{i\frac{\alpha}{r^{2}}}e^{i\frac{2\alpha}{r^{2}}}\ldots e^{i\frac{\alpha}{r}}

which in turn equals e^{(i \frac{\alpha}{r^{2}})(1+2+\ldots+r)}=e^{(i\frac{\alpha}{r^{2}})(\frac{r(r+1)}{2})}=e^{i(\frac{\alpha}{2})(1+\frac{1}{r})}

Hence, \lim_{n \rightarrow \infty}f_{n}(\pi)=\lim_{n \rightarrow \infty}e^{(i)(\frac{\pi}{2})(1+\frac{1}{n})}=e^{i(\frac{\pi}{2})}=\cos{\frac{\pi}{2}}+i\sin{\frac{\pi}{2}}=i. So, option (d) is correct.
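You can also confirm the limit numerically by multiplying out the factors of f_{n}(\pi) for a large n (the choice n = 10^{4} below is arbitrary):

```python
import cmath
import math

def f(n, alpha):
    # product of cos(k*alpha/n^2) + i*sin(k*alpha/n^2) for k = 1..n,
    # each factor written as exp(i*k*alpha/n^2) via De Moivre's theorem
    z = 1
    for k in range(1, n + 1):
        z *= cmath.exp(1j * k * alpha / n ** 2)
    return z

# As n grows, f_n(pi) should approach exp(i*pi/2) = i
approx = f(10 ** 4, math.pi)
print(approx)  # very close to 1j
```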

More complex stuff to be continued in next blog (pun intended) 🙂

Nalin Pithwa

Quick Review of Trigonometric Optimization Methods

Let us review together the four general methods we can use for triangular optimization.

I) Trigonometric Method: 

The essence of this method is the observation that the cosine of an angle is at most one and that it equals 1 only when the angle is zero. This fact is applied to the difference between two of the angles A, B and C, holding the third angle fixed, to show that unless those two angles are equal, the objective function can be increased (or decreased, as the case may be). Consequently, at an optimal solution, these two angles must be equal. If the objective function is symmetric (as is the case in almost all problems of triangular optimization), then every two of A, B and C must be equal to each other and hence the triangle ABC must be equilateral.

This method is elementary and easy to apply. Even when the objective function is only partially symmetric, that is, symmetric in two but not in all three variables, it can be applied to those two variables, holding the third variable fixed. Suppose, for example, that we want to maximize f(A,B,C)=\cos{A}+2\cos{B}+\cos{C}. This is symmetric in A and C. So, by the same reasoning as for maximizing \cos{A}+\cos{B}+\cos{C}, at an optimal solution we must have A=C. Then, B=\pi-2A, which makes f effectively a function of just one variable, viz., \cos{A}+2\cos(\pi-2A)+\cos{A}, which equals 2\cos{A}-4\cos^{2}{A}+2. This can be maximized as a quadratic in \cos{A}, either by completing the square or by using calculus. The maximum occurs when A=\cos^{-1}{(1/4)}, giving the value 2(1/4)-4(1/4)^{2}+2=9/4. Thus, the maximum value of \cos{A}+2\cos{B}+\cos{C} for a triangle ABC is 9/4.

The method, of course, fails if the function is not even partially symmetric. This is not surprising. Basically, in a triangular optimization problem, we are dealing with a function f(A,B,C) of three variables. Because of the constraint A+B+C=\pi, any one of the variables can be expressed in terms of the other two. This effectively makes f a function of two variables. Optimization of functions of several variables requires advanced methods. It is only when f satisfies some other condition, such as partial symmetry, that we can hope to reduce the number of variables further so that elementary methods can be applied.
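As a quick sanity check on the value 9/4, here is a brute-force sketch over a grid of triangles (the grid resolution is an arbitrary choice):

```python
import math

# Maximize f(A, B, C) = cos A + 2 cos B + cos C over triangles A + B + C = pi
# by brute force over a grid in (A, B), with C = pi - A - B.
n = 1000
best = -float("inf")
for i in range(1, n):
    for j in range(1, n - i):
        A = math.pi * i / n
        B = math.pi * j / n
        C = math.pi - A - B
        best = max(best, math.cos(A) + 2 * math.cos(B) + math.cos(C))

print(best)  # approaches 9/4 = 2.25
```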

II) Algebraic Method: 

The essence of this method is to reduce the optimization problem to some inequality using suitable trigonometric formulae or identities. The inequality is then established using some standard inequality such as the AM-GM-HM inequality, or Jensen’s inequality, or sometimes, by doing some more basic work. The fundamental ideas are very simple, viz., (a) the square of any real number is non-negative and is zero only when that number is zero, and (b) the sum of two or more non-negative numbers is non-negative and vanishes if and only if each of the terms is zero. When this method works, it works elegantly. But it is not always easy to come up with the right algebraic manipulations. Sometimes, certain simplifying substitutions have to be used. Still, it is an elementary method and deserves to be tried.

III) Jensen’s inequality:

This is a relatively advanced method. It is directly applicable when the objective function is, or can be recast, in a certain form, viz., h(A)+h(B)+h(C), where h is a function of one variable whose second derivative maintains the same sign over a suitable interval. But, even when the objective function fails to have this form, the method can sometimes be applied after a suitable conversion of the problem.

IV) Lagrange’s Multipliers:

This is a highly advanced method based on the calculus of functions of several variables. It is applicable to all types of objective functions, not just those that are symmetric or partially symmetric. When applied to triangular optimization problems with symmetric objective functions, the optimal solution is either degenerate or an equilateral triangle.

Naturally, for a particular given problem, some of these methods may work better than others. The method of Lagrange’s multipliers is the surest but the most mechanical of all four. The algebraic method is artistic and sometimes gives the answer very fast. Jensen’s inequality also works fast once you are able to cast the objective function in a certain form. Such a recasting may involve some ingenuity. The trouble is that both these methods work only for an optimization problem whose objective function is symmetric. And, in such cases, the method of Lagrange’s multipliers makes mincemeat of the problem. From an examination point of view, this is a boon if a question about triangular optimization is asked in a “fill in the blanks” or “multiple choice” form, where you don’t have to show any reasoning. If the objective function is symmetric, then the optimal solution is either degenerate or an equilateral triangle. But degenerate triangles are often excluded from the very definition of a triangle, because of the requirement that the three vertices of a triangle must be distinct and non-collinear, and, in any case, such absurdities are unlikely to be asked in an examination! So it is a safe bet to simply assume that the optimal solution is an equilateral triangle and proceed with further work (namely, calculating the value of the objective function for an equilateral triangle). This saves you a lot of time.

More later,

Nalin Pithwa


Jensen’s inequality and trigonometry

The problem of maximizing \cos{A}+\cos{B}+\cos{C} subject to the constraints A \geq 0, B \geq 0, C \geq 0 and A+B+C=\pi can be done if, instead of the AM-GM inequality, we use a stronger inequality, called Jensen’s inequality. It is stated as follows:


Suppose h(x) is a twice differentiable, real-valued function on an interval [a,b] and that h^{''}(x)>0 for all a<x<b. Then, for every positive integer m and for all points x_{1}, x_{2}, \ldots x_{m} in [a,b], we have

h(\frac{x_{1}+x_{2}+\ldots+x_{m}}{m}) \leq \frac{h(x_{1})+h(x_{2})+h(x_{3})+\ldots+h(x_{m})}{m}

Moreover, equality holds if and only if x_{1}=x_{2}=\ldots=x_{m}. A similar result holds if

h^{''}(x)<0 for all a<x<b except that the inequality sign is reversed.

What this means is that the value assumed by the function h at the arithmetic mean of a given set of points in the interval [a,b] cannot exceed the arithmetic mean of the values assumed by h at these points. More compactly, the value at a mean is at most the mean of values if h^{''} is positive in the open interval (a,b), and the value at a mean is at least the mean of values if h^{''} is negative on it. (Note that h^{''} is allowed to vanish at one or both of the end-points of the interval [a,b].)

A special case of Jensen’s inequality is the AM-GM inequality: apply it to h(x)=-\log{x} on an interval of positive numbers, for which h^{''}(x)=1/x^{2}>0, and exponentiate.

Jensen’s inequality can also be used to give easier proofs of certain other trigonometric inequalities whose direct proofs are either difficult or clumsy. For example, applying Jensen’s inequality to the function h(x)=\sin{x} on the interval [0,\pi] one gets the following result. (IITJEE 1997)

If n is a positive integer and 0<A_{i}<\pi for i=1,2,\ldots, n, then

\sin{A_{1}}+\sin{A_{2}}+\ldots+\sin{A_{n}} \leq n \sin{(\frac{A_{1}+A_{2}+\ldots+A_{n}}{n})}.
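A numerical spot check of this inequality over random angle sets (a sketch, not a proof; the sample sizes are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    angles = [random.uniform(1e-6, math.pi - 1e-6) for _ in range(n)]
    lhs = sum(math.sin(a) for a in angles)
    rhs = n * math.sin(sum(angles) / n)
    # Jensen for sin (concave on (0, pi)): mean of values <= value at mean
    assert lhs <= rhs + 1e-12
print("all checks passed")
```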

More later,

Nalin Pithwa

Trigonometric Optimization continued

Prove that in any acute-angled triangle ABC, \tan {A}+\tan{B}+\tan{C} \geq 3\sqrt{3} with equality holding if and only if the triangle is equilateral. (IITJEE 1998)


Suggestion: Try this without reading further! It looks complicated, but need not be so!! Then, after you have attempted whole-heartedly, compare your solution with the one below.

The solution to the above problem is based on the well-known identity:

\tan{A}+\tan{B}+\tan{C}=\tan{A}\tan{B}\tan{C}. For brevity, denote \tan{A}, \tan{B}, \tan{C}

by x, y and z respectively. As ABC is acute-angled, x, y and z are all positive, so the AM-GM inequality, which says x+y+z \geq 3(xyz)^{1/3}, can be applied. Cubing both sides gives (x+y+z)^{3} \geq 27xyz, and by the identity above 27xyz=27(x+y+z). Cancelling the positive factor x+y+z gives (x+y+z)^{2} \geq 27, and taking square roots yields the desired inequality x+y+z \geq 3\sqrt{3}. If equality is to hold, then it must also hold in the AM-GM inequality, which can happen if and only if

x=y=z, that is, if and only if the triangle is equilateral.
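A numerical spot check over random acute triangles (a sketch; the sampling scheme is an arbitrary choice of mine):

```python
import math
import random

random.seed(1)
min_sum = float("inf")
for _ in range(10000):
    # sample an acute triangle: all three angles strictly inside (0, pi/2)
    while True:
        A = random.uniform(0.01, math.pi / 2 - 0.01)
        B = random.uniform(0.01, math.pi / 2 - 0.01)
        C = math.pi - A - B
        if 0.01 < C < math.pi / 2 - 0.01:
            break
    s = math.tan(A) + math.tan(B) + math.tan(C)
    min_sum = min(min_sum, s)

print(min_sum)  # never below 3*sqrt(3), about 5.196
```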

Still, this approach requires some caution. Actually, there are so many trigonometric identities that there is no unanimity as to which ones among them are standard enough to be assumed without proof!! But, of course, the IITJEE examinations, both Mains and Advanced, are multiple choice only.

More later,

Nalin Pithwa