Monthly Archives: August 2014

A motivation for Math and some Math competitive exams in India

Some time back, there was tremendous publicity in the Indian media about two prize winners of Indian origin at the ICM — a Fields Medallist and a Nevanlinna Prize winner. They also talked about what motivated them towards Math when they were young. One should not take up Math merely lured by its glamorous applications in IT or other engineering disciplines. But, one can develop both aptitude and attitude towards it if one works at it from a young age.

What you need is intrinsic motivation. In this context, I would like to quote the following words of a famous mathematician:

“And, a final observation. We should not forget that the solution to any worthwhile problem very rarely comes to us easily and without hard work; it is rather the result of intellectual effort of days or weeks or months. Why should the young mind be willing to make this supreme effort? The explanation is probably the instinctive preference for certain values, that is, the attitude which rates intellectual effort and spiritual achievement higher than material advantage. Such valuation can only be the result of long cultural development of environment and public spirit which is difficult to accelerate by the governmental aid or even by more intensive training in mathematics. The most effective means may consist of transmitting to the young mind the beauty of intellectual work and the feeling of satisfaction following a great and successful mental effort.” — Gabor Szego

For those of you who are willing to work hard, the following competitive exams (in Math) are available at various levels of your school-life:

1) National Talent Search Exam (NTSE) — standard 10 and 12

2) IMO and NSO of the Science Olympiad Foundation (http://www.sofworld.org ). These exams can be attempted by students of almost all standards.

3) IPM: Institute for Promotion of Mathematics exams.

4) KVPY: Kishore Vaigyanik Protsahan Yojana: This can be attempted by students of 11th, 12th and F.Y.B.Sc. It can lead to a seat in the pure sciences, including Math, at the prestigious IISc, Bangalore.

5) Regional Math Olympiad (RMO) and Indian National Math Olympiad (INMO): These exams are really intense. Students from 9th to 11th standard can attempt them. They are conducted by the Homi Bhabha Centre for Science Education (HBCSE), Tata Institute of Fundamental Research (TIFR).

More later…

Nalin Pithwa

 

Polynomials — quartics

Let us continue our exploration of polynomials. Just as in the previous blog, let me present to you an outline of some methods to solve quartics. As I said earlier, “filling up the gaps” will kindle your intellect. Above all, the main aim of all teaching is to teach one “to think on one’s own”.

I) The Quartic Equation. Descartes’s Method (1637).

(a) Argue that any quartic equation can be solved once one has a method to handle quartic equations of  the form:

t^{4}+pt^{2}+qt+r=0

(b) Show that the quartic polynomial in (a) can be written as the product of two factors

(t^{2}+ut+v)(t^{2}-ut+w)

where u, v, w satisfy the simultaneous system

v + w - u^{2}=p

u(w-v)=q

vw=r

Eliminate v and w to obtain a cubic equation in u^{2}.

(c) Show how any solution u obtained in (b) can be used to find all the roots of the quartic equation.

(d) Use Descartes’s Method to solve the following:

t^{4}+t^{2}+4t-3=0

t^{4}-2t^{2}+8t-3=0
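(A quick numerical check, not part of the original exercise: the sketch below, assuming Python with numpy, carries out steps (a)–(c). Eliminating v and w from the system in (b) gives the resolvent cubic U^{3}+2pU^{2}+(p^{2}-4r)U-q^{2}=0 in U=u^{2}.)

```python
import numpy as np

def solve_depressed_quartic(p, q, r):
    """Solve t^4 + p t^2 + q t + r = 0 by Descartes's factorisation
    (t^2 + u t + v)(t^2 - u t + w).  Eliminating v and w from the
    system in (b) gives the resolvent cubic in U = u^2:
        U^3 + 2p U^2 + (p^2 - 4r) U - q^2 = 0."""
    # any nonzero root U of the resolvent cubic will do
    U = next(z for z in np.roots([1, 2*p, p**2 - 4*r, -q**2]) if abs(z) > 1e-9)
    u = np.sqrt(U)
    v = (p + U - q/u) / 2   # from v + w - u^2 = p and u(w - v) = q
    w = (p + U + q/u) / 2
    # the four roots of the quartic come from the two quadratic factors
    return np.concatenate([np.roots([1, u, v]), np.roots([1, -u, w])])

roots = solve_depressed_quartic(1, 4, -3)   # t^4 + t^2 + 4t - 3 = 0
```

For t^{4}+t^{2}+4t-3, the resolvent cubic has the root U=1, giving the factorisation (t^{2}+t-1)(t^{2}-t+3).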

II) The Quartic Equation. Ferrari’s Method:

(a) Let a quartic equation be presented in the form:

t^{4}+2pt^{3}+qt^{2}+2rt+s=0

The strategy is to complete the square on the left side in such a way as to incorporate the cubic term. Show that the equation can be rewritten in the form

(t^{2}+pt+u)^{2}=(p^{2}-q+2u)t^{2}+2(pu-r)t+(u^{2}-s)

where u is indeterminate.

(b) Show that the right side of the transformed equation in (a) is the square of  a linear polynomial if u satisfies a certain cubic equation. Explain how such a value of u can be used to completely solve the quartic.

(c) Use Ferrari’s Method to solve the following:

t^{4}+t^{2}+4t-3=0

t^{4}-2t^{3}-5t^{2}+10t-3=0.
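(Again as a check, and again my own sketch rather than part of the exercise, assuming Python with numpy: expanding the perfect-square condition in (b) gives a cubic in u, after which both sides of the equation in (a) are squares.)

```python
import numpy as np

def solve_quartic_ferrari(p, q, r, s):
    """Solve t^4 + 2p t^3 + q t^2 + 2r t + s = 0 by Ferrari's method.
    The right side of the transformed equation is a perfect square
    exactly when its discriminant vanishes, i.e. when u satisfies
        2u^3 - q u^2 + 2(pr - s) u + (qs - p^2 s - r^2) = 0."""
    cubic = [2, -q, 2*(p*r - s), q*s - p*p*s - r*r]
    # pick a root u keeping the leading coefficient p^2 - q + 2u nonzero
    u = next(z for z in np.roots(cubic) if abs(p*p - q + 2*z) > 1e-9)
    A = np.sqrt(p*p - q + 2*u)       # the right side becomes (A t + B)^2
    B = (p*u - r) / A
    # (t^2 + p t + u)^2 = (A t + B)^2 splits into two quadratics
    return np.concatenate([np.roots([1, p - A, u - B]),
                           np.roots([1, p + A, u + B])])

roots = solve_quartic_ferrari(0, 1, 2, -3)   # t^4 + t^2 + 4t - 3 = 0
```

For the first practice equation (p=0, q=1, r=2, s=-3), the cubic 2u^{3}-u^{2}+6u-7=0 has the root u=1, and the right side becomes (t-2)^{2}.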

Happy problem-solving 🙂

Do leave your comments, questions, suggestions…!

More later…

Nalin

Polynomials — commuting and cubics

Let us start delving deeper in Algebra. But, I will be providing only an outline to you in the present article. I encourage you to fill in the  details. This is a well-known way to develop mathematical aptitude/thinking. (This method of  learning works even in Physics and esoteric/hardcore programming).

Definition. Commuting polynomials. Two polynomials p and q are said to commute under composition if and only if (p \circ q)(t)=(q \circ p)(t)

(i.e., p(q(t))=q(p(t))). We define the composition powers of a polynomial as follows:

p^{(2)}(t)=p(p(t))

p^{(3)}(t)=p(p(p(t)))

and in general, p^{(k)}(t)=p(p^{(k-1)}(t)) for k=2,3, \ldots

Show that any two composition powers of the same polynomial commute with each other.
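(Before you prove it, it is easy to convince yourself numerically. A small sketch in plain Python, with a sample polynomial of my own choosing: composing p^{(2)} with p^{(3)} in either order just applies p five times.)

```python
def compose(p, q):
    """Return the composition p ∘ q of two (polynomial) functions."""
    return lambda t: p(q(t))

def comp_power(p, k):
    """The k-th composition power p^(k): p applied k times."""
    f = p
    for _ in range(k - 1):
        f = compose(f, p)
    return f

p = lambda t: t**2 + 1   # a sample polynomial; any would do
lhs = compose(comp_power(p, 2), comp_power(p, 3))
rhs = compose(comp_power(p, 3), comp_power(p, 2))
assert all(lhs(t) == rhs(t) for t in range(-5, 6))   # both are p^(5)
```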

One might ask whether two commuting polynomials must be composition powers of the same polynomial. The answer is no. Show that any pair of polynomials in the following two sets commute:

I. {t^{n}: n=1,2 \ldots}

II. {T_{n}(t): n=1,2 \ldots}, where T_{n} denotes the Chebyshev polynomial of degree n, defined by T_{n}(\cos \theta)=\cos (n\theta).

Let a and b be any constants with a not equal to zero. Show that, if p and q are two polynomials which commute under composition, then the polynomials 

(t/a-b/a) \circ p \circ (at+b) and (t/a-b/a) \circ q \circ (at+b) also commute under composition. Use this fact to find from sets I and II other families which commute under composition.

Can you find pairs of polynomials not  comprised in the foregoing discussion which commute under composition? Find families of polynomials which commute under composition and within which  there is exactly one polynomial of each positive degree.

The Cubic Equation. Cardan’s Method. An elegant way to solve the general cubic is due to Cardan. The strategy is to replace an equation in one variable by one in two variables. This provides an extra degree of  freedom by which we can impose a convenient second constraint, allowing us to  reduce the problem to that of solving a quadratic.

(a) Suppose the given equation is t^{3}+pt+q=0. Set t=u+v and obtain the equation u^{3}+v^{3}+(3uv+p)(u+v)+q=0.

Impose the second condition 3uv+p=0 (why do we do  this?) and argue that we can obtain solutions for  the cubic by solving the system

u^{3}+v^{3}=-q

uv = -p/3

(b) Show that u^{3} and v^{3} are roots of  the quadratic equation

x^{2}+qx-p^{3}/27=0

(c) Let D=27q^{2}+4p^{3}. Suppose that p and q are both real and that D>0. Show that the quadratic in (b) has real solutions, and that if

u_{0} and v_{0} are the real cube roots of these solutions, then the system in (a) is satisfied by

(u,v)=(u_{0},v_{0}), (u_{0}\omega, v_{0}\omega^{2}), (u_{0}\omega^{2}, v_{0}\omega)

where \omega is the imaginary cube root (0.5)(-1+\sqrt{-3}) of  unity. Deduce that the cubic polynomial t^{3}+pt+q has one real and two nonreal zeros.

(d) Suppose that p and q are both real and that D=0. Let u_{0} be the real cube root of the solution of  the quadratic in (b). Show  that, in this case, the cubic has all its zeros real, and in fact can be written in the form

sy^{2}, where y=t+u_{0} and s=t-2u_{0}

(e) Suppose that p and q are both real and that D<0. Show that the solutions  of  the quadratic equation in (b) are nonreal complex conjugates, and that it is possible to choose cube roots u and v of  these solutions which are complex conjugates and satisfy the system in (a). If

u=r(\cos \theta + i\sin \theta) and v=r(\cos \theta - i\sin \theta), show that the three roots of the cubic equation are the reals

2r cos \theta

2r cos (\theta + (2/3)\pi)

2r cos(\theta + (4/3)\pi) .

(f) Prove that every cubic equation with real coefficients has at least one real root.

Use Cardan’s Method to solve the cubic equation.

(a) x^{3}-6x+9=0

(b) x^{3}-7x+6=0.

Part (b) above will require the use of a pocket calculator and some trigonometry. You will also need De Moivre’s Theorem, and you should give your solutions to an accuracy of 3 decimal places.
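(If you would rather let the machine do the trigonometry of part (e), here is a sketch in plain Python, my own illustration and confined to the three-real-roots case D < 0: u^{3} is taken as a nonreal root of the quadratic in (b), and the three roots come out as 2r\cos(\theta + 2k\pi/3).)

```python
import cmath, math

def cardan_three_real(p, q):
    """Real roots of t^3 + p t + q = 0 by Cardan's method in the case
    D = 27 q^2 + 4 p^3 < 0 (three distinct real roots, part (e))."""
    assert 27*q**2 + 4*p**3 < 0
    # u^3 is a (nonreal) root of x^2 + q x - p^3/27 = 0
    u3 = (-q + cmath.sqrt(q*q + 4*p**3/27)) / 2
    r = abs(u3) ** (1/3)             # |u| = sqrt(-p/3)
    theta = cmath.phase(u3) / 3
    # the three real roots 2r cos(theta + 2k*pi/3), k = 0, 1, 2
    return sorted(2*r*math.cos(theta + 2*k*math.pi/3) for k in range(3))

roots = cardan_three_real(-7, 6)   # x^3 - 7x + 6 = (x-1)(x-2)(x+3)
```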

More later…

-Nalin

Nevanlinna Prize to IITB alumnus Subhash Khot


R Ramachandran

The Hindu Aug 14 2014.

Award for his Unique Games Conjecture in Computational Complexity Theory

The 36 year-old IIT Bombay alumnus Subhash Khot, an Indian-American theoretical computer scientist at the Courant Institute of Mathematical Sciences of New York University, has been chosen for the International Mathematical Union’s Nevanlinna Prize, which is given “for outstanding contributions in mathematical aspects of information sciences”.

The award is given once every four years during the International Congress of Mathematicians (ICM). The ICM2014 began on August 13 at Seoul, Republic of Korea. Khot’s research has to do with a field in computer science called ‘Computational Complexity’, which seeks to understand the power and limits of efficient computation with standard computers.

“Khot’s prescient definition of the “Unique Games” problem,” said the award citation, “and his leadership in the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems, have produced breakthroughs in algorithmic design and approximation hardness, and new exciting interactions between computational complexity, analysis and geometry.”

Unlike the normal practice of awarding major prizes in mathematics for groundbreaking results, the Nevanlinna Prize this time is for a conjecture, and that too when opinion on its truth is divided within the world computer science community. But, by posing the right questions that have enabled great insights into the nature of computational complexity and approximations to computationally hard problems, the conjecture, called the Unique Games Conjecture (UGC), has already proved its great value. More pertinently, Khot has also used the conjecture to prove major results that will remain valid regardless of the truth of the UGC.

“I do believe whether it is eventually shown to be false or not changes little about the brilliance of the conjecture,” wrote Richard Lipton of Georgia Institute of Technology, Atlanta, in a blog post on the UGC. “It takes insight, creativity of the highest order, and a bit of ‘guts’ to make such a conjecture,” Lipton said.

The central question in computational complexity is: How hard are problems to solve? More precisely, if one has found the cleverest possible way to solve a particular problem, how fast will a computer find the answer using it? It is now a truism that some problems are so intractably difficult that computers cannot reliably find the answer at all, at least not in any reasonable amount of time (such as before the end of the universe).

A typical optimization problem, to quote an example given by Meena Mahajan, a theoretical computer scientist from the Institute of Mathematical Sciences, Chennai, is: What is the minimum number of tea stalls that should be put up in a large university campus so that no one needs to walk, say, more than 200 metres along a road to reach one? As the size of the campus increases, the computational time to find the minimum number of stalls grows exponentially fast, even with the best current algorithms.

This is what underlies the famous P ≠ NP conjecture, which is one of the seven $1 million Millennium Problems posed by the Clay Mathematics Institute (CMI). A problem is said to belong to the class P (where P stands for ‘polynomial time’) if it is tractable, that is, if the number of algorithmic steps its solution requires is at most some power of the problem size. A problem belongs to the class NP (NP stands for ‘non-deterministic polynomial time’) if the computer can efficiently verify a proposed solution for correctness but does not have the resources — time, number of algorithmic steps or memory, as a function of the “input size” of the problem — to obtain the solution.

Formulated by the American-Canadian computer scientist Stephen Cook in 1971, the conjecture states that these two classes are distinct. This means that there are computational problems whose solutions are beyond the reach of any computer algorithm. Most computer scientists believe that the conjecture is true, but even after four decades it is yet to be proved, though not for want of attempts.

Such “computationally intractable” or “NP-hard” problems have profound consequences. For instance, they limit our ability to tackle large-scale problems in science and engineering, such as optimal design of protein folding or figuring out the best design for a chip or the best train schedule. Conversely, however, computational intractability enables computer security against hackers attempting to access on-line confidential data.

So, computer scientists asked: If a problem is too hard to be computationally solved quickly and precisely, can we at least find a good approximation? “Counter-intuitive though it may seem,” Mahajan points out returning to her tea stall example, “while we do not know how to find the minimum efficiently, we can find a number that is no more than twice the minimum efficiently! That is, we can efficiently find an approximate solution. Unfortunately, there are many optimization problems for which even this may not be possible.”

The UGC essentially addresses the question of solving NP-hard problems even approximately. It thus complements the P ≠ NP conjecture. In the initial years after the P vs. NP conjecture was made, many computer scientists believed that good approximations must be easier than finding the exact answer to an NP-hard problem. But they soon discovered that, while they could come up with good approximation algorithms for some NP-hard problems (like our tea-stall problem), for most of them even finding a good approximation was not possible. There was no prescriptive way of determining whether approximation was possible. That is, approximation itself was an NP-hard problem.

The Unique Games problem, a remarkably simple problem, encapsulates the elements that make many hard problems hard to solve even approximately. The problem is simply about finding an efficient way of assigning colours to the nodes of a network such that any two connected nodes have different colours (Fig).

 


Figure. A Random Network

If one has only two colours (say yellow and green), the problem is easy. The problem becomes trickier even when you add just one more colour (say blue). When you colour the first node, say with Y, you don’t know what colour the connected nodes should have, G or B. If you choose one and get to a node that cannot be coloured without violating the condition, you have no way of knowing if a different selection would have solved the problem.

It is not the method that was faulty. In fact, no other method will be able to solve it reliably and efficiently. The problem is NP-hard, meaning effectively impossible. But Khot asked the related question: Which colouring scheme breaks the fewest rules possible? That is, which colouring is the best approximation. The conjecture basically is that if you have lots of colours, even an efficient method to colour the nodes anywhere close to the best one is impossible.
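(To see concretely why two colours are easy while three are not, here is a minimal sketch in Python, my own illustration rather than anything from the article: with two colours, each node's colour is forced by its already-coloured neighbours, so a simple breadth-first sweep settles the matter in linear time.)

```python
from collections import deque

def two_colour(adj):
    """2-colour a graph given as an adjacency list, or return None if
    it cannot be done.  Each node's colour is forced by its neighbours,
    so a breadth-first sweep decides the question in linear time."""
    colour = {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in colour:
                    colour[nb] = 1 - colour[node]
                    queue.append(nb)
                elif colour[nb] == colour[node]:
                    return None   # an odd cycle: not 2-colourable
    return colour

# a 4-cycle is 2-colourable; a triangle is not
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

No such forcing argument is available with three colours, which is where the hardness sets in.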

The UGC, which Khot enunciated in 2002, can be stated as follows: It is not just hard but impossible to reliably find an approximate answer to Unique Games quickly. That is, the problem is NP-hard even to solve approximately. Thus, if the conjecture is true, the Unique Games problem, in a technical sense, sets a benchmark for NP-hard problems.

“Khot’s work attempts to give a unified explanation for why so many problems seem hard to approximate,” points out Mahajan. “What makes this so wonderful is that if the UGC is true, it explains in one shot why a host of other problems have resisted solutions so far; they are all at least as hard as Unique Games. All the difficulties encountered in tackling many different optimization problems get distilled into one problem,” she adds.

A couple of years after Khot made his conjecture, computer scientists realized the real power and importance of the conjecture. They found that, if the UGC was indeed true, then they could set firm limits on how well many other problems could be approximated. For instance, in our tea-stall example, it turns out that twice the minimum is the best one can do under the assumption that P ≠ NP. If one tried to do better than a factor-of-2 approximation, say with a more sophisticated algorithm, it would imply that P = NP. The simple algorithm that gave the factor-of-2 approximation is the best one can do. According to the UGC, an efficient approximation for the Unique Games problem would imply P = NP.

Independent of its truth, the conjecture, however, has proved to be remarkably powerful. In the process of determining how well NP-hard problems could be approximated, Khot and others have proved several significant results in other areas, which seem far removed from computational complexity, such as geometry of different ways of measuring distances, some new theorems in Fourier analysis, better understanding algorithms based on linear and semi-definite programming and structure of ‘foams’. The last connection, which is essentially a tiling problem, came as a surprise even to Khot, according to Mahajan.

While there is a significant group of researchers working to prove the conjecture, there is an equally significant set working to disprove it. Although scientists are yet to find an algorithm that can efficiently find a good approximate solution to Unique Games, finding one such would mean a significant algorithmic breakthrough. Such a new approximation algorithm is most likely to be very different from the approximation algorithms that are known today. Indeed, the process has already thrown up some excellent new algorithmic methods for other situations. In any case, the UGC is likely to keep theoretical computer scientists busy for some years to come.

Mathematics wizard is an IIT Bombay alumnus


R Ramachandran

http://www.thehindu.com/news/cities/mumbai/mathematics-wizard-is-an-iitbombay-alumnus/article6314039.ece?ref=relatedNews

The Hindu Aug 14 2014.

One of the important contributions made by Fields Medal winner Manjul Bhargava is the generalisation of the ‘composition law’ of binary quadratics (polynomial expressions of the form ax^{2}+bxy+cy^{2}) discovered 200 years ago by Carl Friedrich Gauss (1777-1855), to higher degree polynomials using an ingenious geometric technique that he discovered.

The awards were announced at the inaugural of the nine-day International Congress of Mathematicians that began in Seoul on Wednesday.

The awards, presented at the quadrennial ICM event, include the Fields Medal, the highest award in mathematics; the Rolf Nevanlinna Prize and the Carl Friedrich Gauss Prize. At the last ICM held in Hyderabad, the Chern Medal and the Leelavati Prize were added.

The Fields Medal is awarded “to recognise outstanding mathematical achievement for existing work and for the promise of future achievement”. “Manjul Bhargava has developed powerful new methods in the geometry of numbers and applied them to count rings of small rank and to bound the average rank of elliptic curves,” said the medal citation.

Besides mathematics, Dr. Bhargava pursues his interests in linguistics and Indian classical music. The Indian-American theoretical computer scientist Subhash Khot, a theoretical computer scientist at the Courant Institute of Mathematical Sciences of New York University, gets the Rolf Nevanlinna Prize. The citation for him read: “Subhash Khot’s prescient definition of the Unique Games problem, and his leadership in the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems, have produced breakthroughs in algorithmic design and approximation hardness, and new exciting interactions between computational complexity, analysis and geometry.”

Born in Ichalkaranji in Maharashtra, Dr. Khot (36) an IIT Bombay alumnus, won the silver medal in the International Mathematics Olympiad in 1994 and 1995 and stood first in the IIT Joint Entrance Examination in 1995. His area of research is Computational Complexity Theory. His Unique Games Conjecture is about the impossibility of even obtaining good approximations to problems that are computationally hard to solve using standard computing algorithms.

 

Major Mathematics awards to two Indian origin scientists

http://www.thehindu.com/sci-tech/science/major-mathematics-awards-to-two-indian-origin-scientists/article6309293.ece?ref=relatedNews

The Hindu Aug 14 2014

Manjul Bhargava and Subhash Khot are among the eight winners of the prestigious International Mathematical Union awards

Two mathematicians of Indian origin, Manjul Bhargava and Subhash Khot, are among the eight winners of the prestigious awards of the International Mathematical Union (IMU) that were announced at the inaugural of the 9-day International Congress of Mathematicians (ICM) which began today at Seoul, Republic of Korea. The President of Korea, Park Geun-hye, gave away the awards.

The ICM is held every four years and, traditionally, the IMU awards are presented at this quadrennial event. The awards include the Fields Medal, the highest award in mathematics, the Rolf Nevanlinna Prize and the Carl Friedrich Gauss Prize. At the last ICM held at Hyderabad, India, two new awards, the Chern Medal and the Leelavati Prize, were added to the existing three awards.

The 40 year-old Canadian-American Manjul Bhargava, a number theorist from Princeton University, is one of the four Fields Medalists chosen for the ICM2014 awards. The Fields Medal is awarded “to recognize outstanding mathematical achievement for existing work and for the promise of future achievement”. A minimum of two and a maximum of four Fields Medals are given to mathematicians under the age of 40 on January 1 of the year of the Congress.

“Manjul Bhargava has developed powerful new methods in the geometry of numbers and applied them to count rings of small rank and to bound the average rank of elliptic curves,” said the IMU citation for the award.

The other three Fields Medalists are:

The Brazilian mathematician Arthur Avila (35) of the Paris Diderot University-Paris 7 and Instituto Nacional de Matemática Pura e Aplicada, Rio de Janeiro, who has been awarded “for his profound contributions to dynamical systems theory, which have changed the face of the field, using the powerful idea of renormalization as a unifying principle”; the British mathematician Martin Hairer (39) of the University of Warwick “for his outstanding contributions to stochastic partial differential equations, and in particular for the creation of a theory of regularity structures for such equations”; and, the Iranian mathematician Maryam Mirzakhani (37) of Stanford University “for her outstanding contributions to the dynamics and geometry of Riemann surfaces and their moduli spaces”.

The 36 year-old IIT Bombay alumnus Subhash Khot, an Indian-American theoretical computer scientist at the Courant Institute of Mathematical Sciences of New York University has been chosen for the ICM2014 Rolf Nevanlinna Prize. The Nevanlinna Prize is awarded “for outstanding contributions in mathematical aspects of information sciences”.

“Subhash Khot’s prescient definition of the “Unique Games” problem, and his leadership in the effort to understand its complexity and its pivotal role in the study of efficient approximation of optimization problems, have produced breakthroughs in algorithmic design and approximation hardness, and new exciting interactions between computational complexity, analysis and geometry,” the award citation said.

The Gauss Prize in Applied Mathematics is awarded “to honor scientists whose mathematical research has had an impact outside mathematics – either in technology, in business, or simply in people’s everyday lives”.

The winner of the ICM2014 Gauss Prize is Stanley Osher (72) of University of California, Los Angeles, who has been awarded the Prize “for his influential contributions to several fields in applied mathematics, and for far reaching inventions that have changed our conception of physical, perceptual and mathematical concepts, giving us new tools to apprehend the world.”

The Chern Medal is given “to an individual whose accomplishments warrant the highest level of recognition for outstanding achievements in the field of mathematics”.

The Chern Medal this time goes to the American algebraic geometer Phillip Griffiths (76) “for his groundbreaking and transformative development of transcendental methods in complex geometry, particularly his seminal work in Hodge theory and periods of algebraic varieties”.

Unlike the other awards, the Leelavati Prize is not given for achievements in mathematics research but for outstanding public outreach work in mathematics. Proposed by India, it was originally intended as a one-time award using the grant from the Norwegian Abel Foundation. Thanks to the efforts by Indian mathematicians in finding a sponsor to make it a regular affair, it has now been instituted as a recurring four-yearly award under the IMU charter to be given away at the closing ceremony of the ICM. The award is now being sponsored by Infosys, the Indian IT major.

The ICM2014 Leelavati Prize has been given to the Argentine Adrián Paenza (65) “for his decisive contributions to changing the mind of a whole country about the way it perceives mathematics in daily life, and in particular for his books, his TV programmes, and his unique gift of enthusiasm and passion in communicating the beauty and joy of mathematics”.

Hope Indian youth take up research in sciences: Fields Medal winner

http://www.thehindu.com/opinion/op-ed/fields-medal-winner-manjul-bhargava-hope-indian-youth-take-up-research-in-sciences/article6312471.ece?homepage=true 

The Hindu Aug 14 2014.

Manjul Bhargava, one of the recipients of the Fields Medal, speaks about mathematics, music and more.

How does it feel to have won the Fields Medal? You are the first person of Indian origin to be getting it…

It is of course a great honour; beyond that, it is a great source of inspiration and encouragement – not just for me, but for my students, collaborators, and colleagues who work with me. Hopefully, it will also be a source of inspiration for young people in India to take up research in the sciences!

You have grown up in Canada… did you have any cultural identity questions? Do you think of yourself as a Canadian, American, Indian or none of these or all of these?

I was born in Canada, but grew up mostly in the U.S. in a very Indian home. I learned Hindi and Sanskrit, read Indian literature, and learned classical Indian music. I ate mostly Indian food! On the other hand, I grew up playing with American kids and went to school mostly in the U.S. I liked growing up in two cultures like that because it allowed me to pick and choose the best of both worlds. My Indian upbringing was very important to me.

I also spent a lot of time in India growing up. Every three or four years, I would take off six months of school to spend it in India — mostly in our hometown Jaipur — with my grandparents. There I had the opportunity to truly live in India for extended periods of time, go to school there, brush up on my Hindi and Sanskrit, and learn tabla (as well as some sitar and vocal music). I particularly enjoyed celebrating all the Indian holidays as a child, and flying kites on Makar Sankranti.

I feel very much at home in all three countries. So I definitely think of myself as all three – Canadian, American, and of course Indian.

How did you get interested in Tabla playing? You have learnt the Tabla from Ustad Zakir Husain… Can you tell us how this came about and what it means to you?

I first started learning from my mother, who also plays the tabla. When I was maybe 3 years old, I used to hear my mother playing often, and I asked her to teach me to play a little bit. She tried to teach me the basic sound “na.” She demonstrated the sound to me, and I tried to mimic her to reproduce the sound, but nothing came out. I was hooked! I always loved the beauty and the intricacy of the tabla sound and repertoire, and how it also perfectly complemented sounds on the sitar, or vocal, etc. I learned with my mom first, and then with Pandit Prem Prakash Sharma in Jaipur whenever I visited there.

I met Zakir ji when I was an undergraduate at Harvard. He came to perform there when I was a third year student. I had the exciting opportunity to meet him afterwards at a reception, and he invited me to visit him in California (where he lives). I have had the great pleasure and privilege of learning from him a bit off and on since then. More than that, he has been a wonderful and inspirational friend, and he and his whole family — in both California and Bombay — have been such a huge source of love, encouragement, and support to me for so long, and I am very grateful to them for that.

Do you collaborate with mathematicians in India? Do you have contacts with the institutes in India?

For many years now, I have been an adjunct professor at TIFR-Mumbai (Tata Institute for Fundamental Research), IIT-Bombay, and the University of Hyderabad. I’ve spent a lot of time at these three institutes, especially at TIFR and IIT-B, over many years. I’ve lectured extensively to students at these institutes, as well as collaborated a lot with mathematicians there, such as with Eknath Ghate at TIFR (who recently won the Shanti Swarup Bhatnagar Prize for mathematical sciences).

I’ve also been involved in starting a new institute in Bangalore called “ICTS” (International Center for Theoretical Sciences). It will be inaugurated next year, and we hope it will be a great success. The director is Professor Spenta Wadia of TIFR, and the head of the International Advisory Board is Nobel Prize Winner Professor David Gross. So hopefully I will spend even more time in India after the inauguration next year!

Recently you have won prizes for your work on the Birch and Swinnerton-Dyer conjecture which was listed as one of the seven millennium prize problems. Can you explain the significance of this work?

In joint work with Christopher Skinner and Wei Zhang, we have shown that the Birch and Swinnerton-Dyer Conjecture is true “most” of the time (more precisely, for more than 66.48 per cent of elliptic curves!). Previously, it was not known that it was true for more than 0 per cent. So that is significant progress, but it is still “not” a complete solution!

Finishing a proof of the Birch and Swinnerton-Dyer Conjecture would be a momentous achievement, and it is one of my favorite problems! But it is not solved yet.

Do you believe that this is the best time to study math – for instance, number theory is now being applied in cryptography and so on? What does it take to do great mathematics?

It is interesting that pure mathematicians, like myself, rarely think directly about applications. We are instead guided primarily by what directions we find most beautiful, elegant, or most promising. We tend to treat our discipline more as an art than as a science! And indeed, this is the attitude that allows us to be the most creative and productive.

On the other hand, it is also true, historically, that the mathematics that has been the most applicable and important to society over the years has been the mathematics that scientists found while searching for beauty; and eventually all beautiful and elegant mathematics tends to find applications.

That is why it is very important to fund basic science research. When science funding is only application-driven, it does not allow full freedom and creativity. Funding basic science allows a large interconnected database of scientific techniques and knowledge to accumulate, so that when a societal need arises, the science is ready to be applied and adapted to the purpose.

Elliptic curves (and the related Birch and Swinnerton-Dyer Conjecture) are indeed a good example! They were first studied by pure mathematicians, but are now one of the most important mathematical objects in cryptography. So that is indeed exciting, but I just want to emphasize that they were exciting and central to number theory well before these applications were found; but it was inevitable that they would be found, given their fundamental nature.

That is why elliptic curves have fascinated me! They are so fundamental in both pure and applied mathematics. Beyond advancing the subject of number theory in general, a heightened understanding of elliptic curves also has important implications in coding theory and cryptography. Encryption schemes, such as those used to protect our privacy when transmitting information online, often centrally involve the use of elliptic curves.

Math is generally considered a difficult subject but you have been enjoying math since your childhood. What aspect of your education could have contributed to this enjoyment?

I’ve always enjoyed mathematics as far back as I can remember, since I was two or three years old. Since my mother was a mathematician, I always had her as a resource – I would always go and ask her questions and so I learned a lot from her. She was also a great source of encouragement – she always answered my questions enthusiastically, and always encouraged me to pursue whatever I was interested in – and that probably single-handedly contributed the most to my enjoyment of mathematics (and of all my interests)!

An Indian-origin mathematician with the Midas touch: Manjul Bhargava

Midas Touch Mathematician Manjul Bhargava

The Hindu Aug 14 2014

http://www.thehindu.com/sci-tech/science/manjul-bhargava-the-midas-touch-mathematician/article6309323.ece?ref=relatedNews

Number theorist Manjul Bhargava wins Fields Medal

Manjul Bhargava, the Canadian-American number theorist from Princeton University, is one of the four who have been chosen for the highest award in mathematics, the Fields Medal, which is given once every four years by the International Mathematical Union (IMU) during the quadrennial International Congress of Mathematicians (ICM). The ICM2014 got underway on August 13 at Seoul, Republic of Korea.

Fields medal

Awarded in recognition of “outstanding mathematical achievement for existing work and for the promise of future achievement”, the Fields Medal is given to mathematicians of age less than 40 on January 1 of the year of the Congress. Born of Indian parents who migrated from Jaipur in the late 1950s, Bhargava, who turned 40 just last week, could not have hoped for a better birthday gift.

“Bhargava”, says the IMU citation, has been awarded the Fields Medal “for developing powerful new methods in the geometry of numbers, which he applied to count rings of small rank and to bound the average rank of elliptic curves.” (See Box for definitions of italicized terms)

• In ‘geometry of numbers’ one imagines a plane or a 3-dimensional space populated by a lattice whose grid points have integer co-ordinates.

• A ‘ring’ is an algebraic structure with two binary operations, commonly called addition and multiplication, which are generalizations of the familiar arithmetic operations with integers applied to algebraic objects. Examples of rings are polynomials of one variable with real coefficients, or square matrices of a given dimension. Algebraic number theory is the study of this and other algebraic structures.

• ‘Rank’ refers to the minimum number of objects required to generate the entire set of algebraic objects being studied; the dimension of a vector space, for example. The familiar 3-d vector space is of rank 3.

• ‘Elliptic curves’ are graphs generated by equations of the form y^2 = a polynomial of degree 3, such as x^3 + ax + b, where a and b are rational numbers.

A large body of work in number theory relates to the study of how numbers of interest, such as prime numbers, are distributed among the entire set of integers. Bhargava developed novel techniques to count objects in algebraic number theory that were previously considered completely inaccessible. His work has completely revolutionized the way in which fundamental arithmetic objects in algebraic number theory, such as number fields and elliptic curves, are now understood and studied, and this has given rise to wonderful applications.

About 200 years ago the German mathematician Carl Friedrich Gauss, one of the historical greats, had discovered a remarkable ‘composition law’ for binary quadratic forms, which are polynomials of the form ax^2 + bxy + cy^2, where a, b and c are integers. Using this law two binary quadratic forms could be combined to give a third one. Gauss’s law is a central tool in algebraic number theory. Bhargava discovered an ingenious and simpler geometrical technique to derive it and the technique allowed him to obtain composition laws for higher-degree polynomials as well.

Geometry of numbers

The technique reportedly dawned upon Bhargava one day while he was playing with Rubik’s cube. Implicit in Gauss’s method was the use of ‘geometry of numbers’ and it is this realization that enabled Bhargava to extend it to higher degrees. He then discovered 13 new composition laws for higher-degree polynomials. Until then, Gauss’s law was thought to be accidental and unique to binary quadratics. Nobody had even imagined that higher composition laws existed until Bhargava showed that Gauss’s law is part of a bigger theory applicable to polynomials of arbitrary degree. His approach has also broadened the canvas of applying geometry of numbers to address outstanding problems of algebraic number theory.

This work immediately led Bhargava to tackle a related problem, which was the counting of ‘number fields of fixed degree by discriminant’.

Discriminant

A number field is obtained by extending the rational numbers to include non-rational roots of a polynomial equation; if the polynomial equation is quadratic, such as ax^2 + bx + c = 0, whose roots are given by the well-known formula [–b ± √(b^2 – 4ac)]/(2a), then one obtains a quadratic number field. The expression under the square root sign is called the ‘discriminant’ (defined appropriately for polynomials of different degrees). Higher degree number fields — cubic, quartic, quintic etc. — are correspondingly generated by higher degree polynomials.

The degree of the polynomial and its discriminant are two fundamental quantities associated with a polynomial. Despite number fields being one of the fundamental objects in algebraic number theory, answers to questions like how many number fields there are for a given degree n and a given discriminant D were not known. If one has a quadratic polynomial, counting the number of lattice points in a certain region of 3-d space gives information about the associated quadratic number field. For example, using the geometry of numbers it can be shown that, for discriminant with absolute value less than D, there are approximately D quadratic number fields. The case of cubic number fields had been solved 40 years ago by Harold Davenport and Hans Heilbronn, but since then the higher degree cases saw little progress until Bhargava came on the scene.

Quintic number fields

Armed with his new technique, Bhargava was able to solve the case of quartic and quintic number fields. The new composition laws and his new technique in using the geometry of numbers have together extended the reach and power of counting number fields. The cases of degrees greater than 5 still remain open as Bhargava’s composition laws alone seem inadequate to resolve these higher cases at present.

While the above work was all carried out between 2004 and 2008, more recently Bhargava has employed his improved geometry of numbers technique to obtain striking results about ‘hyperelliptic curves’, which are graphs of equations of the form y^2 = a polynomial with rational coefficients; the case where the degree of the polynomial is 3 is called the ‘elliptic curve’.

Elliptic curves have important applications in pure as well as applied mathematics. Even though Fermat’s Last Theorem seems not even remotely connected with elliptic curves, they were key to its proof in 1995 by Andrew Wiles, who, incidentally, was also Bhargava’s thesis advisor. Operations using elliptic curves have become a core component of many of the cryptographic protocols that encode credit card numbers in online transactions. “Intellectual stimulation, beautiful structure, applications – elliptic curves have it all,” Bhargava has said.

An outstanding problem in algebraic number theory has been how to count the number of points on ‘hyperelliptic curves’ that have rational coordinates, which is the same as asking: how many rational solutions does a hyperelliptic equation have? The answer, it turns out, following Bhargava’s work, depends on the degree of the curve.

One can easily see that the number of rational solutions of a polynomial equation of degree 1, such as y = 9x + 4, is infinite: any rational value for x produces a rational value for y, and vice versa. Quadratics, such as y^2 = 2x^2 + 5x – 3, have either no rational solutions or infinitely many. For curves of degree 1 and 2, there is an effective way of finding all the rational points. In 1983, Gerd Faltings, director of the Max Planck Institute for Mathematics, Bonn, showed that for degree 5 and more there are only finitely many rational points. That left unresolved the cases of degree 3 – the elliptic curves – and of degree 4.

Finding rational points on elliptic curves is, however, not an easy matter. They can have zero, finitely many, or infinitely many rational solutions. When a cubic equation has infinitely many solutions has been a central question in number theory since Pierre de Fermat in the 17th century. In the recent past, mathematicians have attempted to devise algorithms to decide whether a given elliptic curve has finitely or infinitely many rational points, but that route took them nowhere. They have only been able to guess how often these different possibilities arise.

But once you have found some rational points on an elliptic curve, it becomes possible to generate more by using the simple connecting-the-dots method. For example (see fig.), if you draw a line through two rational points, it usually intersects the curve at exactly one more point, which is again a rational point. The opposite, however – given one rational point, finding two rational points that would generate it – is computationally hard, and this is what underlies the use of elliptic curves in cyber security.


Connecting-the-dots method: Given two rational points of an elliptic curve y^2 = x^3 + 2x + 3, the line through those points intersects the curve at exactly one more point, which is guaranteed to be a rational point. This connect-the-dots procedure is a means to generate all of an elliptic curve’s rational points starting from a small finite number. (Credit: Quanta, illustration by Manjul Bhargava)
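The chord construction just described can be carried out with exact rational arithmetic. Below is a minimal sketch in Python (the function name and the sample points are my own choices; one can check by substitution that (-1, 0) and (3, 6) do lie on y^2 = x^3 + 2x + 3):

```python
from fractions import Fraction as F

def third_point(P, Q, a, b):
    """Third intersection of the chord through P and Q with y^2 = x^3 + a*x + b.
    Substituting the line y = y1 + m(x - x1) into the curve gives a cubic in x
    whose three roots are the intersection abscissas; by Vieta they sum to m^2."""
    (x1, y1), (x2, y2) = P, Q
    m = (y2 - y1) / (x2 - x1)      # slope of the chord (assumes x1 != x2)
    x3 = m * m - x1 - x2           # Vieta: x1 + x2 + x3 = m^2
    y3 = y1 + m * (x3 - x1)        # the new point lies on the chord
    return (x3, y3)

a, b = F(2), F(3)
P, Q = (F(-1), F(0)), (F(3), F(6))   # both lie on y^2 = x^3 + 2x + 3
x3, y3 = third_point(P, Q, a, b)
print(x3, y3)                         # 1/4 15/8 -- again a rational point
assert y3 ** 2 == x3 ** 3 + a * x3 + b
```

Because the slope and the Vieta sum involve only field operations on rational inputs, the third point is automatically rational, which is exactly why the procedure generates new rational points.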

Curve’s rank

When the number of rational points of an elliptic curve is infinite, the smallest number of rational points that can generate essentially all the rational points is called the curve’s rank. When the infinite set of rational points can be generated essentially from just one point, the curve has rank 1, and so on. When the number of rational points is finite or none at all, the rank is 0.

In 1992 Armand Brumer showed that a 1965 conjecture made by Birch and Swinnerton-Dyer (BSD) implied that the average rank of the group of rational points of an elliptic curve defined over the rational numbers is bounded. Earlier, in 1979, Dorian Goldfeld had conjectured that the bound is, in fact, equal to ½. That is, in a statistical sense, half of all elliptic curves have rank 0 and half have rank 1. Previously, however, mathematicians did not even know that the average rank was finite (let alone ½).

The conjecture, of course, does not mean that curves of higher rank – 2, 3 and so on – do not exist, or even that there are only finitely many of them. Indeed, mathematicians have found such curves computationally; the highest rank known to date is 28! But as the number of elliptic curves grows without bound, the curves of higher rank form a vanishingly small percentage of the whole.

Enter Bhargava and his collaborators, in particular his doctoral student Arul Shankar (a 2007 Chennai Mathematical Institute graduate). Instead of treading the beaten track of algorithms, they asked: what can be said about the rational points on a typical curve? From this perspective they first showed that a sizeable fraction of elliptic curves has only one rational point (rank 0) and another sizeable proportion has infinitely many rational points (rank > 0). Using newly developed techniques, they were able to show that the average rank is, in fact, bounded. They have further been able to show that the bound is also less than 1, indicating that the conjecture is perhaps true.

“Bhargava introduced dramatically new ideas ​to study the average number of solutions and proved that the average rank of elliptic curves is bounded, and that the BSD Conjecture is true on the average, making it one of the most spectacular successes in number theory in recent years,” says Dipendra Prasad, a number theorist from the Tata Institute of Fundamental Research (TIFR).

Analogously, for the case of degree 4 too, Bhargava and Shankar showed that a significant chunk of such curves has no rational points and another significant chunk has infinitely many rational points. Using his expanded geometry of numbers technique, Bhargava has also explored higher-degree curves in general.

While Faltings’s theorem tells us that curves of degree greater than 5 have only finitely many rational points, it does not give a way to determine exactly how many there are. For the even-degree case, Bhargava showed that the “typical” hyperelliptic curve has no rational points at all. The joint work of Bhargava and Benedict Gross, followed up by that of Bjorn Poonen and Michael Stoll, established the same result for the odd-degree case as well. Bhargava’s work has thus clearly shown that the number of curves having rational points decreases rapidly as the degree increases. For example, for a typical degree-10 polynomial, there is a greater than 99 per cent chance that the curve has no rational points.

Bhargava’s work in number theory has had profound influence in the field. “A mathematician of extraordinary creativity, he has a taste for simple problems of timeless beauty, which he has solved by developing elegant and powerful new methods that offer deep insights,” said IMU’s information sheet on his work. “With his keen intuition, immense insight and great technical mastery, he seems to bring a ‘Midas touch’ to everything he works on,” it added.

Tabla player

Besides being one of the world’s leading mathematicians, Bhargava is also an accomplished tabla player and plays at the concert level. He learnt the art initially from his mother and later came under the tutelage of the well-known tabla maestros Pandit Prem Prakash Sharma and Ustad Zakir Hussain. “Classical Indian music,” Bhargava told the Princeton Weekly Bulletin when he was featured, “is very mathematical, but consciously thinking of the math would interfere with the improvisation and emotion of the playing. But somehow the connection is there. I often use music as a break, and many times I come back to the math later and things have cleared up.” Indeed, Bhargava thinks of mathematics as an art. He is also keenly interested in linguistics, in which he has published research work. It was his grandfather, a linguistics scholar, who taught him Sanskrit and developed his interest in linguistics.


Symbols — the Meat of Mathematics

Let us take a gentle look at algebra now. (The present article is derived from a similar article by Tobias Dantzig). It is an expository article only.

Algebra, in  the broad sense in which the term is used today, deals with operations upon symbolic forms. In this capacity, it not only permeates all of mathematics, but encroaches upon the domain of formal logic and even of metaphysics. Furthermore, when so construed, algebra is as old as man’s faculty to deal with general propositions; as old as his ability to discriminate between “some” and “any”.

Here, however, we are interested in algebra in a much more restricted sense, that part of general algebra which is very properly called the theory of equations. It is in this narrower sense that the term algebra was used at the outset. The word is of Arabic origin. Al is the Arabic article “the”, and gebar is the verb “to set”, to restitute. To this day the word algebrista is used in Spain to designate a bone-setter, a sort of chiropractor.

It is generally true that algebra in its development in individual countries passed successively through three stages: the rhetorical, the syncopated, and the symbolic. Rhetorical algebra is characterized by the complete absence of any symbols, except, of course, that the words themselves are being used in their symbolic sense. To this day, rhetorical algebra is used in such a statement as “the sum is independent of the order of the terms”, which in symbols would be designated by a+b=b+a.

Syncopated algebra, of which the Egyptian is a typical example, is a further development of rhetorical. Certain words of frequent use are gradually abbreviated. Eventually, these abbreviations become contracted to the point where their origin has been forgotten, so that the symbols have no obvious connection with the operation which they represent. The syncopation has become a symbol.

The history of the symbols + and – may illustrate the point. In medieval Europe, the latter was long denoted by the full word “minus”, then by the first letter “m” duly superscribed. Eventually the letter itself was dropped, leaving the superscript only. The sign “plus” passed through a similar metamorphosis.

The turning point in the history of algebra was an essay written late in the sixteenth century by a Frenchman, Viete, who wrote under the Latin name Franciscus Vieta. His great achievement appears simple enough to us today. It is summed up in the following passage from this work:

In this we are aided by an artifice which permits us to distinguish given magnitudes from those which are unknown or sought, and this by means of a symbolism which is permanent in nature and clear to understand — for instance, by denoting the unknown magnitudes by A or any other vowels, while the given magnitudes are designated by B,C, G or other consonants.

This vowel-consonant notation had a short existence. Within half a century of Vieta’s death appeared Descartes’s Geometrie, in which the first letters of the alphabet were used for given quantities, the last for  those unknown. The Cartesian notation not only displaced the Vietan, but has survived to this day.

But, while few of Vieta’s proposals were carried out in letter, they certainly were adopted in spirit. The systematic use of letters for undetermined but constant magnitudes, the “logistica speciosa” as he called it, which has played such a dominant role in the development of mathematics, was the great achievement of Vieta.

The lay mind may find it difficult to estimate the achievement of Vieta at its true value. Is not the literal notation a mere formality after all, a convenient shorthand at best? There is, no doubt, economy in writing

(a+b)^{2}=a^{2}+b^{2}+2ab

but does it really convey more to the mind than the verbal form of the same identity: the square of the sum of two numbers equals the sum of the squares of the numbers, augmented by twice their product?
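Whatever one's answer, the symbolic form has the practical advantage that it can be checked mechanically. A small sketch, spot-checking the identity on random rational numbers with Python's exact rational arithmetic:

```python
from fractions import Fraction as F
from random import randint

# spot-check (a+b)^2 = a^2 + b^2 + 2ab on random rational values
for _ in range(1000):
    a = F(randint(-9, 9), randint(1, 9))
    b = F(randint(-9, 9), randint(1, 9))
    assert (a + b) ** 2 == a ** 2 + b ** 2 + 2 * a * b
```

Of course a finite check is no proof; the point is only that the literal form, unlike the verbal one, is directly computable.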

Again, the literal notation had the fate of all very successful innovations. Its universal adoption makes it difficult to conceive of a time when inferior methods were in vogue. Today formulae in which letters represent general magnitudes are almost as familiar as common script, and our ability to handle symbols is regarded by many almost as a natural endowment of any intelligent man; but it is natural only because it has become a fixed habit of our minds. In the days of Vieta this notation constituted a radical departure from the tradition of ages.

Wherein lies the power of this symbolism?

First of all, the letter liberated algebra from the slavery of the word. And, by this, I do not mean merely that without the literal notation any general statement would become a mere flow of verbiage, subject to all the ambiguities and misinterpretations of human speech. This is important enough; but, what is still more important is that the letter is free from the taboos which have attached to words through centuries of use. The A of Vieta or our present “x” has an existence independent of the concrete object which it is assumed to represent. The symbol has a meaning which transcends the objects symbolized: that is why it is not a mere formality.

In the second place, the letter is susceptible of operations which enable one to transform literal expressions and thus to paraphrase any statement into a number of equivalent forms. It is this power of transformation that lifts algebra above the level of a convenient shorthand.

Before the introduction of literal notation, it was possible to speak of individual expressions only; each expression, such as 2x+3; 3x-5; x^{2}+4x+7; 3x^{2}-4x+5, had an individuality all its own and had to be handled on its own merits. The literal notation made it possible to pass from the individual to the collective, from the “some” to the “any” and the “all”. The linear form ax+b, the quadratic form ax^{2}+bx+c; each of these forms is regarded now as a single species. It is this that made possible the general theory of functions, which is the basis of all applied mathematics.

But, the most important contribution of the logistica speciosa, and the one that concerns us most in this study, is the role it played in the formation of the generalized number concept.

As long as one deals with numerical equations, such as

x+4=6; 2x=8; x^{2}=9, call this group I

x+6=4; 2x=5; x^{2}=7, call this group II

one can content oneself (as most medieval algebraists did) with the statement that the first group of equations is possible, while the second is impossible.

But, when one considers literal equations of  the same types:

x+b=a; bx=a; x^{n}=a

the very indeterminateness of the data compels one to give an indicated or symbolic solution to the problem:

x=a-b; x=a/b; x=a^{1/n}.

In vain, after this, will one stipulate that the expression a-b has a meaning only if a is greater than b, that a/b is meaningless when a is not a multiple of b, and that a^{1/n} is not a number unless a is a perfect nth power. The very act of writing down the meaningless has given it meaning; and, it is not easy to deny the existence of something that has received a name.

Moreover, with the reservation that a>b, that a is a multiple of b, that a is a perfect nth power, rules are devised for operating on such symbols as a-b; a/b; a^{1/n}. But, sooner or later, the very fact that there is nothing on the face of these symbols to indicate whether a legitimate or an illegitimate case is before us will suggest that there is no contradiction involved in operating on these symbolic beings as if they were bona fide numbers. And from this there is but one step to recognizing these symbolic beings as numbers “in extenso”.

What distinguishes modern arithmetic from that of the pre-Vieta period is the changed attitude towards the “impossible”. Up to the seventeenth century the algebraists invested this term with an absolute sense. Committed to natural numbers as the exclusive field for all arithmetic operations, they regarded possibility, or restricted possibility, as an intrinsic property of these operations.

Thus, the direct operations of arithmetic — addition (a+b), multiplication (ab), potentiation a^{n} — were omni-possible; whereas, the inverse operations — subtraction (a-b), division a/b, extraction of roots a^{1/n} — were possible only under restricted conditions. The pre-Vieta algebraists were satisfied with stating these facts, but were incapable of a closer analysis of the problem.

Thus, the direct operations of arithmetic are omnipossible because they are but a succession of iterations, a step-by-step penetration into the sequence of natural numbers, which is assumed a priori unlimited. Drop this assumption, restrict the field of the operand to a finite collection (say to the first 1000 numbers), and operations such as 925+125 or 67 x 15 become impossible and the corresponding operations meaningless.

Or, let us assume that the field is restricted to odd numbers only. Multiplication is still omni-possible, for the product of any two odd numbers is odd. However, in such a restricted field addition is an altogether impossible operation, because the sum of any two odd numbers is never an odd number.

Yet, again, if the field were restricted to prime numbers, multiplication would be impossible, for the simple reason that the product of two primes is never a prime; while, addition would be possible only in such rare cases as when one of the two terms is 2, the other being the smaller of a couple of twin-primes like

2+11=13.
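These closure facts about restricted fields are easy to verify mechanically. A small sketch in Python (the trial-division helper `is_prime` is my own, adequate only for small numbers):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

odds = [1, 3, 5, 7, 9, 11, 13]
primes = [p for p in range(2, 50) if is_prime(p)]

# odd x odd is always odd: multiplication is omni-possible among the odds
assert all((x * y) % 2 == 1 for x in odds for y in odds)
# odd + odd is always even: addition is altogether impossible among the odds
assert all((x + y) % 2 == 0 for x in odds for y in odds)
# a product of two primes is never prime: multiplication fails among primes
assert not any(is_prime(p * q) for p in primes for q in primes)
# the rare possible addition among primes: 2 plus the smaller of a twin pair
assert is_prime(2 + 11)
```

The checks on finite samples only illustrate the point; the general statements (parity of sums and products, compositeness of p·q) of course hold for all members of the respective fields.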

Other examples could be adduced, but even these few will suffice to bring out the relative nature of  the words possible, impossible, and meaningless. And, once this relativity is recognized, it is natural to inquire whether through a proper extension of  the restricted field the inverse operations of arithmetic may not be rendered as omni-possible as the direct are.

To accomplish this with respect to subtraction it is sufficient to adjoin to  the sequence of natural numbers zero and the negative integers. The field so created is called the general integer field.

Similarly, the adjunction of positive and negative fractions to this integer field will render division omni-possible.

The numbers thus created — the integers, and the fractions, positive and negative, and the number zero — constitute the rational domain. It supersedes the natural domain of integer arithmetic. The four fundamental operations, which heretofore applied to integers only, are now by analogy extended to these generalized numbers.

All this can be accomplished without a contradiction. And, what is more, with a single reservation which we shall take up presently, the sum, the difference, the product, and the quotient of  any two rational numbers are themselves rational numbers. This very important fact is often paraphrased into the statement: the rational domain is closed with respect to the fundamental operations of arithmetic.

The single but very important reservation is that of division by zero. This is equivalent to the solution of the equation x.0=a. If a is not zero, the equation is impossible, because we are compelled, in defining the number zero, to admit the identity a.0=0. There exists therefore no rational number which satisfies the equation x.0=a.

If, on the contrary, a is zero, the equation x.0=0 is satisfied for any rational value of x. Consequently, x is here an indeterminate quantity. Unless the problem that led to such an equation provides some further information, we must regard 0/0 as the symbol of *any* rational number, and a/0 as the symbol of *no* rational number.
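Python's `fractions.Fraction` models the rational domain exactly, closure and the single reservation included; a minimal sketch:

```python
from fractions import Fraction as F

a, b = F(3, 4), F(-5, 6)
# closure: sum, difference, product, and quotient of rationals are rational
results = [a + b, a - b, a * b, a / b]
print(results)   # [Fraction(-1, 12), Fraction(19, 12), Fraction(-5, 8), Fraction(-9, 10)]
assert all(isinstance(r, F) for r in results)

# the single reservation: x.0 = a has no rational solution when a is not zero,
# so division by zero is refused outright
try:
    a / F(0)
except ZeroDivisionError:
    print("a/0 is the symbol of no rational number")
```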

Elaborate though these considerations may seem, in symbols they reduce to the following succinct statement: if a, b and c are any rational numbers, and *a* is not zero, then there always exists a rational number x, and only one, which will satisfy the equation ax+b=c.

This equation is called “linear”, and it is the simplest type in a great variety of equations. Next to linear come quadratic, then cubic, quartic, quintic, and generally algebraic equations of any degree, the degree n meaning the highest power of the unknown x in ax^{n}+bx^{n-1}+cx^{n-2}+...+px+q=0

But even these do not exhaust the infinite variety of equations; exponential, trigonometric, logarithmic, circular, elliptic, etc., constitute a still vaster variety, usually classified under the all-embracing term transcendental.

Is the rational domain adequate to handle this infinite variety? This is emphatically not the case. We must anticipate an extension of  the number domain to greater and greater complexity. But this extension is not arbitrary; there is concealed in  the very mechanism of the generalizing scheme a guiding and unifying idea.

This idea is sometimes called the principle of permanence. It was first explicitly formulated by the German mathematician Hermann Hankel in 1867, but the germ of the idea was already contained in the writings of Sir William Rowan Hamilton, one of the most original and fruitful minds of the nineteenth century.

I shall formulate this principle as a definition:

A collection of symbols infinite in number shall be called a number field, and each individual element in it a number:

First. If among the  elements of the collection we can identify the sequence of natural numbers.

Second. If we can establish criteria of rank which will permit us to tell of any two elements whether they are equal, or if not equal, which is greater; these criteria reducing to the natural criteria when the two elements are natural numbers.

Third. If for any two elements of  the collection we can devise a scheme of addition and multiplication which will have the commutative, associative, and distributive properties of the natural operations bearing these names, and which will reduce to these natural operations when the two elements are natural numbers.

These very general considerations leave the question open as to how  the principle of permanence operates in special cases. Hamilton pointed the way by a method which he called algebraic pairing. We shall illustrate this on the natural numbers.

If a is a multiple of b, then the symbol a/b indicates the operation of division of a by b. Thus 9/3=3 means that the quotient of the indicated division is 3. Now, given two such indicated operations, is  there a way of  determining whether the results are equal, greater, or less, without actually performing the operations? Yes, we have the following:

Criteria of Rank. a/b=c/d if ad=bc

a/b > c/d if ad>bc

a/b < c/d if ad<bc

And we can even go further than that: without  performing the indicated operations we can devise rules for manipulating on these indicated quantities:

Addition: (a/b)+(c/d) = (ad+bc)/bd

Multiplication: (a/b).(c/d) = (ac)/(bd)

Now, let us no longer stipulate that a be a multiple of b. Let us consider a/b as the symbol of a new field of mathematical beings. These symbolic beings depend on two integers a and b written in proper order. We shall impose on this collection of couples the criteria of rank mentioned above, i.e., we shall claim that, for instance:

(20/15)=(16/12) because  20 x 12 = 15 x 16

(4/3)>(5/4) because 4 x 4 > 3 x 5

We shall define the operations on these couples in accordance with the rules which, as we have shown above, are true for the case when a is a multiple of b, and c is a multiple of d; i.e., we shall have, for instance:

(2/3)+(4/5) = (2 x 5 + 3 x 4)/(3 x 5) = 22/15
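Hamilton's couples, governed by the criteria of rank and the two operations above, can be sketched directly (the class name `Couple` is my own):

```python
class Couple:
    """An ordered pair (a, b) of integers, governed by the rules above."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    # criteria of rank: compare cross-products without performing any division
    def __eq__(self, other): return self.a * other.b == self.b * other.a
    def __gt__(self, other): return self.a * other.b > self.b * other.a
    # the two operations on couples
    def __add__(self, other):
        return Couple(self.a * other.b + self.b * other.a, self.b * other.b)
    def __mul__(self, other):
        return Couple(self.a * other.a, self.b * other.b)

assert Couple(20, 15) == Couple(16, 12)          # 20 x 12 = 15 x 16
assert Couple(4, 3) > Couple(5, 4)               # 4 x 4 > 3 x 5
assert Couple(2, 3) + Couple(4, 5) == Couple(22, 15)
# the principle of permanence: couples n/1 behave exactly like naturals
assert Couple(2, 1) + Couple(3, 1) == Couple(5, 1)
assert Couple(2, 1) * Couple(3, 1) == Couple(6, 1)
```

Note that equality, rank, addition, and multiplication never divide a by b; the couples are manipulated purely symbolically, which is the whole point of the construction.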

We have now satisfied all the stipulations of the principle of  permanence.

1. The new field contains the natural numbers as a subfield, because we can write any natural number in the form of a couple:

1/1; 2/1; 3/1; 4/1, and so on and on.

2. The new field has criteria of rank which reduce to the natural criteria when a/b and c/d are natural numbers.

3. The new field has been provided with two  operations which have all the properties of addition and multiplication, to which they reduce  when a/b and c/d are natural numbers.

And, so these new beings satisfy all the stipulations of the principle. They have proved their right to be adjoined to the natural numbers, their right to be invested with the dignity of the same name “number”. They are therewith admitted, and the field of numbers comprising both old and new is christened the rational domain of numbers.

It would seem at first glance that  the principle of permanence leaves such a latitude in the choice of operations as to make the general number it postulates too general to be of much practical value. However, the stipulations that the natural sequence should be a part of  the field, and that  the fundamental operations should be commutative, associative and distributive (as the natural operations are), impose restrictions which, as we shall see, only very special fields can meet.

The position of arithmetic,as formulated in the principle of permanence, can be compared to the policy of a state bent on expansion, but desirous to  perpetuate the fundamental laws on which it grew strong. These two different objectives — expansion on the one hand, preservation of uniformity on  the other — will naturally influence the rules for admission of new states to  the Union.

Thus, the first point in the principle of permanence corresponds to the pronouncement that the nucleus state shall set the tone of the Union. Next, the original state being an oligarchy in which every citizen has a rank, it imposes this requirement on the new states. This requirement corresponds to the second point of the principle of permanence.

Finally, it stipulates that the laws of commingling between the citizens of each individual state admitted to the Union shall be of  a type which will permit unimpeded relations between citizens of that state and those of the nucleus state.

Of course, I do not want the reader to take this analogy literally. It is suggested in the hope  that it may invoke mental associations from a more familiar field, so that the principle of permanence may lose its seeming artificiality.

The considerations, which led up to the construction of the rational domain, were the first steps in a historical process called the arithmetization of  mathematics. This movement, which began with Weierstrass in the sixties of the 19th century, had for its object the separation of purely mathematical concepts, such as “number” and “correspondence” and “aggregate”, from intuitional ideas, which mathematics had acquired from long association with geometry and mechanics.

These latter, in the opinion of the formalists, are so firmly entrenched in mathematical thought that in spite of  the most careful circumspection in the choice of words, the meaning concealed behind these words may influence our reasoning. For  the trouble with human words is that they possess content, whereas the purpose of mathematics is to construct pure forms of thought.

But, how can we avoid the use of human language? The answer is found in the word “symbol”. Only by using a symbolic language not  yet usurped by those vague ideas of space, time, continuity which have their origin in intuition and tend to obscure pure reason —- only thus may we hope to build mathematics on the solid foundation of  logic.

Such is the platform of this school, a school which was founded by the Italian Peano and whose most modern representatives were Bertrand Russell and Alfred North Whitehead. In their fundamental work, the Principia Mathematica, Russell and Whitehead endeavoured to reconstruct the whole foundation of modern mathematics, starting with clear-cut, fundamental assumptions and proceeding on principles of logic.

I confess that I  am out  of sympathy with the extreme formalism of  the Peano-Russell school, that I have never acquired the taste for their methods of symbolic  logic, that my repeated efforts to master their involved symbolism have invariably resulted in helpless confusion and despair. This personal ineptitude has undoubtedly coloured my opinion — a powerful reason why I should not air my prejudices here.

Yet I  am certain that these prejudices have not caused me to underestimate the role of mathematical symbolism. To me, the tremendous importance of this symbolism lies not in these sterile attempts to banish intuition from the realm of  human thought, but in its unlimited power to aid intuition in creating  new forms of thought.

To recognize this, it is not necessary to master the intricate technical symbolism of modern mathematics. It is sufficient to contemplate the more simple, yet much more subtle, symbolism of language. For, in so far as our language is capable of precise statements, it is but a system of symbols, a rhetorical algebra par excellence. Nouns and phrases are but symbols of classes of objects, verbs symbolize relations, and sentences are but propositions connecting these classes. Yet, while the word is the abstract symbol of a class, it has also the capacity to invoke an image, a concrete picture of some representative element of the class. It is in this dual function of our language that we should seek the germs of the conflict which later arises between logic and intuition.

And what is true of words generally is particularly true of  those words which represent natural numbers. Because they have the  power to evoke in our minds images of concrete collections, they appear to us so rooted in firm reality as to be endowed with an absolute nature. Yet in the sense in which they are used in arithmetic, they are but a set of abstract symbols subject to a system of operational rules.

Once we recognize this symbolic nature of  the natural number, it loses its absolute character. Its intrinsic kinship with the wider domain of which it is the  nucleus becomes evident. At the same time, the successive extensions of the number concept become steps in an inevitable process of natural evolution, instead of the artificial and arbitrary legerdemain which they seem at first.

More later…

Nalin


Some non-trivial factorization examples

I hope to give you a flavour of some non-trivial factorization examples using the following identity:

a^{3}+b^{3}+c^{3}-3abc = (a+b+c)(a^{2}+b^{2}+c^{2}-ab-bc-ca)

Example 1. Let n be a positive integer. Factorize

3^{3^{n}}(3^{3^{n}}+1) +3^{3^{n}+1}-1

Solution. Observe that

3^{3^{n}}(3^{3^{n}}+1) +3^{3^{n}+1}-1 = a^{3}+b^{3}+c^{3}-3abc, where a=3^{3^{n-1}}, b=9^{3^{n-1}}, and c=-1. Indeed, a^{3}=3^{3^{n}}, b^{3}=(3^{3^{n}})^{2}, c^{3}=-1, and -3abc=3ab=3^{3^{n}+1}.

Thus, using the above factorization identity, we get the following factorization:

(3^{3^{n-1}}+9^{3^{n-1}}-1)(9^{3^{n-1}}+81^{3^{n-1}}+1-27^{3^{n-1}}+3^{3^{n-1}}+9^{3^{n-1}})
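With exponent towers this size, the factorization is easy to get wrong, so a quick numerical check is worthwhile. The Python sketch below (my own addition, not part of the solution) compares the original expression with the factored form for small n:

```python
# Original expression: 3^(3^n) * (3^(3^n) + 1) + 3^(3^n + 1) - 1
def original(n):
    return 3**(3**n) * (3**(3**n) + 1) + 3**(3**n + 1) - 1

# Factored form (a+b+c)(a^2+b^2+c^2-ab-bc-ca)
# with a = 3^(3^(n-1)), b = 9^(3^(n-1)), c = -1
def factored(n):
    a, b, c = 3**(3**(n - 1)), 9**(3**(n - 1)), -1
    return (a + b + c) * (a*a + b*b + c*c - a*b - b*c - c*a)

for n in (1, 2, 3):
    assert original(n) == factored(n)
print(original(1))   # 836 = 11 x 76
```

For n=1 the two factors are 3+9-1=11 and 9+81+1-27+9+3=76, and indeed 11 x 76 = 836.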

Example 2. Let a, b, c be distinct positive integers and let k be a positive integer such that ab+bc+ca \geq 3k^{2}-1.

Prove that (1/3)(a^{3}+b^{3}+c^{3})-abc \geq 3k.

Solution. The desired inequality is equivalent to

a^{3}+b^{3}+c^{3}-3abc \geq 9k.

Suppose, without loss of generality, that a>b>c.

Then, since a, b, and c are distinct positive integers, we have a-b \geq 1, b-c \geq 1, and a-c \geq 2.

It follows that a^{2}+b^{2}+c^{2}-ab-bc-ca = (1/2)((a-b)^{2}+(b-c)^{2}+(c-a)^{2}) \geq (1/2)(1+1+4)=3

We obtain

a^{3}+b^{3}+c^{3}-3abc = (a+b+c)(a^{2}+b^{2}+c^{2}-ab-bc-ca) \geq 3(a+b+c) so it suffices to prove that 3(a+b+c) \geq 9k or (a+b+c) \geq 3k

But (a+b+c)^{2}=a^{2}+b^{2}+c^{2}+2ab+2bc+2ca = (a^{2}+b^{2}+c^{2}-ab-bc-ca)+3(ab+bc+ca) \geq 3+3(3k^{2}-1) = 9k^{2}, so a+b+c \geq 3k,

and the conclusion follows.
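As a sanity check on Example 2 (my own addition, not part of the solution), a brute-force search over a small range confirms that the inequality holds whenever the hypothesis does:

```python
from itertools import combinations

# For distinct positive integers a, b, c with ab+bc+ca >= 3k^2 - 1,
# Example 2 asserts a^3 + b^3 + c^3 - 3abc >= 9k.
def holds(a, b, c, k):
    if a*b + b*c + c*a < 3*k*k - 1:
        return True                   # hypothesis not met; nothing to check
    return a**3 + b**3 + c**3 - 3*a*b*c >= 9*k

# combinations yields a < b < c, so the triples are automatically distinct.
assert all(holds(a, b, c, k)
           for a, b, c in combinations(range(1, 20), 3)
           for k in range(1, 20))
```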

More later… Nalin