Actually, it is not *that* obvious. TANSTAAFL, as science fiction author Robert A. Heinlein used to say: There Ain’t No Such Thing As A Free Lunch. But we can all dream.

I am referring here to one of the seven Millennium Prize Problems, whose solution will leave some lucky person a million dollars better off. Technically, it is known as “P=NP?”, which is a pretty silly name. But what it is about is of vital importance: the inherent limits to the efficiency of computers.

Computers solve problems by running programs, which are lists of instructions. A program that always stops with the right answer (assuming that the computer is doing what its designers think it should) is called an “algorithm”. The name honours the Arabic mathematician Abu Ja’far Muhammad ibn Musa al-Khwarizmi, who lived around AD 800 in present-day Iraq. His book *Hisab al-Jabr w’al-muqabala* gave us the word “algebra”, and it consists of a series of procedures — algorithms — for solving algebraic equations of various kinds.

An algorithm is a method for solving a specific type of problem, but it is useless in practice unless it delivers the answer reasonably quickly. The issue here is not how fast the computer is, but how many calculations the algorithm has to perform. Even for a specific problem — to find the shortest route that visits a number of cities in turn, say — the number of calculations depends on how complicated the question is. If there are more cities to visit, the computer will have to do more work to find an answer.
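As a toy illustration (my own sketch, not from the text), here is the brute-force approach to that shortest-route problem: try every possible ordering of the cities. With n cities there are (n−1)! orderings to check once the starting city is fixed, which is exactly the kind of explosive growth in work described above.

```python
# Brute-force shortest tour: try every ordering of the cities.
# The number of orderings grows factorially with the number of cities.
from itertools import permutations

def shortest_tour(dist):
    """dist[i][j] = distance between city i and city j."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    # Fix city 0 as the start; permute the remaining n - 1 cities.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# A hypothetical 4-city distance table, just for illustration.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(shortest_tour(dist))  # → (23, (0, 1, 3, 2, 0))
```

Four cities need only 6 orderings, but 20 cities already need 19! ≈ 1.2 × 10^17 of them — which is why brute force stops being an option almost immediately.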

For these reasons, a good way to measure the efficiency of an algorithm is to work out how many computational steps it takes to solve a problem of a given size. There is a natural division into “easy” calculations, where the number of steps is some fixed power of the size of the input data, and “hard” ones, where the growth rate is much faster, often exponential. Multiplying two n-digit numbers together, for example, can be done in about n² steps using good old-fashioned long multiplication, so this calculation is “easy”. (Ref: *A Course in Number Theory and Cryptography* by Neal Koblitz for beautiful, illustrative examples of the number of elementary computations required for addition, subtraction, multiplication and division.) Finding the prime factors of an n-digit number, on the other hand, takes about 10^(n/2) steps if you try every possible divisor up to the square root of the number, which is the most obvious approach, so this calculation is “hard”. (If you like programming, you can use the previously mentioned book by Neal Koblitz; also *The Little Book of Bigger Primes* by Paulo Ribenboim; also *How to Solve It by Computer* by R. G. Dromey, which gives a lucid explanation of computational complexity and the efficiency of programs, so you can also write programs on factoring methods.) The algorithms concerned are said to run in *polynomial time* (class P) and *non-polynomial time* (non-P) respectively.
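The “obvious approach” to factoring mentioned above can be sketched in a few lines (my own illustration, not code from any of the referenced books): trial division up to the square root of N. For an n-digit N the square root has about n/2 digits, so the loop may run roughly 10^(n/2) times — exponential in n, which is what makes this method “hard”.

```python
# Trial division: the most obvious factoring method.
# Worst case (N prime), the loop runs about sqrt(N) ≈ 10**(n/2) times
# for an n-digit number N -- exponential in the number of digits.
def smallest_prime_factor(N):
    d = 2
    while d * d <= N:   # no need to try divisors beyond sqrt(N)
        if N % d == 0:
            return d
        d += 1
    return N            # no divisor found: N itself is prime

print(smallest_prime_factor(91))     # 7, since 91 = 7 * 13
print(smallest_prime_factor(10007))  # 10007, which is prime
```

Note the contrast with multiplication: checking `7 * 13 == 91` is a single cheap step, while *finding* the 7 in the first place needed a search.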

Working out how quickly a given algorithm runs is relatively straightforward. The hard work is to decide whether some other algorithm might be faster. Hardest of all is to show that what you have got is the fastest algorithm that will work, and basically we don’t know how to do that. So problems that we think are hard might turn out to be easy if we found a better method for solving them, and this is where the million dollars comes in. It will go to whoever manages to prove that some specific problem is unavoidably hard — that no polynomial-time algorithm exists to solve it. Or, just possibly, to whoever proves that “There Ain’t No Such Thing As A Hard Problem” — though that doesn’t seem likely, the universe being what it is.

Before you rush out to get started, though, there are a couple of things you should bear in mind. The first is that there is a “trivial” type of problem that is automatically hard, simply because the size of the output is gigantic. “List all ways to rearrange the first n numbers” is a good example: there are n! such rearrangements, so however fast the algorithm might be, it takes at least n! steps just to print out the answer. So this kind of problem has to be removed from consideration, and this is done using the concept of a *non-deterministic polynomial time*, or NP, problem. (Note that NP is different from non-P.) These are the problems where you can verify a proposed *answer* in polynomial time — that is, easily.
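A quick sanity check of that claim (my own illustration): generating every rearrangement of the first n numbers really does produce n! of them, and n! outgrows any fixed power of n almost immediately.

```python
# The output of "list all rearrangements of 1..n" has n! entries,
# so no algorithm can produce it in fewer than n! steps.
from itertools import permutations
from math import factorial

for n in (3, 5, 8, 10):
    count = sum(1 for _ in permutations(range(n)))
    assert count == factorial(n)
    print(n, count)   # n = 10 already gives 3,628,800 lines of output
```

Even at n = 20 the list would have about 2.4 × 10^18 entries — the problem is “hard” for the dull reason that the answer itself is astronomically long.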

My favourite example of an NP problem is solving a jigsaw puzzle. It may be very hard to find a solution, but if someone shows you an allegedly completed puzzle, you can tell instantly whether they have done it right. A more mathematical example is finding a factor of a number (once again, refer to *How to Solve It by Computer* by R. G. Dromey for programming exercises based on factoring methods): it is much easier to divide out and see whether some proposed number works than it is to find that number in the first place.
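That asymmetry between *finding* and *checking* is worth seeing in code (a minimal sketch of my own, with a made-up example number): verifying a proposed factor is a single division, no matter how the factor was found.

```python
# Checking a proposed factor is one division: a polynomial-time
# verification, regardless of how hard the factor was to find.
def is_factor(N, d):
    return 1 < d < N and N % d == 0

N = 8051                   # = 83 * 97, chosen for illustration
print(is_factor(N, 83))    # True  -- verified in one step
print(is_factor(N, 91))    # False -- also settled in one step
```

Finding the 83 might have required sweeping through every candidate up to the square root of N; confirming it takes one remainder operation. That is precisely what makes factoring an NP problem.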

The “P=NP?” problem asks whether every NP problem is P. That is, if you can check a proposed answer easily, can you *find* it easily? Experience suggests very strongly that the answer should be “no” — the hard part is to find the answer. But, amazingly, no one knows how to prove that (as yet), or even whether it is correct. And that’s why you can pocket a million bucks for proving that P is different from NP, or indeed for proving that, on the contrary, the two are equal.

As a final twist, it turns out that all likely candidates for showing that P≠NP are equivalent, in a certain sense. A problem is called *NP-complete* if a polynomial-time algorithm to solve that particular problem automatically leads to a polynomial-time algorithm to solve *any* NP problem. Almost any reasonable candidate for proving that P≠NP is known to be NP-complete. The nasty consequence of this fact is that no particular candidate is likely to be more approachable than any of the others — they all live or die together. In short, we know *why* “P=NP?” must be a very hard problem, but that doesn’t help us to solve it.

I suspect that there are far easier ways to make a million dollars 🙂 🙂 🙂

More later,

Nalin Pithwa

Ref: *Professor Stewart’s Cabinet of Mathematical Curiosities* by Ian Stewart

Other Ref: *The Millennium Problems: The Seven Greatest Unsolved Mathematical Problems of Our Time* by Keith J. Devlin