
At the Ottawa LessWrong meetups, we've started to discuss Scott Aaronson's Why Philosophers Should Care About Computational Complexity. We read and discuss three chapters every week, and I figured it'd be nice to summarize the material. The original pdf isn't very long and contains its own summaries – I'd highly recommend you read that instead of this. In any case, I've decided to write a summary and add context to make things clearer for myself. I'll keep adding chapters as we cover them.

Chapter 1 – Introduction [ Added background. Pedagogic ]

Analogous to how cars and other mechanical inventions were developed to cater to our laziness, computers are devices onto which we offload our most boring and repetitive thought cycles. Computability theory is the study of the extent to which it is possible to identify and demarcate those boring human thoughts, "algorithms", and have them run on other devices, "computers". In the 1930s, people came up with different models of computation, and most surprisingly they all turned out to be equivalent! Any boring thought you might have will run the same on any computer you make. Turing machines became a popular model for analysis, but you're free to choose your own – Langton's ant, Conway's Game of Life, or Church's lambda calculus.

You can see why philosophers got excited – they wanted to know if all human thoughts are boring. In other words, can we come up with a thought that is not boring but still well-defined? One of the first questions posed along these lines was Hilbert's Entscheidungsproblem. Hilbert was a mathematician, and he knew that statements about numbers are really boring. But is it possible that there are some statements about numbers that are not boring? Can we come up with such a statement? How? Simple – figure out a way to ask the question using numbers themselves: "Can you come up with a boring thought that'll classify statements about numbers as true or false?". Kurt Gödel figured out a way to ask that question precisely by mapping it to numbers. Voila! All hell broke loose. The answer was a loud, screaming "NO! Not all true statements about numbers can be proven to be true" – Gödel's incompleteness theorem. But wait. What do we mean by true? Can we even define precisely what we mean by true? This time Alfred Tarski yelled "NO! Arithmetical truth cannot be defined in arithmetic" – Tarski's undefinability theorem. The hardest blow came when Turing asked a similar question – "Can you build a Turing machine that can tell whether a given Turing machine halts?" – and elegantly explained "No, you cannot" – the halting problem. This was worse because it wasn't about numbers anymore, or about the nature of truth – it was about actual physical devices you can build. All these powerful negative results led to the Foundational Crisis of Mathematics. For decades to come, mathematicians and logicians provided enough fodder for generating non-boring thoughts in the minds of philosophers. They devoured it.
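
Turing's "No, you cannot" comes from a self-reference trick. Here's a minimal Python sketch of that argument – my own framing, not Aaronson's – assuming a hypothetical halts(program, data) oracle existed:

```python
def halts(program, data):
    """Hypothetical halting oracle -- Turing's argument shows no such
    total procedure can exist, so this stub can never be filled in."""
    raise NotImplementedError

def paradox(program):
    """Feed a program to the oracle and do the opposite of its prediction."""
    if halts(program, program):   # oracle says program(program) halts...
        while True:               # ...so loop forever
            pass
    else:
        return                    # oracle says it loops, so halt immediately

# Now ask: does paradox(paradox) halt? If the oracle answers "yes", paradox
# loops forever; if it answers "no", paradox halts immediately. Either way
# the oracle is wrong, so no such oracle can exist.
```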

But that was a long time ago. From the 1960s and '70s onwards, once real computers were being built and programmed, we became increasingly concerned with the complexity and efficiency of algorithms – how much time and space does a computation require? Computational complexity theory was born. It has since developed into a rich theory, but it appears that philosophers are still dining on computability theory. We have a new restaurant at this end of the universe not yet frequented by philosophers. Scott Aaronson is our cook and I'm your waiter.

Chapter 2 – Complexity 101 [ Added background ]

If we know something is computable, does it matter whether it can be computed in 10^5 seconds (about a day) or 10^100 seconds? Yes. Our universe is less than 10^20 seconds old. Clearly, the problem is in the exponent. So if you want to solve a problem of some size n, and the most efficient algorithm for it takes time 10^n, then you're screwed: if the size sits in the exponent, the algorithm is not efficient. But if it takes something like n^10 or even n^100, we're happy: if the size sits in the base, the algorithm is efficient. The next obvious thing we'd like to know is which problems admit efficient algorithms and which don't. Just as in computability theory, where we figured out that many models of computation were equivalent, we've now learned that many hard problems are equivalent to one another. They're called NP-complete problems (NP stands for Non-deterministic Polynomial time). It doesn't matter which of them you attack – if you find an efficient algorithm for one, you've found one for all. The efficiently solvable problems, where the size occurs in the base and not in the exponent, form the class P (Polynomial time).
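
To make the base-versus-exponent point concrete, here is a quick Python illustration (mine, not from the paper) of how the step counts diverge:

```python
# Steps needed by a polynomial-time algorithm (n**3) versus an
# exponential-time one (2**n) as the input size n grows.
for n in (10, 30, 50, 100):
    print(f"n = {n:>3}   n^3 = {n**3:>12,}   2^n = {2**n:,}")
```

At n = 100 the polynomial count is a million, while 2^100 is about 10^30 – more steps than any physical computer will ever take.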

I will take a moment here to explain what NP-completeness means, because it is crucial. Imagine you're asked to solve a problem, e.g., sorting a pack of cards. You may want to know how many cards you'd have to compare in the process, so you ask "What is the minimum number of comparisons needed?". This is an optimization problem, and the answer is a number. Other problems have answers in other forms – the list of cities along the shortest route, etc. To be able to compare these problems and their algorithms, it'd be nicer to have answers in a common form, and the simplest such form is a boolean value. So you instead ask "Is the minimum number of comparisons needed to sort greater than 100?", to which the computer answers only True or False – these are called decision problems. Another feature of any algorithm, you might have noticed, is that it must make choices and, depending on their outcomes, proceed further. Think of it as a binary tree – you start at the root, make your choice and proceed leaf-ward. The leaves contain the answers – True or False. When a Deterministic Turing machine (DTM) makes a choice, it has to go down its chosen path, and if that turns out to be a bad decision, it has to come back and go down the other path. A Non-deterministic Turing machine (NTM), on the other hand, magically always makes the right choice. The set of problems efficiently solvable by a DTM is called P, and the set of problems efficiently solvable by an NTM is called NP. Good. Now to NP-completeness. Stephen Cook discovered in 1971 that many (but not all) problems in NP are equally hard, and are the hardest problems in NP – these are called the NP-complete problems. What exactly is the power of this hypothetical magic in an NTM? We don't know – that is the P =? NP question.
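
Here's a small Python sketch (my own, not Aaronson's) using the subset-sum decision problem – "is there a subset of these numbers adding up to the target?" – to show the gap between the DTM's brute-force search and the polynomial-time check that suffices once the NTM has magically guessed the right subset:

```python
from collections import Counter
from itertools import combinations

def subset_sum_bruteforce(numbers, target):
    """What a DTM ends up doing: try every subset, ~2**len(numbers) of them."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

def verify_guess(numbers, target, subset):
    """All the NTM's magic guess needs afterwards: a polynomial-time check
    that `subset` is really drawn from `numbers` and sums to `target`."""
    return not (Counter(subset) - Counter(numbers)) and sum(subset) == target

nums = [3, 34, 4, 12, 5, 2]
print(subset_sum_bruteforce(nums, 9))    # (4, 5) -- found by exhaustive search
print(verify_guess(nums, 9, (4, 5)))     # True  -- checked in polynomial time
```

Subset sum happens to be NP-complete, so an efficient algorithm for it would yield one for every problem in NP.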

After that long digression, which has hopefully brought most readers onto a common footing, let me return to what Scott Aaronson has to say in this chapter. First, why are we so comfortable dividing efficiency into polynomial and exponential, and not something in between like n^(log n)? A second objection is that an algorithm taking an "exponential" 1.00001^n time might in practice run faster than one taking n^100000 time, because real-world inputs are bounded. These questions are tackled in later chapters. A third point is that while computability theory concerns itself with the logic of what can and cannot be computed, complexity theory deals with what can be done best given limited resources, and in this sense it is more closely related to the physical sciences. So one can ask: if we could travel backwards in time, could we compute faster? Is there a limit physics places on the amount of information a bounded region of space can store?
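
The 1.00001^n vs. n^100000 objection is easy to make concrete. A few lines of Python (my own illustration, comparing logarithms to avoid astronomically large numbers) show where the crossover lies:

```python
import math

def exp_bound_is_smaller(n):
    """Compare 1.00001**n with n**100000 via their logarithms."""
    return n * math.log(1.00001) < 100000 * math.log(n)

for n in (10**6, 10**9, 10**11, 10**12):
    winner = "1.00001^n" if exp_bound_is_smaller(n) else "n^100000"
    print(f"n = {n:.0e}: the smaller running time is {winner}")
```

The "exponential" algorithm wins for every input size up to roughly 2.6 x 10^11, so on bounded real-world inputs it would be the one to use.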

Chapter 3 – The relevance of polynomial time

3.1 The Entscheidungsproblem Revisited

Church and Turing showed that there is no algorithm that classifies all statements into "provable in F", "disprovable in F", and "undecidable in F", even granting that F is incomplete. In his famous "lost letter" to von Neumann, Gödel asked whether there is an efficient way to decide if a given statement has a proof in F of length n bits or less – i.e., if a proof of reasonable length exists, can it be found in a reasonable amount of time? Gödel had anticipated the P vs. NP problem.
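
To see why this is a P-vs-NP-shaped question, here is a toy Python sketch (my framing, not Aaronson's): checking one claimed proof is assumed to be fast, but searching all candidate proofs of length at most n means wading through exponentially many strings:

```python
from itertools import product

def find_proof(statement, max_len, check_proof, alphabet="01"):
    """Brute-force search over every candidate 'proof' of length <= max_len.

    `check_proof(statement, candidate)` is a placeholder verifier assumed to
    run in time polynomial in len(candidate); the search itself examines up
    to len(alphabet)**max_len candidates -- exponential in max_len."""
    for length in range(1, max_len + 1):
        for symbols in product(alphabet, repeat=length):
            candidate = "".join(symbols)
            if check_proof(statement, candidate):
                return candidate   # the witness, once found, is cheap to re-check
    return None
```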

3.2 Evolvability

Is 4 billion years enough time to construct a human body (which has presumably optimized many of its inner workings with respect to its environment) from the far more homogeneous matter that formed the early earth? Gödel thought not, but modern biology says otherwise. Basically, complexity theory is nowhere near even hinting at an answer about the possibility of evolution.

3.3 Known Integers

This is an interesting question – when you say you know something, what do you mean? Let us, for a moment, set aside knowledge of the kind most people spend their lives trying to acquire – the mating habits and courtship rituals of humans – and ask about more specific claims, for instance, the kind mathematicians make – knowing how to prove something. When a mathematician claims he knows how to prove a statement in a system, what he's saying is that he can, in an amount of time polynomial in the size of the statement, reduce it to something you both agree on. To make this more concrete, when you say you "know" a prime number p, what you're saying is that you have an algorithm that proves, in time polynomial in the number of digits of p, that p is prime. The same thing cannot be said for the prime defined as "the next prime larger than p", even though it is perfectly well defined. Although this seems like a very interesting way to think about things, Gil Kalai opines: "The opinion that once you show that a certain human activity represents an effort in P means that it is "trivialized" is a bit too far since it is quite plausible that all human activities represent efforts in P".
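
As an illustration of the distinction, here is a sketch of my own in Python, using a probabilistic Miller-Rabin test rather than the primality certificates Aaronson has in mind, but with the same polynomial-in-the-number-of-digits flavor:

```python
import random

def is_probable_prime(p, rounds=20):
    """Miller-Rabin test: runs in time polynomial in the number of digits of p."""
    if p < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if p % small == 0:
            return p == small
    d, s = p - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False
    return True

def next_prime_after(p):
    """'The next prime larger than p' is well defined, but you only come to
    know it by searching past p -- there is no known shortcut from p itself."""
    q = p + 1
    while not is_probable_prime(q):
        q += 1
    return q

print(is_probable_prime(2**127 - 1))   # True: a prime we "know", cheap to certify
print(next_prime_after(10**9))         # 1000000007, found only by searching
```

The first call answers instantly even though 2^127 - 1 has 39 digits; the second has to crawl forward from 10^9, testing candidates one by one.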