
I failed to post a blog entry last week, so to make up for it I'll post two this week. Here's the second one.

LessWrong is a community blog "dedicated to the art of refining human rationality". There are a lot of posts about Bayesian probability, cognitive biases, artificial intelligence, etc., and if you're new to it, I'd recommend starting with one of the Sequences. Apart from online discussion, the LessWrong community has come to realize the importance of meeting up in person. This is particularly important considering the kind of audience the blog attracts – slightly geeky, high-IQ individuals who are presumably not the most social people. I can relate.

The Ottawa meetup group started sometime in the summer, with an average of six people attending every week. For various reasons, people have stopped coming. It seems those who were aware of it when it started were the only ones to join – around 10 people. We did not make any effort to spread the word. In any case, the average group size is now three (Alex, Andrew and me), and occasionally a couple of other people show up. So here's a summary of what goes on in a typical meetup. This one was a couple of weeks ago.

Summary of Monday’s meeting –
We were supposed to meet at Andrew's at 7:30, but it got postponed to 8. Alex was late, and when he arrived at 8:30 he brought a pack of "Set" cards. The three of us played one round while doing some bistro combinatorics on the game. As usual, for every game there's a mathematician who's studied it.
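For reference, the rule we were playing with is easy to state in code: three cards form a Set exactly when, for each of the four attributes, the values are all the same or all different. Here's a quick sketch (the 0/1/2 encoding of attribute values is my own choice):

```python
from itertools import combinations

# A Set card has four attributes (number, shading, colour, shape),
# each taking one of three values, encoded here as 0, 1 or 2.
def is_set(a, b, c):
    # For each attribute, "all same or all different" is equivalent
    # to the three values summing to 0 mod 3.
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

def count_sets(cards):
    # Brute force over all trios of dealt cards.
    return sum(is_set(*trio) for trio in combinations(cards, 3))

cards = [(0, 0, 0, 0), (1, 1, 1, 1), (2, 2, 2, 2), (0, 1, 2, 0)]
print(count_sets(cards))  # 1 – only the first three cards form a Set
```

The mod-3 trick is also why the game invites combinatorics: cards are points in a four-dimensional space over the field of three elements, and Sets are lines.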

We then moved over to the couch. I had printed three copies of chapters 2 and 3, and surprise, surprise – only three people showed up. Talking about making decisions under uncertainty, I asked Alex to pose any outward-looking issue LessWrong deals with (not the self-help ones), and said I'd reduce it to an issue in complexity. He asked something along the lines of "making decisions under uncertainty", which I quickly interpreted as "optimal decisions", and I hand-wavingly introduced bounded rationality – the optimization of one's perceived "decision function" is critically, and possibly non-linearly, affected by one's intelligence. Alex then asked for some examples of problems in P and some in NP. After an embarrassing amount of thought, I gave a couple of terrible examples. But since I'm writing stuff down, here are some better ones –

NP-complete – Boolean satisfiability (SAT), subset sum, the travelling-salesman decision problem.
P – I mentioned matrix multiplication; sorting and primality testing are other good examples.

TODO for next Monday – Clarify the meaning of decision problems and then the classes P and NP, and basically why P = NP is the most important problem in math.

Other usual favourite topics that surfaced included the genetics of homosexuality, cryonics, etc. Andrew asked the following question: what if there was a machine into which you stepped, and out came an exact copy of you while your original was destroyed? I obviously pointed out the no-cloning theorem, while Alex remarked that perhaps none of the significant things that make us us have quantum origins. Then we proceeded to debate how we compute "I" – this was a long conversation. I pointed out that until very recently in human evolution, any definition of "I" had a significant genetic/family/cultural component, whereas now, in the context of "uploading our brains", we're defining "I" in a purely intellectual manner – basically arguing about whether it's OK for genes to "go jump in the lake". This went on for a while, and Alex remarked that my thoughts race ahead of my words to the extent that the conversation seemed like a random walk.

On homosexuality, I quickly mentioned that it isn't an evolutionarily stable strategy (ESS) – homosexuals will not procreate and should die out. Alex mentioned that a recessive gay gene could exist in a person whose homosexual brother, carrying two copies of the recessive gene, helps him survive and procreate by providing resources. Andrew pointed out that the incidence of homosexuality is observed to be higher among younger siblings. I wasn't totally convinced that Andrew's gay-producing theory and Alex's gay-maintaining theory added up to the observed prevalence. We also wondered about the proportion of homosexuality in animals.

That's about as much as my aching head could remember. We did not cover much of the paper, but we will continue. Next week, I propose we play some card game (Set was fun), sit down, go through some concepts, and try to cover chapter 2 and hopefully some part of 3 in reasonable detail. "Fart sessions" – as we used to call any undirected, dissipative discussion sessions in undergrad – will inevitably follow.


At the Ottawa LessWrong meetups, we've started to discuss Scott Aaronson's Why Philosophers Should Care About Computational Complexity. We read and discuss three chapters every week, and I figured it'd be nice to summarize the material. The original PDF isn't very long and contains its own summaries – I'd highly recommend you read that instead of this. In any case, I've decided to write a summary and add context to make things clearer for myself. I'll keep adding chapters as we cover them.

Chapter 1 – Introduction [ Added background. Pedagogic ]

Analogous to how cars and other mechanical inventions were developed to cater to our laziness, computers are devices onto which we offload our most boring and repetitive thought cycles. Computability theory is the study of the extent to which it is possible to identify and demarcate those boring human thoughts, "algorithms", and have them run on other devices, "computers". In the 1930s, people came up with different models of computation, and most surprisingly they all turned out to be equivalent! Any boring thought you might have will run the same on any computer you make. Turing machines became a popular model for analysis, but you're free to choose your own – ants (Langton's), the Game of Life (Conway's) or the lambda calculus (Church's).

You can see why philosophers got excited – they wanted to know if all human thoughts are boring. In other words, can we come up with a thought that is not boring but well defined? One of the first questions posed along these lines was Hilbert's Entscheidungsproblem. Hilbert was a mathematician, and he knew that statements about numbers are really boring. But is it possible that some statements about numbers are not boring? Can we come up with such a statement? How? Simple – figure out a way to ask the same question using numbers: "Can you come up with a boring thought that'll classify statements about numbers as true or false?" Kurt Gödel figured out a way to ask that question precisely by mapping it to numbers. Voilà! All hell broke loose. The answer was a loud, screaming "NO! Not all true statements about numbers can be proven true" – Gödel's incompleteness theorem. But wait – what do we mean by true? Can we even define precisely what we mean by true? This time Alfred Tarski yelled "NO! Arithmetical truth cannot be defined in arithmetic" – Tarski's undefinability theorem. The hardest blow came when Turing asked a similar question – "Can you build a Turing machine that can tell whether a given Turing machine halts?" – and elegantly explained "No, you cannot" – the halting problem. This was worse because it wasn't about numbers anymore, or about the nature of truth – it was about actual physical devices you can build. All these powerful negative results led to the foundational crisis of mathematics. For decades to come, mathematicians and logicians provided enough fodder for generating non-boring thoughts in the minds of philosophers. They devoured it.

But that was a long time ago. Since the 1970s, as real computers became widespread, we've grown increasingly concerned with the complexity and efficiency of algorithms – how much time and space does a computation need? Computational complexity theory was born. It has since developed into a rich theory, but it appears that philosophers are still dining on computability theory. We have a new restaurant at this end of the universe, not yet frequented by philosophers. Scott Aaronson is our cook and I'm your waiter.

Chapter 2 – Complexity 101 [ Added background ]

If we know something is computable, does it matter whether it can be computed in 10^5 seconds (about a day) or 10^100 seconds? Yes. Our universe is only about 10^17 seconds old. Clearly, the problem is in the exponent. So, if you want to solve a problem of some size n, and the most efficient algorithm to do it takes time 10^n, then you're screwed. If the size is in the exponent, we're not efficient. But if it takes something like n^10 or even n^100, we're happy. If the size occurs in the base, our algorithm is efficient. The next obvious thing we'd like to know is which problems admit efficient algorithms and which don't. Just as in computability theory, where we figured out that many models of computation were equivalent, we've now learned that many hard problems are equivalent. They're called NP-complete (NP for Non-deterministic Polynomial time) problems. It doesn't matter which of them you attack – if you find an efficient algorithm for one, you've found one for all. The efficient problems, where size occurs in the base and not in the exponent, are called P (Polynomial time) problems.
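To see how dramatic the base-versus-exponent difference is, here's a toy calculation: given a budget of 10^17 elementary steps (roughly the age of the universe in seconds, at an assumed rate of one step per second – purely for illustration), how large an instance can each kind of algorithm handle?

```python
# A budget of 10**17 elementary steps – roughly the age of the
# universe in seconds, at an assumed one step per second.
BUDGET = 10**17

def largest_n(cost):
    # Largest problem size whose cost still fits in the budget.
    n = 1
    while cost(n + 1) <= BUDGET:
        n += 1
    return n

print(largest_n(lambda n: n**3))  # size in the base: n = 464158
print(largest_n(lambda n: 2**n))  # size in the exponent: n = 56
```

With the size in the base, instances with hundreds of thousands of elements fit in the lifetime of the universe; with the size in the exponent, you run out of universe before n reaches 60.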

I will take a moment here to explain what NP-completeness means, because it is crucial. Imagine you're asked to solve a problem, e.g. sorting a pack of cards. You may want to know how many cards you'd have to compare in the process – so you ask, "What is the minimum number of comparisons needed?" This is an optimization problem, and the answer is a number. Other problems have answers in various other forms – the list of cities in the shortest route, etc. To compare these problems and their algorithms, it's nicer to have answers in a common form, and the simplest such form is a boolean value. So you instead ask, "Is the minimum number of comparisons needed to sort greater than 100?", to which the computer only answers True or False – these are called decision problems. Another feature of any algorithm, you might have noticed, is that it must make choices and proceed depending on their outcomes. Think of it as a binary tree – you start at the root, make your choice, and proceed leaf-ward. Leaves contain answers – True or False. When a deterministic Turing machine (DTM) makes a choice, it has to go down its chosen path, and if that turns out to be a bad decision, it has to come back and go down the right path. A non-deterministic Turing machine (NTM), on the other hand, magically always makes the right choice. The problems efficiently solvable by a DTM form the set P, and those efficiently solvable by an NTM form the set NP. Good. Now to NP-completeness. Stephen Cook, in the early 1970s, discovered that many problems (but not all) in NP are equally hard and are the hardest problems in NP – these are the NP-complete problems. What exactly is the power of this hypothetical magic in an NTM? We don't know – that is the P =? NP question.
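A concrete way to see the DTM/NTM asymmetry is subset sum, a standard NP-complete decision problem ("does some subset of these numbers sum to the target?"): checking a proposed answer is quick, but finding one seems to require walking the whole choice tree. A sketch:

```python
from itertools import combinations

def verify(subset, target):
    # The NTM's "magic" amounts to being handed a candidate answer;
    # checking it takes time polynomial in the input size.
    return sum(subset) == target

def search(numbers, target):
    # A DTM has no magic: as far as we know, it must explore the
    # choice tree – up to 2**n subsets in the worst case.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if verify(subset, target):
                return subset
    return None

print(search([3, 34, 4, 12, 5, 2], 9))  # (4, 5)
```

If P = NP, some clever algorithm would make `search` as cheap as `verify`; nobody has found one, and nobody has proven none exists.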

After that long digression, which has hopefully brought most readers onto a common footing, let me return to what Scott Aaronson has to say in this chapter. First, why are we so comfortable dividing efficiency into polynomial and exponential, and not something in between like n^(log n)? A second objection is that in practice an algorithm that takes an exponential 1.00001^n time might run faster than one taking n^100000, because in practice inputs are bounded. These questions are tackled in later chapters. A third point is that while computability theory concerns itself with logic about what can and cannot be computed, complexity theory deals with what can be done best given limited resources, and in this sense it is more closely related to the physical sciences. So one can ask: if we could travel backwards in time, could we compute faster? Is there a limit physics places on the amount of information a bounded region of space can store?

Chapter 3 – The relevance of polynomial time

3.1 The Entscheidungsproblem Revisited

Church and Turing showed that, even granting incompleteness, no algorithm can classify all statements into "provable in F", "disprovable in F", and "undecidable in F". Gödel, in a famous "lost letter" to von Neumann, asked whether there is an efficient way to decide if a given statement has a proof in F of length n bits or less – i.e., if a proof of reasonable length exists, can it be found in a reasonable amount of time? Gödel had anticipated the P vs NP problem.

3.2 Evolvability

Is four billion years enough time to construct a human body (which has presumably optimized many of its inner workings with respect to its environment) from the much more homogeneous matter that formed the earth? Gödel thought not, but modern biology says otherwise. Basically, complexity theory is nowhere near even hinting at an answer on the feasibility of evolution.

3.3 Known Integers

This is an interesting question – when you say you know something, what do you mean? Let us, for a moment, set aside knowledge of the kind most people spend their lives trying to acquire – the mating habits or courtship rituals of humans – and ask more specific questions, for instance ones mathematicians claim to answer – proving something. When a mathematician claims he knows how to prove a statement in a system, what he's saying is that he can, in an amount of time polynomial in the size of the statement, reduce it to something you both agree on. To make this concrete, when you say you "know" a prime number p, what you're saying is that you have an algorithm that proves, in time polynomial in the number of digits of p, that p is prime. The same cannot be said for the prime := "next prime larger than p", even though it is well defined. Although this seems like a very interesting way to think about things, Gil Kalai opines: "The opinion that once you show that a certain human activity represents an effort in P means that it is 'trivialized' is a bit too far since it is quite plausible that all human activities represent efforts in P."
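A rough illustration of "knowing" a prime: the Miller–Rabin test (probabilistic, so not quite the proof-producing notion in the paper, but close in spirit) certifies primality in time polynomial in the number of digits of p, even for enormous p:

```python
import random

def is_probable_prime(p, rounds=20):
    # Miller–Rabin: time polynomial in the number of digits of p.
    if p < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if p % small == 0:
            return p == small
    d, s = p - 1, 0
    while d % 2 == 0:            # write p - 1 as 2**s * d, d odd
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)         # a**d mod p, fast modular exponentiation
        if x in (1, p - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False         # a witnesses that p is composite
    return True

print(is_probable_prime(2**127 - 1))  # True – a Mersenne prime we "know"
```

By contrast, the "next prime larger than 2^127 - 1" is perfectly well defined, but you don't "know" it until you've hunted it down.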


Halloween weekend. Woke up late and read a Slashdot article, "When Having the US Debt Paid Off Was a Problem", which claimed that the US had a budget surplus in 2000 and could have paid off its entire debt by 2012 – and, much more surprisingly, that doing so would have been harmful to the US economy. I don't even pretend to understand finance, and my usual strategy is to dismiss it as a shitty, unproductive, zero-sum game greedy people play. But today I had more important things to study – i.e. more important things that could be put off. So I ended up watching 15 Khan Academy videos about the "Paulson Bailout".

Here’s what I learnt…for future reference.

To calculate what you're worth, you list everything you own (Assets) and everything you owe (Liabilities) in some currency, and the difference is the answer (Equity). The whole problem is to determine what constitutes your assets and liabilities, and to figure out how to map them to, say, USD. We should learn some terminology first.

Mortgage-Backed Security – People need to buy houses. They don't have all the money, so they get loans from commercial banks or wherever, and pay mortgages for 30 years covering interest and principal. Investment banks with liquid cash (C) set up a company, a "Special Purpose Entity", with some liquid cash and buy these mortgages. So now homeowners pay their monthly mortgage to these SPEs. The SPEs bundle the mortgages into "Mortgage-Backed Securities" and trade them on the stock exchange (perhaps to recoup their initial cost C). Why would people want to invest in such securities? Because they pay a dividend, and if it's more than your savings-account interest, why not?

Wait – what is going on? Investing in a regular stock means placing your confidence in the productivity of that company, i.e. you think d = company_productivity(your money + company's money) – company_productivity(company's money) > 0. Moreover, you think d > your money, which means investing in this company makes society more productive. But in the case of SPEs and MBSs, the how-does-my-investment-make-society-more-productive loop is much, much bigger – you invest in an investment bank, which lends more money to people to buy houses, which makes them work harder to pay their mortgages, improving society's value. If everyone paid their mortgages, there wouldn't be a problem. But there is a discontinuity in the system – insolvency – and the game is about who's going to hit that bound, statistically speaking. So SPEs package their MBSs into classes, "tranches" – high risk (unsecured), medium risk (mezzanine), and low risk (super-secured), with correspondingly decreasing dividend rates – and these are called Collateralized Debt Obligations (CDOs), with the risks evaluated by rating agencies. When someone defaults, money vanishes from the unsecured CDOs first, and then from the mezzanine if needed.
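The tranche waterfall can be sketched in a few lines. The tranche names follow the paragraph above; the face values and pool values are entirely made up:

```python
def tranche_payout(pool_value, tranches):
    # Toy waterfall: pay the most senior tranche first; losses are
    # absorbed bottom-up, hitting the unsecured tranche first.
    payouts = {}
    remaining = pool_value
    for name, face in tranches:
        payouts[name] = min(face, max(remaining, 0))
        remaining -= payouts[name]
    return payouts

# Most senior first; all numbers invented for illustration.
tranches = [("super-secured", 60), ("mezzanine", 25), ("unsecured", 15)]
print(tranche_payout(100, tranches))  # everyone is paid in full
print(tranche_payout(70, tranches))   # defaults wipe out the unsecured tranche
```

With the pool at 100 everyone gets their face value; drop it to 70 and the unsecured tranche gets nothing while the super-secured tranche is untouched – which is why the tranches can carry such different ratings and dividend rates.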

US Treasury Bonds – When you lend someone money, they give you a receipt with the details: who borrowed from whom, how much, for how long, at what interest, etc. This receipt is called a bond certificate. When you buy US Treasury bonds, you're lending the US government money.

Good. So what does a bank's balance sheet look like? Its assets include some liquid cash, perhaps some good corporate bonds (money lent to Microsoft, perhaps :P), maybe some US Treasury bonds, and maybe some "high risk" CDOs. Banks also have liabilities – they take short-term loans (for example, 3 months), pay interest on them, and pay off the principal by taking out another loan. I don't exactly understand why, but it seems these loans form a large part of their liabilities. For example, today Wikipedia says Goldman Sachs has about $900B of assets but only $77B of equity, which means over $800B in liabilities. Lehman Brothers had over $600B of assets when it declared bankruptcy!

Equity = (Assets – Liabilities) can be hard to estimate when there are risky CDOs. Banks bought these CDOs at high prices during the housing bubble, when everyone thought housing prices would rise eternally, and that was reflected in their "book" value on balance sheets. Then some people begin to default on their mortgages. This hits the "high risk" CDOs, which are now worth far less than what the banks paid – i.e. their "market" value is far lower. Banks need to write them down to retain trust. This brings down their equity. If equity goes below zero, the bank declares insolvency (like Lehman Brothers did). That means all its creditors lose money. But its creditors are other banks, which had those loans listed among their assets. Now people look at the surviving banks' balance sheets with a suspicious eye, wondering whether the assets reflect real market value. Banks don't want to default, so they hold on to their liquid cash to pay off their own loans when due. The market slows down, and the "real" value of anything becomes hard to determine. A few more banks default.
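The write-down arithmetic is simple but brutal at high leverage. A toy balance sheet (numbers loosely echo the Goldman figures above, but are otherwise invented):

```python
def equity(assets, liabilities):
    # What you own minus what you owe.
    return sum(assets.values()) - liabilities

# Invented figures, in billions: $900B of assets against $823B of
# short-term loans, mirroring the ~$900B / $77B split mentioned above.
bank = {"cash": 100, "treasuries": 300, "corporate_bonds": 400, "cdos": 100}
loans = 823

print(equity(bank, loans))  # 77 – solvent, but leveraged nearly 12x
bank["cdos"] = 20           # mark the CDOs down to market value
print(equity(bank, loans))  # -3 – one write-down wipes out the equity
```

An 80-point markdown on one asset class – under a tenth of the balance sheet – takes the bank from solvent to insolvent, because almost all of the asset side is borrowed.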

What can the government do? Well, nothing – if a society is capitalist, it should be so both when banks are profitable and when they make losses. That did not happen. The Paulson bailout injected $700 billion into these banks by buying the risky CDOs at a much higher price than market value, via reverse auction – the feds set up an auction saying they want to buy some CDOs with some cash, and the company that sells for the cheapest price wins. This is a terrible idea – a company desperate enough to sell its CDOs for quick liquid cash isn't going to garner more trust. Who knows what else it has overstated or fudged on its balance sheet? Moreover, what will a bank that receives this bailout money do? It will hold on to the cash so it can repay its loans and not default. All these bailout funds go stagnant. Some banks die, some survive. Government debt increases – and a few weeks ago it hit the ceiling as well.

Making money off money stinks.
