June 19, 2013

  • Clearly not the best popular book in cognitive economics

    JDN 2456462 EDT 14:41.

     

    I must respectfully disagree with the reviewer at Nature; Massimo Piattelli-Palmarini’s Inevitable Illusions is not “the best popular book in this field”. That title lies squarely on the shoulders of Thinking, Fast and Slow by Daniel Kahneman. (Piattelli-Palmarini’s name takes a long time to write, so I shall abbreviate it MPP.)

    Inevitable Illusions is decent, satisfactory; and maybe when it was written in 1994 it really was the best popular book available. But some of MPP’s explanations are awful, and a few of them are just outright wrong.

     

    It’s not an awful book; it’s very easy to read, and someone who had no exposure to cognitive economics would indeed learn some things by reading it. I do like the way that MPP emphasizes repeatedly that cognitive illusions do not undermine rationality; they merely show that human beings are imperfect at being rational. It’s odd that this is controversial (doesn’t it seem obvious?), but it is; neoclassical economists to this day insist that human deviations from rationality are inconsequential.

     

    MPP’s explanations of the sure-thing principle and Bayes’ Law are so singularly awful and incomprehensible that I feel I must reproduce them verbatim:

    “If, after considering all the arguments pro and con, we decide to do something and a certain condition arises in that something, and we decide to do that very thing, even if the condition does not arise, then, according to the sure-thing principle, we should act immediately, without waiting.”

    It’s much simpler than that. If you’d do B if A is true and also do B if A is false, then you should do B without needing to know whether A is true. To use MPP’s own example, if you’ll go to Hawaii whether or not you passed the test, then you don’t need to know whether you passed before you buy your tickets to Hawaii.
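    The whole principle fits in a few lines of code. Here is a minimal sketch (my own illustration, not from the book): if the chosen action agrees on both branches of the unknown condition, the condition is irrelevant and you can act now.

```python
def sure_thing(action_if_true, action_if_false):
    """If the chosen action is the same on both branches of the unknown
    condition, return it; otherwise we genuinely need to wait and learn."""
    if action_if_true == action_if_false:
        return action_if_true  # act immediately; the condition doesn't matter
    return None                # the condition actually affects the choice

# MPP's Hawaii example: you'd go whether or not you passed the test,
# so buy the tickets now.
print(sure_thing("buy tickets", "buy tickets"))  # buy tickets
print(sure_thing("buy tickets", "stay home"))    # None: wait for the result
```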

    “The probability that a hypothesis (in particular, a diagnosis) is correct, given the test, is equal to: The probability of the outcome of the test (or verification), given the hypothesis (this is a sort of inverse calculation with respect to the end we are seeking), multiplied by the probability of the hypothesis in an absolute sense (that is, independent of this test or verification) and divided by the probability of the outcome of the test in an absolute sense (that is, independent of the hypothesis or diagnosis).”

    Once again, we don’t need this mouthful. Bayes’ Law is subtle, but it is not that complicated. The probability that A is true, given that B is true, equals the probability that B would be true if A were true, divided by the overall probability of B, times the prior probability of A. B provides evidence in proportion to how much more likely B would be if A were true; that evidence is then applied to your prior knowledge of how likely A is in general. Most people ignore the prior knowledge, but that’s a mistake; even strong evidence shouldn’t convince you if the event you’re looking for is extremely unlikely.

    It’s probably easiest to use extremes. If B is no more likely to be true when A is than when A isn’t, it provides no evidence; P(B|A) = P(B) and thus P(A|B) = P(B)/P(B)*P(A) = P(A). If B is guaranteed to be true whenever A is true and guaranteed not to be true whenever A is false, then it provides perfect evidence: P(B|A) = 1, P(B) = P(A), and P(A|B) = 1*P(A)/P(A) = 1.
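    Both extremes, and everything in between, can be checked numerically. A quick sketch (the numbers are mine, chosen for illustration):

```python
def posterior(p_a, p_b_given_a, p_b):
    """Bayes' Law: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

p_a = 0.3  # prior probability of A

# No evidence: B is equally likely either way, so the posterior is the prior.
assert posterior(p_a, p_b_given_a=0.5, p_b=0.5) == p_a

# Perfect evidence: B happens exactly when A does, so P(B) = P(A)
# and the posterior is 1.
assert posterior(p_a, p_b_given_a=1.0, p_b=p_a) == 1.0

# In between: B always occurs when A holds, but also 25% of the time when
# it doesn't. By total probability, P(B) = 1.0*0.3 + 0.25*0.7 = 0.475,
# and a 30% prior only rises to about 63%, nowhere near certainty.
p_b = 1.0 * p_a + 0.25 * (1 - p_a)
print(round(posterior(p_a, 1.0, p_b), 3))  # about 0.632
```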

     

    At the end of the book, MPP answers rebuttals from “cognitive ecologists” who (if his characterization is accurate) think that we suffer no significant cognitive illusions, and that’s of course very silly. If this is not a strawman, it’s a bananaman. A more charitable reading would be that we wouldn’t suffer survival-relevant cognitive illusions in a savannah environment 100,000 years ago; but that’s a far weaker claim, and proportionately less interesting. Life was simpler back then. Nasty, brutish, and short; but simple. We might have experienced illusions in the past (if the mutations to make us do better simply did not exist), but it’s equally reasonable to say that we didn’t. The point is that we live a much more complex life now, so heuristics that worked before don’t anymore.

    MPP is of course right about that part. But he also sees illusions that aren’t really there (meta-illusion?).

    For instance, he seems deeply troubled by the fact that similarity judgments are intransitive, when in fact this makes perfect sense. Being “similar” isn’t sharing a single property; it’s sharing a fraction of a constellation of properties. Jamaica is like Cuba in that they are small island nations in the Caribbean; Cuba is like the Soviet Union in that they are Communist dictatorships. Jamaica is not like the Soviet Union, because they don’t have much in common. There is no reason we would expect this judgment to be transitive, and anyone who does think so is simply using a bad definition of “similarity”. Similarity is more like probability; and from P(A&B) = 0.6 and P(B&C) = 0.5, you can’t infer much at all about P(A&C). The probability axioms place certain limits on it, but not very strong ones. Suppose 60% of doctors are men with blue eyes, and 50% of doctors are Americans with blue eyes; how many of the doctors are American men? We could have 50% blue-eyed American men, 10% blue-eyed German men, and 40% brown-eyed American men. We could also have 10% blue-eyed American men, 50% blue-eyed German men, and 40% blue-eyed American women. So the number of American men could be anywhere from 10% to 90%.
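    The two hypothetical doctor populations above are easy to verify by brute force. Both satisfy the stated marginals (60% blue-eyed men, 50% blue-eyed Americans), yet the share of American men swings from 10% to 90%:

```python
# Each tuple: (fraction of population, is_man, is_american, has_blue_eyes)
pop_low = [
    (0.10, True,  True,  True),   # blue-eyed American men
    (0.50, True,  False, True),   # blue-eyed German men
    (0.40, False, True,  True),   # blue-eyed American women
]
pop_high = [
    (0.50, True,  True,  True),   # blue-eyed American men
    (0.10, True,  False, True),   # blue-eyed German men
    (0.40, True,  True,  False),  # brown-eyed American men
]

def share(pop, pred):
    """Total fraction of the population satisfying the predicate."""
    return sum(f for f, *traits in pop if pred(*traits))

for pop in (pop_low, pop_high):
    # Both populations satisfy the stated constraints...
    assert abs(share(pop, lambda man, usa, blue: man and blue) - 0.60) < 1e-9
    assert abs(share(pop, lambda man, usa, blue: usa and blue) - 0.50) < 1e-9
    # ...yet the share of American men differs wildly.
    print(round(share(pop, lambda man, usa, blue: man and usa), 2))  # 0.1, then 0.9
```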

    The fact that similarity judgments are not always symmetrical is more problematic, though even it can be explained without too much deviation from normative rationality. Why is North Korea more like China than China is like North Korea? Well, we know more about China; we have more features to compare. So while contemplating North Korea might just yield a few traits like “nation in Asia”, “Communist dictatorship”, “has nuclear weapons”—all of which are shared by China, thinking of China yields many more features we know about, like “invented paper money”, “has used movable type for centuries”, “has one of the world’s largest economies” and “has over ten thousand written characters”, which are not shared by North Korea. In our minds, North Korea is something like a proper subset of China; most things North Korea has are also had by China, but most things had by China are not had by North Korea. The only real deviation from normative rationality is the fact that we aren’t comparing across a complete (or even consistent) set of features; if we were, we’d find that the results were symmetrical.

    Another false illusion is MPP’s worry that typicality judgments are somehow problematic, as though it’s weird to say that a penguin is “less of a bird” than a sparrow or a chicken is “less of a dinosaur” than a tyrannosaurus. No, of course that makes sense; indeed, the entire concept of evolution hinges upon the fact that one can be a bit more bird-like or a bit less saurian or a bit more mammalian. These categories are fuzzy, they do blend into one another, and if they did not, we could not explain how all life descends from a common ancestor. The mistake here is in thinking that concepts should have hard-edged definitions; the universe is not made of such things. It’s a bit weirder that people say 4 is “more of an even number” than 2842, since even numbers do have a strict hard-edged definition; but obviously you’re going to encounter 4 a good deal more often, so in that sense it’s a better example.

     

    Worst of all, MPP makes a couple of errors, one of which is offhand enough to be forgiven, but the other of which is absolutely egregious—to the point of itself being a cognitive illusion.

    The minor error is on page 130: “A sheet of tickets that give us a 99 percent chance of winning will be preferred to a more expensive sheet that offers a 999 out of 1000 chance.” He implies that this is wrong; but in some cases it’s actually completely sensible. Suppose the cheap ticket costs $1.00 and the expensive ticket costs $50.00; suppose the prize is $500. Then the expected earnings for the cheap ticket are 0.99*500 – 1 = $494, while the expected earnings for the expensive ticket are 0.999*500-50 = $449.50. It does depend on the exact prices and the size of the prize; if you are risk-neutral and the prize is $10,000, you should be willing to pay up to $90 for the extra 0.009 chance. Then again, if you’re poor enough that it makes sense to be risk-averse for $10,000 (hint: you probably are not this poor, actually! If you think you are, that may be a cognitive illusion), then you might still not want to take it. Suppose your total wealth is $1,000, so $10,000 is a huge increase in your wealth and $50 is a significant cost.
    Even then, you should probably buy the expensive ticket. If utility of wealth is logarithmic, these are your expected utilities. Keep the money: log(1000) = 3. Cheap ticket: 0.99*log(11000) + 0.01*log(999) = 4.03. Expensive ticket: 0.999*log(11000) + 0.001*log(950) = 4.04. I actually think utility of wealth is less than logarithmic, so maybe you don’t want to buy the expensive ticket; but it’s at least not hard to contrive a scenario where you would.
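    The arithmetic above checks out; here it is spelled out (base-10 logs, total wealth $1,000, prize $10,000, as in the text):

```python
from math import log10

WEALTH, PRIZE = 1000, 10000

def expected_utility(p_win, price):
    """Expected log10 utility of buying a ticket: pay the price either way,
    collect the prize with probability p_win."""
    return (p_win * log10(WEALTH - price + PRIZE)
            + (1 - p_win) * log10(WEALTH - price))

print(round(log10(WEALTH), 2))                 # keep the money: 3.0
print(round(expected_utility(0.99, 1), 2))     # cheap ticket: 4.03
print(round(expected_utility(0.999, 50), 2))   # expensive ticket: 4.04
```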

    So maybe MPP really just meant to imply that people are risk-averse even when they shouldn’t be, or something like that. Like I said, this error is minor.

    There’s another place where I would consider it an error, but some economists would agree with him. He says that it is irrational not to always defect in a Prisoner’s Dilemma, because you’d defect if they defected and defect if they didn’t. Then he applies the sure-thing principle and concludes that you should always defect. But that’s not how I see it at all. Yes, if they defect, you should defect; protect yourself against being exploited. But if they cooperate… why not cooperate? You don’t get as much gain for yourself, but you’re also not exploiting the other player. How important is it to you to be a good person? To not hurt others? If these things matter to you at all, then it’s not at all obvious that you should defect.
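    One way to make this precise: if exploiting a cooperator carries a psychological cost, defection stops being the dominant strategy. A toy payoff model (the numbers are mine and purely illustrative):

```python
# Standard Prisoner's Dilemma material payoffs to the row player,
# keyed by (my_move, their_move): C = cooperate, D = defect.
MATERIAL = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def utility(me, them, guilt=0):
    """Material payoff, minus a guilt cost for exploiting a cooperator."""
    payoff = MATERIAL[(me, them)]
    if me == "D" and them == "C":
        payoff -= guilt  # the moral cost of exploiting someone who trusted you
    return payoff

# A purely material player finds defection dominant:
assert utility("D", "C") > utility("C", "C")
assert utility("D", "D") > utility("C", "D")

# But if guilt is worth more than the 2-point temptation bonus,
# cooperating against a cooperator is strictly better:
assert utility("C", "C") > utility("D", "C", guilt=3)
```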

     

    MPP makes another error, however, that is much larger and by no means controversial. On page 79, he writes: “If a mother’s eyes are blue, what is the probability of her daughter having blue eyes? What is the probability of a mother having blue eyes, if her daughter has blue eyes? Repeated tests show that most of us assign a higher probability to the first than the second. But this is a mistake. A statistical correlation should be a two-way affair; it should be symmetrical.”

    Now, as it turns out, these two probabilities in particular are equal, because the human population is large and well-mixed and as such the base rate of blue eyes doesn’t vary much between generations. But as a general principle, such probabilities most certainly are not symmetrical, and indeed, the whole point of Bayes’ Law is that they are not. (Thus, I must wonder if MPP’s poor explanation of Bayes’ Law isn’t just a poor explanation, but actually reflects a poor understanding.)

    Suppose I drive a Ford Focus (as I do). Now suppose that someone somewhere is run over by a car (as is surely happening somewhere today). The probability that the car that ran them over is a Ford Focus, given that I ran them over, is very high (virtually 100%); but the probability that I ran over them, given that the car that ran them over is a Ford Focus, is far, far smaller (perhaps 0.1%). The mere fact that it was a Ford Focus that caused the injury is nowhere near sufficient evidence to conclude that I did it, for there are thousands of other Ford Focus cars on the road. But if you knew that I had done it, you’d be wise to bet that I did it in a Ford Focus, because that is what I drive. So MPP is simply wrong about this, and his error is fundamental. It’s actually called the Prosecutor’s Fallacy or the Fallacy of the Converse. It’s one of the most common and most important cognitive illusions, in fact.
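    The asymmetry is easy to see with counts. Suppose (hypothetically) there are 100,000 cars in the area and 1,000 of them are Ford Focuses, each equally likely to be the one involved:

```python
# Hypothetical numbers: 100,000 cars, 1,000 Ford Focuses, uniform prior
# over which car was involved.
N_CARS, N_FOCUSES = 100_000, 1_000

# P(it was a Focus | it was me): I drive a Focus, so essentially 1.
p_focus_given_me = 1.0

# P(it was me | it was a Focus), by Bayes' Law: with a uniform prior
# over all cars, this collapses to 1 / (number of Focuses).
p_me = 1 / N_CARS
p_focus = N_FOCUSES / N_CARS
p_me_given_focus = p_focus_given_me * p_me / p_focus

print(p_focus_given_me)                # 1.0
print(round(p_me_given_focus, 6))      # a thousandth, not a certainty
```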

    Now, correlations are indeed symmetrical, but the question didn’t ask for a correlation; it asked for a conditional probability. If MPP doesn’t understand the difference, that’s even more worrisome. And a correlation isn’t even the natural measure for this data, because it’s categorical; your eyes are either blue or they aren’t, they can’t be 42% blue. Correlations are at home with quantitative data: you could ask what the correlation is between a mother’s height and her daughter’s height, and that would indeed be symmetrical. But that isn’t what we were asked here.

     

    In all, Inevitable Illusions isn’t too bad. It may be worth reading simply as a quick introduction. But if you really want to understand cognitive economics, read Kahneman instead.

     

     
