June 26, 2013

  • Mind over Matter: What was the problem again?

    A lot of skeptics and rationalists are very, well, skeptical about the idea of "mind over matter": the idea that the mind can influence the body, and that techniques such as meditation, biofeedback, and visualization can have real somatic effects.

    They remain skeptical even in the face of documented evidence that mental techniques and psychotherapy can be effective in the treatment of not only depression and anxiety (which seems obvious) but also pain after surgery, migraine, fibromyalgia, prostate cancer, heart disease, and rheumatoid arthritis.

    Why would they be resistant to this idea? My guess is that they wrongly associate it with truly ludicrous claims made by a lot of "alternative medicine" supporters, such as "all disease is caused by the mind" or "positive thinking can make you wealthy, healthy, and happy" or, my favorite, "through alternative medicine you can achieve perfect health" (yes, perfect).

    But the world's greatest idiot can say the sun is shining, and that doesn't make it dark outside.

    Skeptics seem to base their criticism of mind-body medicine on the fact that there is no spiritual realm, the world is made of physical matter—that is, materialism. The mind is the functioning of the brain, nothing more. There is no spirit, no soul.

    But here's what's weird about that: Mind-body medicine actually makes far more sense on a materialistic, identity-monist view than it does on a dualistic view. Mind over matter? The mind is made of matter! Of course matter can affect other matter!

    The shoe is on the other foot here! The main problem with dualism all along—recognized since at least Descartes—was explaining how this "immaterial substance" could interact with the physical world. Obviously it needs to, or else the fact that Descartes is writing "Cogito ergo sum" couldn't have anything to do with his mind actually thinking or believing it exists. Hand-waving explanations abounded in the Enlightenment (Leibniz, Malebranche, etc.); but ultimately this was what led us to realize that dualism is a fool's errand.

    No, it is under materialism that you can explain how the mind interacts with the body, as it obviously does. The "spiritual forces" that the likes of Deepak Chopra talk about don't even make sense on their own theory.

    Indeed, skeptics don't seem to have a problem with the body affecting the mind—they believe that alcohol makes you drunk, that being injured hurts, that sexual stimulation is pleasurable. Their problem only seems to be when you go the other direction, and allow the mind to affect the body. They are, in short, epiphenomenalists.

    But epiphenomenalism is literally the worst of both worlds! It has all of the problems of dualism, and none of the appealing features! Not only do you need to explain this weird, mysterious interaction between substances, now you need to explain why it is unidirectional, which literally nothing else in the universe is.

    Yes, literally nothing in the universe. You see, we have laws like Newton's Third Law, the Conservation of Energy, and the Conservation of Momentum. No, quantum mechanics doesn't remove this; in fact, it adds to it, with the Conservation of Wavefunction Current (often called "probability current" due to the tyranny of the Copenhagen Interpretation). Every action has an equal and opposite reaction; energy in equals energy out.

    Therefore, epiphenomenalists are asking us to postulate something that physics does to non-physics (whatever "non-physics" is!), and furthermore violates some of the most basic symmetry laws upon which physics is based. Meanwhile, we identity-monists are talking about hardware-software interaction…

    Indeed, when you hear "meditation helps in the treatment of diabetes" you shouldn't hear it as "positive thinking will magically make you rich!"; instead you should hear it as "excess demand on the graphics card may lead to overheating." Excess stress in the autonomic nervous system may reduce insulin production.

    The brain is intimately connected with the body; this makes good evolutionary sense, as there isn't much point in having a brain if it's not going to help you run your body.

    Now, this doesn't prove any particular claim about mind-body medicine, and obviously they should be investigated just as we would investigate any other medication or therapy. Plenty of plausible treatments have turned out to be completely ineffective. But you shouldn't reject treatments as preposterous simply because they involve mind-body interactions—we know that mind-body interactions occur.

    And I reiterate: It is precisely under materialism, the theory that we are beings of crude matter with no luminous immaterial souls, who wither and die to become food for the worms… it is under that theory that we would expect (at least some) mind-body medicine to work.

June 19, 2013

  • Clearly not the best popular book in cognitive economics

    JDN 2456462 EDT 14:41.

     

    I must respectfully disagree with the reviewer at Nature; Massimo Piattelli-Palmarini's Inevitable Illusions is not "the best popular book in this field". That title belongs squarely to Thinking, Fast and Slow by Daniel Kahneman. (Piattelli-Palmarini's name takes a long time to write, so I shall abbreviate it MPP.)

    Inevitable Illusions is decent, satisfactory; and maybe when it was written in 1994 it really was the best popular book available. But some of MPP's explanations are awful, and a few of them are just outright wrong.

     

    It's not an awful book; it's very easy to read, and someone who had no exposure to cognitive economics would indeed learn some things by reading it. I do like the way that MPP emphasizes repeatedly that cognitive illusions do not undermine rationality; they merely show that human beings are imperfect at being rational. It's odd that this is controversial (doesn't it seem obvious?), but it is; neoclassical economists to this day insist that human deviations from rationality are inconsequential.

     

    MPP's explanations of the sure-thing principle and Bayes' Law are so singularly awful and incomprehensible that I feel I must reproduce them verbatim:

    "If, after considering all the arguments pro and con, we decide to do something and a certain condition arises in that something, and we decide to do that very thing, even if the condition does not arise, then, according to the sure-thing principle, we should act immediately, without waiting."

    It's much simpler than that. If you'd do B if A is true and also do B if A is false, then you should do B without needing to know whether A is true. To use MPP's own example, if you'll go to Hawaii whether or not you passed the test, then you don't need to know whether you passed before you buy your tickets to Hawaii.
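
    In code, the sure-thing principle is almost embarrassingly simple. A hypothetical sketch in Python (the scenario is MPP's Hawaii example; the function and names are mine, purely for illustration):

    def plan_trip(passed_test):
        # The branches are identical, so nothing depends on the test;
        # per the sure-thing principle, act without waiting for the result.
        if passed_test:
            return "buy tickets to Hawaii"
        return "buy tickets to Hawaii"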

    "The probability that a hypothesis (in particular, a diagnosis) is correct, given the test, is equal to: The probability of the outcome of the test (or verification), given the hypothesis (this is a sort of inverse calculation with respect to the end we are seeking), multiplied by the probability of the hypothesis in an absolute sense (that is, independent of this test or verification) and divided by the probability of the outcome of the test in an absolute sense (that is, independent of the hypothesis or diagnosis)."

    Once again, we don't need this mouthful. Bayes' Law is subtle, but it is not that complicated. The probability A is true knowing that B is true, is equal to the probability B would be true if A were true, divided by the probability B would be true whether or not A were true, times the probability A is true. B provides evidence in proportion to how much more likely B would be if A were true; and then that evidence is applied to your prior knowledge about how likely A is in general. Most people ignore the prior knowledge, but that's a mistake; even strong evidence shouldn't convince you if the event you're looking for is extremely unlikely.

    It's probably easiest to use extremes. If B is no more likely to be true when A is than when A isn't, it provides no evidence; P(B|A) = P(B) and thus P(A|B) = P(B)/P(B)*P(A) = P(A). If B is guaranteed to be true whenever A is true and guaranteed not to be true whenever A is false, then it provides perfect evidence: P(B|A) = 1, P(B) = P(A), and P(A|B) = 1*P(A)/P(A) = 1.
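
    To see the formula do some work, here's a minimal Python sketch; the disease-screening numbers are invented for illustration, but they show why ignoring the prior is a mistake:

    def posterior(p_b_given_a, p_a, p_b):
        # Bayes' Law: P(A|B) = P(B|A) * P(A) / P(B)
        return p_b_given_a * p_a / p_b

    # A disease with a prior of 1 in 1,000, and a test that comes back
    # positive 99% of the time if you're sick, but 5% of the time overall.
    print(posterior(0.99, 0.001, 0.05))  # ~0.02: strong evidence, tiny prior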

     

    At the end of the book, MPP answers rebuttals from "cognitive ecologists" who (if his characterization is accurate) think that we suffer no significant cognitive illusions, and that's of course very silly. If this is not a strawman, it's a bananaman. A more charitable reading would be that we wouldn't suffer survival-relevant cognitive illusions in a savannah environment 100,000 years ago; but that's a far weaker claim, and proportionately less interesting. Life was simpler back then. Nasty, brutish, and short; but simple. We might have experienced illusions in the past (if the mutations to make us do better simply did not exist), but it's equally reasonable to say that we didn't. The point is that we live a much more complex life now, so heuristics that worked before don't anymore.

    MPP is of course right about that part. But he also sees illusions that aren't really there (meta-illusion?).

    For instance, he seems deeply troubled by the fact that similarity judgments are intransitive, when in fact this makes perfect sense. Being "similar" isn't sharing a single property; it's sharing a fraction of a constellation of properties. Jamaica is like Cuba in that they are small island nations in the Caribbean; Cuba is like the Soviet Union in that they are Communist dictatorships. Jamaica is not like the Soviet Union, because they don't have much in common. There is no reason we would expect this judgment to be transitive, and anyone who does think so is simply using a bad definition of "similarity". Similarity is more like probability; and from P(A&B) = 0.6 and P(B&C) = 0.5, you can't infer much at all about P(A&C). The probability axioms place certain limits on it, but not very strong ones. Suppose 60% of doctors are men with blue eyes, and 50% of doctors are Americans with blue eyes; how many of the doctors are American men? We could have 50% blue-eyed American men, 10% blue-eyed German men, and 40% brown-eyed American men. We could also have 10% blue-eyed American men, 50% blue-eyed German men, and 40% blue-eyed American women. So the number of American men could be anywhere from 10% to 90%.
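
    If you want to verify those bounds, here's a quick Python check; the three-way breakdown (man, blue-eyed, American) and the function name are mine, purely for illustration:

    def american_men(cells):
        # cells: {(man, blue_eyed, american): percent of doctors}
        assert sum(v for (m, b, a), v in cells.items() if m and b) == 60
        assert sum(v for (m, b, a), v in cells.items() if b and a) == 50
        return sum(v for (m, b, a), v in cells.items() if m and a)

    high = {(True, True, True): 50, (True, True, False): 10, (True, False, True): 40}
    low = {(True, True, True): 10, (True, True, False): 50, (False, True, True): 40}
    print(american_men(high), american_men(low))  # 90 10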

    The fact that similarity judgments are not always symmetrical is more problematic, though even it can be explained without too much deviation from normative rationality. Why is North Korea more like China than China is like North Korea? Well, we know more about China; we have more features to compare. So while contemplating North Korea might just yield a few traits like "nation in Asia", "Communist dictatorship", "has nuclear weapons"—all of which are shared by China, thinking of China yields many more features we know about, like "invented paper money", "has used movable type for centuries", "has one of the world's largest economies" and "has over ten thousand written characters", which are not shared by North Korea. In our minds, North Korea is something like a proper subset of China; most things North Korea has are also had by China, but most things had by China are not had by North Korea. The only real deviation from normative rationality is the fact that we aren't comparing across a complete (or even consistent) set of features; if we were, we'd find that the results were symmetrical.

    Another false illusion is MPP's worry that typicality judgments are somehow problematic, as though it's weird to say that a penguin is "less of a bird" than a sparrow or a chicken is "less of a dinosaur" than a tyrannosaurus. No, of course that makes sense; indeed, the entire concept of evolution hinges upon the fact that one can be a bit more bird-like or a bit less saurian or a bit more mammalian. These categories are fuzzy, they do blend into one another, and if they did not, we could not explain how all life descends from a common ancestor. The mistake here is in thinking that concepts should have hard-edged definitions; the universe is not made of such things. It's a bit weirder that people say 4 is "more of an even number" than 2842, since even numbers do have a strict hard-edged definition; but obviously you're going to encounter 4 a good deal more often, so in that sense it's a better example.

     

    Worst of all, MPP makes a couple of errors, one of which is offhand enough to be forgiven, but the other of which is absolutely egregious—to the point of itself being a cognitive illusion.

    The minor error is on page 130: "A sheet of tickets that give us a 99 percent chance of winning will be preferred to a more expensive sheet that offers a 999 out of 1000 chance." He implies that this is wrong; but in some cases it's actually completely sensible. Suppose the cheap ticket costs $1.00 and the expensive ticket costs $50.00; suppose the prize is $500. Then the expected earnings for the cheap ticket are 0.99*500 – 1 = $494, while the expected earnings for the expensive ticket are 0.999*500 – 50 = $449.50. It does depend on the exact prices and the size of the prize; if you are risk-neutral and the prize is $10,000, you should be willing to pay up to $90 for the extra 0.009 chance. Then again, if you're poor enough that it makes sense to be risk-averse for $10,000 (hint: you probably are not this poor, actually! If you think you are, that may be a cognitive illusion), then you might still not want to take it. Suppose your total wealth is $1,000, so $10,000 is a huge increase in your wealth and $50 is a significant cost.

    Even then, you should probably buy the expensive ticket. If utility of wealth is logarithmic, these are your expected utilities. Keep the money: log(1000) = 3. Cheap ticket: 0.99*log(11000) + 0.01*log(999) = 4.03. Expensive ticket: 0.999*log(11000) + 0.001*log(950) = 4.04. I actually think utility of wealth is less than logarithmic, so maybe you don't want to buy the expensive ticket; but it's at least not hard to contrive a scenario where you would.
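
    For anyone who wants to check that arithmetic, here it is as a short Python sketch (log base 10, with the wealth, prize, and ticket prices from above; the exact wealth figures differ slightly from my rounded 11,000, but the results agree):

    from math import log10

    def expected_log_utility(p_win, cost, wealth=1000, prize=10000):
        win = log10(wealth - cost + prize)  # wealth if the ticket pays off
        lose = log10(wealth - cost)         # wealth if it doesn't
        return p_win * win + (1 - p_win) * lose

    print(log10(1000))                      # keep the money: 3.0
    print(expected_log_utility(0.99, 1))    # cheap ticket: ~4.03
    print(expected_log_utility(0.999, 50))  # expensive ticket: ~4.04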

    So maybe MPP really just meant to imply that people are risk-averse even when they shouldn't be, or something like that. Like I said, this error is minor.

    There's another place where I would consider it an error, but some economists would agree with him. He says that it is irrational not to always defect in a Prisoner's Dilemma, because you'd defect if they defected and defect if they didn't. Then he applies the sure-thing principle and concludes you should defect. But that's not how I see it at all. Yes, if they defect, you should defect; protect yourself against being exploited. But if they cooperate... why not cooperate? You don't get as much gain for yourself, but you're also not exploiting the other player. How important is it to you to be a good person? To not hurt others? If these things matter to you at all, then it's not at all obvious that you should defect.
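
    Here's that argument as a toy Python sketch. The payoffs are the standard textbook ones, not MPP's, and the "guilt" parameter, a penalty for exploiting a cooperator, is my own illustrative assumption; but even a modest dose of it breaks the "always defect" conclusion:

    # Standard Prisoner's Dilemma payoffs (mine, theirs): T=5 > R=3 > P=1 > S=0.
    PAYOFFS = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I get exploited
        ("D", "C"): (5, 0),  # I exploit them
        ("D", "D"): (1, 1),  # mutual defection
    }

    def utility(mine, theirs, guilt):
        # My material payoff, minus a guilt penalty if I exploit a cooperator.
        payoff = PAYOFFS[(mine, theirs)][0]
        if (mine, theirs) == ("D", "C"):
            payoff -= guilt
        return payoff

    for guilt in (0, 3):
        best_vs_c = max("CD", key=lambda act: utility(act, "C", guilt))
        best_vs_d = max("CD", key=lambda act: utility(act, "D", guilt))
        print(guilt, best_vs_c, best_vs_d)
    # guilt=0: defect either way (MPP's "sure thing").
    # guilt=3: still defect against a defector, but cooperate with a cooperator.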

     

    MPP makes another error, however, that is much larger and by no means controversial. On page 79, he writes: "If a mother's eyes are blue, what is the probability of her daughter having blue eyes? What is the probability of a mother having blue eyes, if her daughter has blue eyes? Repeated tests show that most of us assign a higher probability to the first than the second. But this is a mistake. A statistical correlation should be a two-way affair; it should be symmetrical."

    Now, as it turns out, these two probabilities in particular are equal, because the human population is large and well-mixed and as such the base rate of blue eyes doesn't vary much between generations. But as a general principle, such probabilities most certainly are not symmetrical, and indeed, the whole point of Bayes' Law is that they are not. (Thus, I must wonder if MPP's poor explanation of Bayes' Law isn't just a poor explanation, but actually reflects a poor understanding.)
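
    To spell that out with Bayes' Law: P(mother blue | daughter blue) = P(daughter blue | mother blue) * P(mother blue) / P(daughter blue). The two conditional probabilities coincide exactly when P(mother blue) = P(daughter blue), which happens (approximately) to hold here, and does not hold in general.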

    Suppose I drive a Ford Focus (as I do). Now suppose that someone somewhere is run over by a car (as is surely happening somewhere today). The probability that the car that ran them over is a Ford Focus, given that I ran them over, is very high (virtually 100%); but the probability that I ran over them, given that the car that ran them over is a Ford Focus, is far, far smaller (perhaps 0.1%). The mere fact that it was a Ford Focus that caused the injury is nowhere near sufficient evidence to conclude that I did it, for there are thousands of other Ford Focus cars on the road. But if you knew that I had done it, you'd be wise to bet that I did it in a Ford Focus, because that is what I drive. So MPP is simply wrong about this, and his error is fundamental. It's actually called the Prosecutor's Fallacy or the Fallacy of the Converse. It's one of the most common and most important cognitive illusions, in fact.

    Now, correlations are actually symmetrical, but the question didn't ask for a correlation; it asked for a conditional probability. If MPP doesn't understand the difference, that's even more worrisome. You can't straightforwardly compute a correlation on this data, because it's categorical; your eyes are either blue or they aren't, they can't be 42% blue. Correlations, as ordinarily defined, are for quantitative data; you could ask what the correlation is between a mother's height and her daughter's height, and that would indeed be symmetrical. But that isn't what was asked here.

     

    In all, Inevitable Illusions isn't too bad. It may be worth reading simply as a quick introduction. But if you really want to understand cognitive economics, read Kahneman instead.

June 7, 2013

  • Jerry Coyne is apparently a logical positivist.

    JDN 2456451 EDT 14:00.

    This is vaguely embarrassing, since logical positivism is a discredited position that is not at all necessary for scientific realism.

     

    But from this blog post on Why Evolution is True, it's clear that Coyne believes all of the following:

    1. Science is based upon verifiable evidence.

    2. Logic, mathematics, and philosophy are not about the external world.

    3. Morality is subjective.

     

    This is pretty much classic logical positivism. Proposition 1, about verifiable evidence, isn't quite right—Popper proposed falsifiable evidence, and a more modern concept is of Bayesian evidence in a holistic theory (like Quine, or Less Wrong)—but it's actually close enough, I would say. We definitely do care about independent corroboration of observations and consilience between different lines of inquiry.

    I might even be able to forgive the idea that mathematics is not about the external world; I'm a realist, but not a Platonist (though as I understand it, most mathematicians are Platonists). I read Coyne as saying he is a logicist, which is a respectable position. He could also be a nominalist, which is another respectable position. Honestly I'm not even sure what it means to say that mathematical concepts "exist"; it seems too petty a concept for that which had to be so in any possible universe. To me, Platonists sound like they are saying that triangles could have failed to be, that they are things in the contingent way that trees and rocks and people are things. Perhaps what we are debating is pointless, just how an algorithm feels from inside. Where Coyne begins to lose me is when he says "within an accepted system of logic"; so, are we free to choose any logic we like?

    And then of course he jumps the shark completely with "morality is subjective". He doesn't understand what the word "objective" means, and claims to be a consequentialist without realizing that consequentialists by definition are not subjectivists. (Most consequentialists are moral realists; other philosophers, like John Rawls, are constructivists, and some, like Allan Gibbard, are sophisticated expressivists. These hair-splitting differences are not important for most purposes.) Here's what he says, so it's clear I'm not misrepresenting his position:

    It is similar with morality. Are there really “objective” moral truths, as Sam Harris seems to feel, or are there only dicta that conform to a subjective set of criteria about what is good? “Killing is wrong”, for instance, is not something I see as a “moral truth”, because in some circumstances it may be good for society (i.e., killing a terrorist about to kill others). (Note: I am a moral consequentialist.) Even things that seem more obvious, like “don’t harm innocent children” are not accepted as truths by some people, like those odious members of the Taliban who think it’s okay—indeed, good for society—to throw acid on schoolgirls who seek an education. The point is that while many of us can agree on such things, there is no universal and objective standard to appeal to, in cases involving morality and aesthetics, where everyone can agree. (If, however, you think morality consists of actions that are “good for society,” then one can in principle test moral judgments empirically. But not everyone accepts that kind of consequentialism.) There is a subjectivity in morality that does not, for instance, apply when we’re trying to find out the molecular structure of water.

     

    So here's what "objective" means: Something is objective if it is independent of what we believe about it; in short, it is possible to be wrong.

    The opposite is "subjective": Something is subjective if whatever you believe about it is true in your case; you cannot be wrong, and you also cannot apply it to others.

    Coyne seems to think that "objective" means something like absolute or even simplistic; his example of an "objective" moral rule is "killing is wrong", and he offers us an exception, "killing a terrorist about to kill others".

    There are a couple of things very odd about that example:

    First of all, it's clear that Sam Harris, a dyed-in-the-wool realist consequentialist who Coyne acknowledges believes in objective moral rules, would in no way disagree. Indeed, Harris has made it quite clear that he supports the targeted killing of terrorists, and even potentially the torturing of terrorists if it secures intelligence that saves the lives of civilians or allied soldiers. It's hard to get a whole lot more in favor of killing terrorists than Sam Harris is; maybe someone like Satoshi Kanazawa, who thinks killing terrorists is worth committing genocide? I'm pretty sure you can't get any more gung-ho against terrorism than Sam Harris without ceasing to be a reasonable human being.

    Secondly, that "exception" is almost a corollary in some sense: Yes, if applied literally, "killing is wrong" would mean we shouldn't kill terrorists, because killing terrorists is an instance of killing. But really what I think we mean to say is that killing is bad, which is the whole reason why we'd kill terrorists. We are actually trying to minimize the amount of killing that goes on, which is ultimately consonant with the idea that killing is bad.

    His next example is better: Yes, things that we would consider obviously wrong, such as the genital mutilation of children (female children, at any rate; most Americans have no problem with the genital mutilation of male children), are nonetheless practiced in other cultures. This is the descriptive form of cultural relativism, which is pretty much undisputed. The only disputes I've ever seen over it are as to how deep the differences run: Are they different applications of the same basic principles, or do they reflect fundamentally different understandings of morality?

    But that doesn't get you far at all (I'm tempted to say it gets you nowhere, in fact) toward the normative form of cultural relativism, the notion that acts are moral or immoral only in context of a particular society. In fact, even this sort of relativism is a form of objectivism: It is an objective fact, for instance, that theft is illegal and considered wrong by American society. If we use a sort of constructivism to say that this makes theft wrong in American society, we are still making an objective claim, something one could be wrong about.

    An aside: Constructivism sounds weird in morality—and it is, it's wrong—but don't give up on the idea of constructivism in general. $20,000 US dollars is worth a new car, because we made it so as a society, with our market economy, Federal Reserve, and so on. $20,000 Monopoly dollars is not worth a new car, because we have not constructed it as such. There's nothing inherently different about the two currencies; both are marks of ink on paper. But one set of marks has the social construction to make it economically useful. You might think that this makes money a fiction; but no, money is real. (If you think it's fake, can I have all of yours?) It's a reality that we create by collective action. That's constructivism; it also works well in terms of laws, governments, cultures, organizations, and institutions. Congress really exists, because our social action creates it. In a sense, it's real because we believe in it.

    To really be a subjectivist about morality, you'd need to think that moral truths are like direct experiences or aesthetic preferences. If morality were like direct experiences, murder would be wrong to you in the same sense that an apple looks red to you, or a room feels cold to you. Others might agree, but only because they happen to have similar sensory organs. A polar bear, being dichromatic (unable to see red) and accustomed to cold climates, probably sees the apple as green and thinks the room is hot. The reflection spectrum of the apple and the temperature of the room are objective facts, but your experience of them is subjective, relative to your own senses. It's important to see here that you can't be wrong about your own sensory experiences. You could lie about them (say you see it as red when you don't), or you could be mistaken in what you infer about the outside world based upon them (maybe you think the apple is itself red, when in fact the apple is white and lit by a red floodlight); but you can't be wrong about the fact that you are experiencing the sensation. This is really the difference between sensation and perception; sensation is about what you experience, while perception is about what you think the outside world is like as a result of that experience. If morality were subjective in this sense, it would mean that murder somehow is wrong in your mind, as a sensory experience you have; I can't even make sense of this, it seems like a category error.

    Alternatively, if Coyne thinks morality is like aesthetic preference, then he would be saying basically that "murder is wrong" means I find murder personally distasteful; it's not something I would do because it's so ugly and unappealing. Of course, there are lots of things I find personally distasteful that I don't think are immoral (certain sexual fetishes, say), and some things I think are immoral that I don't find personally distasteful (using a drone to bomb someone would probably be fun, actually; it certainly is in video games). An aesthetic or quasi-aesthetic theory of morality needs to account for this somehow.

    If morality is subjective in the way sensory experiences are subjective, it's strongly dependent upon our senses, and hence on our evolution. Coyne has previously talked about how morality is a result of our evolution as a species; so he might mean to imply something like this. But what he doesn't seem to realize is that there are objective facts that are also a result of our evolution as a species. The fact that the polar bear feels warm is subjective; but the fact that polar bears cannot survive at temperatures above 40 Celsius is objective. Likewise, it is an objective fact that human beings, once murdered, stay that way. It is also an objective fact that most people strongly don't want to be murdered. Are these results of our evolutionary history? Yes, I suppose they are. Maybe we could have been auto-reviving entities that don't care about their own survival, but we're not. And if you believe we are, you're wrong. That's objectivity.

    Perhaps Coyne would want to say that to the polar bear, murdering humans isn't wrong. But he's relativizing to the wrong entity; it's the victim of the murder that matters, not the perpetrator. Of course the perpetrator doesn't think it's wrong, or they wouldn't be doing it! (Some psychopaths do say things like "I know it's wrong, but I do it anyway"; but psychopaths are a weird case, and they honestly don't seem to understand what we mean by our moral language. Psychopaths cannot reliably distinguish moral rules like "don't hurt people" from social conventions like "raise your hand in class".)

    Now, I suppose Coyne could stick to his guns and say that he really thinks morality is subjective in this way. Murder feels wrong to me, maybe it doesn't feel wrong to you, and there's nothing more to be said about the matter. Neither of us is wrong, as long as we're only describing our own experiences.

    This of course immediately leads to the problem of coordinating different moral opinions; we have to run a society somehow, and we can't make murder both illegal and legal at the same time. Maybe we could find some sort of coordination mechanism like voting or randomization; but then, how do we choose which mechanism? This seems to be a moral question, which you just said are subjective.

    And let me repeat: If you're a consequentialist, you can't be a subjectivist. Consequences are objective. Someone is either dead or alive; they aren't alive if you believe they're alive. You might think that suffering is subjective, but that's only true in a sense: You can't be wrong about your own pain, but you can be wrong about someone else's. Punching someone in the face still hurts them even if you don't believe it does.

    This is, frankly, pretty obvious. Every time you make a moral argument based on consequences (which Coyne does frequently), you're assuming that morality means something more than your own subjective experiences. Why would Coyne think otherwise?

    I think it's because Coyne, like many others, doesn't understand what the word "objective" means. I remember having an argument with someone at a Skeptics in the Pub, which I allowed to get more heated than I should have. "Tell me one moral rule that's objective, and has no exceptions." I couldn't, but I tried to explain why I shouldn't have to—because "objective" simply does not mean "without exceptions". It means real. It means universal. It means you can be wrong about it.

    And it also doesn't mean "everyone agrees". Coyne of all people should realize this; his whole blog is about convincing people that evolution is true. All scientists know that evolution is an objective fact, but over 40% of Americans don't believe it. They're wrong. It's bad that they're wrong; it scares me that they're so wrong. But it casts no doubt whatsoever on the objectivity of evolution.

    Why should morality be different? Why should the fact that lots of people think gay sex is wrong even tempt you to think so? I guess in the absence of any other data whatsoever, the opinions of others can be used as a better guide than chance; but once you have data, you should definitely use that. And of course, we do: Controlling for the bigotry imposed upon them, gay people are just as happy, productive, and good at raising their children as straight people—and actually a bit more intelligent and creative. So why would you ever think it was wrong to be gay? It makes no sense at all.

    If you have some other reason to think that morality is subjective, let's hear it. Because to me, the very notion that morality could be subjective seems like a fundamental category error. You clearly don't mean the same thing when you say "wrong" that I do, if you think "wrong" can be subjective. 

June 6, 2013

  • A boring story in a fascinating world

    JDN 2456450 EDT 20:40

     

    A review of Consider Phlebas by Iain M. Banks

     

    The bad: The characters are flat and uninteresting, and often die unceremoniously, sometimes before we even get to know them. The plot is linear and centers around a 'mystery' that was never that mysterious, and doesn't even get resolved. The prose is needlessly flowery and often includes long digressions into quasi-poetic forms that are clearly meant to be confusing, often with no good reason to be confusing. The text often refuses to tell us things that the characters would obviously know already, at least until the right dramatic moment. The title makes no sense to me.

     

    The good: The worldbuilding. My goodness, the worldbuilding. Basically the entire book is an excuse to take us through this rich, bizarre, and fascinating world. The Idirans would be a fascinating alien culture unto themselves, and they are unimportant next to the Culture, whose utopian galactic society is an endless source of marvel and wonder. Some of the worldbuilding doesn't make a whole lot of sense logically—I can see no reason to make the Orbitals as big as they are; I don't understand why one would build Megaships or why they'd take years to accelerate; and Damage sounds like a brutal gladiatorial game that no civilized society would tolerate—but this is easily forgiven when the world is so rich and fascinating. The most interesting characters are all AIs; they have far more unique and interesting personalities than any of the humans, and their moral conflicts are richer as well. Of course, nothing about their behavior would actually lead you to believe that they are (as alleged) superintelligent beings with more thinking capacity than our entire planet combined; but, to be fair, that's really hard to write, without being yourself such a superintelligent being.

     

    The weird: I had thought the stories took place in a post-Singularity future, because that would actually make sense. But when you read the appendices, you learn that in fact the stories take place in the past; the events of Consider Phlebas occur sometime around 1350 AD. And yes, it's really AD; it specifically says "English language/Christian calendar". So this galactic war which destroyed 53 planets, 14,000 planet-sized Orbitals, a Ring (which I assume is an AU-radius ringworld), three Spheres (which I assume are AU-radius Dyson Spheres), and six stars... happened sometime in the Late Middle Ages.

    Now, you might be thinking: How would we know? Well, obviously people in the Late Middle Ages wouldn't have known. But today, we would, actually. Our astronomy is developed to the point where we would be able to tell, if nothing else, that there are Dyson Spheres. (We might even be able to see Orbitals and Rings.) Our biology is developed to the point where we can say definitively that Homo sapiens evolved indigenously on Earth, meaning that we could not have been some sort of offshoot 'seeded' by the Culture. While the first prokaryotes on Earth might have arrived from outer space (some scientists think so; personally I'm dubious of even that), it is very clear that we and apes came from the same planet. Which means that these other 'humans' are either not really humans or they somehow came from Earth. (There is some textual support for the 'not really humans' hypothesis: There seem to be a number of different kinds of 'humans', including some covered in fur, some much taller, some with green skin, and so on. I had thought these were post-Singularity bodymods, but they could also be read as different species from different planets, with 'human' meaning something like 'sapient biped'.)

    Apparently we're due to be Contacted in 2100 AD, though we've already been scouted covertly by a General Contact Unit in 1970. (I haven't read that story, just read about it. 1970 seems a very interesting choice: Did they detect the Apollo missions? Or was that just coincidence?) So in about 90 years we're due to meet this Culture; the General Contact Unit will reveal itself at last and make Contact. Which brings me to...

     

    The problem: There has got to be something else worth doing in your utopian society besides Contact. The whole point of being a utopia is that it's worth living there. Yet Iain Banks seems to struggle with this; he describes Contact as the raison d'être of the Culture, the unifying purpose that is the core of their existence. From the appendices: "The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analyzing other, less advanced civilizations but—where the circumstances appeared to Contact to justify so doing—actually interfering (overtly or covertly) in the historical processes of those other cultures. […] Contact could either disengage and admit defeat—so giving the lie not simply to its own reason for existence but to the only justificatory action which allowed the pampered, self-consciously fortunate people of the Culture to enjoy their lives with a clear conscience—or it could fight."

    Indeed, Banks could not seem to write a story about utopia; instead, he wrote a story about a war with utopia. The Culture is a backdrop for a massive war that involves hundreds of billions of deaths. Virtually none of the story actually occurs within the bounds of the Culture itself; in fact, what little does is from the point of view of Special Circumstances agents, that is, the Culture's covert operations division. We are told that the universe is full of a quadrillion happy, fulfilled, virtually immortal lives; but the few dozen we actually meet are in a constant state of fear and suffering before their sudden and untimely demise.

    I can't really blame Banks for this; actually it's one of the things I've struggled with the most as an author. How do you tell interesting stories about worlds you'd actually want to live in? Is the reason utopian fiction never succeeds that there just aren't any interesting stories to tell about a happy world? But then, how good can the world be if there are no interesting stories to tell about it? Is it a defect in the human brain to thrive upon suffering, to define our reality based upon strife? Is it possible to have a happy story, and not just a happy ending?

May 29, 2013

  • To do a history of science, ask a historian who is also a scientist

    JDN 2456442 EDT 15:48.

    A review of How Experiments End by Peter Galison

    It seems so obvious in hindsight, but most things do. If you want a really good history of science, you need a historian who is also a scientist. Galison fits the bill; he has PhDs in both physics and history. How Experiments End is, as such, the finest history of science I've ever read.

    Unlike most historians who write about science, Galison has the proper respect for science. He appreciates how reason and evidence really do influence scientific decisions, and how, however imperfectly, we do gain real knowledge about the world. Yet unlike most scientists who write about science, he doesn't sugar-coat the story either; he talks about the mistakes, the false paths, the blistered egos and funding competitions. I hope I get the chance to do some of this in the history of the Physics Department… though I rather doubt it. I suspect they'll want it sanitized to be as unobjectionable and pro-Michigan as possible.

    At the very end of the book he sums it up this way: "In denying the old Reichenbachian division between capricious discovery and rule-governed justification, our task is neither to produce rational rules for discovery—a favorite philosophical pastime—nor to reduce the arguments of physics to surface waves over the ocean of professional interests. The task at hand is to capture the building up of a persuasive argument about the world around us, even in the absence of the logician's certainty."

    This strongly reminded me of one of my favorite quotations by Bertrand Russell: "Knowledge, like other good things, is difficult, but not impossible; the dogmatist forgets the difficulty, the skeptic denies the possibility. Both are mistaken, and their errors, when widespread, produce social disaster."

    Galison also appreciates how silly it is that we insist upon speaking of "the discovery of the muon" or "the invention of the superconductor" as a single event traceable to a particular year (or even day!).  Scientific progress is always more complicated than that; it takes a lot of people for a long time doing a lot of different things. I guess this is a problem in history in general though; we like to think of history as done by "Great Men"; it's easier to wrap your mind around one heroic individual than a complex system of social change.

    That said, How Experiments End has its flaws. For one thing, it's extremely technical; you need to know quite a bit of physics to understand what he's talking about. For another, it has a very plodding pace, with enormous depth of historical detail, much of which seems ultimately irrelevant. Only in the introductory and concluding chapters do you really get a sense of what Galison is trying to argue from all this. In the large middle portion of the book, it's a long sequence of names, events, and technologies that all begin to blend together. I'm fairly interested in the history of science, and I still found sections boring. Someone who didn't already come to the subject with such interest would probably be put off entirely.

May 27, 2013

  • Sometimes the right answer is boring.

    JDN 2456440 EDT 11:26.

     

    I found The Investment Answer by Daniel Goldie and Gordon Murray in a rack of free books, and the little softcover is only 70 pages long, so I figured I may as well read it.

    I was not disappointed; the book is a concisely written and easily accessible introduction to behavioral finance for anyone who is looking to invest in stocks and bonds. Its answers are rather banal: Buy low, sell high; don't trust your gut, use careful analysis; invest in a diversified mix of stocks and bonds; avoid trendy new investments with unclear risks; don't try to game the system, just ride the market.

    All of these are far less exciting than the people telling you to buy this one thing—hedge funds, gold, whatever it may be—and become a millionaire overnight. Yet, they are far more likely to actually work. Passive fundamental value investing is the one strategy that really does work consistently. It doesn't produce enormous, exciting gains; but it also doesn't produce enormous, painful losses either. (Warren Buffett is actually sort of a passive fundamental value investor, though the alchemy he works with markets is one that I don't think anyone but he understands.)

    Goldie and Murray are too enamored of the so-called "efficient market hypothesis" (which, if you read its assertions carefully, is clearly just the unpredictable market hypothesis) for my taste; but they make use of it in a mainstream neoclassical way that's hard for me to object to all that much. Their basic message is: Don't try to be clever, don't try to become a millionaire overnight, just buy into companies that you expect to make profits so that you can get a share of those profits.

    They try to point out the difference between investment and speculation, which is certainly important; but their explanation leaves much to be desired.

    Here's how I would put it:

    Investment is when you buy something that makes people better at making things. You can do that indirectly through multiple steps; but ultimately it must be making someone better at making things. A crane is investment. A college education is investment. A bridge is investment. Buying stock can be an investment, if the money goes to the company and is used to finance such purchases to grow the business. Investment is nonzero-sum; it is a game that everyone can win. Buying bonds usually is an investment, because the government will spend it on things like education and infrastructure.

    Speculation is when you buy something in the hopes of selling it to someone else later for more money. Commodities, gambling, high-frequency trading, currency trading, and most hedge fund strategies are speculation. Speculation is zero-sum; someone always wins and someone else always loses.

    The problem is that most of what we call "investing" is really speculation. Those "investors" on Wall Street are really just speculators. The "trades" they make are really a sophisticated way of tricking people into giving them money. "Isn't that just capitalism?" No. Not at all, in fact. The whole point of capitalism is investment; the goal of capitalism (which admittedly it does not always succeed at) is not to decide who gets the stuff, but to make more stuff, so that everyone can have some.

    Perhaps the easiest way to tell a true investor from a speculator is to look at the time horizon of their trades. Warren Buffett makes a few trades per year. This makes sense; over a period of months or years, a company can actually change enough that you would want to reconsider whether you've invested in it. Meanwhile, there are high-frequency trading algorithms on Wall Street that trade in microseconds. Stiglitz proposed the extremely reasonable idea that we limit trades to seconds, but Wall Street would not have it. I don't think I really need to explain why a company obviously can't have changed its real profitability in a millionth of a second. I'm sure Goldie and Murray would agree.

    There is one glaring omission from The Investment Answer however: No mention whatsoever of ethical investment. The entire book is about how to make more money in markets, and it explains quite well how to do that. But it never asks the fundamental question: How are we making money? Where is this value coming from?

    And I'll admit, these are by no means easy questions. Sweatshops abuse their workers; but at the same time, they provide desperately-needed jobs in poor countries. Buying American goods will reduce our trade deficit; but it might also hurt everyone, if it ignores comparative advantage.

    The most die-hard neoclassical capitalists (which Goldie and Murray may well be) would say we should just act in our own self-interest and let the Invisible Hand play itself out. But this of course is easy to say for the rich White American males who have the power; yes, let it play itself out, in such a way that just so happens to benefit me enormously and allow millions of other people to suffer. There are simply too many externalities to take the Invisible Hand seriously; what I do affects you, and as such I must consider your interests as well as my own.

    Fortunately, ethical investment is something that Bill Gates and Warren Buffett do understand (and they clearly have no trouble making money either!), so I'm kind of hoping they'll write books about it someday.

May 7, 2013

  • Complex systems should not require complex writing.

    JDN 2456420 EDT 16:34.

     

    A review of Adaptation in Natural and Artificial Systems by John H. Holland.

    This book styles itself "an introductory analysis", and it is not very long (about 200 pages), and yet it took me enormous effort and time to get through it all. Holland appears to have no concept of mathematical elegance, for one thing; he churns through seven steps in an equation with successive approximations, but stubbornly refuses to drop coefficients that he ends up ignoring later anyway.

    We get sequences like this (I've written them in LaTeX code, and you'll need amsmath because he uses \gtrsim, "approximately greater than or equal to"):

    \[ n^{*} \gtrsim b^2 \ln \left[ \frac{\left( b^{-1} N_1 \right)^2}{8 \pi n^{*}} \right] \]

    \[ \gtrsim b^2 \ln \left[ \frac{b^{-4} N_1^2}{8 \pi} \cdot \frac{1}{\ln \left( \left( b^{-1} N_1 \right)^2 / 8 \pi \right) - \ln n^{*}} \right] \]

    \[ \gtrsim b^2 \ln \left[ \frac{b^{-4} N_1^2}{8 \pi \left( \ln N_1^2 - \ln \left( b^{-2} / 8 \pi \right) \right)} \right] \]

    \[ \gtrsim b^2 \ln \left[ \frac{b^{-4} N_1^2}{8 \pi \ln N_1^2} \right] \]

    What is he doing in this awkward sequence, by the way? He's deriving an approximation for the number of trials that a genetic algorithm should devote to strategies that are measured as sub-optimal, since the measurements have errors and the strategies might really be optimal after all. And it's just an approximation, which he never uses ever again. Surely there was a simpler way to write all this?

    It's not just the math, either; Holland takes a long time to explain anything, and often repeats himself on points that are obvious while glossing over more difficult ideas. He redefines terms to mean things other than they would normally mean, broadening some, narrowing others, all in a very idiosyncratic way. He formalizes everything, and then changes his own formalism halfway through, redefining something as stochastic instead of deterministic or infinite instead of finite.

    That said, the book may be worth reading if you can take it, because there are definitely some very brilliant ideas buried in this mess. Where most evolutionary biology classes will teach you that mutation is essentially arbitrary, random, the mechanics don't matter, Holland shows that the lower-level mechanics of genetics—crossover, inversion, duplication, dominance—are actually fundamentally important in the process of natural selection. They allow what Holland calls implicit parallelism, the process by which testing one organism can actually test millions of different gene combinations simultaneously.

     

    I was skeptical at first; can crossover really be that important? But by the end, Holland had me pretty well convinced. These genetic processes (which he generalizes into formal "genetic operators") allow genes that work well together to stay together, while genes that don't work together get separated. This extends the selfish-gene paradigm further than I think even Dawkins imagined; selection is not only happening at the level of genes, it is happening at the level of gene schemata, with each selection event acting to update the fitness of millions of different gene combinations simultaneously. Implicit in this is, I think, a deep explanation of sexual reproduction: Sex allows us to recombine schemata in ways that are fundamentally new—never done before—and yet at the same time already pre-selected for likelihood of success, because their parts worked fine in a successful living organism. Asexual reproduction merely copies a pre-existing organism, perhaps with a few slight modifications; it cannot generate the massive (yet controlled) novelty that sexual reproduction produces.
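
    For readers who haven't seen a genetic algorithm, here is a minimal Python sketch of one-point crossover; the bitstring genomes, the count-the-ones fitness function, and the selection scheme are all stand-ins of mine, vastly simpler than Holland's formalism:

    import random

    def crossover(mom, dad):
        # One-point crossover: each child inherits one contiguous block of
        # genes from each parent, so co-adapted genes tend to travel together.
        point = random.randrange(1, len(mom))
        return mom[:point] + dad[point:], dad[:point] + mom[point:]

    def fitness(genome):
        return sum(genome)  # toy stand-in for "genes that work well together"

    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        # Recombine the two fittest; replace the two least fit.
        population[-2:] = crossover(population[0], population[1])
    print(fitness(population[0]))  # climbs toward the maximum of 16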

    Holland develops a very general model, intended to apply to a wide variety of domains; with subtle modifications it can be readily applied to biology, neuroscience, economics, and artificial intelligence. The applications to economics are particularly striking: In explaining how information can be propagated back from the payoff to the schemata that generated it, he develops a model that literally involves "consumers" making "payments" to "suppliers" resulting in "profits". This back-propagation is not perfect, of course, and many of the problems in our real-world economy can be traced in some sense to the failure of profits to back-propagate to those most responsible for producing them. 

    Many have likened evolution to capitalism in the past, but always on rather weak grounds (mostly just involving competition and payoffs); Holland actually provides an account that sounds genuinely analogous to a market price mechanism. I'd always found it hard to swallow that ATP carries "energy" between cells, since most of the energy in a cell is thermodynamic to begin with and there are a number of different ways that the body transmits energy; but ATP is clearly fundamental to cell signaling. Perhaps it is actually best to think of ATP as a form of money.

     

    All in all, the book has piqued my interest in complex systems in general and adaptive algorithms in particular; and it has also given me many new ideas and insights. As a book, however, it's painful to read, and I would not recommend it to anyone who struggles with math in any way whatsoever. (If you are fully comfortable with concepts like liminf and cardinality and power sets, you may be able to slog through as I did.)

April 25, 2013

  • There is no upside to irrationality.

    JDN 2456409 EDT 21:06.

     

    Daniel Ariely is certainly a fine behavioral economist, though not quite the caliber of Daniel Kahneman (and apparently people named Daniel are drawn to behavioral economics?). His writing is easy to read but generally avoids dumbing anything down. His personal stories are compelling and moving, though one does tire of hearing over and over again about the accident that burned him. (I'm sure this was a pivotal moment in his life, but the reader doesn't need to keep hearing about it in chapter after chapter.)

    One thing that The Upside of Irrationality does not achieve, however, is its stated objective of finding an upside to irrationality. This is probably because such an undertaking is provably impossible; if something has an upside, it can't really be all that irrational.

    So what does it show, then? Well, it shows that there are benefits in human life to deviations from rational-agent neoclassical economics. And it certainly makes quite a compelling case for that; we are both more trusting and more trustworthy, more loving and more forgiving, more loyal and more industrious than a neoclassical rational agent would be. Indeed, a neoclassical rational agent would be, basically, a high-functioning psychopath: intelligent, rational, but without emotion, empathy, or remorse. (Perhaps this is why high-functioning psychopaths are overrepresented among corporate executives; neoclassical capitalism is literally designed for them.)

    But rationality doesn't mean being a neoclassical rational agent. Such a being is "rational" in an instrumental sense, but it is not really substantively rational to live that way (it will not actually make you happy), and rationality certainly does not require you to be a psychopath. It is not irrational to love your family, trust your friends, or be kind to others.

    I've actually heard some neoclassical economists try to argue that their rational agents are not callous psychopaths; but then they make game theory arguments that would only make sense on the assumption that they were. They say things like, "The rational action in the Prisoner's Dilemma is to always defect," which simply isn't true unless you're a horrible person (and maybe not even then). If you care at all about harming the other player, that option doesn't even tempt you all that much, let alone seem like the obvious correct answer. (And indeed, in Prisoner's Dilemma scenarios, actual, empathic humans do much better than so-called "rational agents", or indeed, than neoclassical economists.)

April 15, 2013