November 20, 2012

  • Rationality and the game theory of nuclear war

    JDN 2456252 EDT 23:05.

     

    Be very wary of the way economists use the word “rational”. Sometimes they intend a very broad sense, rational-1, in which any goal-oriented behavior is “rational”; this is usually what they use when trying to argue that people are rational (and in that sense, we generally are). Other times they use a somewhat stricter sense, rational-2, where “rational” means that we seek our goals in an optimal fashion; this is good for making simple models (though it’s ludicrous as an actual statement about human behavior). And then, worst of all, there is rational-3, what they use when talking about game theory, which is a very narrow notion of “rational” that requires us to be selfish and short-sighted.

    Many economists seem to think that rational-3 follows from rational-2. In fact, I think they are inconsistent. An agent who was behaving in an optimally goal-oriented, rational-2 sort of way would not behave in a selfish and short-sighted rational-3 fashion.

    To use the most dramatic example possible (yet an all-too-realistic one), consider nuclear war.

    The two players are NATO and the USSR. Obviously, each would prefer that the other not exist, and could achieve that by means of ICBMs. The problem is, if NATO launches nukes, the USSR will also launch nukes, and human civilization (and possibly human life) will be wiped from the face of the Earth. Alternatively, they could use diplomacy, which might keep them alive but would leave their mortal enemy alive as well.

    This is a normal-form game, with the following payoff matrix. The first letter names the player (N for NATO, U for the USSR) and the second names the move (N for nuke, D for diplomacy); so, for example, NN/UD means that NATO uses nukes and the USSR uses diplomacy. The first payoff in each pair is NATO’s and the second is the USSR’s, so 2,-1000 means that NATO gets 2 units of utility (capitalism wins!) and the USSR loses 1000 (everyone in the Soviet Union has been killed).

     

     

                UN              UD
    NN     -1000,-1000      2,-1000
    ND     -1000,2          1,1

     

    To make the game a bit easier to analyze, we could add a “better dead than Red” provision, under which NATO’s payoff for ending the world is slightly higher than their payoff for being vaporized and allowing the USSR to survive (and conversely).

     

     

                UN              UD
    NN     -999,-999         2,-1000
    ND     -1000,2           1,1

     

    It doesn’t really matter either way. In the second game there is a single unique Nash equilibrium, and it is reached by iterated elimination of strictly-dominated strategies; in the first game nuking is only weakly dominant, but every pure-strategy equilibrium still involves at least one side launching. The second game is isomorphic to a Prisoner’s Dilemma, while the first game is a slightly modified form with essentially the same result.

    What is the equilibrium? Nuclear apocalypse. Absolutely no doubt about that; in the second game, the only state from which neither player’s position can be improved by their own action is the one in which both superpowers unleash their nuclear arsenals and send humanity back into the Stone Age.
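    If you want to verify this yourself, here is a quick sketch in Python (mine, not part of the original post; the game encoding and function names are purely illustrative). It brute-forces the pure-strategy Nash equilibria of both payoff matrices above and checks that, in the “better dead than Red” version, nuking strictly dominates diplomacy for both sides:

```python
# Hypothetical sketch: encode the two 2x2 games above and find their
# pure-strategy Nash equilibria by brute force.
from itertools import product

STRATEGIES = ["N", "D"]  # N = nuke, D = diplomacy

# payoffs[(NATO's move, USSR's move)] = (NATO's payoff, USSR's payoff)
game_1 = {
    ("N", "N"): (-1000, -1000),
    ("N", "D"): (2, -1000),
    ("D", "N"): (-1000, 2),
    ("D", "D"): (1, 1),
}

# "Better dead than Red": mutual destruction beats being vaporized alone.
game_2 = dict(game_1)
game_2[("N", "N")] = (-999, -999)

def pure_nash_equilibria(payoffs):
    """Profiles where neither player can strictly gain by deviating alone."""
    equilibria = []
    for nato, ussr in product(STRATEGIES, repeat=2):
        nato_pay, ussr_pay = payoffs[(nato, ussr)]
        nato_can_gain = any(payoffs[(alt, ussr)][0] > nato_pay for alt in STRATEGIES)
        ussr_can_gain = any(payoffs[(nato, alt)][1] > ussr_pay for alt in STRATEGIES)
        if not (nato_can_gain or ussr_can_gain):
            equilibria.append((nato, ussr))
    return equilibria

def strictly_dominates(payoffs, player_idx, s1, s2):
    """True if s1 strictly beats s2 for player_idx (0 = NATO, 1 = USSR)
    against every move the opponent might make."""
    results = []
    for opp in STRATEGIES:
        a = (s1, opp) if player_idx == 0 else (opp, s1)
        b = (s2, opp) if player_idx == 0 else (opp, s2)
        results.append(payoffs[a][player_idx] > payoffs[b][player_idx])
    return all(results)

print(pure_nash_equilibria(game_1))  # nuking is only weakly dominant here
print(pure_nash_equilibria(game_2))  # [('N', 'N')] -- mutual annihilation
print(strictly_dominates(game_2, 0, "N", "D"))  # True: NATO "should" nuke
print(strictly_dominates(game_2, 1, "N", "D"))  # True: so "should" the USSR
```

    In the first game the brute-force search turns up three weak equilibria rather than one, but every one of them has at least one side pushing the button; only mutual diplomacy fails to be an equilibrium.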

    If this were “rational” (rational-3), then being “rational” would have killed billions of people. We are all better off by choosing the diplomacy option, ND/UD, which indeed we did, thank goodness.

    And speaking of “thank goodness”: A lot of game theorists will try to argue that rational-3 doesn’t imply selfishness, and you can still get results like this even when the players are altruists. This is because they do not understand altruism.

    Apparently, they think that an altruistic person is someone who happens to have preferences that are other-regarding, e.g. they love their spouse and their children. Maybe that’s altruistic in a certain sense, but it’s not the really morally important sense. Indeed, the evolutionary selfishness involved should be obvious.

    No, a truly altruistic person (or perhaps I should say a truly moral person, to be clearer) is someone who analyzes decisions according to their effects on all persons affected. It is someone who is impartial, someone for whom idiosyncratic preferences do not hold sway. In effect, a moral person plays as both players at once, for that is the essence of empathy.

    That is, the truly moral decision-maker in a Prisoner’s Dilemma must always select a Pareto-efficient alternative (meaning that no one can be made better off without making someone worse off; in the above example, all the choices are Pareto-efficient except NN/UN). To choose anything else would be fundamentally immoral. In fact they would probably have other criteria as well (like the Rawls criterion, or utility-maximization, or something like that; after all, NN/UD is Pareto-efficient and good for America but it still seems unconscionable), but Pareto-efficiency is a basic minimum standard from which no plausible morality could deviate. Truly moral players would eliminate the inefficient alternative from consideration. They would not choose NN/UN, and in deciding between the other three would quickly discern that ND/UD is the fairest, most stable, and best overall choice.
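    Here is an equally rough sketch (again mine, and only illustrative) of that minimum standard: it checks each outcome of the second game and throws out any outcome that some other outcome Pareto-dominates. The only one eliminated is mutual destruction:

```python
# Hypothetical sketch: find the Pareto-efficient outcomes of the second game.
game_2 = {
    ("N", "N"): (-999, -999),   # mutual destruction
    ("N", "D"): (2, -1000),
    ("D", "N"): (-1000, 2),
    ("D", "D"): (1, 1),         # mutual diplomacy
}

def pareto_dominates(a, b):
    """True if payoff vector a is at least as good as b for everyone
    and strictly better for at least one player."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_efficient(payoffs):
    """Outcomes not Pareto-dominated by any other outcome."""
    items = list(payoffs.items())
    return [profile for profile, pay in items
            if not any(pareto_dominates(other, pay) for _, other in items)]

print(pareto_efficient(game_2))
# [('N', 'D'), ('D', 'N'), ('D', 'D')] -- everything except NN/UN survives
```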

    Now, one could raise objections: humans aren’t really perfectly moral; how do you know how moral other people are in a real-world decision; what about the temptation to cheat; and so on. These are fair points, and they are indeed the proper direction for a science of human decision-making. But we do know this: most of the time, most people cooperate; and it’s a damn good thing they do, for if they didn’t, most of us would be dead by now.

    We are rational-1, and we must strive to be rational-2. But rational-3 is not rationality; it is psychopathy.
