December 13, 2012

  • The real problem with Pascal's Wager

    JDN 2456275 EDT 12:06.

     

    I would say about Pascal's Wager what Bertrand Russell said about the Ontological Argument: It is much easier to see that it is wrong than to understand exactly what is wrong with it.

    In case you haven't heard it, the argument is basically like this: God either exists or he doesn't, with some finite probability that he exists. If God exists and you believe, you go to heaven, which is an infinite reward; if God exists and you don't believe, you go to hell, which is an infinite punishment. If God doesn't exist and you believe, you pay a finite cost. If God doesn't exist and you don't believe, you receive a finite reward. Therefore, you are comparing a finite probability of an infinite gain to a finite probability of a finite gain, and the infinite gain must win: Therefore believe in God, regardless of the probability.
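
    To see the decision-theoretic skeleton, here is a minimal sketch in Python; the finite payoffs are placeholders of my own, and the "infinite" reward and punishment are represented by floating-point infinity:

        # A minimal sketch of the Wager's expected-utility argument.
        # The finite payoffs are illustrative placeholders, not Pascal's.
        INF = float('inf')

        def expected_utility(believe, p_god):
            if believe:
                return p_god * INF + (1 - p_god) * (-1)    # heaven, or a finite cost
            return p_god * (-INF) + (1 - p_god) * 1        # hell, or a finite reward

        # The infinite terms swamp everything, no matter how small p_god is:
        print(expected_utility(True, 1e-6))    # inf
        print(expected_utility(False, 1e-6))   # -inf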

    There are a number of good objections that can be made.

    One involves the casual handling of infinity, which is perhaps forgivable, since Pascal died just before the formalization of calculus. Many a modern argument over Pascal's Wager has collapsed into nitpicking about cardinalities.

    Another involves the point that there isn't just one God being proposed, but many: Christian, Jewish, Muslim, Hindu, Shinto, Norse, Egyptian, Greek, and so on. Added to this can be proposals like the Perverse God who punishes believers, the Rationalist God who punishes blind faith, the Flying Spaghetti Monster, and the Invisible Pink Unicorn.

    But yesterday I realized that there's a deeper reason, one which also applies to variants such as Pascal's Mugging. This is the fact that someone who trusts reasoning like this can be convinced of almost anything.

    "Give me $10,000 or I will punish you eternally."

    "If you have sex with me I will reward you infinitely."

    "Believe that the Earth is flat and you will be rewarded forever."

    "Kill yourself immediately or you will be punished forever."

    If the mere suggestion of an infinite utility is sufficient to motivate your behavior, there is hardly anything you won't be gullible enough to do. Maybe you can prevent yourself from believing contradictions by assigning probability 0 to them; but then, are you absolutely sure it's a contradiction? And maybe an omnipotent omniscient benevolent God is just such a contradiction, so Pascal's original wager would fail.

    Anyone could make you believe anything just by promising infinite reward or threatening infinite punishment. For it wasn't God making Pascal's Wager; it was Pascal, and he was basing it on the Bible, which was written by human beings even less informed than Pascal. The level of evidence is basically just "some guy said that this would happen".

    Normally we would think that someone saying X makes X more likely to be true. It certainly doesn't guarantee that it's true, but the argument doesn't depend on that, it only depends on making it a bit more likely than it was. And since most people are honest most of the time, doesn't someone saying something make it more likely to be true?

    Well, no, actually, it doesn't. Sometimes it does, indeed for most types of statements it does. For example, if someone says "I'm a lawyer", it's more likely that they are a lawyer. If they say "I'm gay", it's more likely that they are gay. If they say "I like cheesecake", it's more likely that they like cheesecake. If they say "I have a gun", it's more likely that they have a gun. Even "I'm a millionaire" makes it a bit more likely that they are a millionaire.

    This can also work for objective factual statements, like "The Earth is round" and "fish have gills"; when a lot of people say these things, that doesn't make them true, but it does give you reason to believe them.

    But there are some statements that don't become more likely when you say them. "I am a unicorn" and "I levitated yesterday" really don't provide us any evidence, since we have far more compelling support for the theory that they are lying (or joking) than for the theory that such things would actually be true. Anything so preposterous can't be supported purely by assertion; it must be verified by evidence.

    There are even statements that make themselves less likely by being said. "I am being completely sincere", "This is a completely legitimate business opportunity", "I am always honest and extremely humble". These are things you don't actually say unless you're trying to manipulate someone, things that actually tend toward refuting themselves.

    The notion of an infinite reward for doing something is somewhere between preposterous and manipulative; it's definitely no more likely just because someone says it, and given its capacity to manipulate the gullible, it may even be less likely (though it was certainly vanishingly unlikely to begin with). You should be reluctant to believe it precisely because it would have such power over you if it did.

    In fact, I think religious leaders have recognized this power, either consciously or by the evolution of memes. Threaten someone with eternal punishment or promise them eternal reward, and you can make them do anything. They will reorganize their lives around your whims, follow any rules you give them, shower you with wealth, even kill and die on your behalf. This is the hack by which you assert control of a human mind.

    Resist the hack. Recognize that you are being manipulated.

December 11, 2012

  • Holiday stress

    JDN 2456274 EDT 19:43.

     

    There's no doubt about it: Holidays like Christmas and Thanksgiving are stressful for many people. We are constantly bombarded with people complaining about how much work it is to decorate, get the family together, prepare food, and buy gifts.

    Apparently it never occurs to people that we don't have to do this. We choose to have holidays. This is particularly true in a secular society.

    In fact, I think that holidays are a good thing. We all need time to relax, time when we aren't constantly expected to work. We should take time to remind ourselves of the good things in our lives and celebrate them. Gifts and food can bring families and even whole societies together. But if it's not working... don't do it! If you feel worse after Christmas than you did before, don't celebrate Christmas. And if you choose to celebrate Christmas, you should be doing it because it makes your stress level lower, not higher.

    Honestly, I think this option simply doesn't occur to most people. They see holidays as an obligation, like you're somehow being a bad person if you don't celebrate them. Well, once the holiday becomes work, something you must do instead of something you want to do, you've really defeated the entire purpose of having a holiday.

    This attitude probably originally came from the time when holidays really were "holy days" and people actually believed that magical beings would punish them for failing to celebrate their birthdays. Hopefully people don't still believe that... then again, maybe some of them do. The way Bill O'Reilly gets so angry when people say "Happy Holidays" (because, you know, there's more than one holiday in this season, even for Christians; think New Year's), it really does seem like he fears that the magical man in the sky will punish him for not saying the right words on the proper day.

    Likewise, I've actually had my mother ask me: "What are you atheists celebrating on Christmas?" Doesn't she know that the date of Christmas wasn't set based on the birthdate of Jesus, but instead on the traditional date of Yule, the post-solstice celebration of the Germanic pagans? You know, the reason we still say things like "Yuletide" and have a "Yule log" and cut down a tree? (The Bible actually explicitly forbids Yule trees: Jeremiah 10. This makes sense, seeing as they're pagan.) Doesn't she know that mistletoe comes from the celebration of Saturnalia (which also included orgies; maybe we should bring that back)? But no, I'm not celebrating Yule, and I'm not celebrating Jesus, and I'm not celebrating the solstice either for that matter (why would I celebrate axial tilt alignment?). I'm using this traditional festival to celebrate my family, remember the good things in my life, and share in the heritage of my culture. Isn't that what you're doing? Or did you really think that the creator of the universe cares how you celebrate his son's birthday (whatever that means!)?

    If so, this is very sad. I had hoped humanity had grown up at least enough to see that holidays are not about magical beings in the sky, but human beings on Earth. I had hoped that we had at least reached the point where the decorations and feasts weren't done because we felt that superhuman entities demanded it, but because they make us happy and bring our families together.

    But it certainly would explain why people feel so stressed about Christmas.

December 10, 2012

  • Charity harassment

    JDN 2456272 EDT 13:45.

     

    I donate to a fair number of charities, including GiveWell, UNICEF, Kiva, the Union of Concerned Scientists, the Secular Student Alliance, and the ASPCA. I also volunteer at the Humane Society of Huron Valley and generally give to Democratic Party candidates in major elections. I'm glad I do this; these organizations have done great things, and I'm proud to be part of that.

    That said, it gets pretty annoying to be put on the list of "people who donate to charity"; it results in you being inundated with junk mail asking for money. Many of these organizations do fine work, and I would love to give them money... it's just that I only have so much to give. And it gets really annoying sometimes to have dozens of people begging you for money.

    I understand why they do this; they have some sort of behavioral marketing model which says that they get more donations on average if they harass their regular donors for additional cash. I'm sure it's a lot cheaper than sending out mailings to millions of people who never donate anything.

    Still, I feel like it creates a disincentive to donate, and thus may actually discourage people who would have donated occasionally. It won't discourage regular donors like me, but there aren't enough of those. You really need to get casual donors, people who give $25 a year or even $5 a year. More is better of course, but something is a lot better than nothing.

    Actually, charities are doing pretty well: Total annual charitable contributions by private individuals in the US are over $200 billion. That's about $650 per person in America. Granted, that's a tiny fraction of our GDP of $15 trillion (1.3%), but it's more than most places, so we're doing something right.
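
    Just to check that arithmetic (315 million is my own round number for the 2012 US population):

        donations = 200e9              # dollars per year, per the figure above
        population = 315e6             # my approximation of the 2012 US population
        gdp = 15e12
        print(donations / population)  # ~635; "over" 200 billion gets you to ~650
        print(100 * donations / gdp)   # ~1.3 percent of GDP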

    It may be time, in fact, to start talking about how to give efficiently, since lots of people give to charities that just aren't that effective. GiveWell is an excellent source for this, and in fact it's because of them that I may withdraw my Kiva portfolio once it gets repaid this cycle. Kiva has made a lot of claims about how amazingly effective they are... and the data just doesn't back them up on that. They're better than nothing, to be sure; but other charities (like the Against Malaria Foundation) seem to be having a bigger impact in lives saved per dollar. I'm also thinking I might want to give more to scientific research than I presently do. The one thing I'm sure of is I'll keep giving to the UCS; global warming is going to be the humanitarian disaster of the 21st century.

    So let's think about these questions: How can we encourage people to donate without making them feel harassed? How can we attract a wider pool of donors? How can we use donated money more efficiently to accomplish the most good?

December 6, 2012

  • The Curse of Knowledge strikes again

    JDN 2456268 EDT 14:56.

     

    A review of Five Golden Rules by John Casti.

     

    It's a problem that plagues many nonfiction writers. Steven Pinker called it the Curse of Knowledge; Less Wrong refers to it as Inferential Distance. The problem is this: You know what you know, but you don't know what other people don't know. So it's hard to explain things without going over people's heads or seeming condescending.

    Five Golden Rules is supposed to be a book about cutting-edge mathematics for people who don't know a lot of mathematics. It is in fact a book about cutting-edge mathematics for people who do know a lot of mathematics. I got quite a bit out of it, but I've studied abstract algebra and real analysis. Some of the topology still confused me, perhaps because I've never formally studied topology. How do you cut a hole in a surface and then stitch the hole closed with a Möbius strip? I'm still confused by higher-dimensional non-orientable surfaces. And I still don't quite get the deeper philosophical implications of Gödel's incompleteness theorems. I've heard everything from "It undermines rationality itself" to "It's basically trivial". I assume the truth is somewhere in between? (I actually lean more towards the "basically trivial" side of things; a lot of paradoxes really seem like they are more statements about language than they are about truth. "It is raining in Bangladesh but Patrick Julius doesn't know that." "Patrick Julius cannot coherently assert this sentence." Both of these sentences could very well be true, but I can't assert them if they are. Is this a problem?) Casti is quite noncommittal on what he thinks Gödel implies.

     

    The book centers on five seminal branches of 20th-century mathematics: game theory, topology, computer science, singularity theory, and linear optimization. If you already have the basic knowledge of each field, you can get a lot out of the way Casti ties everything together with the passion of a real working mathematician. The joy he feels from exploring mathematics can be felt through the words.

    But if you don't at least know calculus, this book is going to make very little sense to you. He tries to make it non-mathematical, but fails quite miserably. It's a much more pleasant read than your average math textbook, but it requires a comparable level of background knowledge.

    It's unfortunate really; I'd love to have a book that explains these deep mathematical concepts to people who don't know a lot of math. Unfortunately, Five Golden Rules isn't that book. Instead, it's a useful synthesis and a pleasant read for those of us who already have the necessary background.

December 5, 2012

  • How capitalism distorts holidays

    JDN 2456267 EDT 11:11.

     

    It doesn't just feel that way; the Christmas season actually starts earlier every year. This year, the season literally started before Thanksgiving. We didn't really have Thanksgiving, except as part of Christmas. Include New Year's and the week after, and we now have a Christmas season that lasts almost two months.

    Don't get me wrong; I like Christmas, even as an atheist. Family rituals and gift-giving are very important in almost every culture, and most cultures celebrate a holiday like this around the Winter Solstice because the Northern Hemisphere is getting cold and dark, and for most of the history of our species that was a dangerous thing. (Now, it's not really, at least for the middle class in the First World.) It's a time to reflect upon our place in the universe, a time to huddle close together, a time to share and love. It's actually more Yule (and shopping) than it is Christian; anyone who wants to put the Christ back in Christmas should be asked if they want to put the Thor back in Thursday. (I would suggest that we retire the deforestation tradition; the world needs more trees, not fewer, and now more than ever. How about this: instead of cutting down a Christmas tree, plant one somewhere else?)

    But it's really weird that Christmas keeps expanding. Most other holidays don't do this; we don't see the President's Day season start a month early. I guess we do see "dads and grads" starting early, but I always assumed that was because graduation dates vary between schools. Thanksgiving certainly doesn't expand. Halloween does a little, as does Easter.

    Actually, maybe it's a more general phenomenon after all: Holidays expand in proportion to their shopping potential. For Easter and Halloween, you do a bit of shopping, for decorations and candy. For Father's Day and Mother's Day, you buy a few gifts. But Christmas is the one time when you buy dozens of gifts, and almost everyone does; so as a result it's the holiday that retail stores love the most, and hence the holiday they have the largest incentive to expand as much as possible.

    Still, maybe we should be asking, as a society: Do we like this? Do we want retail stores like Macy's and Wal-Mart controlling our culture to this degree? Did you realize that Rudolph the Red-Nosed Reindeer is a wholesale invention of (now-defunct) Montgomery Ward? Don't you associate Santa Claus with Coca-Cola? And doesn't that terrify you?

    If there is anything insidious about capitalism, surely it's this. Jingles and commercials infest our brains with memes. Marketing campaigns shift our holidays and redefine our mythology. We used to define our culture based on tradition and religion, which, to be fair, are not all that great to begin with; but now we define it based on... profit? On whatever makes the most money for shareholders of faceless corporations?

    There is certainly an upside to capitalism, not least the enormous wealth it provides us. But there's also a downside, if our cultural values are increasingly defined by Black Friday competition instead of Thanksgiving gratitude. Maybe we can separate the good from the bad; but to do that, we must first be aware that both exist.

December 3, 2012

  • Radicals and Revolution

    JDN 2456265 EDT 10:08.

     

    I do not consider myself a radical or a revolutionary. It could be argued that I have some radical ideas, because I would like to see many things about society change, some in fairly drastic ways. And I could be considered a radical of the Saul Alinsky school: "true revolutionaries do not flaunt their radicalism. They put on suits and infiltrate the system from within."

    But what differentiates me from the people I would think of as radicals is that I don't see our current state as all that bad. I've heard people say things like "Anything but this," "anywhere but here," and I think they seriously lack imagination. The world could be a lot worse than it is.

    Just in the 20th century, the Nazis could have won WW2, or the USSR could have won the Cold War. Nuclear war could have wiped out 99% of the human population and destroyed civilization as we know it.

    Go further back, and democracy might never have been invented. The scientific method might never have been discovered. Feudal monarchy sustained by divine right of kings was in place for over a thousand years, what makes you think it couldn't still be around now? We could have been wiped out by many a plague, especially if we'd never invented antibiotics.

    The modified capitalism we have today is surely a flawed system in many ways. But can you really not see that it's better than most of the alternatives? Better than subsistence farming, better than a barter system, better than feudalism, better than mercantilism, better than Stalin's Communism; and those are just the systems that human societies have used, not counting all the combinatorially vast space of possible social systems, most of which are so unstable they fall apart before you can implement them. Maybe there is a form of socialism (or even communism?) that would be better, but you have to actually specify how that would work and how we get there from here. In fact, I think that the best form of socialism is one that includes a large swath of capitalism, like what they have in Sweden and Denmark today. But hey, maybe you've got a better idea. Here's the thing: It has to actually be fleshed out as a workable system, because that's what we're comparing it to: actual workable systems. You can't compare the flaws of real-world capitalism with the ideal utopia of imaginary socialism. (The ideal utopia of imaginary capitalism is also quite wonderful; it's also, well, imaginary.)

    Does the world need to change? Yes. But it needs to change slowly. Now, you might ask, why slowly? Given that we want to get from A to B, isn't it best to go as fast as possible?

    Well, it would be, if we knew exactly how to get there. If you could write down in detail the precise list of changes we need to make from our current state to get to the best possible human society, and then actually implement all those changes all at once, yes, that would be amazing. But you can't.

    Instead, we have to work by trial-and-error. We have to make a few tweaks here and a few tweaks there, and find out what works and what doesn't. It's possible to be too conservative, not making any changes at all. But it's also possible (indeed, far easier), to be too radical, making changes rapidly and haphazardly before their consequences are understood.

    Social change is often analogized to biological evolution, and that's no accident; there are a number of similarities (though a number of differences as well, watch out). Modern evolutionary genetics makes one thing very clear: Don't mutate too fast. You want to mutate, yes. But you want to mutate very slowly, making very tiny changes at a time. That way, natural selection has the opportunity to keep you on a good course toward improved fitness. If you mutate too fast, you destroy your current equilibrium and may not survive at all. (Indeed, there's an easy way to mutate really fast: Stand next to a nuclear explosion, just outside the blast radius, so you only get the radiation. You'll find it isn't good for your fitness.) The optimal mutation rate for long-term fitness is extremely slow.

    To see why, suppose you're trying to change a mouse into an elephant, and all you can do is resize one bone, muscle, or organ at a time. (Sound ridiculous? Well, it's actually where elephants come from.) If you tried to enlarge the head to elephant size right away, the whole animal would fall apart under its own weight. If you enlarge the legs first, now it can't walk. Instead, what you have to do is make the legs 1% bigger, then the head 1% bigger, then the lungs 1% bigger, and so on, and keep cycling through this process many, many times. To make the animal a million times bigger, you need to increase each part by 1% a total of about 1,390 times. (Probably not as many as you thought!) That will take a few million years, but it can be done. Has been, in fact.
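
    That count is just a logarithm, if you want to check it:

        import math
        # How many 1% enlargements does it take to multiply size by a million?
        # Solve 1.01**n == 1e6, i.e. n = ln(1e6) / ln(1.01).
        print(math.log(1e6) / math.log(1.01))   # ~1388.5, so about 1,390 rounds
        print(1.01 ** 1389)                     # ~1,006,000: just over a million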

    Similarly, changing a society requires a complex system of interlocking parts to be changed. Not as complex as a mouse, perhaps; but still pretty complex. Because humans are intelligent (unlike genes), we can even get away with a more than 1% change along one dimension--like when we gave women the right to vote, that's actually a 100% increase in the voting population. We can coordinate some changes so we don't have to do them all strictly sequentially. But still, we can only change a few things at a time, and we can only change them a moderate amount. Try to change too rapidly, and the system becomes unstable; if you're looking for a single explanation for why Communism in the USSR and PRC became so terrible, this seems like a pretty good one. Trying to change everything at once means giving way too much power to a small number of people--and then, predictably, that power gets abused. I hope the revolutions in the Middle East don't go this way, but it looks like they will, because once again we didn't reform piece by piece; we tried to do a revolution all at once.

    I can understand the temptation. So much is wrong with the world, shouldn't we try to change it all as fast as we can? But no, we have to take it step by step. Otherwise, we might make it worse. And never doubt that it could be a lot worse.

November 30, 2012

  • Asymmetric Julius-Hofstadter equilibrium

    JDN 2456262 EDT 15:06.

     

    I made a mistake in my earlier post in which I proposed the idea of Julius-Hofstadter equilibrium. You can't actually rely on diagonalizing the payoff matrix, at least not for certain games. Even for some symmetric games, diagonalization gives the wrong answer.

    For instance, consider the classic "Chicken" game, which actually I think should be called the "Right-of-Way" game because it only describes "Chicken" if you're aggressive and irresponsible, whereas it describes the right-of-way at an intersection no matter who you are.

    You arrive at an intersection at the same time as another car on the perpendicular road. You have two options, Go and Stop. If you go and the other driver stops, you're best off: You get to drive on toward your destination. If you stop and the other driver goes, it's not too bad: You have to wait for them, but then you'll be able to go. If you both stop, it's really annoying; both cars sit there and nobody knows what to do. But if you both go, you'll collide, ruin everyone's day, possibly get hurt, and definitely have to pay your insurance deductible.

     

     

             G        S
    G     0, 0    10, 4
    S    4, 10     3, 3

     

    There are two Nash equilibria for this game, one in which you go and the other driver stops, and one in which you stop and the other driver goes. It's to your advantage to make a public commitment to going as soon as possible; that way, you force the other driver to stop. And indeed, this is generally what happens, and even though we're supposed to have regulations that specify which driver goes first, a lot of people can't remember them and don't use them. The ideal solution would be to have road signs or something that remind people of the rules. Because this is a Nash equilibrium, once the rule is specified, everyone has an incentive to obey it. The problem is that the equilibrium isn't unique, so you have to choose which one somehow.
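
    Here's a quick brute-force check of those two equilibria, using the payoffs from the table above:

        # A cell is a pure Nash equilibrium if neither player can gain by
        # deviating unilaterally.
        payoffs = {('G', 'G'): (0, 0),  ('G', 'S'): (10, 4),
                   ('S', 'G'): (4, 10), ('S', 'S'): (3, 3)}
        moves = ['G', 'S']

        def is_nash(row, col):
            u1, u2 = payoffs[(row, col)]
            return (all(payoffs[(r, col)][0] <= u1 for r in moves) and
                    all(payoffs[(row, c)][1] <= u2 for c in moves))

        print([cell for cell in payoffs if is_nash(*cell)])
        # [('G', 'S'), ('S', 'G')] -- one driver goes, the other stops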

    This game cannot be diagonalized. If you followed the procedure I outlined in the previous post, you'd get the result that both cars stop. I've actually seen this happen at real intersections, because people can't remember the regulations and no one is bold enough to make the public assertion of commitment. (Usually it's because I don't have the right of way but the other person thinks I do. Eventually I just act as if I do, to break the deadlock.)

    Instead, we must redefine the Julius-Hofstadter equilibrium slightly. A superrational agent does not assume that other superrational agents behave the same; they assume that other superrational agents think the same. Not the naive Golden Rule, but Kant's Categorical Imperative.

    Instead of diagonalizing, they restrict the set of payoffs to Pareto-efficient alternatives. Knowing that both players will do this, they generate the same set, and thereby don't have to worry about choosing any Pareto-inefficient results (as neoclassically "rational" agents often will).

    In fact, I think we can go further, and say that any Julius-Hofstadter equilibrium must be Strongly Rawls, if I may use that as an adjective. A payoff is Rawls if it maximizes the payoff for the worst-off player. It is Strongly Rawls if it is Rawls and excluding the minimum yields a remaining set that is Rawls, and so on for the entire set. (This is also sometimes called leximin.)

    A Julius-Hofstadter equilibrium is Pareto-efficient and Strongly Rawls. (Actually, Strongly Rawls implies Pareto-efficient.) The basic concept remains the same: A choice is a Julius-Hofstadter equilibrium if all players have no incentive to change multilaterally.

    A Strong Nash Equilibrium is always Pareto-efficient, though it need not be a Julius-Hofstadter equilibrium, nor vice versa. A Julius-Hofstadter equilibrium always exists, while a Strong Nash Equilibrium typically does not. It's pretty simple to prove that a Julius-Hofstadter equilibrium always exists in a finite game. First, suppose that there is no Pareto-efficient alternative. Then, for all alternatives x, there must exist some x' for which all payoffs are at least as high and at least one payoff is greater. This implies that the sum of payoffs in x' is greater than the sum of payoffs in x. If we rank-order alternatives by the sum of their payoffs, this gives a total order, since it's just comparing real numbers. Since every x is Pareto-dominated, for all x there exists x' with a strictly greater sum; so no alternative has a maximal sum. But any finite set of real numbers has a maximum, which contradicts the assumption that the game is finite. Therefore, there is at least one Pareto-efficient alternative. Within that set of Pareto-efficient alternatives, rank-order by minimum payoff; again, this is a finite set, so there must be a maximum. That maximum is Rawls; and iterating the same argument on the remaining payoffs yields an alternative that is Strongly Rawls, and therefore a Julius-Hofstadter equilibrium.

    In the Right-of-Way game, there are two Rawls alternatives, the same as the Nash equilibria. And indeed, the Julius-Hofstadter equilibria of this game are identical to the Nash equilibria for this game. That's no accident; in this game, there really are two equally good solutions, corresponding to different right-of-way regulations. "Right goes first" is the usual rule in the US, but "left goes first" would be equally valid; the key is to be consistent and (this is the real problem) universally known. There's also another problem, which is when two drivers seeking to turn left arrive at the same time from opposite directions on the same road. I actually don't think we have good regulations for this; I think we should just make something up, like "north and east go first". Actually, there's a very good general solution which is in wide use: It's called a traffic light.

     

    This new definition of the equilibrium can be easily extended to any normal-form game, even asymmetric games. For example, here's a completely arbitrary game that bears no relation to any real situation I can think of (I've starred the best response for each player):

     

              A        B        C
    D     2, 9*    1, 7     3*, 4
    E     0, 4     5*, 1    1, 6*
    F    9*, 2     4, 3*    1, 2

     

    This game has no dominant strategies. In fact, it has no pure Nash equilibria. There's a mixed Nash equilibrium somewhere, which I know simply because Nash's Theorem proves that there always is. I have no idea what it is actually, and don't particularly care to find out.

    Instead, we'll find the Julius-Hofstadter equilibria. First, we exclude any Pareto-dominated alternatives. 0,4 and 1,2 and 1,6 are all dominated by 1,7; but 1,7 is itself dominated by 2,9, and 5,1 is dominated by 9,2. That leaves the following:

     

     

     

              A        B        C
    D     2, 9               3, 4
    E
    F     9, 2     4, 3

     

     

    Of these Pareto-efficient alternatives, only 4,3 and 3,4 are Rawls.

     

     

              A        B        C
    D                        3, 4
    E
    F              4, 3

     

     

    Therefore, there are two Julius-Hofstadter equilibria. Which one should we choose? Well, I don't know. And it really doesn't matter, because they are morally equivalent. Indeed, they must be, by definition. If either of the 3s were increased, the other 3 wouldn't be Rawls anymore. If either the 3 or the 4 were decreased, the result would not be Pareto-efficient anymore. And if the 4 were increased, the other 4 would no longer be Pareto-efficient. The two must be the same, except for being assigned to opposite players.
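
    To make the procedure concrete, here is a small Python sketch that mechanizes it for this game: keep the Pareto-efficient outcomes, then keep the leximin-optimal ("Strongly Rawls") ones. Comparing payoff vectors sorted from the worst-off player upward is exactly the leximin comparison.

        outcomes = {('D', 'A'): (2, 9), ('D', 'B'): (1, 7), ('D', 'C'): (3, 4),
                    ('E', 'A'): (0, 4), ('E', 'B'): (5, 1), ('E', 'C'): (1, 6),
                    ('F', 'A'): (9, 2), ('F', 'B'): (4, 3), ('F', 'C'): (1, 2)}

        def dominates(x, y):
            # x Pareto-dominates y: no one worse off, someone better off.
            return all(a >= b for a, b in zip(x, y)) and x != y

        efficient = {cell: v for cell, v in outcomes.items()
                     if not any(dominates(w, v) for w in outcomes.values())}

        # Leximin: lexicographic comparison of sorted payoff vectors.
        best = max(sorted(v) for v in efficient.values())
        print([cell for cell, v in efficient.items() if sorted(v) == best])
        # [('D', 'C'), ('F', 'B')] -- the 3,4 and 4,3 cells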

    This is a general result. A game's Julius-Hofstadter equilibria are always identical except for a permutation of the players. To see this, consider a game with n players, and two different Julius-Hofstadter equilibria of that game, A and B. By definition, A and B are Strongly Rawls. Permute the players so that the payoffs A_i are in increasing order, and likewise (by a possibly different permutation) the payoffs B_j. We now prove that A_i = B_j whenever i = j.

    [I wouldn't include this in a formal paper of course, but pronouncing the previous equation out loud I realized it says, "Artificial Intelligence equals a blowjob." Might want to change that notation later? Of course, that statement has some truth: One of the obvious applications of advanced robotics is sex toys.]

    Because A and B are Rawls, A_1 = B_1.

    Therefore, exclude i=j=1 from consideration and look at i=j=2.

    Because A and B are Strongly Rawls, the set A-A_1 is Rawls and the set B-B_1 is Rawls. Now the smallest element in each is A_2 and B_2, which are maximal, and thus the same.

    Continue until A_n = B_n. Therefore, aside from this permutation of players i -> j, A = B.

    What happens when we get a result like this, where there are two Julius-Hofstadter equilibria that differ by a permutation? We need to select one somehow, as we did for the Right-of-Way game earlier. There really can't be any moral reason behind that choice, so we choose, in essence, arbitrarily. We must choose however; if both players don't know what to do, we can end up in an outcome that is not Rawls or one that is even Pareto-inefficient.

    If one group of people is systematically given one player slot and another group is systematically assigned to another, then we may want to alternate in some way, to ensure fairness. But in most circumstances this isn't a problem--as in the Right-of-Way game, where you're as likely to be on the right as you are to be on the left. Pick one, stick with it, make sure everyone knows the rule. Is "right goes first" inherently better than "left goes first"? Is "green means go" inherently better than "red means go"? No. But having a consistent rule is better than not having a consistent rule, and all we have to do is ask whether some other rule would be better. (For "right goes first", the answer is of course "green means go". But for "green means go"? There might really not be anything better than that.)

    I have thus proved that for any finite game, at least one Julius-Hofstadter equilibrium always exists, and that if multiple equilibria exist, they are all equivalent except for a permutation of the players. This fits on a deep level with superrationality and Kantian morality, which are fundamentally about the idea that permutations of individuals don't change results.

    Nash equilibrium does have one nice feature that Julius-Hofstadter equilibrium lacks, which is that Nash equilibria are self-enforcing. If you can alter the game so that the Julius-Hofstadter equilibrium is a Nash equilibrium, that's great; and processes like reciprocity can often make this happen. But Julius-Hofstadter equilibrium has a very major advantage over Nash equilibrium: It is always morally right. That seems fairly important.

November 29, 2012

  • The Dragonsoul Saga: A Balance Broken

JDN 2456261 EDT 17:59.

     

    I bought this book as a souvenir from Gen Con, along with some art prints and a solar panel that only produces about 10% as much power as it would need to be the primary source for my phone. It's published by a tiny independent press called Imagined Interprises, and no, "interprise" is not a word in any dictionary I could find.

    The book was... pretty good, I guess. It has nothing really exceptional about it. It's obviously heavily influenced by Tolkien (but who isn't?), and its relatively sympathetic depiction of the orcs is in a similar vein to Warcraft III. It follows most of the standard fantasy tropes, and does so well but not exceptionally so. The young hero is but an innkeeper when his mysterious power is discovered, leading to a grand adventure; the rogue with a heart of gold redeems herself by becoming a healer; great battles are fought between men, elves, dwarves, and orcs.

    One thing I found disappointing really has more to do with American society than the book itself: The book pans away from all of its sexual content to the point of having a kindergarten-like innocence toward sexuality; but then the violent content, oh boy, the blood and guts are everywhere and the battlefields are filled with the sour stench of spilled entrails. Men are cleaved in two and axes are covered with blood. But when contemplating his love interest, the hero never so much as gets an erection, because that, obviously, would be naughty.

    A more direct concern with the book in particular is that it doesn't really... go anywhere. Things happen, the stage is set for larger events... but nothing is really consummated by the end of the first book. I understand that it's meant to be the start of a trilogy, so we wouldn't expect all the loose ends to be tied up. But as it is, hardly any loose ends are tied up, and we go through 300 pages and then feel like... "so, uh, when does the story start?"

    The closest thing to a climax is a battle near the end, in which humans, dwarves and elves in a great fortress defend against overwhelming numbers of orcs. (Sound familiar?) And certainly things do happen, progress is made. The world-building is very rich, the character development is fairly compelling. But the plot? It feels like the first act instead of a story within itself.

    I haven't decided whether I want to read the other two books. On the one hand, I do kinda want to see where it goes, how it all comes together. On the other hand, I'm not sure I want to invest another 600 pages in this thing, and I don't want to reward the practice of writing a trilogy as though it's a single very long book. Each book should have a story of its own, which ultimately all ties together; no book should be simply introductory material for another book.

    For the Terlaron series, I've gone radically the other direction; it can be argued that different books in the series aren't even the same genre, aside from the basic framework of SF. First Contact is a romance, Deterrent is more like an alien invasion flick (probably not the way you think, and there's another genre it fits pretty well that I'm not mentioning to avoid spoilers). On to Infinity is planned to be a coming-of-age story with a very scientific bent. One Good Reason is planned to be a war drama. There are two more planned novels I haven't even titled yet, but one is a spy thriller and the other is part historical drama and part fantasy. There will be more, including at least one military space opera. Yes, they are all set in the same world. Yes, they all tie together in various ways. Almost all of them have some element of romance, and some kind of quest; several are about prejudice in various forms. But they are very much independent stories about independent individuals.

    That, I'll admit, is going a bit far. It's maybe a little crazy, to be honest; it's the sort of thing I thought no one else had ever done before, until I saw Cloud Atlas and said, hey, maybe I'm crazy like a fox after all. (The stuff about reincarnation was a bit much, but otherwise it's very much the kind of story I'm trying to tell with Terlaron.)

    You don't have to go quite so far; it's certainly possible to have a series of novels that are about the same characters as they grow and go through smaller quests on the way to a much larger objective--like Harry Potter. But The Dragonsoul Saga: A Balance Broken goes too far in the other direction. It's not really a story, just the first act of a story. Along with the positive influence from Tolkien, it also shares some of Tolkien's weaknesses, like his overwrought detail in world-building and disorganized plot structure. (By the end of Return of the King, I just wanted it all to stop. The Peter Jackson movie trilogy ended much better actually.)

    Is it worth reading? I suppose so. But probably only if you plan on reading the whole set.

     

November 28, 2012

  • An alternative to Nash equilibrium

JDN 2456260 EDT 11:16.

     

    There is a problem with neoclassical game theory. Those who teach and use it would be the last to admit it, but still, the problem is there.

    As I discussed a few days ago, the problem is that the "rationality" of neoclassical game theory can lead to results which are bad for everyone. In the extreme yet all-too-realistic example I mentioned, it could literally destroy human civilization.

    It also leads to the paradoxical result that a group of "irrational" individuals can together fare much better than a group of "rational" individuals playing the same game. It can be advantageous to be "irrational." "Rational." You keep using that word... I do not think it means what you think it means.

    Douglas Hofstadter proposed a solution, one which I would now like to extend using the same framework as Nash equilibrium. For that reason, I humbly propose it be called Julius-Hofstadter equilibrium.

    Hofstadter's proposal was called "superrationality": The idea is that we know (by the Aumann Agreement Theorem) that two perfectly-rational agents with common priors will agree on all their beliefs about the world. If their preferences are the same, this means that their behaviors will be the same. (I will extend to non-symmetric games in a moment.)

    Hence (this is the key insight), they must set their behaviors based on that knowledge. Payoffs that require two players to think differently simply aren't valid; no superrational agent would ever get such a result.

    Formally, this means we diagonalize the payoff matrix. All non-diagonal entries go to zero and get ignored. Then, we seek an equilibrium which is like a Nash equilibrium, but with one change. A Nash equilibrium asks whether each player has an incentive to change unilaterally. A Julius-Hofstadter equilibrium instead asks whether all players have an incentive to change multilaterally.
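
    Concretely, the diagonalization step looks like this (a minimal sketch, using the Prisoner's Dilemma below as the test case):

        # Superrational players never land on outcomes where they play
        # different strategies, so zero those entries out.
        def diagonalize(payoffs):
            return {(r, c): (v if r == c else (0, 0))
                    for (r, c), v in payoffs.items()}

        pd = {('C', 'C'): (4, 4), ('C', 'D'): (1, 5),
              ('D', 'C'): (5, 1), ('D', 'D'): (2, 2)}
        print(diagonalize(pd))
        # {('C','C'): (4, 4), ('C','D'): (0, 0), ('D','C'): (0, 0), ('D','D'): (2, 2)}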

     

    For example, consider this standard Prisoner's Dilemma:

     

     

             C       D
    C     4, 4    1, 5
    D     5, 1    2, 2

     

    The neoclassical solution is to say that because defection is a dominant strategy for both players, they will both select it and stay there at Nash equilibrium. The result is that everyone gets a payoff of 2, when mutual cooperation would yield a payoff of 4 for everyone.

    But when we diagonalize the matrix first, this is what we get instead:

     

     

             C       D
    C     4, 4    0, 0
    D     0, 0    2, 2

     

    Now this is a pure coordination game (indeed, every diagonalized symmetric game is a pure coordination game), and there are now two Nash equilibria, one for mutual cooperation and one for mutual defection.

    In fact, there is only one Julius-Hofstadter equilibrium, which is mutual cooperation. Why? Because if each player knows that the other player thinks as they do, then when they switch to cooperation, they know the other player will as well. This is a strict improvement, raising their payoff from 2 to 4.

    There are games that have multiple Julius-Hofstadter equilibria, however. Any Nash equilibrium of a diagonalized game is a Julius-Hofstadter equilibrium as long as there is no other Nash equilibrium that has a higher payoff. All the Julius-Hofstadter equilibria of a game have equal payoffs.

    For example, this coordination game has four equal Julius-Hofstadter equilibria:

     

     

             A       B       C       D
    A     1, 1    0, 0    0, 0    0, 0
    B     0, 0    1, 1    0, 0    0, 0
    C     0, 0    0, 0    1, 1    0, 0
    D     0, 0    0, 0    0, 0    1, 1
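
    For symmetric games, then, the whole procedure collapses to inspecting the diagonal; a minimal sketch covering both examples above:

        def jh_equilibria(diagonal):
            # In a diagonalized symmetric game, a Julius-Hofstadter equilibrium
            # is any strategy whose mutual payoff is maximal.
            best = max(diagonal.values())
            return [s for s, u in diagonal.items() if u == best]

        print(jh_equilibria({'C': 4, 'D': 2}))                  # ['C']: cooperate
        print(jh_equilibria({'A': 1, 'B': 1, 'C': 1, 'D': 1}))  # all four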

     

    What do we do if the payoffs aren't symmetric? What then would a superrational agent do? Stay tuned for a later post.

November 20, 2012

  • Rationality and the game theory of nuclear war

    JDN 2456252 EDT 23:05.

     

    Be very wary of the way economists use the word "rational". Sometimes they intend a very broad sense, rational-1, in which any goal-oriented behavior is "rational"; this is usually what they use when trying to argue that people are rational (and in that sense, we generally are). Other times they use a somewhat stricter sense, rational-2, where "rational" means that we seek our goals in an optimal fashion; this is good for making simple models (though it's ludicrous as an actual statement about human behavior). And then, worst of all, there is rational-3, what they use when talking about game theory, which is a very narrow notion of "rational" that requires us to be selfish and short-sighted.

    Many economists seem to think that rational-3 follows from rational-2. In fact, I think they are inconsistent. An agent who was behaving in an optimally goal-oriented, rational-2 sort of way would not behave in a selfish and short-sighted rational-3 fashion.

    To use the most dramatic example possible (yet an all-too-realistic one), consider nuclear war.

    The two players are NATO and the USSR. Obviously, each would prefer the other to not exist, and could achieve that by means of ICBMs. The problem is, if NATO launches nukes, the USSR will also launch nukes, and human civilization (and possibly human life) will be wiped from the face of the Earth. Alternatively, they could use diplomacy, which might keep them alive but would also leave their mortal enemy alive as well.

    This is a normal-form game, with the following payoff matrix. The first letter names the player (N for NATO, U for the USSR) and the second the strategy (N for nuke, D for diplomacy); so for example NN/UD means that NATO uses nukes and the USSR uses diplomacy. The first payoff in each pair is NATO's, the second is the USSR's; so 2,-1000 means that NATO gets 2 units of utility (capitalism wins!) and the USSR loses 1000 (everyone in the Soviet Union has been killed).

     

     

                    UN           UD
    NN    -1000, -1000     2, -1000
    ND        -1000, 2         1, 1

     

    To make the game a bit easier to analyze, we could add a "better dead than Red" provision, on which NATO's payoff for ending the world is slightly higher than their payoff for being vaporized and allowing the USSR to survive (and conversely).

     

     

                  UN           UD
    NN    -999, -999     2, -1000
    ND     -1000, 2          1, 1

     

    It doesn't really matter either way. In both cases, there is a single unique Nash equilibrium; though in the second game this equilibrium is also reached by iterated elimination of strictly-dominated strategies. The second game is isomorphic to a Prisoner's Dilemma, while the first game is a slightly modified form with essentially the same result.
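
    A quick check of the dominance claim for the second game (strategy labels as in the tables above):

        # Strict dominance in the "better dead than Red" version.
        # Rows: NATO plays NN or ND; columns: the USSR plays UN or UD.
        payoffs = {('NN', 'UN'): (-999, -999), ('NN', 'UD'): (2, -1000),
                   ('ND', 'UN'): (-1000, 2),   ('ND', 'UD'): (1, 1)}

        def strictly_dominates(a, b, player):
            # Does strategy a strictly dominate b for player 0 (NATO) or 1 (USSR)?
            others = ('UN', 'UD') if player == 0 else ('NN', 'ND')
            def u(own, other):
                key = (own, other) if player == 0 else (other, own)
                return payoffs[key][player]
            return all(u(a, o) > u(b, o) for o in others)

        print(strictly_dominates('NN', 'ND', 0))  # True: nuking dominates for NATO
        print(strictly_dominates('UN', 'UD', 1))  # True: and for the USSR
        # Eliminating the dominated strategies leaves only NN/UN: apocalypse.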

    What is the equilibrium? Nuclear apocalypse. Absolutely no doubt about that; the only state from which neither player's position can be improved by their own action is the state in which both superpowers unleash their nuclear arsenals and send humanity back into the Stone Age.

    If this were "rational" (rational-3), then being "rational" would have killed billions of people. We are all better off by choosing the diplomacy option, ND/UD, which indeed we did, thank goodness.

    And speaking of "thank goodness": A lot of game theorists will try to argue that rational-3 doesn't imply selfishness, and you can still get results like this even when the players are altruists. This is because they do not understand altruism.

    Apparently, they think that an altruistic person is someone who happens to have preferences that are other-regarding, e.g. they love their spouse and their children. Maybe that's altruistic in a certain sense, but it's not the really morally important sense. Indeed, the evolutionary selfishness involved should be obvious.

    No, a truly altruistic person--or perhaps I should say a truly moral person, to be clearer--is someone who analyzes decisions according to all persons affected. It is someone who is impartial, someone for whom idiosyncratic preferences do not hold sway. In effect, a moral person plays as both players at once--for that is the essence of empathy.

    That is, the truly moral decision-maker in a Prisoner's Dilemma must always select a Pareto-efficient alternative (meaning that no one can be made better off without making someone worse off; in the above example, all the choices are Pareto-efficient except NN/UN). To choose anything else would be fundamentally immoral. In fact they would probably have other criteria as well (like the Rawls criterion, or utility-maximization, or something like that; after all, NN/UD is Pareto-efficient and good for America but it still seems unconscionable), but Pareto-efficiency is a basic minimum standard from which no plausible morality could deviate. Truly moral players would eliminate the inefficient alternative from consideration. They would not choose NN/UN, and in deciding between the other three would quickly discern that ND/UD is the fairest, most stable, and best overall choice.

    Now, one could raise objections: Humans aren't really perfectly moral, and how do you know how moral other people are in a real-world decision, and what about the temptation to cheat, and so on. These are fair points, and they are indeed the proper direction for a science of human decision-making. But we do know this: Most of the time, most people cooperate; and it's a damn good thing they do, for if they didn't, most of us would be dead by now.

    We are rational-1, and we must strive to be rational-2. But rational-3 is not rationality; it is psychopathy.