February 12, 2013
-
Fear not your own axioms.
JDN 2456336 EDT 13:14.
I got into a very strange Facebook argument yesterday (you might call it a SIWOTI: Someone Is Wrong On The Internet), with someone I actually agree with about 95%. We both believe in reason and science, and we largely agree on moral and political issues. Yet this argument somehow became fierce.
The question was this: Is it possible to make moral arguments without axioms or assumptions? I insisted that it was not, but that this isn’t really a problem, because some axioms are much safer bets than others. But he would have none of this; he insisted that axioms in general are to be avoided at all costs, and he said he had a theory of morality which required no axioms at all.
Naturally, I thought this unlikely; what would a theory look like, with no assumptions? How could you even get it off the ground? Maybe you could do mathematics that way… but then, even mathematics depends upon some assumptions (like the Axiom of Empty Set and the Axiom of Equality), doesn’t it? Are those things we can simply define to be true, without tying them to the real world in any way? If so, how does mathematics have practical applications?
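(For the curious, here is roughly what those two axioms look like written out formally. I’m reading the “Axiom of Equality” as ZF’s Axiom of Extensionality, the axiom that says when two sets count as equal; that gloss is mine.)

```latex
% Axiom of Empty Set: there exists a set with no elements.
\exists x \, \forall y \, (y \notin x)

% Axiom of Extensionality: sets with exactly the same members are equal.
\forall x \, \forall y \, \bigl( \forall z \, (z \in x \leftrightarrow z \in y) \rightarrow x = y \bigr)
```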
It took me over an hour to even get him to state his theory. He was whining the whole time about how I wouldn’t give it a fair hearing. I actually placed a bet, appointing a mutual friend as arbitrator: if the theory worked, or if I treated it unfairly, he would receive a 10% stake in all future proceeds from The Science of Morality (or $20 cash today if he preferred, though he’d be silly to take it; my book option is surely worth hundreds, if not thousands, unless he’s just that risk-averse). He still kept whining about how he didn’t think I’d treat his ideas fairly, but finally I coaxed him into revealing his brilliant theory.
What was it? Why, it was just the standard evolutionary account of moral emotions, which I’m already thoroughly familiar with and use rather extensively in The Science of Morality. It’s basically Frans de Waal and E.O. Wilson stuff, with perhaps a bit of Richard Joyce. My interlocutor’s claim was that this account involves no assumptions at all, which is frankly so ludicrous that I don’t even know how to interface with it.
Off the top of my head, I pointed out half a dozen assumptions it depends upon. Some were the fundamental axioms of science itself (like “we are not systematically deceived, e.g. by Cartesian demons” and “mathematics is consistent”), which I can see why he wouldn’t count, though he really should. Worse, in the form he stated it, his theory depended upon a couple of assumptions that most people, including myself, reject (like “the behaviors that maximize genetic fitness are the same as the behaviors that maximize moral goodness”). The theory was rife with assumptions, mostly reasonable but a few quite dubious. When I pointed this out, he did the argumentative equivalent of a rage-quit, disappearing from the discussion; but not before he had accused me of being a scientific anti-realist and a postmodernist, simply because I pointed out that science depends on certain normative principles.
But the fact is, science does depend on certain normative principles. That is not a reason to doubt science; it is a reason to trust those normative principles. Honesty, openness, autonomy, beneficence: they teach these ethics in science classes for a reason, because the practice and principles of science depend upon them.
I think my friend’s behavior is symptomatic of a larger problem, which is that rationalists have largely conceded the moral domain to religion. Even basic assumptions of morality that no one sane would disagree with, like “Suffering is bad” and “The Holocaust was wrong”, are treated as special assumptions that can’t be trusted. Moral arguments depend on such assumptions, so (the thinking goes) moral arguments can’t be trusted either. Meanwhile, we dare not admit that science depends upon similar axioms, lest science become subject to the same failures. My friend even said, “You’re one of those people who thinks that science is a house of cards.” (My reply was ignored: “No, I’m one of those people who doesn’t think that depending on assumptions makes you a house of cards.”)
Yes, science depends on assumptions. Factual assumptions, like “the world is not an illusion”, and also normative assumptions, like “we should be rational”. And yes, in principle, one could doubt these assumptions. But in fact, I don’t know anyone who does. So it’s stupid to worry about hypothetical doubters who don’t actually exist and wouldn’t survive long if they did.
It’s really just Aristotle’s vegetable-man and Lewis Carroll’s tortoise again: “What if I don’t believe in logic?” Well, then, uh… you’re not going to get much done, now are you? And it’s clearly not worth talking to you, for much the same reason it’s not worth talking to a rock.
“What if I don’t think suffering is bad?” Then perhaps I should make you suffer until you do? (This was Avicenna’s medieval solution.)
These are not good arguments. Yes, they are logically valid; but there’s more to life than logical validity. In fact, many of the best arguments aren’t logically valid in the strict sense; they are just highly, highly probable. For example, I cannot prove that the Earth is round; but you can, you know, go up and look at it if you have a big enough rocket (or download photographs from people who have done so before you). I cannot prove that all life on Earth shares a common ancestor, but the probability of this not being true has been estimated at about 1 in 10^2860; yes, that’s a decimal point, followed by 2859 zeroes, followed by a 1. I think it’s a pretty safe bet.
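(If you don’t trust my zero-counting, here’s a quick sanity check; the 10^2860 exponent is the published estimate, the rest is just Python’s decimal module:)

```python
# Sanity-check the decimal expansion of 10**-2860:
# it should be a decimal point, 2859 zeroes, then a 1.
from decimal import Decimal

p = Decimal(10) ** -2860                # the 1-in-10^2860 figure
frac = format(p, 'f').split('.')[1]     # digits after the decimal point
assert frac == '0' * 2859 + '1'         # 2859 zeroes followed by a 1
print(len(frac))                        # 2860 digits in all
```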
As I tried to explain to my friend, we have better things to do than worry about such probabilities. The chance that I am a brain in a vat, or that the world is a simulation, or that Cartesian demons deceive us all, may not be zero–but it might as well be, for all the action potentials it’s worth expending on them (too many have been expended already).
Morality is not an abstract exercise for philosophers. It is an urgent engineering problem, upon which literally the fate of humanity rests. We stand upon a precipice, our species more fragile than it has been since the population bottleneck 70,000 years ago. With the nuclear weapons that already exist, or the nanotechnology, biological weapons, or artificial intelligence that soon could, our whole species could be eliminated in a matter of hours. With global warming or antibiotic-resistant bacteria, we could die out more slowly, but no less permanently. That’s not to mention asteroid impacts and supervolcanoes, which have always been a threat (and may have caused that very bottleneck). For the first time, we might actually have the chance to defend ourselves against such things (perhaps those nuclear weapons will be our salvation, if they deflect an asteroid trajectory or stabilize a magma chamber); but only if we make a dedicated effort to do the necessary research and make the necessary preparations. Even if we avoid such catastrophes, we already know that millions of people are dying from preventable poverty and illness, and millions more will die from global warming.
I don’t mean to frighten you–well, actually I suppose I do. I mean to frighten you with the real dangers of the world, not the silly paranoia that prevails in our 24-hour news cycle. I mean to impress upon you the real urgency of moral science, the billions of lives that hang in the balance if we do not solve these problems quickly enough. We can’t afford to waste time arguing about whether suffering is really bad or genocide is really wrong or maybe poverty is a good thing. There are plenty of things we don’t know, so let’s stop wasting precious time on things we already do.
And in the process, we must fear not our own axioms. When someone says, “How do you know suffering is bad?” or “What if the Holocaust wasn’t a bad thing?” we shouldn’t take them seriously or try to engage with what is essentially a troll; we should say, “Seriously? That’s your argument? You’re going to waste our time on that?”
This is how we respond in science when someone asks an equally pointless question, like “How do you know evidence is the way to find truth?” or “What if mathematics is inconsistent?” It is time we respond the same way in morality.
And no, this does not mean that you can assume whatever you want and everything is up for grabs. When someone actually makes an assumption that is legitimately in question, something like “maximizing genetic fitness is the same as maximizing moral goodness”, we have a right, and indeed a duty, to question that assumption. Often arguments come with hidden assumptions that aren’t stated, which can make them all the more insidious. For decades, economists used models that assumed perfect information, which is frankly the most ridiculous thing I’ve ever heard (it is tantamount to assuming that everyone on Earth is omniscient!); but it was hidden in the mathematics, so people thought they were proving much more than they were. In fact, Myerson and Satterthwaite showed that if you remove this ridiculous assumption, you find that not only are markets not always perfectly efficient; in fact, they never are. It’s a really mind-blowing theorem; you’d think there’d be some way around the asymmetric information, but there isn’t. It’s possible to do better by some methods than others (obviously we did better in, say, 1992 than in, say, 1929), but there is absolutely no way to guarantee efficiency. The Invisible Hand isn’t just invisible; it’s imaginary.
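To make that concrete, here is a toy simulation (my own illustrative setup, not anything from their paper directly): one buyer and one seller, each with a privately known value or cost drawn uniformly at random. In the classic linear equilibrium of the split-the-difference double auction, trade only happens when the buyer’s value exceeds the seller’s cost by at least 1/4, so some mutually beneficial trades are simply lost:

```python
# Toy Monte Carlo illustrating Myerson-Satterthwaite-style inefficiency.
# Illustrative assumptions: buyer value v and seller cost c drawn i.i.d.
# uniform on [0,1], each privately known. First-best: trade whenever v > c.
import random

N = 1_000_000
first_best = 0.0   # surplus if every mutually beneficial trade happened
realized = 0.0     # surplus under the equilibrium trading rule

for _ in range(N):
    v = random.random()   # buyer's private value
    c = random.random()   # seller's private cost
    if v > c:
        first_best += v - c
    if v > c + 0.25:      # equilibrium: trade only with a big enough gap
        realized += v - c

print(f"share of potential surplus realized: {realized / first_best:.3f}")
# This converges to 27/32 = 0.84375 -- and the theorem's point is that no
# honest, voluntary, budget-balanced mechanism can push it all the way to 1.
```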
So yes, if you actually have a serious objection to one of my moral assumptions, by all means make it. But it should be a sincere and serious objection, not this sort of sophistic “But how do you know for sure?” that does nothing but stall the debate. If you actually think that the world is a simulation, well, you probably need clozapine. But if you’re doubtful that science is a good source of moral truth, now that’s something we could talk about.
It’s not true that there are no stupid questions. There are no stupid sincere questions, and that makes all the difference.