October 4, 2009

  • Placing a value on a human life

    I’ve recently been watching Harvard’s online video course “Justice with Michael Sandel”; Episode 2 concerns cost-benefit analysis and, more generally, utilitarian calculus.

    The first example offered is Philip Morris arguing that the world is better off because of all the old people who die from cigarettes, and thus all the money saved on health care, housing, and so on. This is obviously absurd, and we are rightly appalled.

    The second example offered is more ambiguous; it is about Ford trying to decide what to do about a recognized design defect in one of their vehicles. They calculated that it would cost $100 million to recall the affected vehicles and repair the defect, while the 200 lives saved would be worth only $40 million (implicitly valuing each life at $40 million ÷ 200 = $200,000, a figure I will return to). This too seems cold, but I am forced to ask: what exactly was Ford to do? Recall vehicles for any conceivable safety defect at any conceivable cost? What if this meant spending $50 billion for a 50% chance of saving a single life? What if it meant spending the entire world GDP to eliminate a 0.1% chance of one person’s suffering? Under such a policy, how could any company stay in business? How could any government remain stable? How could even a single human being survive, given that any potential for gain also entails potential for loss?

    Clearly we must assign finite value to human lives. To assign infinite value is hardly different from assigning zero value: if human joy and suffering are valued at transfinite cardinals, then every alternative, each carrying some chance of affecting a life, bears the same infinite weight, and no action is any more or less justified than any other. I think the real problem is that we try to assign monetary values; we rightly sense that there is something very odd and arbitrary about saying “1 human life is worth $200,000.” After all, if we all agreed to count our dollar bills at ten times their current value, and multiplied all prices and expenses accordingly, nothing in our economy would change; yet under a fixed monetary valuation of humanity, each life would suddenly be “worth” ten times as much, though no one’s joy or suffering had changed in the least. Hence, there is something deeply wrong with a monetary valuation of humanity!

    Instead, I propose that we evaluate in terms of human lives and human feelings, and only in terms of human lives and human feelings. The value of “$200,000” is fundamentally meaningless; the value of “two innocent people will die” is part of the core of morality itself. When Ford did their cost-benefit analysis, they should have asked not how much money the recall would cost, but how much happiness: who would suffer who otherwise would not have, and who would die for lack of resources. If they had determined that the recall would force too many people into unemployment and poverty to justify the few who would be saved by repairing the defect, how could we fault them? (In fact this is unlikely; in truth they were rating their own stock options higher than the 200 lives the defect destroyed, and that is why their actions were immoral, not the fact that they placed a finite value on human suffering.)

    Am I a utilitarian? Perhaps, though perhaps not. I am certainly a consequentialist, but among the consequences I value are things like broken promises and violated autonomy. I agree that it is worse, morally worse, to act without consent or to break a contract—and I am not sure that a strict utilitarian calculus can take this into account. But given the complexity of human life and the necessity of making hard decisions, I do not see how we can value anything at infinity—at some point the lives saved must justify the promises broken, the suffering ended must outweigh the autonomy violated.

    At what point? I’m not sure; let’s work on that. But we get nowhere by pretending that these choices are impossible to make, for they are necessary to make, and without them we would all die.
