Monday, May 26, 2014

Inside the Utility Closet

Scott Alexander, who writes at Slate Star Codex, is a pretty swell guy as far as godless rationalists go, and he makes many careful and compelling arguments.  However, he fails to make a good case for one of his fundamental beliefs, namely, utilitarianism.  He advances his argument in The Consequentialism FAQ, a question-and-answer-style discourse on the primacy of consequences in proper moral reasoning.

Alexander begins by arguing that any moral system must be grounded in moral intuitions.  He writes, "Moral intuitions are important because unless you are a very specific type of philosopher they are the only reason you believe morality exists at all."  When faced with competing intuitions, Alexander says that "We must reach a reflective equilibrium among our various moral intuitions, which may end up assigning some intuitions more or less weight than others, and debunking some of them entirely."  His justification for this equilibrium: "It's my moral intuition that we should. Isn't it yours?"

Alexander contrasts his intuition-based system with ethical philosophies grounded in metaphysics, which present absolute ethical principles that exist independent of intuitions.  He attempts a refutation of metaphysical ethics with a hypothetical story that is too long to reproduce in full, so I shall summarize it.

If a man acquired a mystical artifact that exempted him from any sort of transcendent metaphysical morality, he would still feel guilty about doing evil.  We are stuck with intuitions about morality regardless of metaphysical exemptions, so morality seems exactly the same with or without metaphysics.  Therefore, metaphysics should have nothing to do with morality.

Alexander really should have run this scenario by someone else before he decided to publish it as his entire argument against metaphysical ethics.  The problem with the scenario is that it is quite literally nonsense.  Metaphysical reality cannot be switched off for an individual.  Every major philosophy that I can think of that posits some essential transcendent morality also binds that morality to the metaphysical reality of man.  A person exempt from morality would thus also be exempt from existence.  A man not bound to metaphysical morality is just as absurd as a square triangle, and an argument against metaphysical morality based on such a man is as faulty as an argument against the omnipotence of God based on His inability to create such a triangle.

Alexander fails to defeat metaphysically grounded ethics, but he soldiers on and attempts to construct an ethical system based on intuition.  The following questions and answers are quoted from the FAQ:
Why should we assign a nonzero value to other people?
I was kind of hoping this would be one of those basic moral intuitions that you'd already have. That to some degree, no matter how small, it matters whether other people live or die, are happy or sad, flourish or languish in misery.
Well, I suppose sociopaths are out of luck here, but that extreme exception aside, we are still left with the question of how exactly to apply our moral intuition that people have value.  Alexander attempts to address this.
Why might morality fail to assign value to other people?
Morality might fail to refer to other people if it only refers to itself, or if it refers to selfish motives like avoiding guilt, procuring "warm fuzzies", or signaling [showing off].
What do you mean by a desire to avoid guilt?
Suppose an evil king decides to do a twisted moral experiment on you. He tells you to kick a small child really hard, right in the face. If you do, he will end the experiment with no further damage. If you refuse, he will kick the child himself, and then execute that child plus a hundred innocent people.
The best solution is to somehow overthrow the king or escape the experiment. Assuming you can't, what do you do?
There are certain moral philosophers who would tell you to refuse. Sure, the child would get hurt and lots of innocent people would die, but it wouldn't, technically, be your fault. But if you kicked the child, well, that would be your fault, and then you'd have to feel bad about it.
But this excessive concern about whether something is your fault or not is a form of selfishness. If you sided with those philosophers, it wouldn't be out of a concern for the child's welfare - the child's getting kicked anyway, not to mention executed - it would be out of concern with whether you might feel bad about it later. The desire involved is the desire to avoid guilt, not the desire to help others.
I find it hard to believe that Alexander really thinks anyone would feel more guilt about kicking the child than they would about the death of all those people.  If anything, exclusive concern about guilt would lead people to kick the child.  We may reasonably say that the death of the innocents is not our fault, but human emotions do not work strictly according to reason, much as rationalists such as Alexander might wish they did.  Those who would consider themselves responsible for kicking the child but not for the deaths would almost certainly feel guilty about the deaths anyway.  It is metaphysical morality that Alexander must truly grapple with, but he sets that aside in favor of a hypothetical person with a bizarre kind of scrupulosity.
What do you mean by "warm fuzzies"?
This term refers to the happy feeling your brain gives you when you've done the right thing. Think the diametric opposite of guilt.
But just as guilt is not a perfect signal, neither are warm fuzzies. As Eliezer puts it, you might well get more warm fuzzy feelings from volunteering for an afternoon at the local Shelter For Cute Kittens With Rare Diseases than you would from developing a new anti-malarial drug, but that doesn't mean that playing with kittens is more important than curing malaria.
If all you're trying to do is get warm fuzzy feelings, then once again you're assigning value only to your own comfort and not to other people at all.
Here Alexander makes what is perhaps the best argument against his own position.  What exactly is the distinction between the supposedly selfish "warm fuzzies" and the supposedly selfless moral intuitions?  To his credit, Alexander recognizes this problem.
Are you sure it's ever possible to value other people? Maybe even when you think you are, you're valuing the happy feelings you get when you help other people, which is still sorta selfish if you think about it.

Even if that theory is correct, there's a big difference between promoting your own happiness by promoting the happiness of others, and promoting your own happiness instead of promoting the happiness of others.

People who use a guilt-reduction or signaling-based moral system will end up making harmful decisions: they will make choices that hurt other people in order to benefit themselves. People who try their best to help other people for fundamentally selfish reasons still help other people as much as possible, and this seems to deserve the label "altruistic" and the praise that goes with it as much as anything does.
Alexander may see the problem, but he fails to present a coherent solution, and instead is satisfied with shifting the goalposts.  He claims that there is a big difference between promoting one's own happiness and promoting the happiness of others in order to make oneself happy, but offers no evidence for this.  What exactly is the difference?  Sure, it makes a difference to someone else whether I return the wallet he dropped or take it for myself, but a morality based entirely on intuitions has nothing to do with my potential beneficiary or victim and everything to do with me.  If I intuit my own need for some blackjack and hookers, and that intuition is stronger than my intuitions about the immorality of theft, then I should take that sucker's wallet.

The great moral philosopher Bender Bending Rodríguez

Alexander's FAQ concludes with a case for utilitarianism, which he considers to be the best way to achieve equilibrium of intuitions.  He writes,
Morality should be about improving the world. There are many definitions for "improving the world", but one which doesn't seem to have too many unpleasant implications is satisfying people's preferences. This leads to utilitarianism, the moral system of trying to satisfy as many people's preferences as possible.
He goes on to explore the different strains of utilitarian thought and their practical application, but his basic definition shall suffice for our purposes. 

Considering Alexander's argument as a whole, one must grant that it is logically valid.  His conclusion that utilitarianism is the ideal moral system follows from his premises.  However, the argument is unsound.  The most generous thing that can be said for it is that several of its key premises might be true, but Alexander offers no significant support for them.  Most egregious are his failures to properly address metaphysically grounded morality and to differentiate between "warm fuzzies" and moral intuitions.  Without these premises, Alexander's thesis fails.

This post is intended not as an attack on Scott Alexander but as a critique of utilitarianism.  Alexander is not presenting his own home-brewed case for utilitarianism, but rather the standard argument for that moral system, and it is because he presents it so accurately and concisely that I picked his argument to criticize.  Every argument for utilitarianism of which I am aware runs into the same problems that Alexander's does, and likewise fails to overcome them.  As for the rest of us, we should work to discern real morality, a morality that treats people as ends in themselves rather than as bundles of preferences.

1 comment:

  1. Good critique. It shows many of utilitarianism's flaws. Let me build on one point.
    Despite Alexander's claims to the contrary, utilitarianism is one of the most selfish philosophies out there. It can be used to justify abortion, embryonic stem cell research, and euthanasia. Much of Margaret Sanger's eugenics philosophy is based on it. Her definition of "human weeds" amounts to saying: let those she defines as such, many of whom live in areas where malaria is common, die of malaria, and get the warm fuzzies from helping cute kittens instead. Some years ago I saw a video of someone who opposed using DDT to kill the mosquitoes in the part of Africa where she, a white of Sanger's "superior" type, lived, because it might affect some birds. Never mind that the original problem has been shown to be an overuse of DDT, not a proper use. She gets her birds while the black African children, Sanger's "human weeds," die.
