Difficult Questions

(follow-up to Utils for Everyone)

Classical utilitarianism has a problem.

I agree that value (experiential/use value, not exchange value) is reducible to happiness and absence of suffering, but I wouldn’t say that this implies happiness should be maximized in the universe as some abstract quantity.  The strongest objection I’ve encountered to utilitarianism is that, on a practical level, happiness and suffering just aren’t things that can be quantified.  For one, there’s the fundamentally inescapable problem that each of us is imprisoned in our own consciousness, so we can’t know what the subjective experience of any other person is.  Even if I can be reasonably certain that other people suffer in response to mostly the same stimuli that I do, I can never know how much they suffer.  It’s entirely possible that when someone gets a paper cut, they react with the same degree of minor distress that I exhibit while feeling suffering equivalent to my own experience of, say, gallstones.  In this hypothetical, it’s not as if the person is suppressing a more drastic response in order to project the image of not being “weak” – rather, for them, the more intense feeling is paired with an externally observable pain response that is relatively mild.

And I could never truly know this was the case, if every experience of suffering for this hypothetical person were scaled up or down in this manner.  They would never think to tell me, “This paper cut feels like a gallstone,” because (assuming they had experience with a gallstone) their concept of “feeling like a gallstone” would be associated with even worse pain.

That’s a disturbing notion, although it doesn’t seem unique to utilitarianism, since you could replace “happiness” with “the experience of freedom,” etc. for other philosophies – setting aside the fact that even non-utilitarian philosophies still place some value on happiness.  For all practical purposes, there’s no way I could possibly know whether these differences in subjectivity exist, and I suppose the reasons I have for believing that consciousness has chemical roots (however bizarre those roots are) should grant me some comfort.  That is, it’s clear that neurochemistry could give rise to different pain thresholds (deviations from the normal correlation between stimulus and reaction), but why would neurochemistry produce different experiences coincident with the same reaction?

(Speculative side-notes that aren’t as relevant to utilitarianism but still interesting to me:  It’s possible for any one of us to suppose that when every other human and animal acts in a manner that mirrors our own pain responses, they are analogous to a robot that “feels pain” in scare quotes, in the sense that it reacts to noxious stimuli but doesn’t actually experience anything unpleasant.  Of course, it would certainly be very weird if only you, or I, were actually conscious and other humans with highly analogous physiology lacked the same subjective consequences of a nervous system.  But then the question becomes, at what point does the analogy break down?  Just how complex does a nervous system need to be to generate consciousness rather than mere harm-avoidant behavior?  Although we should rightly be skeptical that the robots in the article above are experiencing anything, do we have any reason to believe subjectivity can only be generated by patterns of organic matter?  What makes carbon so special, in that respect?)

Moreover, perhaps even more disturbing is the fact that each of us can hardly quantify or accurately compare our own degrees of happiness and suffering.  Given the choice between paper cuts and gallstone stomachaches, I couldn’t begin to tell you what ratio of one to the other would make them equally bad, much less how distributing those experiences across a lifetime would affect that ratio.  Put in more practical terms, I can’t at all say how large a salary I would need to be paid to accept a career doing work that is far more boring or “useless” than the careers in science and education I’m currently considering.  I can’t say how many parties in college would justify the loss of 0.5 points in my GPA, and all the social and professional capital those points might entail.  I can’t say, regarding the decision several years down the road of whether to be a parent, how many smiles on a possible future child’s face I’d need to see to outweigh the unpleasantness of diaper changing, being woken up at 3 A.M. by cries, the fear of that child endangering their fragile body or psyche, etc.  (And that’s just considering my own experiences, never mind the ethical conundrums involved in weighing the prospects of the child’s experiences throughout their entire life, or the mother’s pain of pregnancy and childbirth.)

The good news is that we can make some estimates.  I’m reasonably certain my future self would not thank me if I burned my hand on the stove, and that the world would be a shittier place if I messaged someone on Facebook just to tell them they’re a worthless waste of space who will never amount to anything.  But there are larger, more ambiguous cases that don’t have such easy answers, and it’s worth asking whether they have any answers that can be evaluated objectively given the starting assumption that we want people to be happy more and to suffer less.

If we can’t quantify happiness/value in the sense that we quantify GDP or calories of food available on the planet, is it game over?  And if it is, does that mean we should scrap utilitarianism in favor of the categorical imperative, Aristotle’s virtues, social contract theory, relativism, divine command theory, natural rights, or something else?

My inclination is that the answer to both of those questions is no.  Many philosophers of science of late (Kuhn returns!), for example, hold that science can’t discover the capital-T Truth with absolute objectivity, and that our scientific paradigms are inevitably clouded by status quo bias and limited by the extent of our sense perception.  While this should give us reason to be humble about scientific claims, does it mean we ought to throw science out the window entirely?  Of course not.  Approximations are imperfect, but their practicality is indisputable.

Even if this quantifiability problem were the death knell of utilitarianism, I see no reason to think this would vindicate Kant and company.  It is no less paralyzing for the deontologist that the rules and duties of their morality may conflict and need to be weighed against each other, or that the full list of these rules and duties may be elusive to human reason.  The alternative moral theories need to stand on their own merits if they are to replace utilitarianism, and if not, we’re left with moral nihilism (which, depending on your interpretation, isn’t so terribly disheartening as long as you acknowledge that people still have strong desires that you may respect, “morality” be damned).

The moral calculus in its perfect form is impossible.  I don’t dispute that.  Nonetheless, I don’t see throwing up our hands in despair as an option.  It is in our best interest as conscious persons to estimate this calculus when possible – anything less would be a counterproductive shot in the dark, as far as I can tell.  What good, after all, does satisfying a categorical imperative do?

But even if we assume for the sake of argument that value can be quantified to some practical degree, what exactly do I mean by “to be happy more and to suffer less”?

Do we strive for the most total happiness (subtracting suffering) in the universe, regardless of its distribution among distinct sentient beings?  I think this approach is mistaken, simply as a moral premise.  Maximizing happiness in this sense just isn’t something I care about, nor do I think it’s something that most people truly care about either.  I stated earlier that the fundamental principle of everything I value is the betterment of the experience of all sentient beings, as much as possible.  A consequence of this is that I think there’s a qualitative difference between, to use the crude language of “utils” for units of happiness, a world of 100 people in which each person lives a life whose measure of happiness is equivalent to 100 utils, and an alternative in which 20 people each have 400 utils while the other 80 each have 25 utils.  This, despite the equality of total happiness between them.  Sorry for the math.
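For anyone who wants to check my arithmetic, here’s a quick sketch (in Python, purely for convenience – the numbers are just the ones from the paragraph above):

```python
# Two hypothetical worlds, each with 100 people.
equal_world = [100] * 100                # everyone at 100 utils
unequal_world = [400] * 20 + [25] * 80   # 20 people at 400 utils, 80 at 25

# Total happiness is identical: 10,000 utils in both worlds.
assert sum(equal_world) == sum(unequal_world) == 10_000
```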

I’m not certain whether one is preferable to the other.  I can say that the perspective of the Veil of Ignorance is important here: the question of which world is preferable becomes, at least in part, a question of which world I’d prefer if I didn’t know which person I’d be in it (though even this neglects consideration of the other people, of course).  That’s a question whose answer depends on the subject’s tolerance for risk, whether for themselves or for others.
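To make that risk point concrete, here’s a rough sketch (my own framing, not a formal statement of the Veil – it just treats “which person you’d be” as a uniform lottery over the 100 positions):

```python
from statistics import mean, pstdev

equal_world = [100] * 100
unequal_world = [400] * 20 + [25] * 80

# Behind the Veil, each of the 100 positions is equally likely, so your
# expected utils are the population mean: 100 in both worlds.
print(mean(equal_world), mean(unequal_world))      # 100 100

# The worlds differ only in spread: zero deviation in the equal world versus
# a standard deviation of 150 utils in the unequal one.
print(pstdev(equal_world), pstdev(unequal_world))  # 0.0 150.0
```

Since the expected value is the same either way, any preference between the two worlds has to come from your attitude toward that spread.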

For my part, I’m rather risk-averse, and depending on how exactly we envision this scale of happiness, I’d venture to say the equal world proposed above is far preferable to the alternative.  If this is a scale with diminishing returns, then the equal world is obviously better.  It’s analogous to Bill Gates giving $1,000 to a poor single parent – the same quantity has more value in the latter’s hands than the former’s.
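Here’s what I mean in sketch form, assuming (purely for illustration) that “diminishing returns” can be modeled by scoring each person’s utils through a concave function like a square root:

```python
from math import sqrt

equal_world = [100] * 100
unequal_world = [400] * 20 + [25] * 80

def world_value(world):
    # With diminishing returns, each additional util is worth less than the
    # last, so each person's happiness is run through a concave curve
    # before summing.
    return sum(sqrt(utils) for utils in world)

print(world_value(equal_world))    # 100 * sqrt(100) = 1000.0
print(world_value(unequal_world))  # 20 * sqrt(400) + 80 * sqrt(25) = 800.0
```

The particular curve doesn’t matter – any concave function hands the equal world the win, which is the same logic as the Gates example: the same quantity is worth more in the hands of someone who has less.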

But perhaps this is too easy.  When it comes to money, there’s almost certainly an upper limit to the use-value of a massive amount.  Gates probably literally has more money than he could spend on anything he wanted.  We would have to think in terms of something whose intrinsic use-value is practically limitless.  Does such a thing even exist?  What if there existed a person who would always gain far more happiness from a given resource than anyone else would?  Could a world with less total happiness than another world still be better, based on a certain distribution?  What do we do when considering the distribution of happiness to potential people whom we could choose to bring into existence?

Those are the sorts of questions I want to address next time.
