Error 404 – “Ought” Not Found

(follow-up to Difficult Questions)

I know I said I’d get into some particular moral thought experiments last time.  But it occurred to me that those thought experiments would be meaningless at best if I didn’t first consider this question:

Why should we care about any of this?

No really, what’s at stake in all of this ethical navel-gazing, if we already generally have an idea of how to be “good people”?

Well, I introduced this series of posts in the context of politics, broadly speaking.  You can see the implications of these fundamentals of ethics in the American libertarian movement, for example.  Many libertarians and so-called anarcho-capitalists believe that to the extent that there is a state, that state should defend and be limited by the Non-Aggression Principle.  Basically it’s the “my right to swing my fist ends where your nose begins” principle, applied not just to physical violence but also to property, hence libertarians’ staunch opposition to taxation and welfare.  Libertarians tend to reject utilitarianism to the extent that it has no room for strict rules of that sort.  The point of this particular post isn’t to defend or refute that perspective (although I personally find it unconvincing, and it often has consequences that libertarians themselves wouldn’t like).  I’m just giving an example of philosophy informing politics.

Lest you think it’s only libertarians who would care about this, at the heart of the Democratic Party’s defenses of welfare (not to say only Democrats defend this) lies a sort of Rawlsian belief that justice mandates redistribution, if an unequal distribution leaves the least well-off people in society worse off than they would be under equality (or at least something closer to equality).  This, too, is a “deontological” rule that admits of no exceptions in which the ends might justify the means, as is the belief that no calculation of expected lives saved by ending World War II could justify the bombing of many innocent Japanese civilians.  In debates about abortion, even, the pro-choice rejoinder to the pro-life claim that abortion is a violation of a child’s right to life isn’t always utilitarian (that is, a rejection of a full-stop “right to life” for fetuses considered incapable of suffering, particularly when weighed against the suffering of the mother).  Rather, the pro-choice movement has tended to pit one deontological axiom against another, namely the inalienable right to bodily autonomy.

This isn’t to say a utilitarian couldn’t agree with the conclusions of libertarians, welfare defenders, or pacifists.  Far from it!  But the consequentialist/deontologist split does seem to have some practical implications, in that the latter philosophy is what motivates people to say that the effects of a certain policy or action could never, under any circumstances, override certain absolute obligations.  Liberal society pretends that nothing is absolute and that absolutism is dogmatism, but in practice it treats some obligations as exactly that: absolute.

Even in thought experiments, ask someone if they, as a hypothetical surgeon, would kill a healthy patient to distribute their organs to five patients on the brink of death who would be saved by such organs, and they’ll overwhelmingly be repulsed by the notion.  There are some consequentialist justifications for this reaction, actually, such as the long-term suffering created by a society in which people fear the prospect of getting killed at the hospital, as well as the concern that the surgeon can’t be certain that the dying patients actually will be saved.  But these are rather weak post hoc rationalizations, it seems, and the justifications people give when they immediately respond to this dilemma are overwhelmingly deontological.  They generally say, in my experience, that there’s something inherently wrong in the act of killing an innocent person, using them as an instrument even for a supposed “greater good.”

I share that intuition, make no mistake.  But that doesn’t mean I trust it completely.  When I actually reflect on this thought experiment, I honestly can’t think of what it would mean for something to be “inherently” wrong, wholly independent of its consequences for actual people’s experiences.  In the Stanford Encyclopedia of Philosophy’s article on deontology, for example, deontology is described as prioritizing the “Right” over the “Good” – indeed, “If an act is not in accord with the Right, it may not be undertaken, no matter the Good that it might produce.”  For the life of me, I just do not see how this could possibly make sense.  What would it mean for something to be worth doing, if not as a means to the Good or a source of the Good in itself?  What even is this “Right”?

I’ve said this before in my first few posts on these matters, but it needs to be said again.  Intuitions are important as far as they go, and the psychological (not to mention social!) damage of violating them is often quite enormous.  If I convince myself through reason that something I once considered acceptable is actually in conflict with my values, then over time intuition can serve as a shortcut of sorts for future moral choices.  Still, the intuition that something is wrong isn’t in itself a reason not to do it, in my estimation.

If there are reasons doctors shouldn’t kill one patient to save five, and there may well be, we should be able to articulate those reasons in terms that go beyond, “That’s just wrong!”  Otherwise, how could we tell a genuinely immoral act from something that is just deemed wrong by the prejudices of the time, but which is actually harmless if not obligatory?  Morality would just be a set of expressions of “boo!” and “yay!”  It’s fine for morality to be reducible to preferences, indeed that’s the only thing it can be reducible to if it is to have any persuasive force of “ought” whatsoever – but I see no reason to think any preferences can be deemed absolutely valuable, with no room for revision or debate.

You could retort that the value I place on happiness and the absence of suffering is just an intuition, too, and you’d have a point there.  If you asked me why I care about happiness (that of myself and that of other sentient beings), I couldn’t exactly tell you, except that that’s the only thing that I can say is valuable without speaking nonsense, according to my own intuitions.  Still, there seems to me a clear difference between (a) the intuition that slavery is the will of God and acceptable, and (b) the intuition that happiness is worth pursuing and suffering is worth avoiding, all else equal.

A friend of mine once posed the question, “What would you consider evidence for a moral theory?”

If accordance with intuition were the only evidence we could find for a moral theory, I’d hesitate to call it a “theory” at all.  Perhaps a useful way of systematizing intuitions, sure – this seems to have been Aristotle’s project in the Nicomachean Ethics.  But it wouldn’t predict anything, now would it?

If not intuition, then what?  I suppose we should expect that a group of people who consistently apply a certain moral theory to their decision-making, and the people affected by their decisions, will report after some time that their lives are better than they were under the same circumstances before they tried out such a theory.  But achieving the “same” circumstances is enormously difficult.  And the rub also lies in “some time.”  When exactly is that time?  It would seem alienating indeed if all of morality demanded that we sacrifice every modicum of joy from the next 1,000 years so that everyone who lives after that era will be in indefinite paradise.  But clearly we live with some respect for delayed gratification, denying ourselves certain foods to preserve the health of our future selves, saving money for retirement, and such.  At what point could you be reasonably confident that the moral theory is “working”?

I don’t think those are impossible questions to answer, and as I’ve suggested in earlier posts, this “making lives better” litmus test (as glib as it may be) is I think the right starting point.  If that’s the case, perhaps ethics is nothing more or less than a systematic way of figuring out which types of actions promote or thwart that end, and in what ways they do so.  One might well ask whether that even merits the name “ethics,” as opposed to just “prudence,” “wisdom,” or “practical reason.”  If that’s the case, so be it.  Leave categorical imperatives for the birds.

There’s more to discuss, but I want to end this post with a request for your thoughts.  A while back, I made a survey on ethics.  Please take it!  For science!


Parsing Papers 4 – The Neurobiology of Pain Sensitivity (Part IV)

(follow-up to Parsing Papers 3 – The Neurobiology of Pain Sensitivity (Part III))

Google “action potential” and you see a graph that looks a bit like a mountain with flat land on the left side and a valley on the right.

It’s a common enough icon of neuroscience (second only to gratuitous blue brains), but what does it mean?

Since we figured out this resting membrane potential business last time, now we can consider what happens when that baseline of the neuron gets disrupted.  For starters, it helps to know that neurobiologists decided to call a cell at resting potential (roughly -70 mV) “polarized,” so any change toward a less negative (more positive) potential is depolarization.  It also turns out that at rest, the cell has a higher inner potassium concentration than outer, and vice versa for sodium.  The way I remember this is that since the chemical symbols of sodium and potassium are Na and K, respectively, it’s as if sodium says “Nah” to the cell and so it mostly goes outside, while the potassium says “Kay” and stays in.

Sounds dumb, but you’ll remember it, and that’s what matters.

Let’s extend that party analogy to action potentials and see if it doesn’t make you hate the concept of parties.

Recall that we said sodium ions were the relatively immobile drunkards at this scene of debauchery, who could not easily open the door between rooms A and B, while potassium ions were sober and crossed over with ease.  Because of a combination of attraction to the music in room A and repulsion from each other’s sweat-drenched meatbags, these two groups of people settled into an equilibrium.

You might be wondering why the sober people would tend to stay more in A and drunks in B.  Well, that’s all thanks to our friend, the sodium-potassium pump.  Let’s call him Drake.  He’s sort of an intra-party bouncer.  He doesn’t want to completely kick out the drunks because they’re not puking on anything.  But they’re making fools of themselves enough by singing obnoxiously in room A that Drake figures the rest of the party would thank him if he kept most of them out of there.  Every once in a while, some sober people will drift out of A as drunks wander in, but Drake responds to this by bouncing some drunks into B and sober folk into A.

Now, since Drake really wants to do a favor to those in room A who don’t want to hear tone-deaf renditions of “Hot in Herre” at 120 decibels, he kicks out 3 drunks for every 2 sober people he brings in.  This frees up a little space in A for someone to wander from B back to A, and this person might be either drunk or sober – in terms of concentration, a drunk person is more likely because of Drake’s bouncing, but also, remember, drunks can’t open doors so easily.  It’s hard to say what exactly will happen in any given moment, but in the aggregate, as Drake does his thing, the imbalance in drunk vs. sober concentration between these two rooms becomes established alongside the imbalance in total population caused by the music, mentioned above.  The actual sodium-potassium pump, of course, doesn’t exchange 3 sodium ions for 2 potassium ions out of a sense of purpose.  Rather, this protein expends some chemical energy to make this exchange simply because of the affinities of these ions for the protein in its “in” and “out” conformations.  The exact physics of this is more complicated, but not super relevant to the larger idea.
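
If you like seeing the arithmetic laid bare, here’s the pump’s 3-for-2 stoichiometry as a toy Python loop.  The starting counts are invented for illustration; the only real-biology content is the 3 Na+ out / 2 K+ in exchange and the net positive charge it moves.

# Toy bookkeeping for the sodium-potassium pump's 3:2 exchange.
# All starting numbers are made up; only the stoichiometry is real.
na_in, na_out = 1000, 1000
k_in, k_out = 1000, 1000
net_charge_pumped_out = 0

for cycle in range(100):
    na_in -= 3; na_out += 3     # Drake bounces 3 drunks (Na+) out of A...
    k_in += 2; k_out -= 2       # ...for every 2 sober folks (K+) he brings in
    net_charge_pumped_out += 1  # 3 positives out, 2 in: net +1 leaves the cell

print(na_in, na_out, k_in, k_out, net_charge_pumped_out)
# 700 1300 1200 800 100 – sodium piles up outside, potassium inside

As a bonus, the toy shows why this pump is called “electrogenic”: every cycle moves one net positive charge out, nudging the inside of the cell a little more negative.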

If this were the whole story, the neuron/party would be a little more dynamic, but not terribly interesting.  Let’s throw another wrinkle in.  This is where the analogy is going to get a bit…forced…but bear with me.

It turns out there isn’t just one doorway between A and B.  A few of these are regular ol’ doors – mostly drunk-proof.  Some are guarded by Drake and his clones.  In a neuron, the physical significance of this distinction is just that there are more (and “wider”) channels through the membrane for potassium than for sodium.

Others are passageways without actual doors, but there’s some vomit on the floor at each of these passageways.  So not only can drunks get through because they don’t need to turn doorknobs, but in fact only drunks will go through these, since they’re beyond the point of being grossed out by a puny pool of vomit.  However, besides Drake, this party has a few other internal bouncers who stand around these doorways – hence these doorways represent channels that only let sodium through, but they are closed even to sodium when the membrane potential is very low (including at baseline) or very high (above roughly +40 mV).  Normally, they can keep the drunk people out of A pretty well, but suppose that when the group in A gets unusually large (i.e. the potential gets higher – less negative – because more positive ions are inside the cell), the drunks in B get particularly desperate to join them out of a strong sense of FOMO.  So they barrel through the guards, come hell or high water.  Some voltage-gated sodium channels have opened.

There’s a positive feedback loop that can result here!  If some drunk folk burst through one passage and the crowd swells more, the others in B are going to want to join in even more (translation: the influx of sodium ions makes the cell’s potential even less negative, so more channels open).  They break through too.

Room A gets boppin’.  This is a state of extreme depolarization.

For this to happen, however, the trigger that caused the mowing down of the first poor guard needs to be significant enough (i.e. consist of a large enough influx of positive charge, from whatever source) to counteract another force, namely, the outward flux of sober people/potassium.  These folks aren’t as concerned as the drunks are about joining a massive group for that sweet, sweet collective effervescence.  If the population of room A swells, they’re going to rush out (by the same forces of repulsion that we discussed in examining the source of the resting membrane potential).

Suppose there are some doors with locks – totally drunk-proof, but also the sober people won’t bother using these doors to get through unless they’re in a particular hurry to leave a very overstuffed room A.  These are voltage-gated potassium channels – analogous to the sodium ones, except that they open when the potential gets very high rather than closing.  Importantly, the potential doesn’t have to be that high for these channels to stay open, only to start opening.  They close at exceptionally low potential.  How exactly this occurs physiologically is complicated, and beyond the scope of this post.

So, if for whatever reason a small group of people are in room A beyond the usual baseline, some drunks might rush in, but some sober people will also rush out through the regular doors.  They don’t have to overcome the resistance of any guards, so under these circumstances, the sober outflow will win out over the drunk inflow, and room A will reach baseline, so that the drunks left in B won’t bother trying to knock down the guards to follow suit (let’s suppose the guards can pick themselves up reasonably fast).  The only difference in outcome here is that there are comparatively more drunk people in A than before, and more sober people in B.  Drake will take care of that.  This is a “failed initiation,” a case in which the stimulus of added positive charge to the neuron is too weak to cause an action potential.

For any bump in population of room A below a certain threshold (this is assuming drunk humans are more deterministic beings than they actually are, but hopefully it gets the point across), this restoration of the natural order will occur.  Nothing special.

But if that bump is above the threshold, it’s raining drunks.  Positive feedback does its thing, as described above, and room A just keeps getting more and more stuffed (depolarized by a massive influx of sodium) until two things happen.

On one hand, the guards get steamrolled by the drunks to such a pitiful degree that they call out for reinforcements, and the doorless doorways get blocked completely by these helpers.  No more drunk people are getting into A, period – the voltage-gated sodium channels have reached their upper limit of the window in which they’d stay open.  Critically, once these channels close at this limit, there’s a stretch of time in which they won’t open up again no matter what, not even if the potential dips below that limit again.  It’s as if the reinforcements at the doorway keep watch to make sure things have cooled down.  This is called the absolute refractory period.

On the other, the awful drunk singing and crammed space is sufficiently unbearable in the packed room that the sober folk find the keys to the locked doors and haul ass en masse from A to B.  Such is the opening of voltage-gated potassium channels.

So now room A gets continuously depopulated (drunks mostly can’t come in to counterbalance the outflow of sober people, recall), and this keeps going until the sober people find the circumstances in A tolerable again.  Notably, that point is when the population of A is below baseline (this is called hyperpolarization), since in this rush where many doors are open to the sober folk, they just keep exiting until there’s plenty of comfortable space in A (that is, until the potential is sufficiently low that the voltage-gated potassium channels close).  Physically, hyperpolarization is just a result of the relatively large permeability of potassium through the membrane, while sodium barely passes through at all by comparison.

It’s at this point that the absolute barrier posed by the guards at the doorways breaks down, since most of the drunks are inside A anyway and the guards figure that the coast is clear.  Some guards remain, but they are as vulnerable to a rush of drunkards as before.  That is, the voltage-gated sodium channels remain closed, but they are capable of being reopened now, and the absolute refractory period ends.  As time goes by, the frantic nature of this sober exodus dies down, and people begin trickling back into A because the music is still pretty nice, until baseline is achieved once more.  Before that baseline is reached, it would take an even greater disturbance (depolarizing stimulus) to trigger another action potential than in the initial case.  Hence we call this window between hyperpolarization and the resting membrane potential the relative refractory period.
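
If you’d like to see this whole cycle stripped of its party costume, here’s a cartoon simulation in Python.  To be clear, this is a leaky integrate-and-fire toy with made-up numbers, not a real model of the channel kinetics above (that would be Hodgkin–Huxley territory) – just enough to show the threshold, the spike-and-reset, and the refractory behavior.

REST, THRESHOLD, HYPER = -70.0, -55.0, -80.0  # mV, textbook ballpark values
LEAK = 0.1           # fraction of the distance back to rest per time step
ABS_REFRACTORY = 5   # time steps during which no new spike can start

def simulate(stimulus):
    v, spikes, refractory = REST, [], 0
    for t, current in enumerate(stimulus):
        if refractory > 0:
            refractory -= 1        # reinforced doorways: nothing gets in
        else:
            v += current           # depolarizing input (drunks trickling in)
            if v >= THRESHOLD:     # positive feedback takes over: spike!
                spikes.append(t)
                v = HYPER          # the sober exodus overshoots below baseline
                refractory = ABS_REFRACTORY
        v += LEAK * (REST - v)     # everything drifts back toward baseline
    return spikes

print(simulate([1.0] * 100))  # weak stimulus: failed initiation, no spikes
print(simulate([5.0] * 100))  # strong stimulus: repeated action potentials

Notice that the relative refractory period falls out for free: right after a spike, the potential sits below REST, so the same stimulus needs more time (more total charge) to drag the cell back up to threshold.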

And there you have it.  That’s the cycle, in all its glory.  I use that word half-facetiously, but it really is an elegant process in my opinion.  I hope you agree.

Now let’s see if we can use this knowledge to interpret the paper I seem to have forgotten about in this post…

Difficult Questions

(follow-up to Utils for Everyone)

Classical utilitarianism has a problem.

I agree that value (experiential/use value, not exchange value) is reducible to happiness and absence of suffering, but I wouldn’t say that this implies happiness should be maximized in the universe as some abstract quantity.  The strongest objection I’ve encountered to utilitarianism is that, on a practical level, happiness and suffering just aren’t things that can be quantified.  For one, there’s the fundamentally inescapable problem that each of us is imprisoned in our own consciousness, so we can’t know what the subjective experience of any other person is.  Even if I can be reasonably certain that other people suffer in response to mostly the same stimuli that I do, I can never know how much they suffer.  It’s entirely possible that when someone gets a paper cut, they might react with the same degree of minor distress that I exhibit, but they could be feeling suffering equivalent to my own experience of, say, gallstones.  In this hypothetical, it’s not as if this person is just suppressing a more drastic response in order to project the image of not being “weak” – rather, for them, the more intense feeling is juxtaposed with an externally observable pain response that is relatively mild.

And I could never truly know this was the case, if every experience of suffering for this hypothetical person were scaled up or down in this manner.  They would never think to tell me, “This paper cut feels like a gallstone,” because (assuming they had experience with a gallstone) their concept of “feeling like a gallstone” would be associated with even worse pain.

That’s a disturbing notion, although it doesn’t seem unique to utilitarianism, since you could replace “happiness” with “the experience of freedom,” etc. for other philosophies, setting aside the fact that even non-utilitarian philosophies still place some value on happiness.  For all practical purposes, there’s no way I possibly could know if these differences in subjectivity exist, and I suppose the reasons I have for believing that consciousness has chemical roots (however bizarre those roots are) should grant me comfort.  That is, it’s clear that neurochemistry could give rise to different pain thresholds (deviations from the normal correlation between stimulus and reaction), but why would neurochemistry produce different experiences coincident with the same reaction?

(Speculative side-notes that aren’t as relevant to utilitarianism but still interesting to me:  It’s possible for any one of us to suppose that when every other human and animal acts in a manner that mirrors our own pain responses, they are analogous to a robot that “feels pain” in scare quotes, in the sense that it reacts to noxious stimuli but doesn’t actually experience anything unpleasant.  Of course, it would certainly be very weird if only you, or I, were actually conscious and other humans with highly analogous physiology lacked the same subjective consequences of a nervous system.  But then the question becomes, at what point does the analogy break down?  Just how complex does a nervous system need to be to generate consciousness rather than mere harm-avoidant behavior?  Although we should rightly be skeptical that the robots in the article above are experiencing anything, do we have any reason to believe subjectivity can only be generated by patterns of organic matter?  What makes carbon so special, in that respect?)

Moreover, perhaps even more disturbing is the fact that each of us can hardly quantify or accurately compare our own degrees of happiness and suffering.  Given the choice between paper cuts and gallstone stomachaches, I couldn’t begin to tell you what ratio of the two I’d count as an even trade, much less how distributing those experiences across a lifetime would affect that ratio.  Put in more practical terms, I can’t at all say how large a salary I would need to be paid to accept a career doing work that is far more boring or “useless” than the careers in science and education I’m currently considering.  I can’t say how many parties in college would justify the loss of 0.5 points in my GPA, and all the social and professional capital those points might entail.  I can’t say, regarding the decision several years down the road of whether to be a parent, how many smiles on a possible future child’s face I’d need to see to outweigh the unpleasantness of diaper changing, being woken up at 3 A.M. by cries, the fear of that child endangering their fragile body or psyche, etc.  (And that’s just considering my experiences, never mind the ethical conundrums involved in weighing the prospects of the child’s experiences throughout their entire life, or the mother’s pain of pregnancy and childbirth.)

The good news is that we can make some estimates.  I’m reasonably certain my future self would not thank me if I burned my hand on the stove, and that the world would be a shittier place if I messaged someone on Facebook just to tell them they’re a worthless waste of space who will never amount to anything.  But there are larger, more ambiguous cases that don’t have such easy answers, and it’s worth asking whether they have any answers that can be evaluated objectively given the starting assumption that we want people to be happy more and to suffer less.

If we can’t quantify happiness/value in the sense that we quantify GDP or calories of food available on the planet, is this game over?  And if it is, does that mean we should scrap utilitarianism in favor of the categorical imperative, Aristotle’s virtues, social contract theory, relativism, divine command theory, natural rights, or something else?

My inclination is that the answer to both of those questions is no.  The view of many philosophers of science as of late (Kuhn returns!), for example, is that science can’t discover the capital-T Truth with absolute objectivity, and our scientific paradigms are inevitably clouded by status quo bias and limited by the extent of our sense perception. While this should give us reason to be humble about scientific claims, does this mean we ought to throw science out the window entirely?  Of course not.  Approximations are imperfect, but their practicality is indisputable.

Even if this quantifiability problem were the death knell of utilitarianism, I see no reason to think this would vindicate Kant and company.  It is no less paralyzing for the deontologist that the rules and duties of their morality may conflict and need to be weighed against each other, or that the full list of these rules and duties may be elusive to human reason.  The alternative moral theories need to stand on their own merits if they are to replace utilitarianism, and if not, we’re left with moral nihilism (which, depending on your interpretation, isn’t so terribly disheartening as long as you acknowledge that people still have strong desires that you may respect, “morality” be damned).

The moral calculus in its perfect form is impossible.  I don’t dispute that.  Nonetheless, I don’t see throwing up our hands in despair as an option.  It is in our best interest as conscious persons to estimate this calculus when possible – anything less would be a counterproductive shot in the dark, as far as I can tell.  What good, after all, does satisfying a categorical imperative do?

But even if we assume for the sake of argument that value can be quantified to some practical degree, what exactly do I mean by “to be happy more and to suffer less”?

Do we strive for the most total happiness (subtracting suffering) in the universe, regardless of its distribution among distinct sentient beings?  I think this approach is mistaken, simply as a moral premise.  Maximizing happiness in this sense just isn’t something I care about, nor do I think it’s something that most people truly care about either.  I stated earlier that the fundamental principle of everything I value is the betterment of the experience of all sentient beings, as much as possible.  A consequence of this is that I think there’s a qualitative difference between, to use the crude language of “utils” for units of happiness, a world of 100 people in which each person lives a life whose measure of happiness is equivalent to 100 utils, and an alternative in which 20 people each have 400 utils while the other 80 each have 25 utils.  This, despite the equality of total happiness between them.  Sorry for the math.

I’m not certain whether one is preferable to the other.  I can say that the perspective of the Veil of Ignorance is important here; that is, the question of the preferable world becomes at least partly a question of which world I’d prefer if I didn’t know which person I’d be in the given world (but even this neglects the consideration of the other people, of course).  That’s a question whose answer depends on the subject’s tolerance for risk, either for themselves or for others.

For my part, I’m rather risk-averse, and depending on how exactly we envision this scale of happiness, I’d venture to say the equal world proposed above is far preferable to the alternative.  If this is a scale with diminishing returns, then the equal world is obviously better.  It’s analogous to Bill Gates giving $1,000 to a poor single parent – the same quantity has more value in the latter’s hands than the former’s.
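
Here’s that comparison as a quick back-of-envelope computation.  The log function below is an arbitrary stand-in for “diminishing returns” – any concave function makes the same qualitative point – and the util numbers are the hypothetical ones from above, not measurements of anything.

import math

equal_world = [100] * 100              # 100 people at 100 utils each
unequal_world = [400] * 20 + [25] * 80

for name, world in [("equal", equal_world), ("unequal", unequal_world)]:
    total = sum(world)                         # classical total: a tie
    concave = sum(math.log(u) for u in world)  # diminishing-returns version
    print(name, total, round(concave, 1))

# equal   10000  460.5
# unequal 10000  377.3
# Same total happiness, but once each marginal util counts for less the
# more you already have, the equal world comes out ahead.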

But perhaps this is too easy.  When it comes to money, there’s almost certainly an upper limit to the use-value of a massive amount.  Gates probably literally has more money than he could spend on anything he wanted.  We have to think in terms of something with intrinsic use-value, which is practically limitless.  Does that even exist?  What if there existed a person who would always gain far more happiness from a given resource than anyone else would?  Could a world with less total happiness than another world still be better, based on a certain distribution?  What do we do when considering the distribution of happiness to potential people whom we could choose to bring into existence?

Those are the sorts of questions I want to address next time.

Parsing Papers 3 – The Neurobiology of Pain Sensitivity (Part III)

(follow-up to Parsing Papers 2 – The Neurobiology of Pain Sensitivity (Part II))

So when we left off, the authors had used ISH to confirm that 183C was not expressed in the neurons where Cre cut out the floxed 183C gene (i.e. where “recombination” occurred), and that the Tomato marker was expressed in these neurons.

You might be wondering, “Golly, based on those links from last time, it doesn’t seem like these transcription factors the authors used to control Cre expression were very specific.  Wouldn’t they show up in many cell types?”  And you’d be right.  According to the supplementary figures, the authors mitigated that problem by also doing ISH on the expression of other RNAs found to correlate with the particular cell types in question.  For instance, even though TH-Cre in theory drives recombination in many catecholaminergic cells, as discussed before, the authors checked for fluorescence of TrkA in the same experiment, and TrkA is a giveaway for nociceptive neurons.  This is a common technique in molecular biology, testing for colocalization of different RNAs (or other cell components).

Why didn’t they simply cut to the chase and place Cre expression under the control of a TrkA promoter?

Because TrkA isn’t a transcription factor.  Bummer.

Still, now we can be reasonably confident that when the authors say that abolishing expression of the 183C microRNAs in certain cell types has such-and-such effect on mice, they’re inferring an actual causal link.  We’ll keep this concern in mind when examining their behavior experiments, because if a scientist isn’t careful, they can confuse correlation with causation.  When possible, it’s always a good idea to ask, “By messing with this variable in this experiment, could the authors have unwittingly messed with some other part of the system that is really responsible for the effect they’re observing?”

Side note:  The authors say that Wnt1-Cre affects “all sensory neurons,” but in context this appears to be an abbreviation of “all DRG sensory neurons,” as they claim earlier in the same paragraph, “Wnt1-Cre recombined all neurons of the DRG.”  This seems like a reasonable interpretation of their language on my part, but if I’m mistaken, this would have significant implications for the meaning of their data.  Which goes to show just how imperative it is that scientists use clear and unambiguous language, especially in their primary literature, but I digress.

The supplement file tells us what exactly the authors did to test different kinds of sensitivity in their mice – to a “light touch, cold, heat, or pinprick” stimulus, every mouse responded about the same regardless of whether it was a control (genetically normal or “wild type”) or had recombination in certain neurons.  The “mechanical” stimulus, which refers to poking the mouse’s paw with a device called a von Frey hair, showed a significant increase in sensitivity among the Wnt1-Cre and TH-Cre mice, which correspond to all DRG neurons and DRG nociceptors, respectively.  These tests are kind of mundane, but the important point about them, in the case of the mechanical stimulus, is that they quantitatively determined how much force was required for each mouse to withdraw the paw to which the force was applied, for “at least three out of five consecutive stimuli.”  This was a threshold-based test, but they also compared the intensity of the mice’s responses at a constant level of force.  The quantification scheme for the latter test is pretty entertaining (not to say I could come up with something better, lest the authors accuse me of libel, but it’s an unorthodox scale from my experience in reading papers): “0-no response; 0.5-gently movement of hind paw, awareness of stimuli; 1-clear withdrawal; 2-more robust withdrawal, or repetitive withdrawal combined with shaking or licking.”
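
Just to make the threshold test concrete, here’s a toy version of that “at least three out of five” criterion in Python.  The forces and outcomes below are invented, and the authors’ actual protocol may differ in detail (stepwise “up-down” methods are common in von Frey testing); this only illustrates the stated criterion.

trials = {  # hypothetical von Frey data: force (g) -> five withdrawal outcomes
    0.2: [False, False, False, False, True],
    0.4: [False, True, False, True, False],
    0.6: [True, True, False, True, True],
    1.0: [True, True, True, True, True],
}

def withdrawal_threshold(trials):
    # Lowest tested force at which the mouse withdrew on >= 3 of 5 stimuli.
    for force in sorted(trials):
        if sum(trials[force]) >= 3:
            return force
    return None

print(withdrawal_threshold(trials))  # 0.6 for this imaginary mouse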

This is the point in analysis of scientific data where we have to have (an abridged version of) The Talk.

The one about p-values.

This article and this site explain (better than I ever could) the statistics behind p-values and why they are terrifying if you think about them too much.  The most I’ll say on this matter is that while “p < 0.05” has become synonymous with “statistically significant enough,” the chance that a finding with a p-value just under 0.05 is a false positive can still be disconcertingly high.  I won’t comment on every p-value reported in this paper, but if you check them in the figures for yourself, you can keep in mind the relative confidence you should have in each conclusion the authors draw.  There’s nothing magical written into the universe that makes any particular p-value objectively deserving of the designation “significant.”  The best we can hope for in science, as far as I understand it, is a probabilistic sliding scale of confidence in the predictions we might make on the basis of certain experiments.  The proof of the pudding is in the tasting, of course.  If we draw mistaken conclusions from experiments, these mistakes will eventually manifest in the ways we apply those conclusions.  For the purposes of public policy, however, getting a handle on the degree of confidence we’re warranted in having in a prediction is a practical necessity.
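
You can run the kind of arithmetic those links are about for yourself.  The numbers below (10% of tested hypotheses true, 50% power) are invented for illustration, but the punchline is robust: even with every “discovery” clearing p < 0.05, a large fraction of them can be flat-out wrong.

import random

random.seed(0)
n_experiments = 100_000
true_effect_rate, alpha, power = 0.10, 0.05, 0.50  # assumed, not measured

false_pos = true_pos = 0
for _ in range(n_experiments):
    real = random.random() < true_effect_rate     # is there a real effect?
    if real:
        significant = random.random() < power     # detected with prob = power
    else:
        significant = random.random() < alpha     # false alarm with prob = alpha
    if significant:
        if real:
            true_pos += 1
        else:
            false_pos += 1

print(false_pos / (false_pos + true_pos))
# ~0.47: nearly half the "p < 0.05" findings in this scenario are false.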

Anyway, the authors’ next experiment is more of a classic in neurobiology – electrophysiological analysis.  If you’re at least somewhat familiar with neuroscience, you know what an action potential is.  But how exactly does it work?  It all comes down to concentrations of sodium and potassium in a neuron.

Because one of my best friends in high school once made a memorable analogy to a party in order to explain the electron transport chain in AP Bio, I’m going to try that for the concept of resting membrane potential, which one must first understand to understand the ways of the action potential.

Suppose you’re at a ragin’ shindig in an apartment that for some reason has only two rooms.  Other than that architectural oddity, it’s a good time.  The music is full of sick beats, there’s a nice balance of familiar friends and new people to meet, and no one is throwing up (yet).  It’s probably more fun than reading this.

But there are a lot of people.  A lot.  So many people that, to begin with, the main room that people initially settle in because it’s the room with the entrance (let’s call this room A) becomes so packed that people are literally jostling among each other, pushed this way and that.

Gross.  This is not a sustainable situation.  Of course, people aren’t moving only due to collisions with each other.  They meander around by their own internal force, to an extent.  People sometimes walk from one room to the other indiscriminately.  A combination of this approximation to a random walk and the collisions with a sea of people leads you, our noble hero, through the doorway from this room to the other (room B).  Several others are driven by the same forces into room B, and while people may occasionally walk back to room A, you notice that there’s an aggregate trend of flow from room A to room B.

Congratulations!  You and your fellow partygoers have simulated diffusion!

Now, barring any social forces or food/drink incentives, eventually you might expect these inward and outward flows of people from one room to another to reach a roughly equal number in each room.  It’s important to note that this isn’t so much one “force” as it is a sum of forces, along with sheer probability.  Even if no one at the party were literally bumping into each other because room A was so packed, it’s simply more likely, all else equal, that a person will move from the more packed room to the more spacious room than the other way around.  This “all else equal” business might be difficult to swallow, since we’re talking about human beings after all, but we’ll get to that, and the good news about sodium and potassium ions is that they aren’t human beings.
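
Since we’re already pretending people move like particles, here’s the two-room version as a dead-simple Python simulation.  Every number is arbitrary; the point is only that equal per-person wanderlust plus unequal crowding produces a net flow that stops when the rooms even out.

import random

random.seed(42)
room_a, room_b = 180, 20   # room A starts badly overstuffed
switch_prob = 0.05         # chance any one person wanders through the door

for minute in range(200):
    a_to_b = sum(random.random() < switch_prob for _ in range(room_a))
    b_to_a = sum(random.random() < switch_prob for _ in range(room_b))
    room_a += b_to_a - a_to_b
    room_b += a_to_b - b_to_a

print(room_a, room_b)  # hovers around 100/100 – diffusion has done its thing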

Now, as it happens, room A has one little advantage.  Most of the fun elements of the party (people and sustenance) can be transported from one room to the other without a problem.  But the sick sound system is stuck in room A.  There are heavy amps and whatnot that no one wants to haul through the doorway, and even if they did, there’s just one music setup, so regardless of whether it’s in room A or room B, the room with the music setup is going to have a slight pull.  Assuming people want to be able to hear the host’s fire Soundcloud tracks well and dance.

So the motion of the majority of the partygoers is now driven by two major forces.  The first is the human diffusion discussed above, and the second is attraction to the music.  Before room B has a chance to match the population of room A, some people in room A, who initially share the sentiments of the original participants in the exodus to room B, find themselves at the threshold between the rooms realizing, “You know what, it’s not that packed in there.  I don’t want to be too far from the sweet, sweet stylings of Sufjan Stevens [that’s what the kids these days play at parties, right?].  I’ll stay here.”

This party thus achieves an equilibrium with an imbalance of humans in room A versus room B, because the pull of the music outweighs some of the repulsion to other sweaty people.  Notice that this effect doesn’t depend on the individual preferences of any given partygoer.  It’s not as if the people in room B necessarily like space more and music less than the people who’ve stayed in room A.  They just found the overpopulation of room A to be too intolerable even when weighed against their love of music.  If, at the equilibrium point, a handful of people from room A trickled into room B anyway, the people wandering around room B might wander back into room A and not come back, since there is now enough room for them.  Even if you assume all people at the party are equally drawn to music and repulsed by invasions of personal space, it’s possible for one room to become more populated than the other because the former includes the immobile music source.

If this makes sense, you’re close to understanding the resting membrane potential of a neuron.  The people are potassium and sodium ions, which have a positive charge.  The potassium ions are mechanically able to pass in and out of the cell fairly easily, through channels embedded in the neuron’s membrane; sodium also has channels for this passage, but they don’t let sodium through as easily as potassium can pass through its channels.  You might think of potassium as the more sober people, who have the coordination to move from one room to another without much difficulty, while sodium represents the partygoers who have had so much to drink that turning a doorknob is a mental challenge, although they could still do it with enough effort.  The music setup is a collection of negatively charged ions, which cannot permeate the membrane.

These ions diffuse according to principles similar to the ones I sketched for “human diffusion,” although of course ions are not living beings, so they jostle according only to fundamental physical forces.  This is one of the dangers of human-based analogies for non-human scientific phenomena, and I resent science educators who framed diffusion in terms of particles “wanting” to move from higher to lower concentration.  But as I mentioned above, even if you assumed the motion of humans were as simple as that of particles, it’s a matter of probability and basic mechanics that diffusion will do its thing.  The attraction of the mobile positive ions to the negative ions that are confined to the inside of the neuron (they still move, but not out of the cell without some extra help by proteins) is self-explanatory, if you know literally anything about electrostatics.

Okay, so what exactly is the resting membrane potential, then?  Well, once the equilibrium (the “resting” part of this term) of concentration-based diffusion/repulsion and electricity-based attraction is reached, we’re left with a cell that has some positive ions outside of it, floating around near the outer surface of the membrane, and of course some on the inside because they’re attracted to the negative ions.  “Potential” is more or less a measure of the tension (using this term loosely, not in the literal mechanical sense) posed by this separation of positive ions from negative ions, due to the physical “barrier” of the diffusive tendency of the positive ions to exit the cell.  The exact way this balances out when considering the different degrees to which sodium and potassium can pass through the membrane is a bit more complicated, but we’ll consider that detail only when necessary.  The important point is that the neuron is predominantly permeable to potassium, and there are some negative ions inside to which it is completely impermeable.
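
For the quantitatively inclined, the textbook version of this balancing act is the Nernst equation (one ion at a time) and a Goldman-style weighted average (several ions with different permeabilities).  The concentrations and the 1 : 0.05 permeability ratio below are standard textbook approximations for a mammalian neuron, not numbers from the paper we’re reading.

import math

K_in, K_out = 140.0, 5.0     # mM, typical textbook values
Na_in, Na_out = 15.0, 145.0
RT_over_F = 26.7             # mV at body temperature, for charge +1 ions

# Nernst: the potential at which diffusion out and electrical pull in balance
E_K = RT_over_F * math.log(K_out / K_in)     # about -89 mV
E_Na = RT_over_F * math.log(Na_out / Na_in)  # about +61 mV

# Goldman-style compromise, weighted by permeability (K+ dominates at rest)
pK, pNa = 1.0, 0.05
V_rest = RT_over_F * math.log((pK * K_out + pNa * Na_out) /
                              (pK * K_in + pNa * Na_in))

print(round(E_K), round(E_Na), round(V_rest))  # -89 61 -65

That output of roughly -65 to -70 mV is the resting membrane potential: much closer to potassium’s preferred -89 than to sodium’s +61, precisely because the membrane at rest is mostly a potassium show.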

When this tension of positive/negative separation at equilibrium, the resting membrane potential, gets disrupted strongly enough – either by a sensory stimulus or by a signal from another neuron – the result is an action potential.  We’ll look at how exactly that works next time!

Parsing Papers 2 – The Neurobiology of Pain Sensitivity (Part II)

(follow-up to Parsing Papers 1 – The Neurobiology of Pain Sensitivity (Part I))

Now that the authors have established that 183C is in the DRG neurons of the mouse embryo, they’d like to determine what exactly these microRNAs do.  To do that, they need some fancy genetic techniques that we’ll need to get a handle on to understand these experiments.

This is some complicated stuff (at least by my standards; my genetics class was painful).  There will be jargon, but I’ll explain it, and it’s really quite fascinating if you take your time with it.  So let’s start with the general picture and then work our way down to the details.  The authors’ strategy was, first, to genetically engineer some mice that would not express 183C in certain types of neurons.  How do you get that kind of control?  Essentially, you place some markers before and after the DNA sequence coding for 183C, called loxP sites.  Tasty on bagels.  There’s an enzyme (Cre) that can bind loxP sites quite well, and cut out the DNA in between the loxP sites (called the “floxed” gene, I can’t make this stuff up).  If you can give your mice the gene for Cre, and ensure that gene only gets expressed (produces the enzyme) in the neurons you want, then you’re good to go.
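
Mechanically, Cre’s job amounts to “find two loxP markers and delete what sits between them,” which is simple enough to write as string surgery.  The marker below is a short made-up motif – the real loxP site is a specific 34-bp sequence – so treat this purely as a sketch of the logic.

LOXP = "ATAACTTCG"  # stand-in marker; NOT the real 34-bp loxP sequence

def cre_excise(dna):
    first = dna.find(LOXP)
    second = dna.find(LOXP, first + len(LOXP))
    if first == -1 or second == -1:
        return dna  # nothing floxed here: Cre has nothing to cut
    # Drop the floxed segment and one marker; real Cre recombination
    # likewise leaves a single loxP site behind.
    return dna[:first] + dna[second:]

genome = "GGGG" + LOXP + "FLOXED183C" + LOXP + "CCCC"
print(cre_excise(genome))  # GGGGATAACTTCGCCCC – the floxed gene is gone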

To get that kind of cell-type control, you hijack a neat little property of genetics.  DNA doesn’t just get transcribed into RNA and then translated into protein constantly.  That would be inefficient and almost certainly kill you and these poor mice.  Instead, a gene typically needs to be bound at some site (called a promoter) by a protein called a transcription factor, which gives the cell’s machinery the go-ahead to assemble RNA using the DNA as its template (or it might inhibit this transcription process, or adjust its rate).  Different cells, at different points in their lifetimes, have different transcription factors lying around, so if you can design your mouse mutant such that the sequence coding for Cre comes right after a promoter that corresponds to a transcription factor unique to your preferred cells, you can do your science thing.

The authors don’t explain exactly which cell types each of their promoters of choice correspond to, but the Jackson Laboratory clarifies some of the known expression patterns of the transcription factors these authors use for their genetic control.  Wnt1-Cre mutants are predicted to abolish expression of floxed 183C in the “midbrain and developing neural tube.”  Initially I wasn’t sure how to rationalize this, considering that evidently the DRGs develop from the neural crest, not the neural tube – however, I took a peek at the source that the authors referenced as their precedent for the Wnt1-Cre construct, which notes that while Wnt1 is restricted to the midbrain in early development, it later manifests in the dorsal spinal cord, which is where we’d expect DRGs.  TH-Cre corresponds to “catecholaminergic cells” (cells that produce a class of neurotransmitters including dopamine, epinephrine, and norepinephrine).  The authors’ source for this one is behind a paywall that my university journal subscriptions can’t overcome, alas.  And TrkB^CreERT2/+ is a mess.  To get even more control over these mice’s genes (on the temporal level), the Cre protein is modified so that it can only access the nucleus (and thus wreak its havoc on floxed genes) when the mouse is given a drug called tamoxifen – that’s what the “ERT2” part indicates.  The “TrkB” part means that Cre gets expressed in TrkB-rich cells, which are “low-threshold mechanosensory” neurons that are “lightly myelinated.”  Translation?  These are neurons that give the mouse its sense of touch, and they are extremely sensitive to stimuli; this myelination business refers to the degree to which the neurons are “insulated” by glia, so lightly myelinated neurons are less insulated and conduct action potentials relatively slowly.  The explanation behind how exactly that works is fascinating, but not something I think is essential for this post.

On top of this, just to make sure the effects on the microRNA they were looking for were results of Cre’s removal of floxed DNA in the cell types they wanted, the authors threw in another strain of mice with the ROSA26^Tomato (NSFW) mutation.  When these mice are crossed with the Cre strains described above, the offspring have the neurons of interest (those modified by Cre) fluorescing red.  Hence “tomato.”  Biologists are weird and I love them.

The qPCR technique we considered last time could in theory establish the complete absence of expression of 183C, if you used some controls (that is, if you ran qPCR on 183C in the cells of interest relative to a control RNA, and compared mice carrying both Cre and the floxed 183C gene against mice without that combination).  But the downside of qPCR is that it only tells you the average expression level across the entire tissue sample you analyze.  You don’t learn anything about how the RNA expression varies across the landscape of the tissue of interest, which can be a problem if the cell types you want to analyze don’t necessarily form discrete clusters that you can harvest (like the DRG).

So instead they used “in situ hybridization” (ISH), which is a nice little nod to the Latin nerds out there.  You take sections of the mouse’s nervous system, “fix” them chemically to hold the RNA in place and make the cells more accessible to complementary DNA or RNA fluorescent probes (similar to the ones used in qPCR, so it’s all coming full circle here), and throw those probes in.  Then use a microscope to see the magic.

Parsing Papers 1 – The Neurobiology of Pain Sensitivity (Part I)

And now for something completely different.

I’d like to take a bit of a break from the philosophy-babble to try something that will hopefully be a handy resource for readers interested in science, and help my own efforts to become an effective scientific writer along the way.

The primary literature of the natural sciences is…opaque, to say the least.  This is actually a point that Kuhn addresses briefly in the book I’d recommended earlier, The Structure of Scientific Revolutions.  As comforting as it is to know that professional scientists are working on problems of such depth that their terrifying jargon is practically necessary, we would also prefer to understand what in god’s name they’re talking about.

So, in the posts of this Parsing Papers series, I’m going to read articles that strike my fancy from scientific journals, and make my most honest effort to explain/translate them in a manner that’s understandable to a lay adult reader without sacrificing accuracy (as some well-known science popularizing websites do, coughiflsciencecough).  I’ll also include some of my own reflections on the papers’ content.  The obvious disclaimer here is that since I’m an undergrad, I’m hardly an expert myself.  I can’t promise these summaries will be perfect, although I’ve taken several biology courses whose express purpose (besides teaching specific material) was to turn me into a paper-reading machine.  Veterans of my Snapchat feed are well aware of this.  At any rate, if it seems like I’m belaboring a point at a greater length than the actual paper, my rationale for this is that I don’t think conciseness is a virtue if it comes at the expense of clarity.

Another reason these posts will be long (each paper will require multiple posts) is that many papers can’t be deeply understood without a lot of background knowledge about the sorts of experimental methods that the authors take for granted.  If you want presentations of scientific research but don’t care about how the researchers actually gather and interpret their data, well, this isn’t the series for you.  It might seem daunting, but I assure you it’s so worth it, to have the fog cleared so you can see for yourself what the logic behind these studies is.  Apologies to any bio major friends who read this stuff and feel like it’s old hat – you can skip the review as necessary, or read it to tell me if I screwed up somewhere (my preferred option).

This first paper is at least loosely related to my first two posts, since any good lover of happiness and hater of suffering could benefit from an understanding of how exactly humans’ (and other animals’) bodies generate the experience of suffering – which often comes in the form of pain.  So understand we shall!

Here’s the paper for this post and its sequels, Pain Sensitivity – Peng et al 2017, and the supplementary info, Pain Sensitivity Supplement.  Normally it would be behind a paywall (which I can bypass because I have legal access through my university’s proxy), but the journal in which this paper was published (Science) evidently permits sharing of papers for educational nonprofit purposes, which describes this post as far as I’m aware.

The first few sentences of this paper are actually fairly down-to-earth, although the authors gloss over some important terms, especially the distinction between nociceptive and neuropathic pain.  This handy resource explains it quite well.  Nociceptive pain is your typical, generally acute pain, a direct result of an injury or inflammation.  Neuropathic pain is chronic and caused by damage higher up in the pathways of the nervous system that process pain.  So, if the nervous system usually has some means of suppressing pain, a dysfunction in that suppression mechanism could result in this chronic, “neuropathic” pain.

With that in mind, the background the authors give is as follows.  In order for an organism to be aware immediately that it’s been injured, sensory neurons called nociceptors, whose cell bodies cluster in groups called dorsal root ganglia (DRGs), need to transfer a stimulus from the injured tissue to the spinal cord.  This is the nociceptive pain pathway, very roughly speaking.  Prior to their research discussed in this paper, the authors had already known from other literature that there are correlations between certain genetic differences among organisms, and their sensitivity to nociceptive and neuropathic pain.  Strangely, they only really give examples of the latter.  The “mechanical allodynia” to which they refer is a fancy term for the presence of a certain degree of pain in response to weaker stimuli than are usually required for that much pain.  The preexisting literature had linked types of neuropathic pain, such as allodynia, to regulation of gene expression by microRNAs (miRNAs).

What are those?  Well, suffice it to say, they’re one of the reasons the mantra that grade- and high-school biology courses loved to shove down your throat (no, not that one), “DNA makes RNA makes protein,” is an over-over-over-over-oversimplification of the matter.  Not all RNA (now there’s a hip new hashtag) makes protein; in fact, miRNAs prevent other RNAs from making protein (with the help of a protein, actually, at least in animal cells).  This is why, when I refer to “genetic differences,” I don’t simply mean variation in protein-coding DNA.  Biologists have adjusted the concept of “gene” to also include non-coding RNAs like miRNA.  As a cool side note, these little buggers are involved in a technique that’s widely used in molecular biology and genetics to study gene functions, by seeing what happens when the expression of these genes is hindered.  Fun fact:  I lost a Thanksgiving break to a lab report about that technique.  And nematodes.  It was the strangest mix of absolutely awful and absolutely fascinating.

Okay, back to the neuroscience.  This paper focuses on a family of miRNAs called the miR-183 cluster (I’ll abbreviate this as 183C since, confusingly enough, the miR-183 cluster includes the miR-183 RNA in particular along with a couple others), and the authors’ goal was to investigate “how and in which cell types the miR-183 cluster contributes to basal and neuropathic pain,” with mice as their model organism.  Since, you know, creating mutant strains of humans for a scientific study is illegal.  They found that in the DRG, 183C was expressed as early as 10.5 days into embryonic development, with increased expression as development continued, but only a very minor level in adults.

How did they figure this out?  With a method called quantitative polymerase chain reaction (qPCR), which works like this:  Take a sample of cells from the region of the organism (in this case, the DRG) and the developmental time point that you want to study.  You treat this sample with an excess of special DNA sequences called probes, labeled at both ends and complementary to each DNA sequence of interest (including a control, which you know is going to be present in the sample cells from the literature).  By “complementary,” I mean that for every A, the sequence substitutes a T; T goes to A; G goes to C; and C goes to G.  Hence the complementary strand perfectly binds to the template strand.  (But wait!  Aren’t we trying to analyze RNA?  Indeed we are, so like a good biologist, prior to this latter step you treat the sample with an enzyme that produces, or “reverse-transcribes,” the complementary DNA from the RNA of interest.  Which is, in this case, 183C.)  By a mechanism that I’ll explain in a moment, each probe can provide a fluorescent signal (with a different color for each sequence of interest) only when its complementary sequence is replicated.  This is great news, because you want to only see the DNA corresponding to 183C, along with the control RNA whose abundance in the DRG you can compare with 183C (this is important – this method is relative).
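
That complementarity rule is mechanical enough to write in a couple of lines, so here it is as a quick sketch (one wrinkle I’m glossing over in the prose: strands bind antiparallel, so in practice you usually want the reverse complement).  After this, back to the problem of seeing only the 183C and control DNA.

PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    return "".join(PAIRS[base] for base in seq)

def reverse_complement(seq):
    # Strands pair antiparallel, so probes/primers match this orientation.
    return complement(seq)[::-1]

print(complement("ATGC"))          # TACG
print(reverse_complement("ATGC"))  # GCAT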

How do you do that?  Well, if there’s one thing DNA does best, it’s multiplying.  But it doesn’t do so willy-nilly.  It needs short fragments of either RNA or DNA (in natural replication, it’s generally RNA that gets replaced with DNA, while in PCR, biologists usually cut to the chase and just use DNA) to bind to each strand and serve as the starting point for the polymerizing enzyme (fittingly called polymerase).  These are called primers.  Now, mad scientist that you are, you can use this to your advantage, because other mad scientists before you have come up with ways to construct whatever primer sequence of DNA your little old heart could possibly desire.  Your little old heart desires sequences that are complementary to the endpoints of the samples you want to study (183C and the control).  And you can order these with whatever grant money you happen to have lying around.

Great.  You’ve got your fresh supply of shiny new primers, and you throw them into your sample along with the polymerase and the building blocks of DNA (nucleotides, the ever-beloved A, T, C, and G).  But you’ve got another problem on your hands, and that’s getting the primers onto the DNA.  In natural replication, the primers can attach to the DNA because the DNA gets unzipped by an enzyme when the time is right.  You could do this enzymatically if you wanted to, in theory, but it’s more practical – and more controllable – to literally just heat up the sample so much that the DNA strands come apart (the “denaturation” step).  Now you lower the temperature enough that the primers can bind to their complementary sites on the sample DNA (the “annealing” step).  Fortunately, if you’re following standard protocol like a good Kuhn-fearing normal scientist, your polymerase was taken from a heat-loving bacterial species (Thermus aquaticus, whence the workhorse “Taq” polymerase), so it works at temperatures just hot enough that you can “turn it on” at will but not so hot that the DNA would come apart again.  When you jack up the temperature to that ideal range, the polymerase takes the nucleotides floating around in this reaction mixture and stitches them onto only the DNA marked with primers (the “elongation” step).  Since the DNA in question is just the DNA complementary to 183C and the control, and your primers were (ideally) fixed to the ends of these sequences, then voilà, you’ve got twice as much of your desired sequence as you started with.  One cycle is complete.  Heat up the sample enough to split the DNA again, and repeat.
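In other words, each complete cycle (denature, anneal, elongate) ideally doubles the target.  A back-of-the-envelope sketch, with made-up numbers of my own:

    def copies_after(initial, cycles, efficiency=1.0):
        """Ideal PCR doubles the target every cycle; real reactions
        fall a bit short, so `efficiency` is the fraction of
        molecules successfully copied per cycle."""
        return initial * (1 + efficiency) ** cycles

    # 100 starting molecules become ~10^11 after 30 ideal cycles.
    print(copies_after(100, 30))  # 107374182400.0

That exponential growth is the whole trick: even a tiny amount of starting material becomes detectable after a few dozen cycles.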

That’s all well and good, but what does this tell you?  Absolutely nothing, until you remember those probes we tossed in earlier.  In each cycle, during the annealing step, some probes will bind the target DNA along with the primers (they won’t be in competition, because the probe sequence is complementary to some middle fragment of the target DNA sequence, rather than the ends).  Now, the neat thing about the probe is that the label on one end “quenches” the fluorescence of the label on the other while the probe is intact (don’t ask me how; that’s beyond the scope of this explanation).  When the polymerase reaches the probe as it does its elongation business, the probe gets broken down (the polymerase chews it up with a built-in exonuclease activity) so that it’s no longer in the way, and – here’s the kicker – the fluorescent label is no longer quenched.  So the amount of fluorescence serves as a proxy for the amount of target DNA replicated in each cycle, and remember, you have different colors corresponding to 183C versus the control.
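Since (roughly) one probe gets un-quenched for every new copy synthesized, the accumulated signal tracks the accumulated copies.  As a toy model, with a signal-per-probe constant I just made up:

    def fluorescence(initial_copies, cycles, signal_per_probe=1e-6):
        """Toy model: every newly synthesized copy cleaves one
        probe, un-quenching one reporter, so total signal is
        proportional to the number of copies made so far."""
        copies_made = initial_copies * 2 ** cycles - initial_copies
        return copies_made * signal_per_probe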

Turns out that the more tech-savvy folk in the biological community have found ways to convert fluorescence into quantifiable data.  So you can set a threshold level of fluorescence that is significantly stronger than the background (all the other DNA that hasn’t been replicated), and monitor the qPCR process to determine how many cycles it takes for each target DNA to reach that threshold (this count is called the threshold cycle, or Ct).  In theory, if the amount of 183C RNA (which, for this experiment’s purposes, has been converted to DNA) expressed in the DRG cells is, for example, half the amount of the control’s expression, then it will take 1 more cycle to see fluorescence for the former than for the latter; if it’s a quarter of the amount of the control, it’ll take 2 more cycles; and so on.
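Here’s that arithmetic as one last Python sketch (again my own illustration – the copy numbers and threshold are invented):

    import math

    def cycles_to_threshold(initial, threshold):
        """Ideal doublings needed for `initial` copies to reach `threshold`."""
        return math.log2(threshold / initial)

    def relative_expression(ct_target, ct_control):
        """The 2^-(delta Ct) rule: each extra cycle to threshold
        means half as much starting material."""
        return 2.0 ** (ct_control - ct_target)

    ct_control = cycles_to_threshold(1000, 1e9)  # ~19.9 cycles
    ct_target = cycles_to_threshold(500, 1e9)    # ~20.9 cycles, one more
    print(relative_expression(ct_target, ct_control))  # ~0.5: half the control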

SO COOL.  Anyway.  Now that that’s cleared up, in the next post we can get back to neuroscience.  Again.  (You now see what I meant earlier about how much background it takes to interpret just one paper.  Thanks for sticking around, if you made it this far.)

Utils for Everyone

(follow-up to Cross-Purposes)

Before I continue this whole adventure, I should clarify that whenever I speak of “happiness,” you should read this as “happiness and the relief/absence of suffering” unless otherwise stated.  I’ve heard accounts arguing that suffering itself is not antithetical to happiness – even suffering that doesn’t serve to increase happiness at a later time, e.g. getting your wisdom teeth extracted – but I honestly don’t find these views convincing.  They seem to be ad hoc attempts to glamorize suffering.  Even in the case of masochism, for example, the person who enjoys pain isn’t enjoying suffering in any non-colloquial sense.  So I’ll proceed on the premise that the value of conscious experience exists on a spectrum with happiness on one end and suffering on the other, unless someone can persuade me otherwise.

Okay, so: beyond happiness?

Yes, if we’re only considering my happiness.  Undoubtedly, much of the reason I prefer connections with other people over their absence is that they make me happy, but that isn’t the sole reason.  I’m not happy literally every second I share with others, obviously.  In a less trivial sense, sometimes being a good (read: preferable to others) friend, brother, son, etc., requires that I sacrifice (not the best term, since it makes me sound like a whiny martyr, which is not my intent, but I can’t think of a better word at the moment) my own happiness.  Even long-term happiness, perhaps – although this is impossible to know for sure, since, you know, I can’t predict the future.  It doesn’t make me “happy” (relative to alternative choices I could make with my time) to listen to friends’ struggles when they need consolation, to buy my family gifts for holidays, to drive my brother to school or sports practices, or to do work that other people could do instead of me.  Similar things can be said of the other elements I listed in the last post: productive engagements, aesthetics, physical and emotional health, and intellectual stimulation.  If anyone wants clarification of these cases, I’ll happily (*snicker*) oblige when I get the chance.

And for the most part, that’s okay.  Because as far as I can ascertain, these losses in my own happiness make these other people happier, probably to a greater degree than they make me unhappy.  This doesn’t mean that every choice I make in anticipation of increasing my or other people’s happiness actually succeeds at doing so.  There are probably many habits of mine, or even significant life choices, that in fact fail to improve anyone’s happiness, and they may continue to seem intuitively desirable to me only insofar as I don’t know that.

This last clause is important, because I can foresee objections to my happiness-based philosophy on the grounds that if people really cared about happiness so much, they wouldn’t do [insert activity that is currently considered important to a fulfilling human life here].  It’s possible that these objectors are right and there truly are elements of a “better” life independent of happiness.  I haven’t heard every defense of such a position, so I won’t be so arrogant as to suppose they’re all false.

But when I reflect on instances in my past when something I thought would make me or someone else happy failed to make life feel “better,” the problem was that it didn’t make anyone happy in any lasting sense.  The lesson of the hedonic treadmill (which I’ve heard used as an objection to the value of happiness) isn’t that happiness is insufficient, but rather that the methods by which we typically pursue happiness often fail to provide it.  The solution is to get off the treadmill, not to stop running.

So, if there’s some cherished behavior or social structure that we may feel is threatened by the idea that happiness is at the root of all value in life, it might behoove us to ask ourselves whether we’re simply mistaken in thinking that such a behavior or structure is worth cherishing.  My contention is not that happiness is always what we do in fact pursue, but that it’s what we should pursue if we want to make our lives better, as experience demonstrates.  No, this doesn’t mean we should throw out anything that isn’t fun or pleasant to think about.  Dismantling anything in human society is a serious decision that shouldn’t be taken lightly – and mind you, the project of happiness isn’t all destructive.  For every social institution that thwarts happiness and deserves to fall – with the consent of the society itself, obviously, which is why Light Yagami of Death Note infamy is not a hero (among other reasons; incidentally, I highly recommend that series) – there is likely some other source of happiness we have yet to discover.

All of which is my long-winded way of saying that intuition is cheap.  Homophobes undoubtedly find relationships outside their heteronormative mold to be intuitively unpleasant, but we’d be a sorry species indeed if we considered such bigoted intuitions worth respecting.  It may seem intuitive that donating money to a poor nation should help alleviate poverty, but this approach evidently hasn’t worked so well.  Just because something feels wrong or right, it does not follow that it actually is wrong or right (i.e. deleterious or helpful, respectively, to the betterment of people’s lives).

Of course, there’s something to be said for the happiness that comes with acting according to what feels right.  Even if you know which course of action is best for effecting actual positive changes in people’s lives, your conscience may feel uneasy about it.  This is an obstacle that is, again, not unique to the utilitarian.  Kant, anti-utilitarian extraordinaire, would have you tell the truth to Nazis about who’s hiding in your attic, your conscience notwithstanding, because, as my former humanities professor put it, “Kant doesn’t care what you want.”  To put it in a pithy slogan, I’d argue that the utilitarian perspective does care what you want – it just doesn’t care what you think you want.

There’s still a lot to clarify about my position on the basis of morality.  Lest anyone accuse me of endorsing Bentham’s Panopticon or Singer’s repulsive insinuation that consent doesn’t matter for mentally disabled people: I endorse neither.  I am under no obligation to worship or defend any famous utilitarian philosophers, nor should my conception of utilitarianism be conflated with any stereotypical portrayals of the term.  (While the legacy of the Panopticon and Bentham’s economic views is deplorable, it’s still remarkable that he championed women’s, slaves’, children’s, and gay people’s rights in the 18th century, albeit from a position of privilege.)  In future posts, I’ll address some of the nuances of my position, especially in response to classic objections like the “utility monster,” the “experience machine,” and the organ-harvesting doctor.