So when we left off, the authors had used ISH to confirm that 183C was not expressed in the neurons where Cre cut out the floxed 183C gene (i.e. where “recombination” occurred), and that the Tomato marker was expressed in these neurons.
You might be wondering, “Golly, based on those links from last time, it doesn’t seem like these transcription factors the authors used to control Cre expression were very specific. Wouldn’t they show up in many cell types?” And you’d be right. According to the supplementary figures, the authors mitigated that problem by also doing ISH for other RNAs known to mark the particular cell types in question. For instance, even though TH-Cre in theory drives recombination in many catecholaminergic cells, as discussed before, the authors checked for TrkA signal in the same experiment, and TrkA is a giveaway for nociceptive neurons. This is a common technique in molecular biology: testing for colocalization of different RNAs (or other cell components).
Why didn’t they simply cut to the chase and place Cre expression under the control of a TrkA promoter?
Because TrkA isn’t a transcription factor. Bummer.
Still, now we can be reasonably confident that when the authors say that abolishing expression of the 183C microRNAs in certain cell types has such-and-such effect on mice, they’re inferring an actual causal link. We’ll keep this concern in mind when examining their behavior experiments, because if a scientist isn’t careful, they can confuse correlation with causation. When possible, it’s always a good idea to ask, “By messing with this variable in this experiment, could the authors have unwittingly messed with some other part of the system that is really responsible for the effect they’re observing?”
Side note: The authors say that Wnt1-Cre affects “all sensory neurons,” but in context this appears to be an abbreviation of “all DRG sensory neurons,” as they claim earlier in the same paragraph, “Wnt1-Cre recombined all neurons of the DRG.” This seems like a reasonable interpretation of their language on my part, but if I’m mistaken, this would have significant implications for the meaning of their data. Which goes to show just how imperative it is that scientists use clear and unambiguous language, especially in their primary literature, but I digress.
The supplement file tells us exactly what the authors did to test different kinds of sensitivity in their mice – to a “light touch, cold, heat, or pinprick” stimulus, every mouse responded about the same regardless of whether it was a control (genetically normal, or “wild type”) or had recombination in certain neurons. The “mechanical” stimulus, which refers to poking the mouse’s paw with a device called a von Frey hair, showed a significant increase in sensitivity among the Wnt1-Cre and TH-Cre mice, which correspond to all DRG neurons and DRG nociceptors, respectively. These tests are kind of mundane, but the important point, in the case of the mechanical stimulus, is that they quantitatively determined how much force was required for each mouse to withdraw the paw the force was applied to, for “at least three out of five consecutive stimuli.” That was a threshold-based test, but they also compared the intensity of the mice’s responses at a constant level of force. The quantification scheme for the latter test is pretty entertaining (not to say I could come up with something better, lest the authors accuse me of libel, but it’s an unorthodox scale in my experience of reading papers): “0-no response; 0.5-gently movement of hind paw, awareness of stimuli; 1-clear withdrawal; 2-more robust withdrawal, or repetitive withdrawal combined with shaking or licking.”
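To make that “three out of five” rule concrete, here’s a sketch of how such a threshold test might be scored. The function, data format, and forces are all hypothetical illustrations of mine, not anything from the paper:

```python
def withdrawal_threshold(responses_by_force):
    """Return the smallest force at which the mouse withdrew its paw on at
    least three of the five consecutive stimuli, or None if it never did.

    responses_by_force: list of (force_in_grams, [five booleans]) pairs,
    ordered from weakest to strongest filament. Hypothetical format.
    """
    for force, responses in responses_by_force:
        if sum(responses) >= 3:
            return force
    return None  # never reached threshold at the forces tested

# Hypothetical data: one mouse tested with three von Frey filaments
trials = [
    (0.4, [False, False, True, False, False]),  # 1/5: below threshold
    (1.0, [True, False, True, True, False]),    # 3/5: threshold reached
    (2.0, [True, True, True, True, True]),
]
print(withdrawal_threshold(trials))  # 1.0
```

A more sensitive mouse clears the 3-of-5 bar at a weaker filament, which is exactly the direction the Wnt1-Cre and TH-Cre animals moved.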
This is the point in analysis of scientific data where we have to have (an abridged version of) The Talk.
The one about p-values.
This article and this site explain (better than I ever could) the statistics behind p-values and why they are terrifying if you think about them too much. The most I’ll say on this matter is that while “p < 0.05” has become synonymous with “statistically significant enough,” a result with a p-value just under 0.05 can still have a disconcertingly high chance of being a false positive. I won’t comment on every p-value reported in this paper, but if you check them in the figures for yourself, you can keep in mind the relative confidence you should have in each conclusion the authors draw. There’s nothing magical written into the universe that makes any particular p-value objectively deserving of the designation “significant.” The best we can hope for in science, as far as I understand it, is a probabilistic sliding scale of confidence in the predictions we might make on the basis of certain experiments. The proof of the pudding is in the tasting, of course: if we draw mistaken conclusions from experiments, those mistakes will eventually show up in the ways we apply the conclusions. For the purposes of public policy, however, getting a handle on how much confidence in a prediction we’re warranted in having is a practical necessity.
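To see why the 0.05 cutoff is less reassuring than it sounds, here’s a toy simulation of my own (nothing to do with the paper’s data): run many experiments in which there is no real effect at all, and count how often chance alone produces a “significant” difference between two groups.

```python
import random
import statistics

random.seed(0)

def null_experiment(n=50):
    """Two groups drawn from the SAME distribution: any apparent
    difference between them is pure chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample z-style statistic; with n=50 per group the normal
    # approximation is reasonable
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

trials = 2000
false_positives = sum(abs(null_experiment()) > 1.96 for _ in range(trials))
print(false_positives / trials)  # hovers around 0.05
```

Roughly 1 in 20 of these effect-free experiments clears the bar anyway. If true effects are rare in the experiments a field runs, a nontrivial fraction of its “significant” findings can be these flukes.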
Anyway, the authors’ next experiment is more of a classic in neurobiology – electrophysiological analysis. If you’re at least somewhat familiar with neuroscience, you know what an action potential is. But how exactly does it work? It all comes down to the concentrations of sodium and potassium ions in a neuron – and how easily each can cross the membrane.
Because one of my best friends in high school once made a memorable analogy to a party in order to explain the electron transport chain in AP Bio, I’m going to try the same for the concept of resting membrane potential, which you need to understand before the action potential will make sense.
Suppose you’re at a ragin’ shindig in an apartment that for some reason has only two rooms. Other than that architectural oddity, it’s a good time. The music is full of sick beats, there’s a nice balance of familiar friends and new people to meet, and no one is throwing up (yet). It’s probably more fun than reading this.
But there are a lot of people. A lot. So many people that, to begin with, the main room that people initially settle in because it’s the room with the entrance (let’s call this room A) becomes so packed that people are literally jostling among each other, pushed this way and that.
Gross. This is not a sustainable situation. Of course, people aren’t moving only due to collisions with each other. They meander around by their own internal force, to an extent. People sometimes walk from one room to the other indiscriminately. A combination of this approximation to a random walk and the collisions with a sea of people leads you, our noble hero, through the doorway from this room to the other (room B). Several others are driven by the same forces into room B, and while people occasionally walk back to room A, you notice an aggregate trend of flow from room A to room B.
Congratulations! You and your fellow partygoers have simulated diffusion!
Now, barring any social forces or food/drink incentives, you might eventually expect these inward and outward flows of people to leave roughly equal numbers in each room. It’s important to note that this isn’t so much one “force” as it is a sum of forces, along with sheer probability. Even if no one at the party were literally bumping into anyone else because room A was so packed, it’s simply more likely, all else equal, that a person will move from the more packed room to the more spacious room than the other way around. This “all else equal” business might be difficult to swallow, since we’re talking about human beings after all, but we’ll get to that, and the good news about sodium and potassium ions is that they aren’t human beings.
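If you want to watch diffusion fall out of nothing but coin flips, here’s a toy simulation (the headcount and wander-probability are numbers I made up for illustration):

```python
import random

random.seed(1)

# 200 partygoers all start in packed room A; each "tick," every person
# independently has a small chance of wandering through the doorway.
room_a, room_b = 200, 0
p_move = 0.1  # chance that any one person switches rooms in a given tick

for tick in range(200):
    a_to_b = sum(random.random() < p_move for _ in range(room_a))
    b_to_a = sum(random.random() < p_move for _ in range(room_b))
    room_a += b_to_a - a_to_b
    room_b += a_to_b - b_to_a

# No one "wants" anything, yet the rooms end up roughly even:
# a fuller room simply has more potential leavers at any moment.
print(room_a, room_b)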
Now, as it happens, room A has one little advantage. Most of the fun elements of the party (people and sustenance) can be transported from one room to the other without a problem. But the sick sound system is stuck in room A. There are heavy amps and whatnot that no one wants to haul through the doorway, and even if they did, there’s just one music setup, so regardless of whether it’s in room A or room B, the room with the music setup is going to have a slight pull. Assuming people want to be able to hear the host’s fire Soundcloud tracks well and dance.
So the motion of the majority of the partygoers is now governed by two major forces. The first is the human diffusion discussed above, and the second is attraction to the music. Before room B has a chance to match the population of room A, some people in room A, who initially share the sentiments of the original participants in the exodus to room B, find themselves at the threshold between the rooms realizing, “You know what, it’s not that packed in there. I don’t want to be too far from the sweet, sweet stylings of Sufjan Stevens [that’s what the kids these days play at parties, right?]. I’ll stay here.”
This party thus achieves an equilibrium with an imbalance of humans in room A versus room B, because the pull of the music outweighs some of the repulsion to other sweaty people. Notice that this effect doesn’t depend on the individual preferences of any given partygoer. It’s not as if the people in room B necessarily like space more and music less than the people who’ve stayed in room A. They just found the overpopulation of room A intolerable even when weighed against their love of music. And the equilibrium is dynamic: if a handful of people from room A trickled into room B anyway, about as many people in room B would wander back into room A, since there is now enough space for them. Even if you assume all people at the party are equally drawn to music and equally repulsed by invasions of personal space, one room can end up more populated than the other simply because it holds the immobile music source.
If this makes sense, you’re close to understanding the resting membrane potential of a neuron. The people are potassium and sodium ions, which have a positive charge. The potassium ions are mechanically able to pass in and out of the cell fairly easily, through channels embedded in the neuron’s membrane; sodium also has channels for this passage, but they don’t let sodium through as easily as potassium can pass through its channels. You might think of potassium as the more sober people, who have the coordination to move from one room to another without much difficulty, while sodium represents the partygoers who have had so much to drink that turning a doorknob is a mental challenge, although they could still do it with enough effort. The music setup is a collection of negatively charged ions, which cannot permeate the membrane.
These ions diffuse according to principles similar to the ones I sketched for “human diffusion,” although of course ions are not living beings, so they jostle according only to fundamental physical forces. This is one of the dangers of human-based analogies for non-human scientific phenomena, and I resent science educators who framed diffusion in terms of particles “wanting” to move from higher to lower concentration. But as I mentioned above, even if you assumed the motion of humans were as simple as that of particles, it’s a matter of probability and basic mechanics that diffusion will do its thing. The attraction of the mobile positive ions to the negative ions that are confined to the inside of the neuron (they still move, but not out of the cell without some extra help by proteins) is self-explanatory, if you know literally anything about electrostatics.
Okay, so what exactly is the resting membrane potential, then? Well, once the equilibrium (the “resting” part of the term) between concentration-driven diffusion and electrical attraction is reached, we’re left with a cell that has some positive ions floating around near the outer surface of the membrane, and some on the inside, held there by attraction to the negative ions. The “potential” is a measure of the tension (using this term loosely, not in the literal mechanical sense) set up by this separation of charge across the membrane – the balance point between the diffusive tendency of the positive ions to exit the cell and the electrical pull drawing them back in. The exact way this balances out when you account for the different degrees to which sodium and potassium can cross the membrane is a bit more complicated, but we’ll consider that detail only when necessary. The important point is that the neuron at rest is predominantly permeable to potassium, and there are some negative ions inside to which it is completely impermeable.
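That balance point actually has a clean formula – the Nernst equation for a single ion, and the Goldman equation when you weight several ions by how permeable the membrane is to each. Here’s a sketch using typical textbook concentrations for a mammalian neuron (my illustrative numbers, not the paper’s; chloride is ignored for simplicity):

```python
import math

# Typical textbook ion concentrations (mM) for a mammalian neuron --
# illustrative assumptions, not measurements from the paper.
K_out, K_in = 5.0, 140.0
Na_out, Na_in = 145.0, 12.0

RT_over_F = 26.7  # mV, thermal voltage at body temperature (37 C)

# Nernst potential for K+: the voltage at which diffusion of K+ out of the
# cell exactly balances the electrical pull drawing it back in
E_K = RT_over_F * math.log(K_out / K_in)

# Goldman equation: weight each ion by relative permeability.
# At rest, K+ dominates, so the answer sits near E_K but is dragged
# up a bit by the small sodium leak.
P_K, P_Na = 1.0, 0.05
V_rest = RT_over_F * math.log((P_K * K_out + P_Na * Na_out) /
                              (P_K * K_in + P_Na * Na_in))

print(f"E_K ~ {E_K:.0f} mV, resting potential ~ {V_rest:.0f} mV")
```

With these numbers you land around −89 mV for potassium alone and around −65 mV overall – right in the ballpark of the resting potentials electrophysiologists actually record, and negative inside relative to outside, just as the party analogy suggests.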
When this tension of positive/negative separation at equilibrium, the resting membrane potential, is disrupted strongly enough – either by a sensory stimulus or by a signal from another neuron – the result is an action potential. We’ll look at how exactly that works next time!