And now for something completely different.
I’d like to take a bit of a break from the philosophy-babble to try something that will hopefully be a handy resource for readers interested in science, and help my own efforts to become an effective scientific writer along the way.
The primary literature of the natural sciences is…opaque, to say the least. This is actually a point that Kuhn addresses briefly in the book I’d recommended earlier, The Structure of Scientific Revolutions. As comforting as it is to know that professional scientists are working on problems of such depth that their terrifying jargon is practically necessary, we would also prefer to understand what in god’s name they’re talking about.
So, in the posts of this Parsing Papers series, I’m going to read articles that strike my fancy from scientific journals, and make my most honest effort to explain/translate them in a manner that’s understandable to a lay adult reader without sacrificing accuracy (as some well-known science popularizing websites do, coughiflsciencecough). I’ll also include some of my own reflections on the papers’ content. The obvious disclaimer here is that since I’m an undergrad, I’m hardly an expert myself. I can’t promise these summaries will be perfect, although I’ve taken several biology courses whose express purpose (besides teaching specific material) was to turn me into a paper-reading machine. Veterans of my Snapchat feed are well aware of this. At any rate, if it seems like I’m belaboring a point at a greater length than the actual paper, my rationale for this is that I don’t think conciseness is a virtue if it comes at the expense of clarity.
Another reason these posts will be long (each paper will require multiple posts) is that many papers can’t be deeply understood without a lot of background knowledge about the sorts of experimental methods that the authors take for granted. If you want presentations of scientific research but don’t care about how the researchers actually gather and interpret their data, well, this isn’t the series for you. It might seem daunting, but I assure you it’s so worth it, to have the fog cleared so you can see for yourself what the logic behind these studies is. Apologies to any bio major friends who read this stuff and feel like it’s old hat – you can skip the review as necessary, or read it to tell me if I screwed up somewhere (my preferred option).
This first paper is at least loosely related to my first two posts, since any good lover of happiness and hater of suffering could benefit from an understanding of how exactly humans’ (and other animals’) bodies generate the experience of suffering – which often comes in the form of pain. So understand we shall!
Here’s the paper for this post and its sequels, Pain Sensitivity – Peng et al 2017, and the supplementary info, Pain Sensitivity Supplement. Normally it would be behind a paywall (which I can bypass because I have legal access through my university’s proxy), but the journal in which this paper was published (Science) evidently permits sharing of papers for educational nonprofit purposes, which describes this post as far as I’m aware.
The first few sentences of this paper are actually fairly down-to-earth, although the authors gloss over some important terms, especially the distinction between nociceptive and neuropathic pain. This handy resource explains it quite well. Nociceptive pain is your typical, generally acute pain, a direct result of an injury or inflammation. Neuropathic pain is chronic, caused by damage to the higher-level pathways of the nervous system that process pain. So, if the nervous system usually has some means of suppressing pain, a dysfunction in that suppression mechanism could result in this chronic, “neuropathic” pain.
With that in mind, the background the authors give is as follows. In order for an organism to be aware immediately that it’s been injured, sensory neurons called nociceptors, whose cell bodies cluster in groups called dorsal root ganglia (DRGs), need to transfer a stimulus from the injured tissue to the spinal cord. This is the nociceptive pain pathway, very roughly speaking. Prior to the research discussed in this paper, the authors had already known from other literature that there are correlations between certain genetic differences among organisms and their sensitivity to nociceptive and neuropathic pain. Strangely, they only really give examples of the latter. The “mechanical allodynia” to which they refer is a fancy term for pain in response to stimuli that are normally too weak to cause that much pain. The preexisting literature had linked types of neuropathic pain, such as allodynia, to regulation of gene expression by microRNAs (miRNAs).
What are those? Well, suffice it to say, they’re one of the reasons the mantra that grade- and high-school biology courses loved to shove down your throat (no, not that one), “DNA makes RNA makes protein,” is an over-over-over-over-oversimplification of the matter. Not all RNA (now there’s a hip new hashtag) makes protein; in fact, miRNAs prevent other RNA from making protein (with the help of a protein, actually, at least in animal cells). This is why, when I refer to “genetic differences,” this doesn’t simply refer to variation in DNA. Biologists have adjusted the concept of “gene” to also include non-coding RNAs like miRNA. As a cool side note, these little buggers are involved in RNA interference, a technique that’s widely used in molecular biology and genetics to study gene functions by seeing what happens when the expression of these genes is hindered. Fun fact: I lost a Thanksgiving break to a lab report about that technique. And nematodes. It was the strangest mix of absolutely awful and absolutely fascinating.
Okay, back to the neuroscience. This paper focuses on a family of miRNAs called the miR-183 cluster (I’ll abbreviate this as 183C since, confusingly enough, the miR-183 cluster includes the miR-183 RNA in particular along with a couple others), and the authors’ goal was to investigate “how and in which cell types the miR-183 cluster contributes to basal and neuropathic pain,” with mice as their model organism. Since, you know, creating mutant strains of humans for a scientific study is illegal. They found that in the DRG, 183C was expressed as early as 10.5 days into embryonic development, with increased expression as development continued, but only at a very low level in adults.
How did they figure this out? With a method called quantitative polymerase chain reaction (qPCR), which works like this: Take a sample of cells from the region of the organism (in this case, the DRG), and at the developmental stage, that you want to study. You treat this sample with an excess of special DNA sequences called probes, labeled at both ends and complementary to each DNA sequence of interest (including a control, which you know is going to be present in the sample cells from the literature). By “complementary,” I mean that for every A, this sequence substitutes a T; T goes to A; G goes to C; and C goes to G. Hence the complementary strand perfectly binds to the template strand. (But wait! Aren’t we trying to analyze RNA? Indeed we are, so like a good biologist, prior to this latter step you treat the sample with an enzyme that produces, or “reverse-transcribes,” the complementary DNA from the RNA of interest. Which is, in this case, 183C.) By a mechanism that I’ll explain in a moment, each probe can provide a fluorescent signal (with a different color for each sequence of interest) only when its complementary sequence is replicated. This is great news, because you want to only see the DNA corresponding to 183C, along with the control RNA whose abundance in the DRG you can compare with 183C (this is important – this method is relative).
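If you like thinking in code, the base-pairing rule above is simple enough to sketch in a few lines of Python. (This is just an illustration of the A↔T, G↔C substitution; real probe and primer design also worries about things like melting temperature and GC content, which I’m ignoring here.)

```python
# The base-pairing rule: A pairs with T, G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Substitute each base with its partner, then reverse the result,
    because paired DNA strands run antiparallel to each other."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

print(reverse_complement("ATGC"))  # -> GCAT
```

The reversal is the detail the shorthand “A goes to T, T goes to A…” leaves out: the two strands of the double helix run in opposite directions, so the strand that binds your template is read backwards relative to it.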
How do you do that? Well, if there’s one thing DNA does best, it’s multiplying. But it doesn’t do so willy-nilly. It needs short fragments of either RNA or DNA (in natural replication, it’s generally RNA that gets replaced with DNA, while in PCR, biologists usually cut to the chase and just use DNA) to bind to each strand and serve as the starting point for the polymerizing enzyme (fittingly called polymerase). These are called primers. Now, mad scientist that you are, you can use this to your advantage, because other mad scientists before you have come up with ways to construct whatever primer sequence of DNA your little old heart could possibly desire. Your little old heart desires sequences that are complementary to the endpoints of the samples you want to study (183C and the control). And you can order these with whatever grant money you happen to have lying around.
Great. You’ve got your fresh supply of shiny new primers, and you throw them into your sample along with the polymerase and the building blocks of DNA (nucleotides, the ever-beloved A, T, C, and G). But you’ve got another problem on your hands, and that’s getting the primers onto the DNA. In natural replication, the primers can attach to the DNA because the DNA gets unzipped by an enzyme when the time is right. You could do this artificially if you wanted to, in theory, but it’s more practical – and more controllable – to literally just heat up the sample so much that the DNA strands come apart (the “denaturation” step). Now you lower the temperature enough that the primers can bind to their complementary sites on the sample DNA (the “annealing” step). Fortunately, if you’re following standard protocol like a good Kuhn-fearing normal scientist, your polymerase was taken from a bacterial species that loves the heat, so the polymerase works in temperatures just hot enough that you can “turn it on” at will but not so hot that the DNA would come apart again. When you jack up the temperature to that ideal range, the polymerase takes the nucleotides floating around in this reaction mixture and stitches them onto only the DNA marked with primers (the “elongation” step). Since the DNA in question is just the DNA complementary to 183C and the control, and your primers were (ideally) fixed to the ends of these sequences, then voilà, you’ve got twice as much of your desired samples as you started with. One cycle is complete. Heat up the sample enough to split the DNA again, and repeat.
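The arithmetic of that cycle – denature, anneal, elongate, repeat – can be sketched as a toy model. (This assumes perfect doubling every cycle; a real reaction has an efficiency somewhat below 100% and eventually plateaus as reagents run out, but the exponential picture is the important part.)

```python
def amplify(initial_copies: float, cycles: int, efficiency: float = 1.0) -> float:
    """Copy number after a given number of PCR cycles.

    Each denature->anneal->elongate cycle multiplies the copy number by
    (1 + efficiency); efficiency = 1.0 is the idealized perfect doubling.
    """
    return initial_copies * (1 + efficiency) ** cycles

# Even 10 starting copies become billions after 30 perfect cycles:
print(amplify(10, 30))  # -> 10737418240.0
```

That explosive growth is exactly why PCR works as a detection tool: a vanishingly small amount of starting material becomes enough to measure.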
That’s all well and good, but what does this tell you? Absolutely nothing, until you remember those probes we tossed in earlier. In each cycle, during the annealing step, some probes will bind the target DNA along with the primers (they won’t be in competition because the probe sequence is complementary to some middle fragment of the target DNA sequence, rather than the ends). Now, the neat thing about the probe is that the label on one end “quenches” the fluorescence of the label on the other while the probe is intact (don’t ask me how; that’s beyond the scope of this explanation). When the polymerase reaches the probe as it does its elongation business, the probe gets broken down so that it’s no longer in the way, and – here’s the kicker – the fluorescent label is no longer quenched. So the amount of fluorescence serves as a proxy for amount of target DNA replicated in each cycle, and remember, you have different colors corresponding to 183C versus the control.
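Putting the probe chemistry together with the doubling, you can sketch the “cycle threshold” idea in code: count how many cycles it takes for the copy number (standing in for the unquenched fluorescence, which tracks it) to cross a detection threshold. (A hypothetical illustration with made-up numbers, not the paper’s actual protocol.)

```python
def cycles_to_threshold(initial_copies: float, threshold: float) -> int:
    """Number of perfect doubling cycles until the copy number
    (a proxy for accumulated fluorescence) crosses the threshold."""
    copies, cycles = initial_copies, 0
    while copies < threshold:
        copies *= 2
        cycles += 1
    return cycles

print(cycles_to_threshold(100, 1e6))  # -> 14
print(cycles_to_threshold(50, 1e6))   # half the starting material -> 15
```

Notice the punchline: halving the starting material costs exactly one extra cycle, which is the whole basis of the comparison in the next paragraph.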
Turns out that the more tech-savvy folk in the biological community have found ways to convert fluorescence into quantifiable data. So you can set a threshold level of fluorescence that is significantly stronger than the background (all the other DNA that hasn’t been replicated), and monitor the qPCR process to determine how many cycles it takes for each target DNA to reach that threshold. In theory, if the amount of 183C RNA (which, for this experiment’s purposes, has been converted to DNA) expressed in the DRG cells is, for example, half the amount of the control’s expression, then it will take 1 more cycle to see fluorescence for the former than for the latter; if it’s a quarter of the amount of the control, it’ll take 2 more cycles; and so on.
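The cycle-counting logic above is usually run in reverse: given the threshold cycles (often written Ct) for the target and the control, you back out the relative abundance. Here’s that arithmetic, again assuming idealized perfect doubling (the Ct values below are hypothetical, just to match the half-and-quarter examples in the text):

```python
def relative_expression(ct_target: float, ct_control: float) -> float:
    """Target abundance relative to the control.

    Every extra cycle the target needs to hit the fluorescence threshold
    means it started with half as much material, hence the power of 2.
    """
    delta_ct = ct_target - ct_control
    return 2 ** (-delta_ct)

print(relative_expression(25.0, 24.0))  # 1 extra cycle -> 0.5 (half the control)
print(relative_expression(26.0, 24.0))  # 2 extra cycles -> 0.25 (a quarter)
```

This is why qPCR is a relative method: the number you get out is meaningless without the control to divide by.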
SO COOL. Anyway. Now that that’s cleared up, in the next post we can get back to neuroscience. Again. (You now see what I meant earlier about how much background it takes to interpret just one paper. Thanks for sticking around, if you made it this far.)