“One doesn’t expect to replicate a result which is published in the newspaper rather than a science journal.” (Karen Street, in a comment at Alexandria)
This is how we travel from Carey’s village. First I ride out of the village on the back of her bicycle. Then we crowd into a matatu. These are filled first by fitting in everyone who can possibly squeeze into the seats along the side, and then by fitting everyone possible into the aisle (bent over, because they’re not actually tall enough to stand in). Sometimes someone will hang onto the back, outside. We make our way across the country, by matatu, and by bus, and by train. Once, we are stopped, and taken off the bus, then loaded back on again, once the driver has paid the appropriate bribe for overloading the bus.
Inland changes to coast, the Luhya language to Swahili. Women’s clothes become more colorful, and attitudes toward cameras more jaded. (In Carey’s village, people run to stand in the picture; on the coast, people want money before they’ll let you photograph them.)
The year is 1986, and Carey and I are on our way to see her college roommate, Joanna Mountain, who, like Carey, has chosen to join the Peace Corps after college, and who, like Carey, has been stationed as a teacher in Kenya.
Decades later, when, diagnosed with cancer, I decide it’s time to fulfill a long-standing desire to get consumer DNA testing, 23andMe is the obvious choice, partly because it’s the one company that combines health-related testing with genealogical information, but also in large part because it’s the company that has, as its research director, Joanna Mountain.
If there’s one name I trust in genetics, it’s Joanna Mountain’s, and that for the simplest of reasons: Joanna Mountain is the only person in genetics whose career I’ve actually followed for the past thirty years. I celebrated my twenty-first birthday with my sister and her roommate Joanna in the trailer park section of student housing at Stanford, where Carey, Joanna, and I all got our undergraduate degrees. I still have photos of Joanna from her stint in the Peace Corps after graduation. I’ve seen her once in a very blue moon since then, when visiting Carey, but more often heard from Carey about what Joanna was doing, through her return to Stanford for a Ph.D. in genetics, her postdoctoral research at Berkeley, and beyond. And so I have a good enough idea about Joanna Mountain to trust her intelligence, her character, and her grasp of science. If there’s any single person I’d be willing to be corrected by, on matters of nature, nurture, and genetics, it would be Joanna.
Most of us, in fields in which we aren’t ourselves expert, make our judgments at least partly in this way. We pick people, from among those who know more than we do, whom we can trust, and we pick them, in part, based on whom we know, on old school ties, family relationships, or friendships. It was my physicist brother who convinced me not to adopt my friends’ opposition to nuclear power, because the environmental risks of coal were actually much greater. I consult my mother (who taught at a medical school) about medical matters, so I know just what to ask my doctors and when I might want a second opinion, and she, in turn, uses me for computer tech support. Or we pick, from among the available experts, people who are suggested by our friends, Gary Kleck on guns perhaps if you’re suspicious of gun control, or Paul Krugman on economics if you prefer your criticism of Obama to come from the left. Or you go by which institutions you trust (and so, for instance, Cavalli-Sforza gains points with me, if I’m looking for an expert in genetics, because of his affiliation with Stanford).
But this method of figuring out whose knowledge and expertise to trust has its obvious limitations. One is that there’s a limit to how much I can verify in this way. I don’t have Joanna on my speed dial (she’s my sister’s friend, not mine directly), and, even if I did, she can’t reasonably be expected to answer all my questions. More important, you don’t know Joanna Mountain, and have no reason to accept anything she says based on any testimonial from me. This problem gets worse when we get to experts (however well credentialed) whom we have chosen based on our political beliefs. Ask my fellow bloggers at Alexandria whether they agree on the merits of Paul Krugman.
Fortunately, we have a process for discerning truth in science that gets us beyond such personal testimonials, the process known as peer review. You don’t have to have met Joanna Mountain back when she was teaching in a village near Mombasa to decide whether to believe what she has to say about mitochondrial DNA, or population divergence times, or global sequence diversity of BRCA2. Instead, we have a process: her articles are submitted to journals like Human Molecular Genetics and Genetical Research and reviewed by her professional peers before publication, and her findings are either well replicated by other researchers or not.
But even with peer review, questions still remain about how you decide which scientific research you believe and trust.
The year is 1989, and the papers are full of a new discovery: cold fusion. Cheap and abundant energy is at hand. One of the world’s leading electrochemists, we’re told, has produced this energy in a tabletop experiment. I watch the buzz on Usenet, but I already know that cold fusion is probably doomed. My sister Carey, now back from the Peace Corps and studying for her Ph.D. at Caltech, has passed the word to me that her Caltech roommate, involved in the effort to replicate the finding, is getting negative results. Soon, what I’ve heard privately will be public knowledge.
If there’s anything I learn from the story of Fleischmann and Pons, it’s the difference between research and peer reviewed research. Peer review started when they made their announcement. Or, as Karen Street put it, “One doesn’t expect to replicate a result which is published in the newspaper rather than a science journal.” The first, and simplest, lesson is that “someone did a study” isn’t the same thing as “a study passed peer review.” Sometimes I’ve seen studies referenced in a blog (or even a newspaper or magazine) that turn out not to have been published in any peer reviewed journal. The fact that someone did a survey or study means little if you can’t confirm that the study used a reasonable methodology.
Suppose, though, that we have, not just “someone did a study,” but a really, genuinely peer reviewed result? What more do we need to consider?
- Some journals are more reputable than others. For instance, when I was at Stanford studying experimental psychology, the really prestigious journal for social psychologists to be published in was the Journal of Personality and Social Psychology. There was at least one other journal, with a name that would have sounded quite similar to someone not in the know, that was much less prestigious.
- But even reputable journals can publish results that you may want to doubt. For example, Daryl Bem, who is very highly regarded for his work in self-perception theory, published an article on precognition in the Journal of Personality and Social Psychology. Should you believe this result? Not till it’s very well replicated. Extraordinary claims require extraordinary proof, and precognition counts as an extraordinary claim. Of course, on matters of nature and nurture, part of why we disagree in how we’re guided by research is that we disagree on what is an extraordinary claim in the first place. The Journal of Personality and Social Psychology just recently published a study by Bobbi J. Carothers and Harry T. Reis, which found that, on a variety of measures, men and women are both from Earth, not all that different after all. Some may find this confirmation of common sense; others may see it as something counterintuitive and unlikely, which needs a lot more evidence before they’ll believe it. What are extraordinary claims to you, and what are ordinary ones?
- There are also different degrees or levels of peer review. One study, unreplicated, may well later prove to be wrong. Consider, for instance, the famous study in The Lancet, later retracted, that claimed an association between the MMR vaccine and autism. In fact, many single published medical studies later prove to be wrong. A whole series of studies reporting the same thing is stronger evidence. Review articles may tell you more than single studies (which is why my mother, when I was considering a choice between endometrial ablation and a hysterectomy, before we learned that my chances of cancer were high enough that a hysterectomy would be required, told me to do my HighWire medical literature search with “reviews only” checked). And organizations like the National Academy of Sciences may provide another level of review.
- Some fields may have more robust peer review than others. But which ones? Razib Khan is wary of cultural anthropology, while Echidne of the Snakes is wary of evolutionary psychology. You probably have your own ideas about which of the two you agree with. Which scientific fields do you trust? Are there scientific fields whose peer review process you doubt?
So, especially for lay people with no real background in the science they’re trying to understand, and especially for issues where we already have strong beliefs, knowing when we need to let ourselves be corrected by the actual science can be tricky. On some issues, like nature vs. nurture, we may get different impressions of the relative influence of the two from different scientific fields even when peer review is working, simply because one field mostly studies the one influence, and another field the other.
What I do know, though, is this: Believe peer reviewed research over what hasn’t met the test of peer review. Believe what’s been well replicated over what’s shown only in a single study. And what’s agreed on by the scientists of many fields (such as the theory of evolution) is a more robust finding than something where one field of study may point one way, and another field in a different direction.