There is good research and there is bad research, and there's a tremendous gulf between the two. It is possible to evaluate research based on its own merits and, with some training and some attention, to determine whether a study is a good study or a bad study and whether we can rely, therefore, on its conclusions.
In this episode, we cover:
0:00 Clinician training announcement
2:48 Chris’s 11-day digital detox
10:38 Categories and types of research
22:38 Research framework
Links we discuss
Clinician training announcement
Chris Kresser: Hey everybody, Chris Kresser here. I know some of you who listen to the podcast don’t necessarily read the blog, so I wanted to make a quick announcement in case you missed it. I’m excited to let you know that later this year or perhaps early next year I’ll be launching my clinician training program. If you’re ready to add the power of ancestral nutrition and functional medicine into your practice to attract new patients and retain the ones you have, to dramatically increase your impact, and to join me in transforming the lives of thousands of people, this training is for you! You can go to ChrisKresser.com/clinician for more details. And there is also a link to a pre-registration list there that you should sign up for if you want to be among the first to know when the training launches. Space will be limited, so it’s a good idea to put your name on that list if you want to be in the first group. Thanks again for your interest and I’m looking forward to collaborating with you on transforming people’s health and having a major impact on the future of medicine. Okay, now on to the show …
Steve Wright: Good morning, good afternoon, good evening. You are listening to the Revolution Health Radio Show. I’m your host, Steve Wright, co-author at SCDlifestyle.com, and I want to let you know that this episode of RHR is brought to you by Chris Kresser’s program, 14Four.me. This is a healthy lifestyle reset program. It’s getting to be the end of the first quarter of the year, and a lot of people’s goals from the beginning of the year can sometimes begin to fade at this point. If your goals are centered around things like changing your diet, sleep patterns, movement, stress, all of the core essentials to really handling low energy, weight gain, stomach issues, skin issues, all these things, 14Four.me is really going to help you. Chris walks you through step by step, really holding your hand as you integrate these four core parts of being healthy. It’s a really amazing program. I know he has lots and lots of people already joining and doing it, so I hope you check it out.
All right, let’s get on to the RHR Show. With me is integrative medical practitioner, healthy skeptic, and New York Times bestselling author, Chris Kresser. Chris, good afternoon.
Chris Kresser: Hey, Steve, how are you?
Steve Wright: I’m doing very well, thanks.
Chris Kresser: Good.
Steve Wright: How is California? Because in Boulder we are experiencing 70 and sunny and that’s delightful.
Chris’s 11-day digital detox
Chris Kresser: It’s actually raining today. It’s been nice apparently. I was just off the grid for 10 or 11 days, my annual off-the-grid experience, and as you know, it was extremely valuable and rejuvenating and brought me a lot of perspective and insight. I think it’s such a fantastic experience. I hope that everybody can benefit from it, and I feel an article coming on, or perhaps a podcast or something, because it’s just so life changing, and it’s so easy not to do it now in this super-hyperconnected world. I’m really grateful I was able to do it, and I’m happy to be back, although the downside of the 10-day unplug is the 485 emails that were in my inbox when I got back and a very large list of things to do.
Steve Wright: I’m really curious about those first couple of days, though I imagine the urge probably gets less and less by day three. Did you have those urges to check your phone, or were you curious about the computer and email and things like that? Was there an actual breakpoint where you finally were like, Oh, right, I don’t need this?
Chris Kresser: It’s happened differently in different times in the past, but at this point, to be perfectly honest, it takes me about 15 minutes to adjust! I mean, I think you know this, Steve. I already kind of insulate myself a fair amount from that. I don’t have notifications. We did those podcasts on productivity a while back where I shared that I batch email. I don’t have notifications turned on on my phone so that when somebody texts or emails or posts something to Twitter or whatever, I don’t get a notification. I’m pretty protective of my time and my mental space even when I’m on the grid, but maybe it’s just because I know how much I benefit from it and I know how good it feels now so that when I do it, really within an hour I’m just…. Actually the more challenging part for me is coming back. At the end of the 10 days, I start to feel this pit in my stomach where I’m dreading opening up my email and the flood of things that that’s going to bring. And I love my work. I mean, I couldn’t love my work any more. I’m so fortunate in that way, that what I’m coming back to is something that I really, really want to be doing, but even so, it’s still a kind of jarring transition. For me, it just feels like a chance to really reconnect with myself, my own rhythms, my family who was with me in this case, and to just kind of get clear and get a sense of what life is like without constant interruptions and demands, and it’s really interesting what becomes clear in that space… for me, at least. It was really a great experience.
Steve Wright: That’s awesome. This is your third or fourth year in a row? Is that correct?
Chris Kresser: Yeah, every year I do a 10-day-at-least thing, and then I do, as I think I’ve mentioned before, one or two days a week of completely unplugging. I call them ‘free days.’ That’s actually based on Dan Sullivan’s term from the Strategic Coach program, which is a 24-hour period with no working, thinking about work, talking about work, nothing related to work at all, and I aspire to having two free days a week at least and then chunks of free days throughout the year. Right now, I’d say, I’m successful about 50% of the time with that. I definitely have at least one free day a week, sometimes two, so I’m slowly but surely trying to increase that ratio.
Steve Wright: And on a free day, do you use your technology?
Chris Kresser: Only if I want to, like, look up a map or something like that, not to browse the Internet because that just too easily can lead to thinking about work even if I’m not meaning to. You know, you just see something on the sidebar or whatever and just there you go.
Steve Wright: Rabbit hole!
Chris Kresser: Yeah, exactly. I mean, I want to write and talk about it more in the future because I think it’s something that’s becoming increasingly important in this world that we live in, and the general trend is not supportive of that at all.
Steve Wright: Nope. Well, you inspire me.
Chris Kresser: Cool.
Steve Wright: Keep it up, keep sharing, and let’s get on to today’s question.
Chris Kresser: All right, let’s give it a listen. It’s a good one. This is a warning in advance: The geek factor on this episode is going to be pretty high! So apologies for those of you that aren’t into that, but I think it’s an important topic, so here we go.
Question from Hayden: Hi, Chris. My name is Hayden Smith, and I’ve got a question that’s been bothering me for a while. I want to know how you do your research, and how do you know what studies are valid or which are misleading? And do you prefer specific journals, or do you just take apart each and every study? I’m a senior in college studying micro- and molecular biology, so I’ve read thousands of scientific papers. And you know, I believe in this traditional foods diet. I live on a raw milk dairy, and my professors just scoff at this type of diet, and so I’ve learned not to even bring it up anymore because they’ll shoot me down with all sorts of research. Whether it’s about cholesterol, red meat, or the gut microbiome, it doesn’t really matter. They feel like they have better research, and I feel like I’ve got good research, too! So anyway, thank you so much. I know this is a tough question, but I really appreciate all of your help.
Chris Kresser: OK, Hayden, this is a really excellent question. It’s actually a topic I’ve been writing a little bit about recently. Those of you who follow the blog might have seen a couple articles I’ve written. One was about scientists and the public being at odds and what some of the issues are there, and those relate to research, of course. And then another was about fraud and conflicts of interest in medical research. And so I covered some of those topics, but I want to go into, first, just a kind of overview of the categories of research, like what are the different types of research, what can we tell from each of those types of research, and then I’ll maybe give you kind of a framework for how to research a topic, what to watch out for, and what some of the larger kind of meta-problems are in terms of scientific research in general. This will probably be geared a little bit more towards practitioners and students and people who are going to be doing this kind of research, but hopefully it will be interesting enough for laypeople who are listening, which is most of my audience, because it might help you to understand that what you hear and see in the media, you can’t always take it at face value. And of course, you already know that probably if you’re listening to the show and you’ve followed my blog, but this might give you a little more insight into that idea.
Categories and types of research
OK, so there are roughly two categories of research papers, or studies. There are review papers, and then there’s original research, or original studies. Reviews draw on original studies that have been published on a topic and assimilate all those individual studies; in some cases the authors might do a meta-analysis on them to draw a conclusion, and in other cases they just review the studies and offer perspective or insight. Original studies, on the other hand, test a hypothesis, typically through either an experiment or population-wide data collection.
Within original studies you have two different categories: observational studies, which are also known as epidemiological studies, and experiments. An observational study is one that draws inferences about the effect of an exposure or intervention on subjects where the researcher or investigator has no control over the subjects, so it’s not an experiment where they’re directing it and making things happen. They’re just looking at populations of people and drawing inferences about the effects of a dietary factor or lifestyle factor or something like that. An example of an observational study would be comparing rates of lung cancer in smokers and nonsmokers. Researchers might look retrospectively at groups of people who smoke and groups of people who don’t smoke, see what the rates of lung cancer are in each of those groups, and draw some conclusions.
An experiment, or an experimental study, is one in which an intervention is intentionally introduced and an outcome is observed, and the investigator controls which group subjects are assigned to. An example of an experiment would be a randomized controlled trial, or RCT, which is often considered the gold standard of medical research. An RCT could answer the question, for example, of whether calcium supplements improve bone health. Study subjects would be randomized into two groups, meaning they’re randomly assigned, and one group would get calcium supplements while the other group would get a placebo. In the best of these trials, the investigators and the patients are “blinded,” which means neither knows who is getting the calcium supplements and who is getting the placebo. The idea behind that is to control for the placebo effect, which is substantial and in some cases can account for 30% or 40% or more of the effect that is observed in a trial.
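To make the design concrete, here’s a minimal sketch in Python of how randomization and blinding fit together. Everything in it is hypothetical: the subject IDs, group sizes, and kit codes are invented purely for illustration.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical subject IDs for an imaginary calcium-supplement trial.
subjects = [f"S{i:03d}" for i in range(1, 21)]

# Randomization: shuffle the subjects, then split them evenly
# into the treatment arm and the placebo arm.
random.shuffle(subjects)
half = len(subjects) // 2
allocation = {s: "calcium" for s in subjects[:half]}
allocation.update({s: "placebo" for s in subjects[half:]})

# Blinding: clinicians and subjects see only an opaque kit code;
# the allocation table is held by a third party until the trial ends.
kit_codes = {s: f"KIT-{i:04d}" for i, s in enumerate(sorted(subjects), start=1)}

print(sum(1 for arm in allocation.values() if arm == "calcium"))  # 10
print(sum(1 for arm in allocation.values() if arm == "placebo"))  # 10
```

In a real trial, the allocation table would be generated and held independently so that neither the clinicians nor the subjects can infer who is in which arm until the blind is broken.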
One of the key things to understand about observational studies, and I know people who have been following me for a while will recognize this, is that you can’t establish causation from observational studies. You can establish a correlation or an association between two variables, but you can’t establish causation conclusively. I’m going to add some caveats to that a little bit later. To give you a sense of how observational data can be misinterpreted, here are some silly examples of the kind often used in statistics and research methodology classes. Consider the statement, “The more firefighters that are sent to a fire, the more damage gets done.” Obviously that’s not actually how it happens. It’s not the firefighters that are causing the damage; it’s that when fires are worse, more firefighters are required to fight them, so the causation there is reversed. Another one would be, “Children who get tutored get worse grades than children who don’t get tutored.” Again, the causality is reversed: children who are not getting good grades are more likely to be given tutors by their parents. And then there’s a strong correlation between ice cream sales and shark attacks in Florida. Does eating ice cream increase the risk of shark attack? Not likely. It’s more likely that hot weather is the underlying cause of both: ice cream sales go up, and shark attacks go up because more people are going in the water. Those are silly examples, but they make the point.
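The ice cream and shark attack example can be simulated in a few lines of Python. All the numbers here are made up; the point is just to show that a lurking third variable (hot weather) can produce a strong correlation between two things with no causal connection.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A year of made-up daily data: hot weather drives BOTH variables.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream_sales = [3.0 * t + random.gauss(0, 8) for t in temps]
shark_attacks = [0.2 * t + random.gauss(0, 1) for t in temps]

# The two variables look strongly related...
r = pearson(ice_cream_sales, shark_attacks)
print(f"ice cream vs. shark attacks: r = {r:.2f}")

# ...but hold the confounder roughly constant (mild days only)
# and the apparent relationship largely disappears.
mild = [(s, a) for t, s, a in zip(temps, ice_cream_sales, shark_attacks)
        if 20 <= t <= 22]
r_mild = pearson([s for s, _ in mild], [a for _, a in mild])
print(f"mild days only: r = {r_mild:.2f}")
```

Restricting the comparison to a narrow temperature band is a crude version of what researchers mean by “controlling for” a confounding variable.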
A more relevant example would be hormone replacement therapy, HRT. I’m sure a lot of people remember that a while back there were a number of observational studies that found that women who were taking hormone replacement therapy had lower rates of heart disease. There was a lot of media around this, and a lot of postmenopausal women started taking hormone replacement therapy in order to protect themselves from heart disease, but oops, later on they did randomized controlled trials to test this hypothesis and found that not only did HRT not lower heart disease rates, it actually increased them. And later analysis found that women who were taking HRT and had the lower rates of heart disease were from higher socioeconomic groups with better-than-average diet and exercise regimens.
Those are a few examples of why you have to be really careful about inferring causality from observational studies. Unfortunately, this mistake happens way, way more often than you would think, given that it’s Research Methodology 101. One of the first things you learn if you study research methodology is that correlation does not equal causation. And yet, particularly in the popular media, but even in the scientific community, this mistake is made over and over again, and you don’t have to look very far for examples. The major one that everyone’s familiar with is the correlation between dietary cholesterol or saturated fat intake and heart disease. Early on, there was an association observed between those factors. People who ate more saturated fat had, in some studies, higher rates of heart disease, but when better-quality studies and randomized controlled trials were done, there wasn’t a clear relationship. In fact, the most recent dietary guidelines in the US have removed any prohibition against dietary cholesterol. Europe, Japan, and other countries had done that a long time ago because they recognized that the research didn’t support it, but the US finally got around to it as well.
Steve Wright: Let’s just kind of recap this really quickly. Basically we can split scientific papers into two groups: the review papers and then study papers.
Chris Kresser: Original studies, mm-hmm.
Steve Wright: The studies get broken down into two more groups: observational studies and actual experiments that are done.
Chris Kresser: Mm-hmm.
Steve Wright: And so we’ve just covered observational studies and the fact that a lot of factors can get crossed in them. A lot of conclusions get pulled out of these studies on a regular basis that are actually correlations and not causations, so we need to be careful about how we use the evidence shown in an observational paper.
Chris Kresser: Exactly. That’s a great summary. There is kind of a caveat, as I mentioned before, which is that there are some criteria, called the Bradford Hill criteria, that can be used to strengthen the case that a correlation between two variables is causal. I’m not going to go over all of them, but some of them include temporal relationship: If two factors are related in time, like first A happens and then B happens, that’s more likely to indicate causality. The strength of the association: The stronger the association, the more likely it’s causal. Dose response: If a higher dose or stronger exposure increases the risk, that also typically points towards causality. Consistency: When results are replicated in different settings using different methods, the relationship is more likely to be causal. Plausibility: If there’s a plausible mechanism that could explain why one factor is causing the other, that’s a point in favor of causality. And then coherence: Is the finding coherent or compatible with other existing theory and knowledge?
Those criteria can be used to strengthen the likelihood of a causal relationship in situations where doing an experiment isn’t possible, perhaps due to ethical concerns. Using the smoking example again, a scientist couldn’t design a study where they gave cigarettes to nonsmokers for 20 years and then compared them with a control group that wasn’t smoking. That’s unethical, because we know enough now to know that smoking increases the risk of lung cancer, so that experiment will never be done. Another example would be any kind of experiment on pregnant women. No research board is going to approve an experiment that involves doing something potentially harmful to one group of pregnant women and having a control group that doesn’t get that intervention. It’s too risky; no one’s going to sign up for that, and no one’s going to do it. So in certain situations, you have to rely on observational research, and that’s where the Bradford Hill criteria can be applied. Even though you can’t know 100% that there’s a causal relationship, if you apply all those criteria and the relationship passes all those tests, a causal link becomes much more likely.
Steve Wright: Quick question about that.
Chris Kresser: Mm-hmm?
Steve Wright: When we’re looking at an observational study, are we looking for it to satisfy just one of those criteria, or is it that the more of them it satisfies, the stronger it is?
Chris Kresser: Well, yeah, there’s even more criteria that I didn’t go through.
Steve Wright: OK.
Chris Kresser: Just to spare some people! And keep the geek factor from getting completely off the charts. But yeah, certainly the more criteria that the relationship satisfies, the stronger the chances are that there’s a causal relationship. And so you can think of it as, like, approaching 100% certainty but never getting to 100% certainty. And by the way, that’s true for randomized controlled trials, too. I mean, there’s this idea that those are unassailable, but a randomized controlled trial can be designed well and with solid methodology and reliable results that are reproducible, but they can also be done poorly, so it’s not like those are perfect either. They tell you different things. That’s the main thing to understand.
In terms of how to research a topic, here’s a basic framework. Sometimes it can be helpful to begin with review papers because the authors have already done the hard work of rounding up relevant studies. If you look at the reference section of a review, you can find a lot of those original studies, and then you can go look them up and read them yourself. One thing I want to urge caution about is relying on the conclusions of a review paper, because those depend on what the original research they reviewed was, on the authors’ perspective, and sometimes on their agenda. But at the very least, review papers can be a great source for digging deeper into the original research.
Here are a few things to watch out for with review papers. I just hinted at a couple of them, but number one is: do the citations in the review paper support the claims? You might be surprised when you read something in a review paper, see a little citation there, and then go and look up that citation. I can’t tell you how many times I’ve done that and found that the citation doesn’t at all support the claim that is made in the review paper. The results could have been taken out of context or generalized to populations where they don’t apply; for example, the authors are talking about children and they cite a study that was done in the elderly or in adults. Or occasionally researchers will just completely misinterpret a study, whether intentionally or unintentionally. So it’s definitely a good idea to look up citations.
Another problem that happens, which we’ve already talked about, is if authors are inferring causation from correlation. They might cite an observational study as proof of a causal relationship when, as we just reviewed, in many cases you can’t really draw that conclusion from an observational study.
You have to consider how the conclusions compare with other reviews. There very often can be other review papers that come to a completely different conclusion. Again, saturated fat and cholesterol would be a great example. You can certainly go into PubMed and find particularly older review papers that reviewed a bunch of original studies suggesting that cholesterol and saturated fat are major causative factors for heart disease, but now if you look at the more recent review papers, most of them are finding that there isn’t a relationship between saturated fat and cholesterol and heart disease.
Of course, we always want to consider the funding source. It’s a good idea to see who funded the study or review and what the authors’ affiliations are. If a paper comes off with a strong bias, that’s another thing to pay attention to. And this isn’t always apparent, but if you know a little bit about the researchers who are authors on the paper, sometimes that can give you some clues into what their agenda might be and what perspective they’re coming from, and it might help you to kind of interpret things with a little bit more caution.
Along the same lines, with original research, some things to look out for would be, is the study short term or long term? A lot of interventions may only be effective in the short term but have different effects over the long term. High doses of fish oil can be a good example of that. Or on the other hand, a study might observe some kind of intervention for only eight weeks and that isn’t really long enough for that intervention to take effect.
Another thing to consider is whether it’s done in vitro, which is in a cell culture, or in vivo, which is in live humans or animals. Is it done in humans, or is it done in animals? Animal research can be really helpful, and there are things that are possible with animal research that aren’t possible with human research, but the ability to extrapolate conclusions from animal research is somewhat limited. Was the methodology strong? If the subjects weren’t randomized and it wasn’t double blind or placebo controlled, the results might not be as dependable. In observational studies, you want to be looking to make sure that different confounding factors, things that could influence the outcome, are controlled for.
You want to pay attention to whether the researchers are using surrogate markers or endpoints. We can use cholesterol and saturated fat as a good example again. Some of the early studies showed saturated fat intake increases cholesterol, so they came to the conclusion that saturated fat intake must cause heart disease because everybody knows high cholesterol causes heart disease. So they were using a surrogate marker — cholesterol, in this case — to reach a conclusion, but later when they studied the direct relationship between saturated fat intake and heart disease, they found that there was no correlation. So it’s much better to use real endpoints, like whether someone has a heart attack, than using a surrogate marker, like cholesterol, because they don’t necessarily relate.
And then this is a big one: absolute risk versus relative risk. I’ve talked about this before. It gets a little bit complicated, and I’m actually planning an article about this soon where I’ll go into more detail. Scientists, but more specifically drug companies, like to use relative risk to make results sound impressive. For example, if a treatment reduces the risk of having a disease from 2% to 1%, the absolute reduction in risk there is pretty low: your risk of getting that disease just decreased by one percentage point. That’s the absolute risk reduction. But if you use relative risk numbers, you could truthfully say that the treatment reduces the risk of getting the disease by 50%, right? Because it went from 2% to 1%. That sounds a lot more impressive, but it gives a really skewed impression of how valuable the treatment actually is. Like, if I say to you, Steve, taking this drug is going to reduce your risk of having a heart attack by 50%, that sounds dramatic, but if I instead say your risk of heart attack went from 1 in 100 to 1 in 200, that’s less impressive, right?
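The arithmetic is easy to sketch in Python using the hypothetical 2%-to-1% example above. The function name is arbitrary, and the “number needed to treat” line is an addition: NNT is a standard way of expressing absolute benefit as how many people must be treated for one person to benefit.

```python
def risk_summary(control_risk, treated_risk):
    """Contrast the two ways the same trial result can be reported."""
    arr = control_risk - treated_risk   # absolute risk reduction
    rrr = arr / control_risk            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    return arr, rrr, nnt

# The hypothetical example from the episode: risk drops from 2% to 1%.
arr, rrr, nnt = risk_summary(0.02, 0.01)
print(f"absolute risk reduction: {arr:.1%}")  # 1.0%
print(f"relative risk reduction: {rrr:.0%}")  # 50%
print(f"number needed to treat:  {nnt:.0f}")  # 100
```

Same trial, same data: “cuts your risk in half” and “one person in a hundred benefits” are both true, which is exactly why the framing matters.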
Steve Wright: Much less impressive.
Chris Kresser: Yeah. This is something that happens in research all the time and in the way that research is reported, and it can get pretty dodgy, in fact, because drug companies will often use relative risk to talk about the benefits of a drug, but then they’ll use the absolute risk numbers for side effects, which is very convenient, you know?! In other words, they’re emphasizing the benefits and de-emphasizing the potential harm.
Those are the basic things you need to be aware of when you’re thinking about research and looking into a topic. There are some other, larger concerns that we’ve already touched on: the funding of a study and conflicts of interest, which should be listed in a paper but aren’t always. One of the things I wrote about in my article on conflicts of interest is that they are often not reported. It’s kind of an honor system, where researchers in universities are left to report them of their own volition, and in many cases that doesn’t actually happen. With that in mind, you can look at the paper itself, and if the authors are accurately reporting their conflicts of interest, they’ll be listed there.
Another problem is that the studies are only really as good as their underlying data. Researchers will collect data and then they write a paper about that data, and a lot of times people will only read the authors’ interpretation of the data without actually looking at the data themselves. That’s a mistake because if the data is bad, then it doesn’t matter what the researchers say about it. You can’t rely on those conclusions.
This is, of course, a huge problem in nutritional research. A lot of nutrition studies rely on self-reported food intake, and we’ve known for a long time that this is woefully inaccurate for a number of reasons. Number one, people just don’t remember what they ate. I pay a lot of attention to food, and I couldn’t tell you right now what I ate for lunch two days ago. But sometimes these food intake questionnaires are asking people what they ate not just days ago, but weeks ago. So it’s a big problem, and there was a recent paper published in the International Journal of Obesity that revealed it’s probably even worse than we thought. The authors concluded that “[The data] are so poor as measures of actual [energy intake]” — you know, actual calorie intake — “and [physical activity energy expenditure] that they no longer have a justifiable place in scientific research.” That was the conclusion of the paper. In other words, a lot of the nutrition studies that we have are based on data that’s just completely false! That’s a huge problem. If you’re looking at a paper and the data is false, then the whole paper basically has to be thrown out.
On that note, it’s often a good idea to go right to the tables and figures in a paper, look at the data, and see how it compares with what the authors have written about it. You might be surprised to find that there’s often a big discrepancy. I can’t tell you how many times I’ve looked at a figure or a table in a paper and seen that it says one thing, and then I go to the conclusion of the paper and see that it says the opposite, or that the authors have just completely ignored their own data for whatever reason.
Another issue that I talked about in one of the articles that I wrote is that the quality of scientific research depends entirely on what questions we’re asking, and if we’re not asking the right questions, we won’t get the right answers. Or if we’re not asking a question, we can’t possibly get the answer to that question that we’re not asking! That’s a phenomenon called WNL, or we’re not looking. A good example of this that I use in the article would be antibiotics. For many years, it was believed that antibiotics were safe. If you looked in the scientific literature, you would see that, for the most part, antibiotics were safe. But the question of how they affected the gut microbiome was not being asked because we didn’t know enough to ask it. Now, years later, we know that antibiotics actually can have a profoundly adverse effect on the gut microbiome, which in turn can put us at risk for all kinds of other problems. We weren’t asking the right question, we didn’t get the right answer, and that completely affected the perception of the safety of those drugs, and there are many other examples of how that could be taking place even now.
Steve Wright: Is there another way of looking at that when, for instance, the headline and the conclusion talk about an endpoint but the actual paper is about a surrogate endpoint? For instance, the one that’s coming to my mind is you’ll see a lot of research about muscle building and protein synthesis, but there’s nothing that’s ever proven that if you up your protein synthesis that you’ll actually —
Chris Kresser: That you’ll build more muscle.
Steve Wright: — build more muscle.
Chris Kresser: Yeah.
Steve Wright: All these headlines say do these specific things to build more muscle, but really they’re just measuring some surrogate endpoint that we hope is correlated to the actual thing we want.
Chris Kresser: That actually takes us to our next issue, which is groupthink. John Ioannidis has been a real critic of the quality of scientific research. He’s a scientist himself, and he has this quote that I love: “For many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.” In other words, what often happens, as you said, Steve, is that there’s a certain idea out there that people just accept (everyone nods their head and says, yeah, of course), and then we build arguments on the acceptance of that idea. We build these newer and often very elaborate theories on this foundational idea that’s prevailing and accepted in the mainstream community, but what if that foundational idea is actually false? Then the whole edifice that we’ve built on it comes crumbling down. You see this all the time with papers about the diet-heart hypothesis or cholesterol. In the introduction, authors will often cite previous studies that support the dominant paradigm, but if the studies they’re citing are poorly done or invalid, then the whole argument the authors are building on that foundation doesn’t fly. This is a big problem.
Another aspect of groupthink is that it’s very difficult for people who challenge the status quo. It’s not typically welcomed. I think I’ve told the story before of the researchers who discovered that ulcers were caused by a bacterium called H. pylori. Originally it was thought that ulcers were caused by stress, and when these two doctors from Australia, Barry Marshall and Robin Warren, introduced the idea that ulcers were actually caused by a bacterium, they were essentially laughed off the stage and for years were ridiculed and not taken seriously at all, until Marshall swallowed a vial of H. pylori, infected himself, and then treated the infection successfully with antibiotics. And even then, it took years for the paradigm to shift! So that’s a huge, huge problem in scientific research.
Lastly, and this is related to groupthink, confirmation bias is just the elephant in the room, I think. This is the phenomenon where we tend to seek out research and any information that confirms our bias and ignore any research or information that challenges it. Everybody is susceptible to this, including me. It’s just human nature. There’s some aspect of human nature, maybe our kind of tribal roots, where we seek out whatever makes us feel like our bias and our way of looking at things is the right way, and it’s something that scientists and anyone who’s interested in truth has to be constantly on guard about. I’m not, by any stretch of the imagination, perfect in this regard, but it’s something I think about a lot, and I try to keep my mind open. I read perspectives that are opposite to mine and just try to guard against that as much as I can, but I’m not always successful, for sure.
I don’t know if this has been helpful or has just given everyone the idea that scientific research is just a mess and completely unreliable! That’s not the intention, and I don’t think that’s the case. What I hope this has communicated is that there is good research and there is bad research, and there’s a tremendous gulf between the two. And it is possible to evaluate research based on its own merits and, with some training and some attention, to determine whether a study is a good study or a bad study and whether we can rely, therefore, on its conclusions. But unfortunately, that doesn’t seem to happen very much in the science media, if there is even such a thing anymore! Because it seems to me that what the science media is now is just a group of reporters who aren’t actually even doing this kind of investigation. They’re just parroting, you know, taking headlines off the wire and slapping them up there, and that’s dangerous because it can really give people the wrong idea and it can be a threat to public health.
Steve Wright: Chris, this has been very informative for scientific research, but I would love it if you would sort of comment on the other two forms of evidence that, I think, are typically used and need to be thought about as well, which is clinical evidence of whatever the study might be talking about and anecdotal or testimonial evidence of it working. What’s the interplay in your head between the three?
Chris Kresser: Yeah. Well, this could maybe be another podcast for another time, but I use kind of a triad. I think of it, anyway, as a triad of: modern research, whether it’s observational or experimental; the traditional, ancestral, evolutionary template, looking at our ancestors and evolutionary biology to learn what’s appropriate; and then my clinical experience and personal experience. The things I feel most certain about are the things that get over all three of those hurdles. When I look at something through all three of those lenses and it checks out, that is, it’s supported by modern observational or clinical research, it’s consistent with the ancestral, evolutionary template, and it matches my experience working with patients or my own personal journey back to health, then I’m much more certain of it than if it only checks out through one lens. And I think having that kind of triad can be really helpful, because there are times when there’s an apparent conflict in the modern research, for example, and you can then look at the ancestral, evolutionary template to help resolve that apparent conflict. That’s how I conceive of it.
Steve Wright: Awesome. Thanks for sharing.
Chris Kresser: All right, so we made it. Hopefully you’re not either sleeping or have crashed the car out of boredom or whatever! We’ll be back next week or the week after with another question. We’ve gotten a ton of questions recently, so thanks, everyone, for sending them in. It’s really great to see what people are thinking about. The questions don’t just inform this show or provide the basis for the show. They also help give me ideas for blog articles and other things, things that I can do that kind of get this information out to you, so thanks again and keep them coming.
Steve Wright: Yeah, thanks, everyone, for listening. If you would like to submit your question, go to ChrisKresser.com/PodcastQuestion. And in between shows, if you’re wondering kind of what Chris is researching, the stuff that he is finding interesting but not quite interesting enough to make it onto the blog yet or something like that, some new papers, things like that, make sure to follow him on social media, Facebook.com/ChrisKresserLAc and Twitter.com/ChrisKresser. Thank you.
Chris Kresser: Thanks, everyone. Talk to you next time.