For many years, experts (and non-experts) in sports performance have claimed to take an “evidence-based” approach – that is, the idea that training decisions should be made rationally and scientifically. However, if the pandemic has taught us anything, it is that different people can try to bend the same evidence to suit a predetermined narrative.
You may find yourself in a situation where someone swears their way is the best way, or a news article hails a new superfood or piece of gear as a miracle solution, but is it really? Or maybe a trusted friend or training partner passes along a blog post that doesn’t quite add up. What does the cold, hard science actually say? This is where it may become necessary for you to read the science for yourself. This can be challenging if you do not have a scientific background to rely on. My goal for this article is to help you engage with scientific evidence directly and to assess whether or not it is relevant to you.
The definition of “expert” opinion
The first question to ask is whether the authors are qualified to opine on the field of study. Your first stop should be PubMed, an online database of (mostly) reputable scientific journals. Look up the authors and their previous work. For example, you might see my name as the lead author on a paper involving the W’-Balance Model. If you did a search on PubMed, you would also see my name on several related papers over the past decade. You would find my co-authors have publication histories going back much further. This should give you confidence in my work. If, on the other hand, you were interested in the genetics of certain parasites, and you looked hard enough, you might find a single conference abstract with my name on it from the late 1990s. This might tell you that I’m not a world expert in parasitology, and that you should treat my opinions on the subject with caution.
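If you are comfortable with a little scripting, this author lookup can also be automated against PubMed’s public E-utilities interface. The short Python sketch below simply builds the search URL for an author’s publication history; the endpoint is NCBI’s real public API, while the author name is just an illustrative example. The same search works equally well typed into the PubMed website’s search box.

```python
from urllib.parse import urlencode

# NCBI's public E-utilities search endpoint (no API key needed for light use).
EUTILS_SEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_author_search_url(author: str, max_results: int = 20) -> str:
    """Build an E-utilities URL listing a given author's PubMed records."""
    params = {
        "db": "pubmed",               # search the PubMed database
        "term": f"{author}[Author]",  # restrict the match to the author field
        "retmax": max_results,        # cap the number of IDs returned
        "retmode": "json",            # ask for JSON rather than XML
    }
    return f"{EUTILS_SEARCH}?{urlencode(params)}"

# Paste the resulting URL into a browser (or fetch it with urllib/requests)
# to get back a JSON list of PubMed IDs for that author's publications.
url = pubmed_author_search_url("Skiba PR")
print(url)
```

Each returned PubMed ID can then be looked up individually to see titles, journals, and co-authors, which is exactly the publication-history check described above.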
What is a reputable source of information?
The next thing you should consider is the source. Look at the academic and research leaders in your field of interest and see where their work gets published. This is one sign that a journal is reputable. You can also look at a journal’s impact factor, a sort of scoring system for how often its publications are cited elsewhere, but this is an imperfect metric at best.
Academic publications must go through what is known as a peer review process, meaning that before something is published, it must run a gauntlet of experts in that field, who scrutinize it to see whether the ideas, research methodologies, and conclusions are actually sound.
Also, consider the type of paper you are reading. Review papers cast a wide net and try to draw bigger conclusions from a selection of individual papers. They are often a good place to start. However, be sure to remember the point about the author—writing a review does not make one an expert. Recently, I read a review paper written by someone completely unqualified. Sometimes, bad science gets through the peer-review process. In general, what you want to look for is consensus: the widely held position among scientists who are established in the field.
What if a journal article is behind a paywall?
Once you find a paper, you may find it challenging to gain access. To put it bluntly, this is because scientific publication has become a racket, where publishers charge scientists to publish and then charge readers or libraries exorbitant rates for access. You have a couple of options to bypass this mess. Your first option is to contact the authors. They will often be flattered by your interest and happily share a copy by email. It is also possible to obtain articles through interlibrary loan at your local library, so don’t be afraid to ask.
There are also pre-print servers, but these can be hit-and-miss and importantly, the work has not yet been peer-reviewed. Finally, Google is your friend. Papers are often put up on university websites, as are PhD theses. The latter are often a gold mine. For example, you can find much of my early work in my PhD thesis on the University of Exeter website. It has the same data as the papers published in the journals, along with more in-depth discussion, without the subscription or access fees.
How to read (and understand) a scientific paper
So, the file has appeared in your inbox. Now what? The next thing you need to understand is the structure of a scientific paper. There is an introduction, where the authors will try to tell you a little bit of the background required to understand their work. There is a methods section, which tells you exactly how they asked the questions they did. There is a results section, which tells you exactly what they found based upon their methods. Finally, there is a discussion section, where the authors will try to tell you what they think their data means. Sometimes, there is a short conclusion section that tells you why the work is important.
My undergraduate mentor, Dr. George Bazinet, showed me a great way of dealing with papers. Using scissors, he removed the introduction and discussion sections of a paper. He then asked me to read the methods and results in isolation. What did they do? What did they find? His reasoning was that I should decide what I think of the data before I let the authors tell me what they think. Thirty years later, I can still hear his voice, “Don’t fall for the Jedi mind trick, kid.”
This kind of reading puts someone without a scientific background at a disadvantage. You may not be familiar with all the different types of research methods and equipment. That’s OK! Get yourself an undergraduate exercise physiology textbook, so that you have a reference as you read. Moreover, the basic function of most equipment can be found online. Finally, don’t overlook trusted scientific journalists that help translate bench science into what it might mean for someone like you. For example, my colleague Alex Hutchinson does great work reading papers and explaining them – you can see some of his analysis on Triathlete. There are also scientists like myself, Dr. Mark Burnley, and Dr. Andy Jones, who take the time to go on podcasts and other media to explain, in layman’s terms, what our work means for athletes. Once you get the basics, the more advanced stuff is easier to wrap your brain around.
Not all studies are relevant
An important consideration is the study population. For example, a paper that involves cardiac rehabilitation for heart transplant patients in their 80s might not be totally applicable to a well-trained amateur athlete in their 20s. Men and women might not behave identically, and that is an important consideration when you look at the gender breakdown of many study populations. Population specificity can go deeper than you think. For example, work from my colleague Dr. Louise Burke’s lab shows that certain interventions work for sub-elite athletes, but not elites, even though both populations are very fit (and indeed might be lumped together in many published studies).
When possible, look for studies that involve participants as similar to yourself as possible. This is not always easy; scientists often study the subjects most available to them. For example, at the University of Exeter, we regularly volunteer ourselves as subjects in each other’s studies. It might be hard to generalize our results to people older or younger—or less fit.
Not all studies are meaningful, either
Most of us are not elite athletes. Some of us may occasionally vie for an age-group place, or a coveted Kona or Boston slot. However, the vast majority of athletes are racing for their own enjoyment. Therefore, I typically ask athletes: to what lengths are you willing to go, at what expense, and for what reward?
For example, there is good evidence that mixed carbohydrate beverages may improve performance. You can experiment with a new sports drink at almost no risk, and with very little investment. There is also good evidence that shoes like those we helped develop for the Breaking2 project can take a few minutes off of your marathon time. However, they are quite expensive, and will make absolutely no difference to someone like me, who would need to take hours off of his time to qualify for Boston. Is it worth it to knock five minutes off of a 4-hour marathon? Only you can answer that.
Just be honest with yourself as you make the assessment. As I often tell my athletes, don’t try to doctor a bad cake with expensive icing. Bake a good cake and then we can talk about what you put on top of it.
Be a skeptic
Be suspicious of grand pronouncements, particularly from industry. Industrial support is not a disqualifier in and of itself. Many of the best scientists I know get money from industry. However, the mark of a good paper is that it gives an even-handed treatment of the data. Look for the parts of the discussion that argue both sides of the issue. For example, you might read something like:
Our data indicates that FastJuice aids fat burning. This is in agreement with previous data from Labs A and B, but is in conflict with data from Lab C. The difference could be due to differences in the test used, since Lab C used an older test that was shown to be less reliable by Lab D in 2016.
What I am getting at is something called confirmation bias: we all tend to believe things that support our existing beliefs. Good scientists think about this, and carefully guard against it by examining data that conflicts with their own. If someone is asking you to ignore contradictory findings, make sure they have a good reason. And if you catch yourself ignoring them, as we all do from time to time, check yourself.
Above all, remember this: The major points of exercise physiology have been well-understood for a half-century or more. We learn new things all the time but that stuff is usually the gravy, not the turkey. There is no secret training program, no new magical understanding of physiology known only to the best athletes. If someone discovers something truly revolutionary, a lot of papers will be written about it in a short period of time. If the data someone is trumpeting is published in the Journal of Nowhere, and no one has published follow-up work, be careful. If a company shows you a bunch of testimonials and in-house data that isn’t published anywhere, be extra careful. Remember that almost everyone can achieve a new PR by better leveraging existing, well-established science. I am fortunate enough to have trained some of the fastest endurance athletes on the planet. People come to me from all over the world, and more than half of them leave with the same (boring) advice: Eat better, sleep more, organize your training, and fully rehabilitate your injuries. Believe me when I tell you: the secret is there is no secret.
Dr. Philip Skiba is the Director of Sports Medicine for Advocate Medical Group in Chicago, and is an honorary Associate Professor at The University of Exeter. He serves on the medical board for USA Cycling. Dr. Skiba has trained dozens of elite, world champion and Olympic athletes, and was a consultant to the Nike Breaking2 Project. His new book, Scientific Training for Endurance Athletes is available in the USA on Amazon, and worldwide from http://www.physfarm.com.