Over-egging the evidence by Prof Andy Lane

As a practitioner, I want to be able to find interventions that are useful for the athletes I work with. I want to be able to see the effects and know how to achieve them. For example, when using a sport psychology intervention, I want to know how much of a difference it should make.

As a researcher, I want to investigate which interventions work and, importantly for the research community, find out why they work. Whereas the practitioner expects a more clear-cut answer (does it work or not?), as a researcher, I do not expect such simplicity and am seeking to unpack the details. I realise the practitioner wishes findings to be given in a straightforward way, but I have a responsibility to uphold the science behind them; if an answer is not straightforward, and the benefits come with a few ‘ifs and buts’, then I present it that way. To do anything else would be misleading, yet all too often I see findings presented too strongly and too simply, implying that an intervention will work.

As an athlete, I am looking for interventions that help. I want to know what to do and am less interested in why it works. I read the popular press and am exposed to adverts and social media, along with academic outputs. Judging from these sources, it should be easy to improve performance: I should simply be able to follow one of the interventions, strategies, diets, or training methods that work. What I find is that many of these come up short, often with no noticeable effect. When I go back to the study, or re-read the article in the popular press, I am left wondering why. In this blog article, I will propose a few reasons why this occurs and how we might address or work with it.

Researchers – surely your job is to show what works and what does not?

Logically, the above statement should be true. But the publication process favours articles that demonstrate positive change, emphasise impact, and claim an effect. Few papers publish negative findings, and there are many reasons for this.

First, the Research Excellence Framework (REF) in the UK favours originality – and a study showing that something which ought to work does not work is seen as less original than showing that something new does work. And so, researchers look for new areas.

Second, the impact agenda that researchers increasingly follow, partly due to the REF, encourages publishing findings that show something works. Third, many researchers are doctoral students or recent graduates in the early stages of a career. For them, finding that something does not work carries the risk of going up against a body of researchers, sometimes well-established ones; whilst this could build a career, it is risky. A much safer option is to support and extend the work of key researchers and build networks.

Fourth, academic arguments on social media are sometimes unsavoury and presented in a way that does not allow athletes to follow why something does or does not work. And so, the connection between the researcher, practitioner and athlete can be clouded by jargon.

So, what can be done? 

Writers should be mindful of the audience and their motivations. Imagine an athlete on the eve of the most important competition of their life, and ask yourself: will the claims you’ve made in your article (or the one you’re reading) persuade the athlete that the strategy is worth following? If the answer is “yes, the intervention is worth following”, then consider how strong the supporting evidence is. How big will the effect be? How likely is the intervention to work for your athlete, given that there is always individual variation (the results of most studies show this)? And are the conditions under which the study was conducted similar to the ones you face?

When these factors are considered, I find myself softening how strongly I present the results. I hold an image of the athlete in my mind, or consider the usefulness of the results from the perspective of my athletic self. The consequence of this process is a discussion section in an article on how difficult it is to produce a large effect. Practitioners I ask feel disappointed, as they want stronger findings. My response, of course, is that it is better to present an honest answer than a misleading one.

In conclusion, researchers play a key role in determining what works and why. I feel the impact agenda encourages the over-emphasising of findings, and that this will damage the credibility of researchers in the long term, as athletes and practitioners may quickly find that an intervention does not work as strongly as suggested.