Imagine if you were free to choose
Imagine if you were free. No rights, no wrongs. Just able to choose what feels right for you in this moment. It seems impossible to freely decide or act when we are constantly bombarded with advice and recommendations from friends, family, social media, the news, the experts. What would you do differently if there were no right or wrong answers? (If this question feels too hard to answer, get in touch with me for a free 1:1 discovery call).
Actually, although most of them probably wish they could, even scientists don’t deal in facts, rights, wrongs, black or white. They do their absolute best to get as close to the facts as possible, but they know enough not to make promises. I should add that I’m mainly thinking about research in psychology (although my physicist husband says it’s the same in his field). And, let’s face it, psychology crops up in most of the important moments of our lives: how to make decisions, how to build relationships, how to structure your life (or not), how to communicate, how to be motivated, how to be happy, how to be successful, how to maximise your thinking capacity, how to learn, how to parent, how to bounce back, how to manage social situations, and so on.
There is a classic example told when teaching research methods in psychology: ice cream sales are correlated with murder rates. As ice cream sales go up, so do the murders committed. If all you did was measure these two factors and run a statistical analysis, you could conclude that ice cream sales cause murder and go searching for explanations (Is it the sugar high? Jealousy because the other guy got an extra chocolate flake?). In fact, the far more likely answer is that ice cream sales do not cause murder, but that rates of both are driven by a third factor, what researchers call a confounding variable: hot weather. When designing their experiments, psychologists attempt to identify all the different factors that could influence the one they are interested in, controlling or allowing for them in some way, so that they don’t get caught out by examples like this.
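(For fellow geeks who like to see this with numbers: here’s a toy simulation, written in Python with completely invented figures, of how a third factor can manufacture a convincing-looking link between two things that never touch each other. It’s an illustration, not a real analysis.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A year of invented daily data. Temperature drives BOTH ice cream
# sales and murders; the two never influence each other directly.
days = 365
temperature = rng.uniform(0, 35, days)                       # degrees C
ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 40, days)
murders = 2 + 0.2 * temperature + rng.normal(0, 1.5, days)

# The raw correlation looks alarming...
print(np.corrcoef(ice_cream_sales, murders)[0, 1])   # strongly positive, ~0.75

# ...but compare only the parts of each variable that temperature
# does NOT explain, and the "link" disappears.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(np.corrcoef(residuals(ice_cream_sales, temperature),
                  residuals(murders, temperature))[0, 1])    # roughly 0
```

Run it a few times with different seeds and the pattern holds: the raw correlation is strong, while the temperature-adjusted one hovers around zero.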
They wouldn’t just take sales information from one ice cream truck, but check several of them; otherwise it might not be ice cream in general but the particular blend sold by Micky’s Frozen Delight, or something about the way that Micky himself interacts with customers that gets their blood boiling. If researchers were surveying people about ‘aggressive intent’ after eating a Chocolate Fudge Sundae, they wouldn’t interview just 10 people, because it could easily be chance that 6 of those 10 had indeed felt strong murderous intent. They would survey hundreds, because then any pattern they find is far less likely to be pure chance (there’s a little simulation of this below).

They would use trained independent researchers, who have no idea that the working theory is that ice cream causes murder, to avoid the interviewer accidentally phrasing the question in a way that influences participants’ memories (“Yeah, come to think of it, I did feel rageful when my ice cream finished too soon”). They would put people randomly into one of three groups: those given ice cream, those given nothing, and those given bananas (for example). If people have the same level of desire to commit murder regardless of group, it’s unlikely to be the ice cream. They would compare age, gender, employment status, hunger level, starting level of aggression and so on, to make sure that differences in murderous intent between the groups are not due to one of those other factors.

They would also spend time trying to understand whether, when people talk about murderous intent, they are talking about the same thing. In many areas of psychology you don’t have the convenience of things you can clearly measure, like ice cream sold and murders committed. We have trusted research tools for measuring wellbeing these days, but how do you really know that one person’s feeling of wellbeing is the same as another’s? You can’t get out a ruler and measure motivation. When psychologists conduct research, they do their absolute best, based on thorough training in research methods, to account for as many factors as possible and to design experiments with the least possible room for error or misinterpretation, distinguishing random chance from genuine cause and effect.
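(Again for the number-lovers: here’s the promised tiny, made-up simulation of why 10 participants aren’t enough. The 30% ‘base rate’ is invented purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented assumption: 30% of people would report "murderous intent"
# anyway, ice cream or no ice cream.
base_rate = 0.30
simulated_studies = 100_000

# With only 10 participants, how often do 6 or more say yes
# purely by chance?
small = rng.binomial(n=10, p=base_rate, size=simulated_studies)
print((small >= 6).mean())     # about 0.05: one small study in twenty

# With 500 participants, a "60% said yes" result essentially
# never happens by chance alone.
large = rng.binomial(n=500, p=base_rate, size=simulated_studies)
print((large >= 300).mean())   # 0.0
```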
But no matter how talented they are, how carefully and conscientiously they design their research, how do researchers know for sure they haven’t missed the one factor that is actually behind their findings? How do they really know that those two factors are behaving in the same way because of a meaningful connection and not just coincidence? This is why, when researchers talk about their findings, they (hopefully) will not say ‘this proves’ or ‘X increases Y’. It is very difficult to prove something. The ‘black swan’ example is often wheeled out to illustrate this point in research methods training: if your theory is that all swans are white, no matter how many white swans you see, you will never know for sure that every single swan is white. But you can disprove that theory the minute you spot a black swan. So now you can say that not all swans are white, but you still can’t say definitively what all swans are.

It’s really unwise for psychological scientists to talk in certainties and hard facts about their research. When they do discuss their results, they might say something neutral and specific about that particular study, like ‘a statistical association was seen in this study: when ice cream sales went up, so did murders’ (though perhaps in fancier, more scientific language). They won’t add ‘therefore eating ice cream triggers murderous intent’. If they have good, evidence-informed reason to believe that this is the case, they might diplomatically say ‘the research suggests…’ or ‘one possible interpretation is…’. In their write-up they will include a whole section on the limitations of the study. They’ll describe all the problems with their methodology: how the results only apply to the American 20-year-old psychology students who were their participants, and how different results might be found with different kinds of participants. They’ll stress the need to repeat the study to check the results weren’t a fluke, recommend future research that tweaks elements of the study to see how that affects the results, and suggest alternative explanations for the findings.
Even so, a skewed image of research findings can reach public awareness, making them seem more promising than they are. This is because of publication bias. Researchers are far more likely to submit their work for publication, and far more likely to have it accepted, if they find interesting results that show something happening. So if 10 studies are conducted, and only one of them finds that ice cream causes murder, that one study is the most likely to be published. Finding that ice cream does not cause murder is not really newsworthy. If all 10 studies were published, on the other hand, we would get a more realistic view and be far less likely to see alarmist articles in the media or parents campaigning against ice cream trucks. This type of bias has caught public and media attention in recent years with revelations that clinical trials finding that antidepressant drugs relieve depression symptoms are published more often than trials finding that they do not, giving the impression that these drugs are more effective than they probably are.
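(One more toy simulation, again with invented numbers: imagine 1,000 studies of an effect whose true size is zero, where only the ‘exciting’ estimates get published. The published record ends up telling a very different story from the full set of results.)

```python
import numpy as np

rng = np.random.default_rng(2)

# 1,000 invented studies of an effect whose TRUE size is zero.
# Each study's estimate is just sampling noise around zero.
estimates = rng.normal(loc=0.0, scale=0.2, size=1000)

# In this caricature, journals only publish "exciting" results:
# estimates large enough to look statistically significant.
published = estimates[np.abs(estimates) > 0.4]   # about 2 standard errors

print(len(published))             # a few dozen studies make the cut
print(np.abs(published).mean())   # the published effects average ~0.45...
print(np.abs(estimates).mean())   # ...while the full set averages ~0.16,
                                  # and the true effect is exactly 0
```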
Another way that research findings can be misrepresented is when they are simplified or sensationalised in the media. It might be reported truthfully that symptoms improved consistently across participants and repeated studies, but the report might not include details of how much they improved, certainly not in the headlines. Maybe it was only a tiny improvement in very specific circumstances. The findings are still promising and interesting, just not as much as they seemed. And it is very rare for 100% of participants to experience the positive outcome.

Let’s say there is a study looking at the effect of saying affirmations every day for 3 weeks on depressive symptoms, compared to people who said their times tables instead, or said nothing different to normal (this example is completely made up, by the way). And let’s assume this is a perfect, trustworthy study that has been replicated many times. If 67% of the affirmations group experienced a reduction in depressive symptoms, compared to no change in the other groups, that would be a hugely impressive result: highly likely to be statistically significant, and unlikely to be chance. Something is going on when people say affirmations if as many as 67% have a positive outcome. But if, along the way, this gets translated into advice that everyone with depression should say affirmations because affirmations cure depression, that means 33% of people who follow the recommendation will experience no change. If affirmations cure depression, why haven’t they worked? What is wrong with those people? When you go back to the research, it’s obvious that nothing is wrong with them; affirmations simply helped in only 67% of cases. But as soon as you get a little distance from the research and into the realm of advice and influencers, there is a tendency to over-promise, and a risk of people feeling a sense of failure when it doesn’t work. If you are experiencing depression, the last thing you need is to feel like a failure too.
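(The arithmetic behind that, using the made-up 67% figure, is worth spelling out.)

```python
# Back-of-envelope arithmetic for the invented affirmations example.
responders = 0.67     # proportion who improved in the (made-up) study
followers = 1_000     # people who later follow the "affirmations
                      # cure depression" advice

helped = round(followers * responders)   # 670 people improve
not_helped = followers - helped          # 330 people see no change

print(helped, not_helped)
# Nothing is "wrong" with those 330 people: a 67% result simply
# means roughly 1 in 3 will not respond.
```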
So if you enjoy learning about the research, it’s useful to have some idea of what makes good quality research, so you can decide how trustworthy it is and remember to take the findings with a pinch of salt. There have been many times when I’ve come across what seems to be a good quality piece of research with a clear finding, only to discover that the next paper I read got very different results. Then, as I read around, I start to put together a cautious picture with plenty of nuance and variation, and what becomes clearer is that we don’t really know. That’s not to say the research isn’t useful. I’m not bashing research. I am a big research geek: I love immersing myself in it, pulling it apart, putting it together, comparing it, conducting research myself. We can learn a lot from it. It can help guide us, give us a starting point, increase the chances of success, provide some clarity. It can help guide funding decisions for programmes, where the aim is to create something that works for the majority while reducing the risk of wasting time and money. And it’s just fascinating.
But the point is, although you can talk about likelihood and tendency, with matters of psychology you should be very cautious about dealing in absolute rights and wrongs. We are unique, and our complicated selves and lives involve a whole array of factors working together and interacting in a constantly shifting dynamic. Even if we say, for argument’s sake, that there are facts, even highly trained researchers are not in possession of them. So by all means follow advice if it’s useful, but you don’t have to. There is no guarantee that any research, no matter how expertly conducted, will apply to you. It might work for someone else; that doesn’t mean it will work for you. It might work for you in five years, when you and your circumstances are different; that doesn’t mean it will work for you today, with your current circumstances. So perhaps you are best off getting to know yourself right now, so that you can act according to what feels right for you in this moment, whether or not that is in line with advice (from experts or otherwise). (Get in touch with me if you’d like a free 1:1 coaching session to help with this.) If you do choose to follow advice and it doesn’t work for you, it doesn’t mean there is something wrong with you, the way you did it, or the advice itself. It’s just complicated. You’re unique. The factors didn’t align that way today.
So feel free to choose to go against ‘what works’ and against advice, no matter how sensible it is. No one actually knows for sure. You can experiment without having to prove anything (even the researchers aren’t trying to prove something, remember?). You’re not mad, stupid, stubborn or brave; you’re just working on a different hypothesis right now, and you are free to do so.