Neuroscience under scrutiny

Stella Collins provides help in understanding neuroscience and how to apply it through six questions.

You’ll be aware of the massive explosion of information that’s coming from the field of neuroscience and of the amount of interest that people in the training world are taking in it. 
It’s wonderful that we suddenly have ways to begin measuring what’s going on in people’s heads, and even more useful that we, as training professionals, have something genuinely scientific to back up some of our projects.
 
All the training institutions and professional bodies are taking an interest in neuroscience and if you’ve attended any conferences, exhibitions or seminars recently you’re sure to have heard something about how useful it is in our world of learning.  
 
I’m as excited as everyone else but occasionally a bit concerned about the bandwagon effect. Are people jumping on the bandwagon because it sells more courses or wins bigger budgets, or do they have a genuine commitment to improving what we do? You’ll possibly have come across organisations that use neuroscience in their marketing but don’t really seem to have clear evidence of how that neuroscience has informed what they do.
 
Conversely, there are other people out there who really do know their stuff, can separate their neurotransmitters from their hormones, and have come up with practical ways to implement that knowledge in a training environment.
 
If you’re reading this article, you’re more likely to be a training professional than a neuroscientist, and it can occasionally be hard to sort the wheat from the chaff, so this article explores some questions you can ask when you feel you might be blinded by science. Sometimes the response to these questions alone will give you an idea as to whether there’s genuine interest or just the bandwagon effect.
 
These are not questions I’ve made up; they are commonly recognised as useful ways to challenge research, information and hypotheses in the science community. Each question on its own may not be entirely helpful but, by asking all six, you’ll see patterns that add or detract from the credibility of the research you’re being told about.
 
1. Who did the research?  
There are two elements to consider here. Who is named as the researcher and which organisation has done the research?
When the research is backed by a major institution like a university or a major science business, you can be fairly confident there will have been audits, checks and balances, papers published and peer reviews to demonstrate scientific rigour. 
 
Who appears to have done the research? Has the person named as the lead researcher (their name will be the first to appear on any published paper) been cited in other papers or other research or is this their first publication?  
 
There’s nothing to say that major institutions can’t get it wrong. There’s a well-documented case of a renowned researcher at Harvard University, Dr Marc Hauser, who’d been getting significant recognition for his long-term studies on monkeys while researching cognitive evolution.
 
He was forced to resign from Harvard University in 2011 after he’d been found guilty of scientific misconduct. He’d fabricated data in one study, manipulated results in multiple experiments and incorrectly described how studies were conducted. Interestingly, one of his projects was a ‘Moral Sense Test’, in which participants were presented with a series of hypothetical moral dilemmas and asked to judge each one!
 
The fact that research comes from a smaller company, lesser-known researchers or a university you haven’t heard of before isn’t necessarily a reason to devalue it – everyone has to start somewhere – but it’s worth asking the question.
 
2. What’s on their agenda?  
When an organisation’s marketing says ‘research says our product is better than others’, it’s relatively easy to spot the element of vested interest. However, if it’s a piece of scientific research showing that a particular training tool improves cognitive performance, it’s not so readily identified as marketing or a public relations exercise.
 
Much research, however, is done and funded by major corporate businesses with a product to sell based on that research. This is normal, and it’s why we have regulated industries with complex compliance and regulatory processes to check that the research is scientifically rigorous and ethical.
 
We all have vested interests in some way or another and we can’t dismiss research just because it comes from a particular source with something to sell, but be aware and ask yourself – and them – whether there are vested interests in the research results.
  
3. Where was it published first?  
Science research is usually published first in reputable science journals so that colleagues, peers and other people can look at their methodology, the results and the interpretation – this is the process of peer review.
 
Often scientists will attempt to replicate the work to check that it is reliable and they’ll refine and improve the methodologies. Think of it a bit like lawyers who love to pick holes in each other’s contracts. Scientists love to analyse and find flaws in other scientists’ methodologies and results with the aim of moving the science on.  
 
Research released first to the mass media and not peer reviewed tends to be less well regarded in the scientific community. Having said that, there is a heated debate among science writers about how peer review should work now that the internet is so all-pervasive. Should researchers publish first and allow peer review to happen online, or go through the more traditional procedure?
 
When something appears in the mass media it will also have been simplified, because most of us won’t have the time to explore the detail. But, as with the lawyers, the detail often contains important caveats and corollaries which may mean the research applies only under certain conditions and can’t be generalised.
 
4. When was it published and when else? 
Have a look at when a piece of research was published. If it’s twenty years old, that doesn’t mean it’s invalid, but ask yourself what’s happened since. Was this the piece of ground-breaking research that everyone defers to and that has been replicated many times, or was it a one-off that subsequent research has gone on to weaken or disprove?
 
For instance, Hermann Ebbinghaus did his initial work on memory retention at the end of the 19th century, and you’ll probably be familiar with the ‘forgetting curve’. If you were to repeat his experiments now you’d probably get quite similar results, but there’s clearly far more recent work on how we remember and forget. And you might be surprised to know that Ebbinghaus wasn’t memorising interesting, connected and relevant bits of information – he was memorising lists of nonsense syllables. Keep this in mind because scientific studies don’t usually replicate real life. In fact, they can’t, because the very fact of being studied changes people’s behaviour (the Hawthorne effect).
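As an aside, Ebbinghaus’s findings are often summarised with a simple exponential decay, R = e^(-t/S), where t is the time since learning and S is a notional ‘memory strength’. The short Python sketch below only illustrates that textbook approximation – the strength value is an arbitrary assumption, not Ebbinghaus’s data – but it shows the familiar shape: steep forgetting at first, then a long tail.

```python
import math

def retention(t_hours, strength=20.0):
    """Simplified exponential 'forgetting curve': R = e^(-t/S).
    't_hours' is time since learning; 'strength' (S) is a hypothetical
    memory-strength parameter - higher S means slower forgetting."""
    return math.exp(-t_hours / strength)

# Retention drops steeply at first, then levels off; spaced review
# is usually modelled as raising S and so flattening the curve.
for t in (0, 1, 24, 48, 168):
    print(f"{t:4d} h -> {retention(t):.0%} retained")
```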
 
5. How was the science done?  
Have the results been properly analysed? Have the researchers run double-blind trials or eliminated the potential for the placebo effect or the Hawthorne effect? Statistical analysis of results is vital to check that they are valid and not the product of coincidence or an outlying result. Statistics counter our very human tendency to see patterns where none exist and help you distinguish a genuine pattern from a coincidence. One psychological phenomenon, salience, helps to explain why we pay more attention to some things than others – they seem more important or more familiar – and it’s one of the reasons we need statistics to identify the real patterns.
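To make that concrete, here’s a minimal, entirely hypothetical sketch in Python: two small sets of post-course test scores are compared with an independent-samples t-test. The groups, scores and 5 per cent threshold are invented for illustration; the point is simply that a p-value gives you a principled way to ask whether a difference is bigger than coincidence alone would plausibly produce.

```python
from scipy import stats

# Hypothetical post-course test scores for two training formats
group_a = [72, 68, 75, 80, 66, 74, 71, 69]   # e.g. redesigned course
group_b = [70, 65, 73, 69, 64, 71, 68, 66]   # e.g. conventional course

# Independent-samples t-test: is the difference bigger than chance
# alone would plausibly produce?
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Unlikely to be pure coincidence (at the 5% level).")
else:
    print("Could easily be coincidence - don't build a course on it yet.")
```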
 
What’s the sample size for the experiment? One of the challenges of brain scanning is that, because of its expense, many experiments are conducted on only small numbers of people, so it’s harder to argue that the effects or results apply to everyone.
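A rough illustration of why sample size matters: the uncertainty around an average (its standard error) shrinks only with the square root of the number of participants, so a dozen scanned brains leave far more wobble than a couple of hundred. The figures below are invented purely to show the scaling.

```python
import math

def standard_error(sd, n):
    # Standard error of the mean: SE = sd / sqrt(n)
    return sd / math.sqrt(n)

sd = 15.0  # assumed spread of individual scores (hypothetical)
for n in (12, 30, 200):
    print(f"n = {n:3d}: standard error ~ {standard_error(sd, n):.1f}")
```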
 
Worse still, most research is done on WEIRD (Western, Educated people from Industrialised, Rich Democracies) participants. According to one study, 68 per cent of research subjects in a sample of psychology journals were from the United States and 96 per cent were from Western industrialised nations. Furthermore, psychology undergraduates make up the most common subject pool. This presents a challenge when we take a single piece of research and suggest that ‘humans’ learn this way or that.
 
6. What’s the result saying? 
When scientists publish research, they tend to hedge it with statistical probabilities and caveats because they know that it’s very unlikely a single piece of research will tell them anything definitively. It’s usually just another piece in a complex puzzle and that’s particularly true of neuroscience because the brain is so complex.
 
Avoid looking for ‘the one true answer’ – real research evolves, changes and builds on previous research, sometimes overturning it completely. So if a piece of research suggests it’s a magic bullet or a magic wand that’s going to solve all your problems, go back to the previous five questions and ask lots more.
 
Here’s a bonus question: is this research relevant to what I do, and can I, or should I, apply it? The fact that something stimulates your ‘anterior cingulate cortex’ may sound impressive, but is it relevant to what you’re trying to do? And how on earth would you be able to tell whether your carefully designed exercise did or didn’t stimulate someone else’s ‘anterior cingulate cortex’?
 
(In case you’re wondering, your anterior cingulate cortex helps to focus attention and tune into your own thoughts; it seems to play a role in depression, causing sufferers to lock onto their own sad feelings.)
 
The chances are, as someone whose job is in the training world rather than research, you’re more likely to encounter information second or third hand through blogs or magazines. You won’t regularly come across research by reading original research papers, and articles straight from academia can be very daunting.
 
Check out blogs by people who’ve asked some of these questions for you. There are some really good ones around like the British Psychological Society Research Digest www.bps-research-digest.blogspot.co.uk.
 
Use reliable sources like New Scientist, Nature and Scientific American, or books that round the research up into something more digestible, such as Brain Rules, Make Your Brain Work, Mapping the Mind and Your Brain at Work.
 
It’s helpful for all of us to keep an open mind and to question the research. If the evidence seems to change then you may have to change your practice or your reasons for doing something. Reassess regularly, question what you find out and check against all the data.
 
This helps us sort the significant from the insignificant and the real from the hypothesised; it helps to preserve rigour and to make sure we recognise the difference between something that sounds useful and something that’s been proven.
 
Brain science is really, really complicated and that’s why there are thousands of scientists around the world studying tiny, specific areas. We, as professionals in another sphere, can’t hope to understand it all so we do need people to simplify it for us but we also need to be careful about being blinded by science and seeing neuroscience as a panacea for everything in our world.
 
What we don’t know about how the brain works is still far greater than what we do know so sometimes you have to take a pragmatic approach. Research sometimes just confirms what you’ve always known intuitively. 
 
About the author
 
Stella Collins is the founder of Stellar Learning and the Brain Friendly Learning Group
