Barbara Sahakian, Professor of Clinical Neuropsychology, University of Cambridge
Disclosure statement
Julia Gottwald is the co-author of Sex, Lies, and Brain Scans, published by Oxford University Press. She has received funding
from the MRC, St John's College, Cambridge, and the German Academic
Scholarship Foundation.
Barbara Sahakian consults for Cambridge Cognition, Peak
and Mundipharma. She has received funding from the MRC, Wellcome Trust,
the NIHR HTC, Peak and J&J/Janssen.
The brain during memory tasks. John Graner/Wikipedia
Are you lying? Do you have a racial bias? Is your moral compass
intact?
To find out what you think or feel, we usually have to take your word
for it. But questionnaires and other explicit measures to reveal what’s
on your mind are imperfect: you may choose to hide your true beliefs or
you may not even be aware of them.
But now there is a technology that enables us to “read the mind” with
growing accuracy: functional magnetic resonance imaging (fMRI). It
measures brain activity indirectly by tracking changes in blood flow –
making it possible for neuroscientists to observe the brain in action.
Because the technology is safe and effective, fMRI has revolutionised our understanding of the human brain. It has shed light on areas important for speech, movement, memory and many other processes.
More recently, researchers have used fMRI for more elaborate
purposes. One of the most remarkable studies comes from Jack Gallant’s
lab at the University of California. His team showed movie trailers to
their volunteers and managed to reconstruct these video clips based on the subjects’ brain activity, using a machine learning algorithm.
In this approach, the computer developed a model based on the
subject’s brain activity rather than being fed a pre-programmed solution
by the researchers. The model improved with practice and, once it had access to enough data, it was able to decode brain activity. The
reconstructed clips were blurry
and the experiment involved extended training periods. But for the
first time, brain activity was decoded well enough to reconstruct such
complex stimuli with impressive detail.
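
The decoding step can be pictured as a fairly standard machine learning problem: learn a mapping from voxel activity to stimulus features on known clips, then apply it to unseen activity. The toy sketch below (Python with numpy and scikit-learn, synthetic numbers standing in for real recordings) illustrates the general idea; it is a minimal sketch, not the Gallant lab's far more elaborate pipeline.

    # Toy sketch of fMRI decoding: synthetic data stands in for real scans.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_trials, n_voxels, n_features = 600, 2000, 50

    stim = rng.normal(size=(n_trials, n_features))     # stimulus features per movie clip
    mapping = rng.normal(size=(n_features, n_voxels))  # hidden stimulus-to-brain mapping
    brain = stim @ mapping + rng.normal(scale=5.0, size=(n_trials, n_voxels))  # noisy voxel responses

    X_train, X_test, y_train, y_test = train_test_split(brain, stim, random_state=0)

    decoder = Ridge(alpha=10.0)    # regularised linear decoder
    decoder.fit(X_train, y_train)  # model "gets to know" this subject's brain activity
    print("decoding R^2 on unseen clips:", round(decoder.score(X_test, y_test), 2))

The more training data such a decoder sees, the better its reconstructions, which is why the real experiments involved those extended training periods.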
Enormous potential
So what could fMRI do in the future? This is a topic we explore in our new book Sex, Lies, and Brain Scans: How fMRI Reveals What Really Goes on in our Minds.
One exciting area is lie detection. While early studies were mostly interested in finding the brain areas involved in telling a lie, more recent research has tried to use the technology as an actual lie detector.
As a subject in these studies, you would typically have to answer a
series of questions. Some of your answers would be truthful, some would
be lies. The computer model is told at the start which ones are which, so it gets to know your “brain signature of lying” – the
specific areas in your brain that light up when you lie, but not when
you are telling the truth.
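
In code, this train-then-classify setup might look something like the toy sketch below (Python with scikit-learn; the features are random numbers standing in for real fMRI activity patterns, and the linear support vector machine is just one plausible choice of classifier, not the one any particular study used).

    # Toy sketch of fMRI lie detection: label some answers, learn the pattern,
    # then classify held-out answers. Synthetic data only.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_answers, n_voxels = 200, 500

    labels = rng.integers(0, 2, size=n_answers)  # 0 = truth, 1 = lie (known during training)
    signature = rng.normal(size=n_voxels)        # the subject's hypothetical "lying signature"
    activity = rng.normal(size=(n_answers, n_voxels))
    activity[labels == 1] += 0.3 * signature     # lying shifts activity along the signature

    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, activity, labels, cv=5)  # classify answers the model has not seen
    print(f"mean classification accuracy: {scores.mean():.0%}")

Published studies differ in their tasks, scanners and classifiers, but the logic is the same: without labelled examples, there is no “brain signature” to learn.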
Afterwards, the model has to classify new answers as truth or lies. The typical accuracy reported in the literature is around 90%, meaning that nine out of ten times the computer correctly classifies answers as lies or truths. This is far better than traditional measures such as the polygraph, which is thought to be only about 70% accurate. Some companies
have now licensed the lie detection algorithms. Their next big goal:
getting fMRI-based lie detection admitted as evidence in court.
They have tried several times now,
but the judges have ruled that the technology is not ready for the
legal setting – 90% accuracy sounds impressive, but would we want to
send somebody to prison if there is a chance that they are innocent?
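
Part of the problem is that “90% accurate” says little on its own: what matters in court is how likely a flagged statement is to really be a lie, and that depends on how common lies are among the statements being tested. A quick back-of-the-envelope calculation, with purely illustrative numbers, makes the point:

    # Back-of-the-envelope Bayes: how trustworthy is a "lie" verdict?
    # Assume the classifier catches 90% of lies and clears 90% of truths
    # (illustrative figures only).
    sensitivity, specificity = 0.90, 0.90

    for prior_lie in (0.50, 0.10):  # fraction of tested statements that really are lies
        p_flagged = sensitivity * prior_lie + (1 - specificity) * (1 - prior_lie)
        p_lie_given_flag = sensitivity * prior_lie / p_flagged  # Bayes' rule
        print(f"if {prior_lie:.0%} of statements are lies, "
              f"a flagged statement is a real lie {p_lie_given_flag:.0%} of the time")

If only one statement in ten is actually a lie, fully half of the statements the model flags would be truthful – hardly a standard of proof a court could accept.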
Even if we can make the technology more accurate, fMRI will never be error-proof. One particularly problematic issue is that of false memories.
The scans can only reflect your beliefs, not necessarily reality. If
you falsely believe that you have committed a crime, fMRI can only
confirm this belief. We might be tempted to see brain scans as hard
evidence, but they are only as good as your own memories: ultimately
flawed.
fMRI scanner. Wikipedia
Still, this raises some chilling questions about the possibility of a “Big Brother” future in which our innermost thoughts are routinely monitored. But for now, fMRI cannot be used covertly. You cannot walk through an airport scanner and then be asked to step into an interrogation room because your thoughts alarmed the security personnel.
Undergoing fMRI involves lying still in a big, noisy tube for long
periods of time. The computer model needs to get to know you and your
characteristic brain activity before it can make any deductions. In many
studies, this means subjects are scanned for hours, often across several sessions. There's obviously no chance of doing this without your
knowledge – or even against your will. If you did not want your brain
activity to be read, you could simply move in the scanner. Even the
slightest movements can make fMRI scans useless.
Although there is no immediate danger of undercover scans, fMRI can
still be used unethically. It could be used in commercial settings
without appropriate guidelines. If academic researchers want to start an
fMRI study, they need to go through a thorough process, explaining the
potential risks and benefits to an ethics committee. No such guidelines
exist in commercial settings. Companies are free to buy fMRI scanners
and conduct experiments with any design. They could show you
traumatising scenes. Or they might uncover thoughts that you wanted to
keep to yourself. And if your scan shows any medical abnormalities, they are not obliged to tell you about it.
Mapping the brain in great detail enables us to observe sophisticated
processes. Researchers are beginning to unravel the brain circuits
involved in self-control and morality. Some of us may want to use this knowledge to screen for criminals or detect racial biases.
But we must keep in mind that fMRI has many limitations. It is not a
crystal ball. We might be able to detect an implicit racial bias in you,
but this cannot predict your behaviour in the real world.
fMRI has a long way to go before we can use it to fire or incarcerate
somebody. But neuroscience is a rapidly evolving field. With clever technological and analytical advances such as machine learning, fMRI might be ready for these futuristic applications sooner
than we think. Therefore, we need to have a public discussion about
these technologies now. Should we screen for terrorists at the airport
or hire only teachers and judges who do not show evidence of a racial
bias? Which applications are useful and beneficial for our society, and which ones are a step too far? It is time to make up our minds.