Ian Robertson
My fingers drummed on the desk as I waited impatiently for the computer to start up. Usually it took seconds, but today it was minutes before the familiar screen finally glowed into view. I clicked on the Outlook email icon and waited … and waited. Finally I could access my email, but every operation was grindingly slow. The same was true for Word. My computer had gone on a go-slow, a work-to-rule, or whatever the digital equivalent is.
Our technician diagnosed a glitch in the software and recommended re-installing the operating system, which she did. But no luck – the computer was still on a go-slow.
“The latest operating system is probably too big for its current RAM,” Lisa said. “We need to upgrade it.”
By the next day, with the new software installed and more memory, my computer was working again and I had my digital life back.
We are used to thinking about computer performance in terms of software, hardware and their at times fraught relationship. And we have no trouble understanding that reprogramming the software can boost hardware performance. The same, however, cannot be said for how some people, including professionals, think about the mind and the brain.
At the beginning of my career I worked for ten years as a practising clinical psychologist before moving into brain research, an unusual combination of experience that has led me to discover some crucial things about how the software of our minds and the hardware of our brains work – or don’t work – together. I now realize that the mind and the brain interact with each other in ways I could never have believed when I set out on this journey.
For a long time I regarded my early work as a clinical psychologist treating people with various types of emotional problems as being unconnected with my second career as a neuroscientist looking at attention and brain rehabilitation. That made sense, because in most of science and medicine, they are disconnected, too. Even nowadays, the brain’s hardware researchers hardly ever talk to the mind’s software practitioners, and vice versa. Gradually, however, I have come to understand that many conditions affect both the software and the hardware and that, just as my technician needed to deal with both of these in order to re-establish the good performance of my computer, the same is true for psychology and neuroscience.
It is my good fortune to have straddled the boundary between clinical psychology practice and cognitive neuroscience theory, and hence to have been able to develop a new understanding of how we can “tune” our mind-brains, using both hardware and software, to increase our performance, to cope positively with life’s adversities and to rise to challenges.
*****
Every Monday morning, I wheeled the anaesthetized patients down to the treatment room. The psychiatrist would hold the electrodes to their heads, the bodies would judder as the current flowed, and then they would lie still. Then it was my job to wheel them back up to the ward where, an hour or two later, they would wake up, dazed and a little bewildered.
It was 1975 and I was working as a nursing assistant in a gleaming new psychiatric ward attached to a general hospital in New Zealand. Every Monday morning I had to assist in the “treatment” – electro-convulsive therapy (ECT) – that was given to most of the patients on the ward, whatever their diagnosis.
They stayed for a few weeks, very occasionally months, in this unit before they were sent on their way. Only a minority showed florid psychiatric symptoms such as hallucinations, delusions, mania or severe depression. A few were alcoholic, but the majority were suffering from depression, anxiety problems or personality disorders. I knew this because, as a graduate in psychology, I was allowed to read their notes.
Occasionally after the ECT I would witness a near-miraculous change in a profoundly depressed patient. And it was wonderful to see someone awake with a lightened spirit, having escaped from the black cave of depression. But most people seemed unchanged at best; the psychotic patients often a little worse.
I remember one of the consultant psychiatrists who ran the unit explaining to me patiently one day that there was a clear division between mental illness, which required medical treatment, and counselling for life problems, which could be done by any reasonable person.
“The patients in here,” he told me confidently, “are sick, and our treatments are ECT and medication – they don’t need counselling.”
And so, in that ward where I worked for almost a year, everyone was on pharmaceuticals of some sort, usually several different types, and the majority were wheeled down for their ECT on a Monday morning. That was the routine.
There was, however, another consultant psychiatrist who, though nominally also responsible for the unit, very seldom admitted any of his patients to the ward, and, when he did, they were either very sick or very suicidal. Instead, he ran a day clinic where the treatment was mainly various types of psychotherapy and group therapy delivered by trained nurses and psychologists. I worked there for a few weeks as well, and it seemed to me that the types of patients being treated in the day hospital didn’t differ much from those admitted to the wards by the other two doctors.
I was puzzled by the two completely different therapeutic approaches. My mentors on the ward were very clear that medical intervention of some sort was the answer to their patients’ problems. However, as a psychology graduate brought up in the wake of the 1960s fashion for personal and human potential development, I was sympathetic to the approach of the third consultant – that by talking through their problems, people under great stress should be able to resolve them and so find relief from that stress.
But, if I am honest, I couldn't see a clear difference in the outcomes of the patients under the two regimes. All of which left me a bit confused.
*****
It was while working as a teacher in the Fiji Islands a year earlier, in 1974, that I had first come across the works of the nineteenth-century philosopher-psychologist Friedrich Nietzsche, in the sparsely furnished bookshelves of the public library in the sleepy town of Lautoka. In the absence of any more enticing reading matter I settled down to read his Twilight of the Idols, which, it turned out, he had written in a little under a week as an introduction to his wider work. I had studied philosophy at Glasgow University in conjunction with my psychology degree, but Nietzsche had not been part of the course.
This was a real pity, as he was not only a respected philosopher but also, in his time, as influential a psychologist as Sigmund Freud. In fact, he proposed many of the ideas now attributed to Freud several decades before Freud did, including the concept of the unconscious and the ideas that we repress uncomfortable emotions or project them on to others [1].
At the beginning of the book Nietzsche lists 42 disconnected maxims, one of which really jumped off the page at me, impacting on my young and impressionable mind to such an extent that it reverberates with me to this day: What doesn’t kill me, makes me stronger [2]. This dictum arose from Nietzsche’s view that humans can only achieve their potential if they shake off their idols, particularly the notion of God, and take full responsibility for their own lives through the exercise of the will and by engaging their appetite for power, especially over their own destinies. Nietzsche makes it clear that this isn’t a new concept, quoting the Roman poet Aulus Furius Antias, “spirits increase, vigor grows through a wound”, to illustrate his point [3]. But for Nietzsche, being strengthened by adversity followed naturally from his belief in the existential freedom of the individual to rise above the basic drives that Freud was later to describe. In this sense, and throughout his writings, Nietzsche was inclined to see individuals as agents who could learn to harness their own power, rather than as subjects of forces over which they had little control.
And so it was that I had Nietzsche at the back of my mind when I found myself among the patients in my psychiatric ward. It seemed to me that the ECT-dispensing psychiatrist who saw his patients’ problems as a hardware fault couldn’t have been further from Nietzsche’s position – the patients, in his eyes at least, were most definitely not agents but the subjects of their emotional stress.
The psychiatrist who ran the psychotherapy ward, on the other hand, did seem to see his patients as agents in a common enterprise to find relief from stress. This made more sense to me, but nevertheless there seemed to be little difference in how the two groups of patients – hardware subjects and software agents – fared after their treatment. And for sure, neither group seemed to have become stronger through adversity. This left me confused. Surely, if Nietzsche was right, then his optimistic view of people as agents of their own fates should mean that a course of psychotherapy would help them master life’s stresses and become stronger as a result? I couldn’t see much evidence of this in either ward.
So as I left the South Pacific to cross the world back to Europe, I felt split between the two perspectives and unable to reconcile them. Yes, instinctively I felt drawn to Nietzsche’s belief that we can have control of the software of our minds. But when faced with the often fatalistic suffering of the patients I had worked with, I was left with the niggling doubt that maybe the first psychiatrist was right and that these patients’ emotional problems could be put down to a hardware fault in the brain.
*****
When I began my training as a clinical psychologist at the Maudsley Hospital/Institute of Psychiatry in London in October 1976 I was relieved to find that ECT was not as widely used in London hospitals as it was in New Zealand. I also learned that, soon after I had left, a new regime had been put in place in that New Zealand hospital that moderated some of the practices I had witnessed.
Most of my contemporaries at the Maudsley were students with a medical background who were training to be psychiatrists, but there were some, like me, who came from a psychology background, and we trained in the Psychology sub-department of the Institute. The Institute’s focus was on treating the ‘hardware’ of the brain by hunting down its faulty circuits and correcting their disordered chemistry with clever science – indeed, this remains the main impetus of psychiatry to this day. The same basic assumptions lurked in the ether of that wonderful London institution as in the New Zealand hospital: that our patients’ mental problems were caused by disorders of the brain, and that ultimately the brilliant, white-coated scientists hunched over their test tubes in the Institute of Psychiatry would identify the faulty biology and find cures for these conditions.
This was the background against which I trained in the Psychology Department, where I learnt only to treat the “software” without considering for a moment the hardware of the brain. Mainly, I and my fellow psychology students learned something called “behaviour therapy”, in which people learned to overcome their phobias by gradually facing up to increasingly frightening situations. We learned to treat people with obsessive-compulsive disorders in a similar way.
It was hard, as an impressionable student, not to get sucked into the Institute of Psychiatry’s worldview, particularly as I found myself surrounded by so many brilliant and charismatic mentors. And there were two powerful factors in favour of it.
First was the then infant science of genetics. Within a couple of decades genetics was to achieve a remarkable flowering of scientific productivity, but during the 1970s it was dominated by twin studies, in which the balance between nature and nurture was assessed for various disorders by comparing their frequency in identical versus fraternal twins. If depression, for instance, appeared in both members of identical twin pairs more often than in both members of fraternal twin pairs, this was taken as evidence of a strong inherited component to the depression.
Trainee psychiatrists were taught to interview patients carefully for a family history of psychiatric problems and, when evidence of such a history was found, that was usually taken as evidence of an inherited disease which could be causing the current problems.
But there was a second big reason for buying into such a medicalized view of mental problems. This was the fundamental belief – no, certainty – that the adult brain is “hard-wired” and that, unlike a broken leg, for instance, if it is damaged it cannot repair itself.
This was the near-universal orthodoxy in medicine and neuroscience at the time, and in much of psychology too: that experience only molded the very young brain and that adult connections were soldered, like a home electricity supply, into a fixed and unchangeable neural circuitry. While houses can be rewired, brains can’t, and so, from the point of view of psychiatry, we are the more or less passive servants of our genetically programmed, fixed-in-place neural circuits for the rest of our lives.
That was the belief that still clung like tobacco smoke to the curtain of assumptions underlying the world in which I trained and later worked. Simply put, since the brain is hard-wired, only physical or chemical treatments can change that wiring – hence the overwhelming focus on drugs and, albeit less and less often, ECT.
But while ECT was by now being used relatively rarely, the prescription of drugs for psychological disorders was expanding dramatically – and continues to do so today. Let’s take one country with a centralized health care system and hence comprehensive records of drugs prescribed – England. In 2013, there were approximately 53 million people in that country. And in that year, the number of prescriptions for anti-depressants was … 53 million [4].
Even bearing in mind that many of these were repeat prescriptions, this is an astonishing rate of treatment, which does not even take into account the other types of psychotropic drugs, such as anti-anxiety medications, which are also being prescribed in enormous numbers. What is happening here? Is it the case that depression has been under-diagnosed in the past and that finally psychiatry has managed to catch up and deal with that scourge? Or are people avoiding facing up to stress for themselves, without medication, in favour of passively receiving drug treatments from their doctors? Or is it perhaps that there are more stresses these days to which people are more likely to succumb? These are huge questions to which no one, even now, seems to have clear answers.
*****
In some ways, modern life is more stressful than it was 100 years ago – we are faced with fragmented communities, broken families, work pressures and ruthless competition. But in many other ways life has become less stressful – gone are the days of the workhouse, hunger, dauntingly high levels of infant and maternal death, tuberculosis, diphtheria and the rest.
So why does there now seem to be so much more emotional distress? This was a thought that started to prey on my 26-year-old mind during my encounters with the patients at the Maudsley. Many were at the extreme end of the spectrum – that’s why they were there – but others had reacted badly to what I would have considered fairly unexceptional stresses in their lives and were burdened by an unhappiness that I found difficult to understand. It was also very clear to me that whatever stresses had brought them to seek treatment hadn’t made them stronger. After two years working in the world of psychiatry, I was having big doubts about Nietzsche’s belief that we are the free masters of the software of our minds.
In 1982 I began working in Edinburgh, both as a practising clinical psychologist and also teaching at the university. Bringing with me what I had learnt at the Maudsley, I found myself donning the mantle of the hard-nosed biologist, imparting facts to fresh-faced students such as “the brain is not a muscle; once dead, a neuron cannot regenerate; you cannot repair damaged brains”.
My psychiatrist colleagues didn’t necessarily regard their patients as brain-damaged as such, but rather believed that the biochemistry and wiring of their brains were askew, meaning that their brain circuits didn’t work properly. This was potentially treatable, but only, of course, through medication or ECT. This approach to mental disorder fully accorded with the doctrine that the adult brain cannot be shaped by experience.
I passed on these orthodoxies with the grim satisfaction of the convert – all that airy-fairy 1960s optimism about personal growth and development based on self-actualization and self-improvement had to be confronted with the stark realities of the brain’s physical and genetic immutability.
In 1984 I began working as a neuropsychologist in the Astley Ainslie rehabilitation hospital in Edinburgh and I continued to lecture on the theme of “your brain is not a muscle …” and so on.
Until, one day, the sky fell in.
Contrary to everything I had ever been taught, a paper published in early 1984 showed that the adult brain is not “hard-wired” and that, on the contrary, it is changed by experience[5]. Overnight, my assumptions were overturned, leading me to change the direction of my career and ultimately to convert from being a practising clinical psychologist into becoming a research neuroscientist.
The research in that paper was based on the fact that in the brains of all mammals, including humans, there are so-called “sensory maps” in the cortex, where the brain cells’ responses to sensation in different parts of the body are mapped out.
In human brains, for instance, there is a separate map for each finger, such that when one finger is touched, the sensation of being touched arises from brain cells firing on the part of the map devoted to that finger. But if that finger is lost, then the brain cells responsible for that finger quickly start responding to touch in the two neighbouring fingers. The brain, in other words, is physically changed and shaped by experience, in this case the experience of losing a finger.
The smell of orthodoxies burning filled the scientific air. Soon more research appeared showing that, if you repeatedly stimulate one finger tip, then the brain map for that fingertip expands [6]. Then it was shown that even blocking the input to the brain from one finger with a temporary anaesthetic changes the sensory maps in the brain [7]. And blind people who have learned to read Braille show an expanded brain-map for the finger they use to read [8].
A major discovery always triggers an avalanche of research, and over the next decade hundreds of papers appeared showing that what we had believed about the human brain for the previous hundred years was wrong: it is changed by experience [9]. And it isn’t just the sensory/touch parts of the brain which show this plasticity – it is true for every brain system, ranging from hearing [10] to language [11] to attention [12] to memory [13].
Crucially for my own bewildered journey through Nietzschean optimism and genetic fatalism, it transpired that our emotional experiences also physically shape our brains. Take babies, for example. Through the work of John Bowlby in particular, it has long been well known that it is important for newborn babies to form strong, emotionally secure relationships with their mothers. However, for some unlucky children that doesn’t happen. Children with so-called “insecure attachment”, whose mothers tend to be less responsive to their emotional needs, suffer more anxiety and are less easily soothed when distressed than securely attached children. And the effects of this are very long lasting.
When secure and insecure 18-month-old babies were followed up at the age of 22, those young people who had been insecurely attached as babies, and hence who had suffered a lot of anxiety in their lives, showed important differences in the amygdala, a key part of the brain responsible for emotion. The amygdala is particularly active when people are anxious, and so, over many years, it becomes bigger because its networks of brain cells become more and more strongly connected with repeated use. And indeed, the 22-year-olds whose relationships with their mothers had been insecure, even though they would not be termed mentally ill, had bigger amygdalae than those with secure relationships [14].
I was dizzied by this discovery: the software of experience can re-engineer the hardware of the brain.
*****
I began to feel the way I imagine Nietzsche must have felt a century earlier when he saw his religious idols fall. Mine was a different idol – the brain disease theory of emotional distress. And the first of its orthodoxies, that the adult brain is hard-wired, had been tilted so badly that the idol was in danger of falling off its pedestal. But it was still standing because of the second stark biological reality holding upright my idol of a medical view of emotional disorders – genetics.
We have roughly 24,000 genes and they don’t change with experience – we are, more or less, stuck with what we inherit. The notion of the hard-wired brain may have fallen, but my idol was still standing because of this brute fact about our inherited makeup. My psychiatric colleagues’ focus on family history in their patient assessments made clear sense in the light of this biological reality, as did the worldwide effort to find medical solutions to what appeared to be strongly genetically determined emotional illnesses such as depression, obsessive-compulsive disorders and chronic anxiety problems. Because the twin studies were clear – there was a strong inherited element to most emotional disorders.
And then, with a sudden crash, the idol fell to the ground.
It was 1990 and research appeared telling me something about genes that I knew but hadn’t properly understood before: while their basic structure can’t be changed by external events, the way they work can be changed [15]. Genes work by “expressing” proteins and these proteins then control various functions in our bodies, brains and behaviour. Experiences and environment can turn on and turn off genes’ protein-synthesizing activity [16, 17, 18]. This is true of one particular experience common to every one of my patients in New Zealand, London and Edinburgh. And that experience is stress.
Exams are stressful for most people, which is why researchers in Ohio studied the activity of the gene for the interleukin-2 receptor in medical students during exam periods, compared with non-stressful times when they had no exams. The researchers discovered that stress turned down the activity of this gene [15], which is central to the working of the body’s immune system.
Many later studies found similar effects of stress on gene function. For instance, London civil servants who felt stressed by having to do mental tasks under time pressure showed big changes in the functioning of another immune-system gene, one that contributes to the hardening of the arteries (atherosclerosis). This gene, interleukin-1 beta, “expresses” a protein that plays a key part in inflammation. The men most stressed by the tasks showed the biggest increase in the gene’s activity, with the effects lasting two hours or more after the tasks had ended [19].
The final collapse of my idol confirmed me in my decision to become a researcher and to try to understand this incredible interaction between the software of the mind and the hardware of the brain.
As the last decade of the twentieth century began, I realized that it doesn’t make sense to think of mind and brain as separate software and hardware entities – we have to consider how they interact and affect each other. Thoughts and emotions turn genes on and off, physically reshaping the brain as they do so. And then these physical changes in turn mold our thoughts and emotions.
It made no sense to search for the physical causes of emotional distress independently of the psychological stressors creating the turmoil. Nor did it make any sense for psychologists to give no thought to the hardware of the brain when delivering their psychological therapies. The absurdity of separating mind and brain – a view that had surrounded me in New Zealand, London and Edinburgh – hit me like a brick and set me on a new career in neuroscience research.
But one doubt, a remnant of my old hardware-fixated self, still kept circulating in my mind: if psychological stressors can shape the brain, including gene function, surely psychological therapies should be able to do the same? I had to wait another twenty years before that doubt was resolved. Talking and behaviour therapies can physically change the brain: for instance, one of the early studies in this area discovered that successful cognitive behaviour therapy (CBT) for obsessive-compulsive disorders can lead to significant changes in brain function [20]. Many others were to follow, showing that changes to the software of the mind can actually cause changes to the brain’s hardware [21].
Would my New Zealand psychiatrist, convinced that only physical therapies such as ECT could cure what he saw as the brain’s hardware faults, have had more respect for his psychotherapy-delivering consultant colleague had he known this fact? And would that software-focused consultant have taken a different approach to his psychotherapy had he known that stress had reshaped the hardware of his patients’ brains? I believe that the answer to both these questions is “yes”.
*****
I felt liberated to be caught on this revolutionary scientific wave, which swept me to Rome in 1989 for a year of research into brain plasticity. By now I was convinced that Nietzsche had at least the theoretical possibility of being right – what we do with our minds has consequences for our brains and vice versa. Stress does have the potential to make you stronger both in terms of your software and your hardware. Now the challenge for me was to find out how to put this principle into practice, and that is what I have been trying to do for the three decades since.
In 1991, I had the great fortune to get a job at the Applied Psychology Unit in Cambridge, since renamed the Cognition and Brain Sciences Unit. There I took forward my ideas on how we can best use our knowledge of the way the software and hardware of the brain interact to help people improve their performance and rise to the challenges facing them.
In 1999, I wrote a book called Mind Sculpture about this scientific revolution [22], and moved to Trinity College Dublin, convinced more than ever that Nietzsche was on to something, but wanting to understand the concept from a more scientific perspective.
The human brain is the most complex entity in the known universe and Nietzsche’s maxim recognizes in a philosophical sense the human ability to harness some of that complexity in shaping our own destiny. Only when we give up our idols, Nietzsche argues, can mankind reach its potential, by using its own willpower to forge its destiny.
During my time as a clinical psychologist, I had tried to address my patients’ individual symptoms as best I could, but, I now realized, the assumption of fixed-in-place brain circuitry had infiltrated my thinking and, as a result, sapped my confidence in what I could achieve with those early patients. And my own fatalism may even have communicated itself subtly to some of them, tethering them to the false idol of the unchangeable brain and so disabling the capacity, shared by every living human being, to have a hand in shaping their own neural destiny.
The big idea that now started brewing in my mind was to understand the interactions of the mind and the brain in order to help explain why some people are crushed by the problems life throws at them while others seem toughened by them. As I moved to Dublin, and into the 21st century, I was convinced that only by combining what we know about the hardware and the software of the brain, and about how they interact, could we really explore the limits of Nietzsche’s maxim. How, when and why do some people rise to the challenge of bad experiences, while others fold under their weight?