  Maybe the mind should join the soul, God and ether in the dustbin of science? After all, no one has ever seen experiences of pain or love through a microscope, and we have a very detailed biochemical explanation for pain and love that leaves no room for subjective experiences. However, there is a crucial difference between mind and soul (as well as between mind and God). Whereas the existence of eternal souls is pure conjecture, the experience of pain is a direct and very tangible reality. When I step on a nail, I can be 100 per cent certain that I feel pain (even if I so far lack a scientific explanation for it). In contrast, I cannot be certain that if the wound becomes infected and I die of gangrene, my soul will continue to exist. It’s a very interesting and comforting story which I would be happy to believe, but I have no direct evidence for its veracity. Since all scientists constantly experience subjective feelings such as pain and doubt, they cannot deny their existence.

  Another way to dismiss mind and consciousness is to deny their relevance rather than their existence. Some scientists – such as Daniel Dennett and Stanislas Dehaene – argue that all relevant questions can be answered by studying brain activities, without any recourse to subjective experiences. So scientists can safely delete ‘mind’, ‘consciousness’ and ‘subjective experiences’ from their vocabulary and articles. However, as we shall see in the following chapters, the whole edifice of modern politics and ethics is built upon subjective experiences, and few ethical dilemmas can be solved by referring strictly to brain activities. For example, what’s wrong with torture or rape? From a purely neurological perspective, when a human is tortured or raped certain biochemical reactions happen in the brain, and various electrical signals move from one bunch of neurons to another. What could possibly be wrong with that? Most modern people have ethical qualms about torture and rape because of the subjective experiences involved. If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.

  Finally, some scientists concede that consciousness is real and may actually have great moral and political value, but maintain that it fulfils no biological function whatsoever. On this view, consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn’t propel the aeroplane forward. Humans don’t need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn’t do anything. It is just there. If this is true, it implies that all the pain and pleasure experienced by billions of creatures for millions of years is just mental pollution. This is certainly a thought worth thinking, even if it isn’t true. But it is quite amazing to realise that as of 2016, this is the best theory of consciousness that contemporary science has to offer us.

  —

  Maybe the life sciences view the problem from the wrong angle. They believe that life is all about data processing, and that organisms are machines for making calculations and taking decisions. However, this analogy between organisms and algorithms might mislead us. In the nineteenth century, scientists described brains and minds as if they were steam engines. Why steam engines? Because that was the leading technology of the day, which powered trains, ships and factories, so when humans tried to explain life, they assumed it must work according to analogous principles. Mind and body are made of pipes, cylinders, valves and pistons that build and release pressure, thereby producing movements and actions. Such thinking had a deep influence even on Freudian psychology, which is why much of our psychological jargon is still replete with concepts borrowed from mechanical engineering.

  Consider, for example, the following Freudian argument: ‘Armies harness the sex drive to fuel military aggression. The army recruits young men just when their sexual drive is at its peak. The army limits the soldiers’ opportunities of actually having sex and releasing all that pressure, which consequently accumulates inside them. The army then redirects this pent-up pressure and allows it to be released in the form of military aggression.’ This is exactly how a steam engine works. You trap boiling steam inside a closed container. The steam builds up more and more pressure, until suddenly you open a valve, and release the pressure in a predetermined direction, harnessing it to propel a train or a loom. Not only in armies, but in all fields of activity, we often complain about the pressure building up inside us, and we fear that unless we ‘let off some steam’, we might explode.

  In the twenty-first century it sounds childish to compare the human psyche to a steam engine. Today we know of a far more sophisticated technology – the computer – so we explain the human psyche as if it were a computer processing data rather than a steam engine regulating pressure. But this new analogy may turn out to be just as naïve. After all, computers have no minds. They don’t crave anything even when they have a bug, and the Internet doesn’t feel pain even when authoritarian regimes sever entire countries from the Web. So why use computers as a model for understanding the mind?

  Well, are we really sure that computers have no sensations or desires? And even if they haven’t got any at present, perhaps once they become complex enough they might develop consciousness? If that were to happen, how could we ascertain it? When computers replace our bus driver, our teacher and our shrink, how could we determine whether they have feelings or whether they are just a collection of mindless algorithms?

  When it comes to humans, we are today capable of differentiating between conscious mental experiences and non-conscious brain activities. Though we are far from understanding consciousness, scientists have succeeded in identifying some of its electrochemical signatures. To do so the scientists started with the assumption that whenever humans report that they are conscious of something, they can be believed. Based on this assumption the scientists could then isolate specific brain patterns that appear every time humans report being conscious, but that never appear during unconscious states.

  This has allowed the scientists to determine, for example, whether a seemingly vegetative stroke victim has completely lost consciousness, or has merely lost control of his body and speech. If the patient’s brain displays the telltale signatures of consciousness, he is probably conscious, even though he cannot move or speak. Indeed, doctors have recently managed to communicate with such patients using fMRI scans. They ask the patients yes/no questions, telling them to imagine themselves playing tennis if the answer is yes, and to visualise the location of their home if the answer is no. The doctors can then observe how the motor cortex lights up when patients imagine playing tennis (meaning ‘yes’), whereas ‘no’ is indicated by the activation of brain areas responsible for spatial memory.7

  This is all very well for humans, but what about computers? Since silicon-based computers have very different structures to carbon-based human neural networks, the human signatures of consciousness may not be relevant to them. We seem to be trapped in a vicious circle. Starting with the assumption that we can believe humans when they report that they are conscious, we can identify the signatures of human consciousness, and then use these signatures to ‘prove’ that humans are indeed conscious. But if an artificial intelligence self-reports that it is conscious, should we just believe it?

  So far, we have no good answer to this problem. Already thousands of years ago philosophers realised that there is no way to prove conclusively that anyone other than oneself has a mind. Indeed, even in the case of other humans, we just assume they have consciousness – we cannot know that for certain. Perhaps I am the only being in the entire universe who feels anything, and all other humans and animals are just mindless robots? Perhaps I am dreaming, and everyone I meet is just a character in my dream? Perhaps I am trapped inside a virtual world, and all the beings I see are merely simulations?

  According to current scientific dogma, everything I experience is the result of electrical activity in my brain, and it should therefore be theoretically feasible to simulate an entire virtual world that I could not possibly distinguish from the ‘real’ world. Some brain scientists believe that in the not too distant future, we shall actually do such things. Well, maybe it has already been done – to you? For all you know, the year might be 2216 and you are a bored teenager immersed inside a ‘virtual world’ game that simulates the primitive and exciting world of the early twenty-first century. Once you acknowledge the mere feasibility of this scenario, mathematics leads you to a very scary conclusion: since there is only one real world, whereas the number of potential virtual worlds is infinite, the probability that you happen to inhabit the sole real world is almost zero.
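
  The arithmetic behind this conclusion can be made explicit (a minimal sketch, assuming you are equally likely to inhabit any one of the candidate worlds): with one real world and $N$ simulated ones,

  $$P(\text{real world}) = \frac{1}{N+1} \to 0 \quad \text{as } N \to \infty.$$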

  None of our scientific breakthroughs has managed to overcome this notorious Problem of Other Minds. The best test that scholars have so far come up with is called the Turing Test, but it examines only social conventions. According to the Turing Test, in order to determine whether a computer has a mind, you should communicate simultaneously both with that computer and with a real person, without knowing which is which. You can ask whatever questions you want, you can play games, argue, and even flirt with them. Take as much time as you like. Then you need to decide which is the computer, and which is the human. If you cannot make up your mind, or if you make a mistake, the computer has passed the Turing Test, and we should treat it as if it really has a mind. However, that won’t really be a proof, of course. Acknowledging the existence of other minds is merely a social and legal convention.

  The Turing Test was invented in 1950 by the British mathematician Alan Turing, one of the fathers of the computer age. Turing was also a gay man in a period when homosexuality was illegal in Britain. In 1952 he was convicted of committing homosexual acts and forced to undergo chemical castration. Two years later he committed suicide. The Turing Test is simply a replication of a mundane test every gay man had to undergo in 1950 Britain: can you pass for a straight man? Turing knew from personal experience that it didn’t matter who you really were – it mattered only what others thought about you. According to Turing, in the future computers would be just like gay men in the 1950s. It won’t matter whether computers will actually be conscious or not. It will matter only what people think about it.

  The Depressing Lives of Laboratory Rats

  Having acquainted ourselves with the mind – and with how little we really know about it – we can return to the question of whether other animals have minds. Some animals, such as dogs, certainly pass a modified version of the Turing Test. When humans try to determine whether an entity is conscious, what we usually look for is not mathematical aptitude or good memory, but rather the ability to create emotional relationships with us. People sometimes develop deep emotional attachments to fetishes like weapons, cars and even underwear, but these attachments are one-sided and never develop into relationships. The fact that dogs can be party to emotional relationships with humans convinces most dog owners that dogs are not mindless automata.

  This, however, won’t satisfy sceptics, who point out that emotions are algorithms, and that no known algorithm requires consciousness in order to function. Whenever an animal displays complex emotional behaviour, we cannot prove that this is not the result of some very sophisticated but non-conscious algorithm. This argument, of course, can be applied to humans too. Everything a human does – including reporting on allegedly conscious states – might in theory be the work of non-conscious algorithms.

  In the case of humans, we nevertheless assume that whenever someone reports that he or she is conscious, we can take their word for it. Based on this minimal assumption, we can today identify the brain signatures of consciousness, which can then be used systematically to differentiate conscious from non-conscious states in humans. And since animal brains share many features with human brains, as our understanding of the signatures of consciousness deepens, we might be able to use them to determine if and when other animals are conscious. If a canine brain shows similar patterns to those of a conscious human brain, this will provide strong evidence that dogs are conscious.

  Initial tests on monkeys and mice indicate that at least monkey and mouse brains indeed display the signatures of consciousness.8 However, given the differences between animal brains and human brains, and given that we are still far from deciphering all the secrets of consciousness, developing decisive tests that will satisfy the sceptics might take decades. Who should carry the burden of proof in the meantime? Do we consider dogs to be mindless machines until proven otherwise, or do we treat dogs as conscious beings as long as nobody comes up with some convincing counter-evidence?

  On 7 July 2012 leading experts in neurobiology and the cognitive sciences gathered at the University of Cambridge, and signed the Cambridge Declaration on Consciousness, which says that ‘Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviours. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.’9 This declaration stops short of saying that other animals are conscious, because we still lack the smoking gun. But it does shift the burden of proof to those who think otherwise.

  Responding to the shifting winds of the scientific community, in May 2015 New Zealand became the first country in the world to legally recognise animals as sentient beings, when the New Zealand parliament passed the Animal Welfare Amendment Act. The Act stipulates that it is now obligatory to recognise animals as sentient, and hence attend properly to their welfare in contexts such as animal husbandry. In a country with far more sheep than humans (30 million vs 4.5 million), that is a very significant statement. The Canadian province of Quebec has since passed a similar Act, and other countries are likely to follow suit.

  Many business corporations also recognise animals as sentient beings, though paradoxically, this often exposes the animals to rather unpleasant laboratory tests. For example, pharmaceutical companies routinely use rats as experimental subjects in the development of antidepressants. According to one widely used protocol, you take a hundred rats (for statistical reliability) and place each rat inside a glass tube filled with water. The rats struggle again and again to climb out of the tubes, without success. After fifteen minutes most give up and stop moving. They just float in the tube, apathetic to their surroundings.

  You now take another hundred rats, throw them in, but fish them out of the tube after fourteen minutes, just before they are about to despair. You dry them, feed them, give them a little rest – and then throw them back in. The second time, most rats struggle for twenty minutes before calling it quits. Why the extra six minutes? Because the memory of past success triggers the release of some biochemical in the brain that gives the rats hope and delays the advent of despair. If we could only isolate this biochemical, we might use it as an antidepressant for humans. But numerous chemicals flood a rat’s brain at any given moment. How can we pinpoint the right one?

  For this you take more groups of rats, who have never participated in the test before. You inject each group with a particular chemical, which you suspect to be the hoped-for antidepressant. You throw the rats into the water. If rats injected with chemical A struggle for only fifteen minutes before becoming depressed, you can cross out A on your list. If rats injected with chemical B go on thrashing for twenty minutes, you can tell the CEO and the shareholders that you might have just hit the jackpot.


  16. Left: A hopeful rat struggling to escape the glass tube. Right: An apathetic rat floating in the glass tube, having lost all hope.

  Sceptics could object that this entire description needlessly humanises rats. Rats experience neither hope nor despair. Sometimes rats move quickly and sometimes they stand still, but they never feel anything. They are driven only by non-conscious algorithms. Yet if so, what’s the point of all these experiments? Psychiatric drugs are intended to induce changes not just in human behaviour, but above all in human feeling. When customers go to a psychiatrist and say, ‘Doctor, give me something that will lift me out of this depression,’ they don’t want a mechanical stimulant that will cause them to flail about while still feeling blue. They want to feel cheerful. Conducting experiments on rats can help corporations develop such a magic pill only if they presuppose that rat behaviour is accompanied by human-like emotions. And indeed, this is a common presupposition in psychiatric laboratories.10

  The Self-Conscious Chimpanzee

  Another attempt to enshrine human superiority accepts that rats, dogs and other animals have consciousness, but argues that, unlike humans, they lack self-consciousness. They may feel depressed, happy, hungry or satiated, but they have no notion of self, and they are not aware that the depression or hunger they feel belongs to a unique entity called ‘I’.

  This idea is as common as it is opaque. Obviously, when a dog feels hungry, he grabs a piece of meat for himself rather than serve food to another dog. Let a dog sniff a tree watered by the neighbourhood dogs, and he will immediately know whether it smells of his own urine, of the neighbour’s cute Labrador’s or of some stranger’s. Dogs react very differently to their own odour and to the odours of potential mates and rivals.11 So what does it mean that they lack self-consciousness?

  A more sophisticated version of the argument says that there are different levels of self-consciousness. Only humans understand themselves as an enduring self that has a past and a future, perhaps because only humans can use language in order to contemplate their past experiences and future actions. Other animals exist in an eternal present. Even when they seem to remember the past or plan for the future, they are in fact reacting only to present stimuli and momentary urges.12 For instance, a squirrel hiding nuts for the winter doesn’t really remember the hunger he felt last winter, nor is he thinking about the future. He just follows a momentary urge, oblivious to the origins and purpose of this urge. That’s why even very young squirrels, who haven’t yet lived through a winter and hence cannot remember winter, nevertheless cache nuts during the summer.