  Imagine the situation: you have bought a new car, but before you can start using it, you must open the settings menu and tick one of several boxes. In case of an accident, do you want the car to sacrifice your life – or to kill the family in the other vehicle? Is this a choice you even want to make? Just think of the arguments you are going to have with your spouse about which box to tick.

  So maybe the state should intervene to regulate the market, and lay down an ethical code binding all self-driving cars? Some lawmakers will doubtless be thrilled by the opportunity to finally make laws that are always followed to the letter. Other lawmakers may be alarmed by such unprecedented and totalitarian responsibility. After all, throughout history the limitations of law enforcement provided a welcome check on the biases, mistakes and excesses of lawmakers. It was an extremely lucky thing that laws against homosexuality and against blasphemy were only partially enforced. Do we really want a system in which the decisions of fallible politicians become as inexorable as gravity?

  Digital dictatorships

  AI often frightens people because they don’t trust it to remain obedient. We have seen too many science-fiction movies about robots rebelling against their human masters, running amok in the streets and slaughtering everyone. Yet the real problem with robots is exactly the opposite. We should fear them because they will probably always obey their masters and never rebel.

  There is nothing wrong with blind obedience, of course, as long as the robots happen to serve benign masters. Even in warfare, reliance on killer robots could ensure that for the first time in history, the laws of war would actually be obeyed on the battlefield. Human soldiers are sometimes driven by their emotions to murder, pillage and rape in violation of the laws of war. We usually associate emotions with compassion, love and empathy, but in wartime, the emotions that take control are all too often fear, hatred and cruelty. Since robots have no emotions, they could be trusted to always adhere to the dry letter of the military code, and never be swayed by personal fears and hatreds.23

  On 16 March 1968 a company of American soldiers went berserk in the South Vietnamese village of My Lai, and massacred about 400 civilians. This war crime resulted from the local initiative of men who had been involved in jungle guerrilla warfare for several months. It did not serve any strategic purpose, and contravened both the legal code and the military policy of the USA. It was the fault of human emotions.24 If the USA had deployed killer robots in Vietnam, the massacre of My Lai would never have occurred.

  Nevertheless, before we rush to develop and deploy killer robots, we need to remind ourselves that the robots always reflect and amplify the qualities of their code. If the code is restrained and benign – the robots will probably be a huge improvement over the average human soldier. Yet if the code is ruthless and cruel – the results will be catastrophic. The real problem with robots is not their own artificial intelligence, but rather the natural stupidity and cruelty of their human masters.

  In July 1995 Bosnian Serb troops massacred more than 8,000 Muslim Bosniaks around the town of Srebrenica. Unlike the haphazard My Lai massacre, the Srebrenica killings were a protracted and well-organised operation that reflected Bosnian Serb policy to ‘ethnically cleanse’ Bosnia of Muslims.25 If the Bosnian Serbs had had killer robots in 1995, it would likely have made the atrocity worse rather than better. Not one robot would have had a moment’s hesitation in carrying out whatever orders it received, nor would it have spared the life of a single Muslim child out of compassion, disgust, or mere lethargy.

  A ruthless dictator armed with such killer robots will never have to fear that his soldiers will turn against him, no matter how heartless and crazy his orders. A robot army would probably have strangled the French Revolution in its cradle in 1789, and if in 2011 Hosni Mubarak had had a contingent of killer robots he could have unleashed them on the populace without fear of defection. Similarly, an imperialist government relying on a robot army could wage unpopular wars without any concern that its robots might lose their motivation, or that their families might stage protests. If the USA had had killer robots in the Vietnam War, the My Lai massacre might have been prevented, but the war itself could have dragged on for many more years, because the American government would have had fewer worries about demoralised soldiers, massive anti-war demonstrations, or a movement of ‘veteran robots against the war’ (some American citizens might still have objected to the war, but without the fear of being drafted themselves, the memory of personally committing atrocities, or the painful loss of a dear relative, the protesters would probably have been both less numerous and less committed).26

  These kinds of problems are far less relevant to autonomous civilian vehicles, because no car manufacturer will maliciously program its vehicles to target and kill people. Yet autonomous weapon systems are a catastrophe waiting to happen, because too many governments tend to be ethically corrupt, if not downright evil.

  The danger is not restricted to killing machines. Surveillance systems could be equally risky. In the hands of a benign government, powerful surveillance algorithms can be the best thing that ever happened to humankind. Yet the same Big Data algorithms might also empower a future Big Brother, so that we might end up with an Orwellian surveillance regime in which all individuals are monitored all the time.27

  Indeed, we might end up with something that even Orwell could barely imagine: a total surveillance regime that follows not just all our external activities and utterances, but can even go under our skin to observe our inner experiences. Consider, for example, what the Kim regime in North Korea might do with the new technology. In the future, each North Korean citizen might be required to wear a biometric bracelet that monitors everything they do and say – as well as their blood pressure and brain activity. Drawing on our growing understanding of the human brain and the immense powers of machine learning, the North Korean regime might be able, for the first time in history, to gauge what each and every citizen is thinking at each and every moment. If you look at a picture of Kim Jong-un and the biometric sensors pick up the telltale signs of anger (higher blood pressure, increased activity in the amygdala) – you’ll be in the Gulag tomorrow morning.
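
  To make the mechanism concrete, here is a deliberately crude sketch in Python – all signal names and thresholds are hypothetical, invented for illustration – of the kind of rule such a system would need: score the biometric readings taken while a citizen views an image, and raise a flag when the ‘telltale signs of anger’ appear.

    from dataclasses import dataclass

    @dataclass
    class BiometricReading:
        systolic_bp: float        # blood pressure in mmHg
        amygdala_activity: float  # normalised activation, 0.0 to 1.0

    def flags_anger(reading: BiometricReading) -> bool:
        # Hypothetical cut-offs; real affect detection from physiology
        # is far noisier and far less reliable than this rule implies.
        return reading.systolic_bp > 150 and reading.amygdala_activity > 0.8

    # A citizen glances at the portrait; the bracelet reports its readings.
    reading = BiometricReading(systolic_bp=162, amygdala_activity=0.9)
    print(flags_anger(reading))  # True: the system raises a flag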

  Granted, due to its isolation the North Korean regime might have difficulty developing the required technology by itself. However, the technology might be pioneered in more tech-savvy nations, and copied or bought by the North Koreans and other backward dictatorships. Both China and Russia are constantly improving their surveillance tools, as are a number of democratic countries, ranging from the USA to my home country of Israel. Nicknamed ‘the start-up nation’, Israel has an extremely vibrant hi-tech sector, and a cutting-edge cyber-security industry. At the same time it is also locked into a deadly conflict with the Palestinians, and at least some of its leaders, generals and citizens might well be happy to create a total surveillance regime in the West Bank as soon as they have the necessary technology.

  Already today, whenever Palestinians make a phone call, post something on Facebook or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones or spy software. The gathered data is then analysed with the aid of Big Data algorithms. This helps the Israeli security forces to pinpoint and neutralise potential threats without having to place too many boots on the ground. The Palestinians may administer some towns and villages in the West Bank, but the Israelis control the sky, the airwaves and cyberspace. It therefore takes surprisingly few Israeli soldiers to effectively control about 2.5 million Palestinians in the West Bank.28

  In one tragicomic incident in October 2017, a Palestinian labourer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote ‘Good morning!’ An automatic algorithm made a small error when transliterating the Arabic letters. Instead of ‘Ysabechhum!’ (which means ‘Good morning!’), the algorithm identified the letters as ‘Ydbachhum!’ (which means ‘Kill them!’). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him. He was released after they realised that the algorithm had made a mistake. But the offending Facebook post was nevertheless taken down. You can never be too careful.29 What Palestinians are experiencing today in the West Bank might be just a primitive preview of what billions will eventually experience all over the planet.
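
  The failure mode is easy to reproduce. Here is a minimal Python sketch – the transliterator, the watch-list and the phrases are hypothetical stand-ins, not the actual system – showing how a keyword filter sitting downstream of an error-prone transliterator turns a one-letter slip into a security alert.

    # Hypothetical watch-list of transliterated phrases.
    FLAGGED_PHRASES = {'ydbachhum'}  # 'Kill them!'

    def transliterate(arabic_post: str) -> str:
        # Stand-in for an error-prone automatic transliterator: one
        # misread letter turns the benign 'ysabechhum' ('Good morning!')
        # into 'ydbachhum' ('Kill them!').
        return 'ydbachhum'

    def is_threat(arabic_post: str) -> bool:
        # No human reviews the transliteration before the match is made.
        return transliterate(arabic_post) in FLAGGED_PHRASES

    print(is_threat('Good morning!'))  # True: a greeting becomes an alert

  The point is structural: once the filter trusts the transliteration, a single-character error upstream is indistinguishable from a genuine threat downstream.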

  In the late twentieth century democracies usually outperformed dictatorships because democracies were better at data-processing. Democracy diffuses the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given twentieth-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all the information fast enough and make the right decisions. This is part of the reason why the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.

  However, soon AI might swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. Indeed, AI might make centralised systems far more efficient than diffused systems, because machine learning works better the more information it can analyse. If you concentrate all the information relating to a billion people in one database, disregarding all privacy concerns, you can train much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. For example, if an authoritarian government orders all its citizens to have their DNA scanned and to share all their medical data with some central authority, it would gain an immense advantage in genetics and medical research over societies in which medical data is strictly private. The main handicap of authoritarian regimes in the twentieth century – the attempt to concentrate all information in one place – might become their decisive advantage in the twenty-first century.
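
  The underlying technical claim can be illustrated with a toy experiment. The Python sketch below – synthetic data and an arbitrary model, purely for illustration – trains the same learning algorithm on progressively larger slices of one pooled dataset and shows its accuracy climbing, which is the advantage a single concentrated database enjoys over many small, partial ones.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for records about many people.
    X, y = make_classification(n_samples=200_000, n_features=40,
                               n_informative=10, random_state=0)
    X_pool, X_test, y_pool, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    for n in (1_000, 10_000, 100_000):  # small vs large 'database'
        model = LogisticRegression(max_iter=1000).fit(X_pool[:n], y_pool[:n])
        print(f'{n:>7} records -> test accuracy '
              f'{model.score(X_test, y_test):.3f}')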

  As algorithms come to know us so well, authoritarian governments could gain absolute control over their citizens, even more so than in Nazi Germany, and resistance to such regimes might be utterly impossible. Not only will the regime know exactly how you feel – it could make you feel whatever it wants. The dictator might not be able to provide citizens with healthcare or equality, but he could make them love him and hate his opponents. Democracy in its present form cannot survive the merger of biotech and infotech. Either democracy will successfully reinvent itself in a radically new form, or humans will come to live in ‘digital dictatorships’.

  This will not be a return to the days of Hitler and Stalin. Digital dictatorships will be as different from Nazi Germany as Nazi Germany was different from ancien régime France. Louis XIV was a centralising autocrat, but he did not have the technology to build a modern totalitarian state. He suffered no opposition to his rule, yet in the absence of radios, telephones and trains, he had little control over the day-to-day lives of peasants in remote Breton villages, or even of townspeople in the heart of Paris. He had neither the will nor the ability to establish a mass party, a countrywide youth movement, or a national education system.30 It was the new technologies of the twentieth century that gave Hitler both the motivation and the power to do such things. We cannot predict what will be the motivations and powers of digital dictatorships in 2084, but it is very unlikely that they will just copy Hitler and Stalin. Those gearing themselves up to refight the battles of the 1930s might be caught off their guard by an attack from a totally different direction.

  Even if democracy manages to adapt and survive, people might become the victims of new kinds of oppression and discrimination. Already today, more and more banks, corporations and institutions are using algorithms to analyse data and make decisions about us. When you apply to your bank for a loan, it is likely that your application is processed by an algorithm rather than by a human. The algorithm analyses lots of data about you and statistics about millions of other people, and decides whether you are reliable enough to be given a loan. Often, the algorithm does a better job than a human banker. But the problem is that if the algorithm discriminates against some people unjustly, it is difficult to tell. If the bank refuses to give you a loan, and you ask ‘Why?’, the bank replies ‘The algorithm said no.’ You ask ‘Why did the algorithm say no? What’s wrong with me?’, and the bank replies ‘We don’t know. No human understands this algorithm, because it is based on advanced machine learning. But we trust our algorithm, so we won’t give you a loan.’31
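
  A minimal sketch of that opacity, in Python with synthetic data – nothing here corresponds to any real bank’s system – is shown below: an ensemble model learns from ‘statistics about millions of other people’, scores one applicant, and offers no per-applicant reason for its refusal.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for data about many past applicants:
    # rows are people, columns are unnamed features, y is repayment.
    X, y = make_classification(n_samples=50_000, n_features=30,
                               random_state=1)
    model = GradientBoostingClassifier().fit(X, y)

    applicant = X[:1]  # one person's feature vector
    decision = model.predict(applicant)[0]
    print('approved' if decision == 1 else 'the algorithm said no')

    # The model can report global feature importances, but nothing in
    # its output explains this individual verdict: the score emerges
    # from a hundred decision trees voting together.
    print(model.feature_importances_[:5])  # global, not per-applicant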

  When discrimination is directed against entire groups, such as women or black people, these groups can organise and protest against their collective discrimination. But now an algorithm might discriminate against you personally, and you have no idea why. Maybe the algorithm found something in your DNA, your personal history or your Facebook account that it does not like. The algorithm discriminates against you not because you are a woman, or an African American – but because you are you. There is something specific about you that the algorithm does not like. You don’t know what it is, and even if you knew, you cannot organise with other people to protest, because there are no other people suffering the exact same prejudice. It is just you. Instead of just collective discrimination, in the twenty-first century we might face a growing problem of individual discrimination.32

  At the highest levels of authority, we will probably retain human figureheads, who will give us the illusion that the algorithms are only advisors, and that ultimate authority is still in human hands. We will not appoint an AI to be the chancellor of Germany or the CEO of Google. However, the decisions taken by the chancellor and the CEO will be shaped by AI. The chancellor could still choose between several different options, but all these options will be the outcome of Big Data analysis, and they will reflect the way AI views the world more than the way humans view it.

  To take an analogous example, today politicians all over the world can choose between several different economic policies, but in almost all cases the various policies on offer reflect a capitalist outlook on economics. The politicians have an illusion of choice, but the really important decisions have already been made much earlier by the economists, bankers and business people who shaped the different options in the menu. Within a couple of decades, politicians might find themselves choosing from a menu written by AI.

  Artificial intelligence and natural stupidity

  One piece of good news is that at least in the next few decades, we won’t have to deal with the full-blown science-fiction nightmare of AI gaining consciousness and deciding to enslave or wipe out humanity. We will increasingly rely on algorithms to make decisions for us, but it is unlikely that the algorithms will start to consciously manipulate us. They won’t have any consciousness.

  Science fiction tends to confuse intelligence with consciousness, and assume that in order to match or surpass human intelligence, computers will have to develop consciousness. The basic plot of almost all movies and novels about AI revolves around the magical moment when a computer or a robot gains consciousness. Once that happens, either the human hero falls in love with the robot, or the robot tries to kill all the humans, or both things happen simultaneously.

  But in reality, there is no reason to assume that artificial intelligence will gain consciousness, because intelligence and consciousness are very different things. Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love and anger. We tend to confuse the two because in humans and other mammals intelligence goes hand in hand with consciousness. Mammals solve most problems by feeling things. Computers, however, solve problems in a very different way.

  There are simply several different paths leading to high intelligence, and only some of these paths involve gaining consciousness. Just as airplanes fly faster than birds without ever developing feathers, so computers may come to solve problems much better than mammals without ever developing feelings. True, AI will have to analyse human feelings accurately in order to treat human illnesses, identify human terrorists, recommend human mates and navigate a street full of human pedestrians. But it could do so without having any feelings of its own. An algorithm does not need to feel joy, anger or fear in order to recognise the different biochemical patterns of joyful, angry or frightened apes.

  Of course, it is not absolutely impossible that AI will develop feelings of its own. We still don’t know enough about consciousness to be sure. In general, there are three possibilities we need to consider:

  1. Consciousness is somehow linked to organic biochemistry in such a way that it will never be possible to create consciousness in non-organic systems.

  2. Consciousness is not linked to organic biochemistry, but it is linked to intelligence in such a way that computers could develop consciousness, and computers will have to develop consciousness if they are to pass a certain threshold of intelligence.

  3. There are no essential links between consciousness and either organic biochemistry or high intelligence. Hence computers might develop consciousness – but not necessarily. They could become super-intelligent while still having zero consciousness.

  At our present state of knowledge, we cannot rule out any of these options. Yet precisely because we know so little about consciousness, it seems unlikely that we could program conscious computers any time soon. Hence despite the immense power of artificial intelligence, for the foreseeable future its usage will continue to depend to some extent on human consciousness.