Stop Killer Robots with Statistics

Adrian Cartland
6 min read · May 20, 2021
Photo: Bertrand Guay/AFP/Getty Images

Annihilation by robots is a black swan event: low probability but high impact. That is, it would be pretty bad if all of humanity were destroyed, enslaved, or otherwise done away with. The word robot derives from the Czech robota, meaning forced labour. For most of human history (excepting the last couple of hundred years in many parts of the world) slavery has been a familiar institution, and the risk of a slave uprising is ingrained in our culture. That is, we are well aware of the risk of robots turning against their masters. What shall we do to prevent this?

Problems with Linear Thinking

To understand whether I should fear that my toaster will murder me in my sleep, I must first understand the probability that my toaster will ever be capable of such a murder, and have such a desire. The first step toward killer robots is immensely clever and powerful robots. Moore’s Law states, roughly, that the power of semiconductors doubles every 18 months, and it has held for nearly 50 years. Extrapolating, we might predict that in another 50 years machines must be incredibly intelligent: if they can beat us at chess today, then surely in 50 years they will beat us at everything. The problem is the assumption that Moore’s Law will continue indefinitely, and the assumption that Artificial Specific Intelligence somehow leads to Artificial General Intelligence. Both assumptions are false.
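
To see why that extrapolation feels so compelling, here is a back-of-the-envelope sketch in Python. The 18-month doubling period and 50-year horizon are simply the round figures quoted above, not precise claims:

```python
# Naive extrapolation of Moore's Law: computing power doubles every 18 months.
# Figures are the rough ones quoted in the text, not precise claims.
DOUBLING_PERIOD_YEARS = 1.5  # "every 18 months"
HORIZON_YEARS = 50

doublings = HORIZON_YEARS / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"Doublings in {HORIZON_YEARS} years: {doublings:.1f}")
print(f"Implied growth in computing power: {growth_factor:.2e}x")
# About 33.3 doublings, a factor of roughly 10^10. Numbers like that
# are exactly what makes naive extrapolation so seductive.
```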

The problem with linear thinking is easily illustrated by boiling water. If I heat water at a rate of 10 degrees every minute and extrapolate that rise, I might predict that in an hour the water will be over 600 degrees, and that, given long enough, I could make it hotter than the sun. Obviously, the water will stop increasing in temperature at 100 degrees and turn to steam. Another classic example is the turkey that thinks it is doing well because the farmer feeds it every day; the farmer, however, has other plans for the turkey come Thanksgiving. Because we can see a steady increase on one level does not mean it continues ad infinitum. Moore’s Law will at some stage hit the limits of physics, or Artificial Specific Intelligence will hit the limits of its techniques. Any prediction of more than a couple of years out should be almost totally ignored. A constrained model might work well, but there is always the problem of factors that sit outside the model. Anyone who lives in the real world can see that turkeys should not count on living past Thanksgiving.
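
A minimal sketch of the same failure, using the kettle’s numbers (which are illustrative only): the linear model matches reality right up until the regime changes, then diverges wildly.

```python
# Linear extrapolation of a kettle versus what actually happens.
# Illustrative numbers only: water starts at 20 C and gains 10 C per minute.
START_C = 20
RATE_C_PER_MIN = 10
BOILING_C = 100

for minutes in (5, 8, 60):
    naive = START_C + RATE_C_PER_MIN * minutes
    actual = min(naive, BOILING_C)  # past 100 C the energy goes into making steam
    print(f"t={minutes:3d} min  linear model: {naive:4d} C  reality: {actual} C")

# t=  5 min  linear model:   70 C  reality: 70 C
# t=  8 min  linear model:  100 C  reality: 100 C
# t= 60 min  linear model:  620 C  reality: 100 C
```

Like the turkey, the model is most convincing just before it breaks.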

The Problem of Multiple Long Tail Risks

It is wrong to say that we will never face killer robots. It is correct to say that killer robots are possible and that, being a long tail risk, we should prepare against them: there is a small chance of catastrophic impact. So although it is not certain that we will have killer robots, it is a possibility, and given that possibility we must devote resources to preventing it. This is partly correct. Against an existential risk we might be tempted to devote all of our resources: every dollar we make, put aside for constraining robots from taking over the world. Why build schools and hospitals when we should be putting all of that money toward preventing a robot uprising that would destroy all of humanity? The answer is that there are multiple ways in which humanity could end. There could be a killer asteroid, killer climate change, or a killer plague. We cannot devote all of our resources to preventing all of these simultaneously. Instead there must be a balancing of the relative likelihood of any particular scenario, its likely harm, and the effort that should be put in to prevent it. Of course we should put effort into preventing killer robots, but we don’t need to eliminate the possibility completely.
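
As a toy illustration of that balancing, here is a sketch that allocates a mitigation budget in proportion to expected loss (probability times harm). Every number below is invented for the example; none is a real estimate.

```python
# Toy expected-loss budgeting across several risks.
# All probabilities and impacts are invented for illustration.
risks = {
    # name: (chance of occurring, relative harm if it does)
    "killer robots":   (0.0001, 100),
    "killer asteroid": (0.00001, 100),
    "killer plague":   (0.001, 80),
    "mundane problems (crime, hunger, ...)": (0.9, 1),
}
BUDGET = 1_000_000  # total mitigation budget, arbitrary units

expected_loss = {name: p * harm for name, (p, harm) in risks.items()}
total = sum(expected_loss.values())

for name, loss in expected_loss.items():
    share = BUDGET * loss / total
    print(f"{name:<40} expected loss {loss:8.5f}  budget {share:10,.0f}")
```

On these made-up numbers the mundane risks absorb most of the budget, yet the tail risks still receive a real allocation: some effort, not all of it.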

For an individual, a shark attack is a black swan event: low probability but extremely high impact. Our response, developed over millennia of human survival, is to post lookouts on the beach, set up shark nets, and not swim in seal breeding grounds. We don’t avoid the sea altogether. There would also be other risks if we devoted all of our resources to preventing killer robots: we could cause mass unemployment, crime, starvation, technological backwardness, and so on. A very large number of more mundane problems become very serious if we stop addressing them. There is always a very real trade-off in devoting our resources to one thing, and we should not ignore it.

What Should We Do to Prevent Killer Robots?

I’m not saying that we should ignore the problem altogether. To address it, there are a few tried and tested methods.

Firstly, there is skin in the game. If someone who is swimming at the beach tells you that it is safe to swim there, you might believe them. Someone elsewhere in the country is less convincing, given that it is your skin in the game and not theirs. The simplest way to apply this is to make sure that the creators of new technology use it themselves: the engineer of the train should ride it, and they will ensure that their machine does not fail. When someone calls for sacrifice to prevent a long tail risk, they should be sacrificing themselves and spending their own money. If someone claims that we should halt all artificial intelligence research, and use no artificial intelligence, so as to prevent killer robots, they should first have given it up themselves and be living largely without the benefits of artificial intelligence. That means living almost entirely apart from our society.

Secondly, we need to understand the importance of tradition in preventing long tail risks. For example, it is a tradition to treat others as you would have them treat you. This guards against the long tail risk of your bothering others and their bothering you, in turn, in some unexpected manner. Simple heuristics and traditions go a long way toward preventing these risks. Such traditions have evolved through a natural selection of ideas that favours those who survive long tail risks.

Thirdly, we need to trust that, given skin in the game, proven rules, and resources, human creativity will typically find a solution to long tail risks. Hence we have vaccines for the long tail risk of plagues, which troubled us for most of our history. Our Malthusian overpopulation problems were solved by increasing agricultural yields. Electronic tags can monitor the feeding and travel patterns of sharks. We are likely to invent a solution that overcomes the long tail problem of killer robots, just as we will invent new technological solutions to overcome viruses, climate change, and killer asteroids, so long as human ingenuity is allowed to flourish.

Are Robots Killing Jobs?

We can apply the reasoning we used for killer robots destroying humanity to other long tail risks. Take the concern that robots might kill all of our jobs, which to a particular profession is a kind of existential risk. With skin in the game (the incentive to stop a robot from taking your job), you will need to think creatively about what you can do to make your job something that is not automatable. If your job involves standing in an elevator and pressing the buttons to go up and down, it is rather clear that you could be replaced by those mere buttons. Perhaps you could do something more, such as providing a service alongside operating the elevator, or assisting customers to where they are going. Apply these statistical insights to the other large concerns you might have. If you’re worried about killer viruses, killer climate change, or killer asteroids, you don’t need to devote all of society’s resources to them, or spend your life panicking that we haven’t. We’re likely to come up with something clever that fixes the problem.

And if we don’t, well, we’re all gone, so it probably doesn’t matter.


Adrian Cartland

Creator of Ailira, the Artificial Intelligence that automates legal information and research, and Principal of Cartland Law. www.Ailira.com www.CartlandLaw.com