Elon Musk is in the news again. He gave a talk at MIT recently, where he had a few words about artificial intelligence:
“With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and it’s like yeah, he’s sure he can control the demon. Doesn’t work out…I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish”
This isn’t the first time Musk has gone on record expressing his fears about artificial intelligence. And indeed, he isn’t alone in his concerns about the demonic technologies we might summon in the future. Stephen Hawking agrees, and some prominent AI researchers have said that his concerns are “not completely crazy.” But what is new, at least as far as I know, is Musk’s proposal for regulatory oversight. It’s also an extremely challenging and intriguing proposal. How do you regulate something that doesn’t exist yet?
There might be a good case for Musk’s proposal. If the choice is between a bit of government oversight today, and a future in which we are chased from our homes by squadrons of malicious quad-rotors with the brains of malfunctioning supermarket checkout machines, then I’ll go for the oversight. But regulation, at least ideally, proceeds with some knowledge of the thing being regulated. Imagine trying to create a highway code with no detailed knowledge of the capabilities of cars or the people driving them. They tried exactly that in nineteenth-century Britain, and the result was the infamous Red Flag Act, which required a man to walk in front of every vehicle carrying a red flag to warn people of its approach. Regardless of its future potential, strong AI currently exists only in our imagination. So it is certainly possible to go too far.
But this problem of regulating the future is going to become increasingly important over the coming century. Moonshot projects are a very big thing right now. On this blog alone I’ve covered such utopian projects as hyperloops, space mining, self-driving cars, and delivery drones. But utopian visions are always unrealistic. New technology brings upheaval with it. And the upheavals we can expect if even one of the technologies I just listed becomes reality are absolutely massive. So it is entirely reasonable to want some kind of oversight to deal with the potentially very nasty consequences of these innovations.
As soon as you start thinking through the practicalities of such regulation, however, you run into dozens of unanswered questions. How do you make sound predictions about future technologies to base your regulations on? How do you get research and development labs, many of which are small and mobile, to abide by your regulations rather than just packing up and moving to another country? How do you strike the balance between the permissiveness necessary to allow radical technological change, and the caution necessary to make it safe? How do you even define the technology in question? What, exactly, counts as AI? And so on.
I have a somewhat radical proposal here: let’s get the public sector involved in moonshot development. That means government-funded research into the (peaceful) development of things like space commercialisation, AI, virtual reality, and self-driving cars. This could be done directly through a government research body, such as the National Research Council of Canada, or through funding given out to private companies.
There are a few reasons why I think this is a good idea. Firstly, there is a strong precedent for it. Contrary to popular belief, governments have virtually always been involved in innovation. Think of all the new technologies that came out of the space program, or the two world wars. And think of all the cutting-edge technology that comes out of universities. There is good reason to want our governments to be ambitious and entrepreneurial, and to work on new technology for the public good, rather than just for private profit. Secondly, by giving governments a stake in the actual development of moonshot technologies, rather than merely the task of regulating them, we give them an incentive not to hamper those technologies too much with regulations. Thirdly, this gets around the problem of jurisdiction. If we offer funding and assistance to firms working on radical new technologies, rather than merely rules, then we give them less incentive to pack up and move to a foreign country if they don’t like the regulations we hand down. Lastly, this places these questions in the political realm, which explicitly invites the public to comment on the competing visions of the future, both utopian and dystopian, that the various moonshot projects involve. We still won’t know exactly what to expect from things like AI or space commercialisation, but we can debate the possibilities more honestly if the debate takes place in the political, rather than the commercial, realm. Assessing competing visions of the future is what politics is fundamentally meant to do.
More fundamentally, however, this raises the question of who really has the right to decide which risks, and which benefits, we pursue with things like AI. Why should this be left entirely up to the private sector? If we are headed into a future in which artificial intelligence makes all our lives easier, with the slight risk of us winding up in a Terminator movie, then shouldn’t we have some kind of democratic say in that? When we’re dealing with technologies that have the potential to permanently change the human condition, we should really ask whether a few boards of directors should get to call the shots. Especially when, as Musk has pointed out, not all science fiction futures are desirable. Let’s all fight for a say in which demons we choose to summon.