Be afraid, be very afraid: The robots are coming and they will destroy our livelihoods!

This box is the seventeenth-century equivalent of a quadrotor drone. From spacebridges.com

In 1642, philosopher and mathematician Blaise Pascal built a little brass box, with a handful of dials sticking out of it. Each dial would select a number, and by a clever mechanism, another dial would display the sum or the difference of all the numbers selected. Pascal had invented the world’s first calculator. This freaked people out. Math, at that time, was synonymous with reason, which was the main quality separating human beings from the rest of the animal kingdom. Pascal had taught minerals, which were lower than any animal, to reason. Pascal’s contemporaries anxiously asked what place would be left for humans in a world where metal could think.
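For the curious, here is a toy sketch in Python of the kind of arithmetic Pascal’s box performed: each decimal “wheel” holds one digit, and turning a wheel past 9 trips a carry into the next wheel, much as Pascal’s sautoir mechanism did. The class name, wheel count, and method names are my own illustrative choices, not a description of the actual 1642 machine (which, for instance, handled subtraction with nines’-complement arithmetic rather than a separate mechanism).

```python
# A toy Pascaline-style adder. Wheels are stored least-significant first;
# adding a number advances each wheel and ripples carries upward, loosely
# mimicking the carry mechanism of Pascal's machine. Illustrative only.

class Pascaline:
    def __init__(self, wheels=6):
        self.digits = [0] * wheels  # one decimal digit per wheel

    def add(self, number):
        """Dial in a number by advancing each wheel, carrying as needed."""
        carry = 0
        for i in range(len(self.digits)):
            total = self.digits[i] + (number % 10) + carry
            self.digits[i] = total % 10   # wheel wraps past 9...
            carry = total // 10           # ...and trips the next wheel
            number //= 10

    def reading(self):
        """Read the result off the wheels, most significant digit first."""
        return int("".join(str(d) for d in reversed(self.digits)))

machine = Pascaline()
machine.add(742)
machine.add(389)
print(machine.reading())  # 1131
```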

We never really got over this anxiety. We’re all familiar with the Hollywood films in which robotic exterminators with the faces of Arnold Schwarzenegger or Hugo Weaving hunt down humans in various robot-dominated post-apocalyptic wastelands. Lately, our science fiction robots are a bit friendlier. Think TARS from Interstellar, or Baymax from Disney’s latest, Big Hero 6. Friendly or not, however, the robots are still threatening our place in the world. The friendliness of helper-robots almost makes them scarier than the Terminator, because it is that very quality which is increasingly threatening our ability to earn a living.

This is a topic that terrifies me, but also makes me a little bit hopeful. So last night I took a train to London to see an Intelligence Squared debate on the subject. The proposition was appropriately ominous: “Be afraid, be very afraid: The robots are coming and they will destroy our livelihoods.” What follows is an account of that debate.

The first speaker, Andrew Keen, is a bit of a contrarian about the internet. The moderator, journalist Zainab Badawi, called him “The Antichrist of Silicon Valley”, and his latest books are titled “The Cult of the Amateur” and “The Internet is Not the Answer”. I wonder what he would think of me, an amateur who uses the internet to write optimistic things about technology. Keen’s opening statement, in any case, set up the debate quite well. He introduced Moravec’s paradox: Computers tend to be very bad at simple tasks, like welding and folding clothing, but quite good at complicated ones, like analysing market data. This means bad news for the various educated professionals in the audience, he argued. Robots are diagnosing illnesses, marking essays, and doing research for law firms. And, as I’ve pointed out, they will also be supplanting human taxi and delivery drivers in short order. That means we face an unsavoury future with even worse economic inequality than we have today: There will be a small upper class of programmers and entrepreneurs, a huge underclass, and nothing in between.

The first speaker against the motion was Walter Isaacson, who has written a biography of Steve Jobs, among a few other things. He started off with the rather unfortunate statement that “the industrial revolution wasn’t that bad”. One wonders if he has ever read Oliver Twist. His argument that the total number of textile workers increased during the industrial revolution might be accurate in a strictly factual sense, but it does not account for the quality or geographic distribution of the new jobs. That’s why the Luddites (to whom he naturally compared his opponents) were so concerned about technology: There might have been jobs in cotton mills, but they didn’t provide the same quality of life that the cottage textile industry had.

The second speaker against the motion was Pippa Malmgren, a former economic advisor to President Bush. I must confess that a bit of a pigeonhole started to form when I heard that, and that it rapidly began filling up with pigeons when she started rattling off clichéd platitudes about how “most innovation is coming from small groups of a few people working out of a garage somewhere”, and how anyone willing to pull themselves up by their bootstraps can be one of those people. Alternatively, she said, people can become welders, because apparently there will also be a shortage of skilled tradespeople. “In my experience”, she said, as she waved a quadrotor drone in the air, “robots create jobs”. As with Isaacson, there was little discussion of the details: How many jobs, exactly, will the robots create?

The best speaker by far was the second speaker in favour of the motion: economist George Magnus. His argument hinged on one fact: That while the automation of the industrial revolution was largely about replacing human muscle power, the coming revolution will replace human brainpower. The difference, he argued, is crucial. Now that we have robots that can mimic our mental capacities, we are unlikely to have very much left to do.

There was a bit of wrangling over the details of Magnus’ argument. The speakers against the motion made a few vague and unconvincing arguments about creativity and interpersonal skills. Isaacson made a good point that it’s a lot harder to teach a robot to do some tasks than it is to teach a human, but he missed the fact that you only have to teach one robot, after which point its expertise can be replicated indefinitely. And robots are already painting and composing music. In the question and answer period, Malmgren made some clichéd arguments about the power and promise of technology, which elicited applause from some of the audience. Magnus’ blunt but honest reply, which drew much bigger applause, was that “that’s a very romantic view, which I would applaud as well, but I just don’t think it’s true”.

In the end, I think, the question of whether the robots will take our jobs comes down to a few pretty basic questions:
1) What will there be left for humans to do?
2) How many such jobs will there be?
3) What wages and working conditions will the majority of these jobs offer?

The answers, I regret to say, seem to be as follows: Not much; not very many; and bad. The speakers against the motion had every opportunity to disabuse me of this view, but consistently failed to do so. Repeated assertions that automation has always created more jobs in the past simply miss the fact that history does not necessarily repeat itself, and we count on it to do so at our own peril. Even if things do work out in the long run, that won’t necessarily help the next few generations. As George Magnus said, quoting John Maynard Keynes, “In the long run we are all dead.” If Walter Isaacson and Pippa Malmgren were really making the best case that can be made for our economic security, then I’m afraid to say that our jobs are probably doomed.

I’m not letting Keen and Magnus entirely off the hook. Their solutions to the problem sucked. Magnus proposed various band-aid measures, such as wider use of labour-intensive construction. Try selling that one to a property developer. Keen, when asked by an audience member whether this meant we should rethink the purpose of work, replied with the pithy “We need jobs to earn a living!” This, of course, entirely missed the point of the question, which was presumably that we should find a way to arrange our economy so that we don’t need jobs to earn a living. Both of the speakers supporting the motion saw what there is to fear in automation, and skilfully dismantled the opposition’s arguments, but they failed to produce any hope. Maybe I’m just naive, but as I argued yesterday, automation could be good news if we manage it right.

At the end of the debate, the speakers against the motion had 52% of the audience in agreement, but the speakers for it had won over more people to their side over the course of the evening, so they were declared the winners. I think the real verdict came during the question period, when a mother or teacher who had brought five schoolboys to the debate asked the speakers what careers the boys should be working towards. Isaacson and Malmgren said they should follow their dreams and get working on their own entrepreneurial projects in their backyard shed (or become welders), while the speakers in favour were more pessimistic: Magnus pointed out that tech firms today hire very few people, while Keen argued forcefully that they should NOT follow the entrepreneurial dream, because that would be like staking their future financial security on a lottery ticket.

I’m inclined to agree with Keen, because if our only hope for the future is to become successful entrepreneurs, then most of us are doomed. You can’t have an economy where everybody is a tech entrepreneur. Malmgren and Isaacson’s insistence otherwise is dangerous, because it offers an appealing palliative to those five boys and their classmates. Kids of that generation will soon have a fight on their hands over the structure of an increasingly automated economy. I hope for their sake that we are not taken in by vague and illusory promises that we can all be the next Steve Jobs. Nice though that might sound, it ultimately stops us from getting down to the tough business of adapting our society to a world in which robots are increasingly sophisticated, and there is less and less for humans to do.


Regulating the Future: A few thoughts on the dark side of moon-shots.

Elon Musk is in the news again. He gave a talk at MIT recently, where he had a few words about artificial intelligence:

“With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and it’s like yeah, he’s sure he can control the demon. Doesn’t work out… I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

This isn’t the first time Musk has gone on record expressing his fears about artificial intelligence. And indeed, he isn’t alone in his concerns about the demonic technologies we might summon in the future. Stephen Hawking agrees, and some prominent AI researchers have said that his concerns are “not completely crazy.” But what is new, at least as far as I know, is Musk’s proposal for regulatory oversight. It’s also an extremely challenging and intriguing proposal. How do you regulate something that doesn’t exist yet?

This could be our future if we’re not careful with AI research, Elon Musk has warned.

There might be a good case for Musk’s proposal. If the choice is between a bit of government oversight today and a future in which we are chased from our homes by squadrons of malicious quadrotors with the brains of malfunctioning supermarket checkout machines, then I’ll take the oversight. But regulation, at least ideally, proceeds with some knowledge of the thing being regulated. Imagine trying to write a highway code with no detailed knowledge of the capabilities of cars or the people driving them. They tried exactly that in nineteenth-century Britain, and the result was the infamous Red Flag Act, which required a man to walk in front of every car carrying a red flag to warn people of its approach. Regardless of its future potential, the kind of AI Musk fears currently exists only in our imagination. So it is certainly possible for regulation to go too far.

But this problem of regulating the future is going to become increasingly important over the coming century. Moonshot projects are a very big thing right now. On this blog alone I’ve covered such utopian projects as hyperloops, space mining, self-driving cars, and delivery drones. But utopian visions are always unrealistic: New technology brings upheaval with it, and the upheavals we can expect if even one of the technologies I just listed becomes reality are absolutely massive. So it is entirely reasonable to want some kind of oversight to deal with the potentially very nasty consequences of these innovations.

As soon as you start thinking through the practicalities of such regulation, however, you run into dozens of unanswered questions. How do you make sound predictions about future technologies on which to base your regulations? How do you get research and development labs, many of which are small and mobile, to abide by your regulations rather than just packing up and moving to another country? How do you strike a balance between the permissiveness necessary to allow radical technological change and the caution necessary to make it safe? How do you even define the technology in question? What, exactly, counts as AI? And so on.

I have a somewhat radical proposal here: Let’s get the public sector involved in moonshot development. That means government-funded research into the (peaceful) development of things like space commercialisation, AI, virtual reality, and self-driving cars. This could be done directly through a government research body, such as Canada’s National Research Council, or through funding given out to private companies.

There are a few reasons why I think this is a good idea:
1) There is a strong precedent for it. Contrary to popular belief, governments have virtually always been involved in innovation. Think of all the new technologies that came out of the space program, or the two world wars, or the cutting-edge research that comes out of universities. There is good reason to want our governments to be ambitious and entrepreneurial, and to work on new technology for the public good rather than just for private profit.
2) By giving governments a stake in the actual development of moonshot technologies, rather than merely regulating them, we give them an incentive not to hamper those technologies with excessive regulation.
3) It gets around the problem of jurisdiction. If we offer firms working on radical new technologies funding and assistance, rather than merely rules, we give them less incentive to pack up and move to a foreign country when they don’t like the regulations we hand down.
4) It places these questions in the political realm, which explicitly invites the public to comment on the competing visions of the future, both utopian and dystopian, involved in various moonshot projects. We still won’t know exactly what to expect from things like AI or space commercialisation, but we can debate the possibilities more honestly if the debate takes place in the political, rather than the commercial, realm. Assessing competing visions of the future is what politics is fundamentally meant to do.

More fundamentally, however, this raises the question of who really has the right to decide which risks, and which benefits, we pursue with things like AI. Why should this be left entirely up to the private sector? If we are headed into a future in which artificial intelligence makes all our lives easier, with a slight risk of us winding up in a Terminator movie, then shouldn’t we have some kind of democratic say in that? When we’re dealing with technologies that have the potential to permanently change the human condition, we should really ask whether a few boards of directors should get to call the shots. Especially when, as Musk has pointed out, not all science fiction futures are very desirable. Let’s all fight for a say in which demons we choose to summon.