A few thoughts about Elon Musk

When CEOs are being explicitly compared to comic book characters, it's probably safe to say that we live in a period of technological enthusiasm.

We’re living in a period in which what might be called “moonshot thinking”, or a general enthusiasm for new futuristic technologies, is very popular. In the last few years, we have seen Google promise both artificial intelligence and radically expanded human life-spans, we have seen proposals for asteroid mining, and we now have a list of 100 finalists to be the first human residents of Mars. If you asked anybody who pays attention to this stuff to name one person they most associate with it, they would almost certainly mention Elon Musk. A few others, such as Eric Schmidt, Jeff Bezos, and (shudder) Mark Zuckerberg, might come up, but it’s hard to ignore the sheer number of ambitious projects Musk has proposed and is currently working on. The guy is kind of a big deal.

Tempting though it may be, we need to be careful not to fawn too much over people like Musk. First of all, because the perception of Musk as a selfless innovator interested in technical challenges and public service is probably at least partly a PR creation. I’m inclined to believe that Musk probably is a decent human being, but we should still remember that he is a powerful billionaire, and therefore any discussion of him comes with a duty to be critical. Musk didn’t get to where he is by not earning a profit, after all. The other thing we have to keep in mind is that Musk didn’t get where he is without help. Tesla employs 10,000 people. SpaceX employs around 3,000. SolarCity employs more than 6,000. And many of those people are doing the hard research and design work for which Musk soaks up a lot of the credit. Musk is still almost certainly a clever guy, but the development of new technologies has been a large-scale team effort since at least the days of Henry Ford and Thomas Edison.

But people like Musk, Schmidt, Ford, and Edison are still a fascinating element of technological culture, because of the enthusiasm they seem to be able to generate for their ideas. If I proposed the hyperloop, nobody would listen and I would probably lose some professional credibility. But because Elon Musk has a reputation for building cool stuff, he can make international news by publishing a 58-page report on the same idea. And the tech media covers virtually everything he says. Why is this the case? One obvious answer is that, as I argued at the top of the page, we are in an era where moonshot thinking predominates, and as the progenitor of a bunch of moonshot projects, Musk is somebody people want to pay attention to. But just as an experiment, let’s consider it the other way around. What if people like Musk (rather than merely the things they create) are the reason that we are currently so convinced that our immediate future looks like a science fiction movie?

At first glance, this theory goes against everything that science and technology scholars have been saying for the last few decades. Technology, they tell us, is not created by heroic individuals. Some trace the myth of the lone inventor to an obscure Victorian dispute about patent law. Scientists have long acknowledged that they see far by standing on the shoulders of giants, and it is probably time that engineers, inventors, and entrepreneurs made the same admission. Tesla would be nowhere, for example, without the hard work of the thousands of people who have spent the past few decades developing better batteries for laptops and smartphones, to say nothing of the legions of people who mine the raw materials for these things, manufacture them, transport them, and sell them.

But what if we look beyond the technology itself, and pay a bit more attention to its public context and popular support? Could prominent, charismatic, and fascinating individuals make us more likely to give our endorsement to new technological ideas that would otherwise sound crazy? I think it’s plausible, mainly because we, as a species, seem to love colourful personalities. That’s why it makes national news when Kanye West interrupts Taylor Swift. It’s why celebrities are paid exorbitant sums to endorse products. It’s why most history is understood in terms of big political and cultural personalities, from Louis Armstrong to Winston Churchill. And it’s why websites like Perez Hilton exist. We like to embody our ideas about the world in the form of people. That’s why we remember most of the big technological changes of the past by remembering the people who embodied them. Cars are represented by Henry Ford. Electrical infrastructure is represented by Thomas Edison. Computers are represented by Steve Jobs and Bill Gates. And so on. We find it much harder to relate to technology, which at the end of the day is a thing, than we do to people.

So, according to the hypothesis I’m developing here, sometimes an inventor or entrepreneur catches the public eye for one reason or another. By either accident or conscious effort, they cultivate their public image until they have a substantial media following. This becomes a major business asset, allowing them to generate major publicity for virtually any new idea they have. Because of their past successes, the public and media establishment are willing to consider proposals from them that they would reject out of hand if they were voiced by anybody less prominent. This media coverage brings the ideas financial and political support, motivates research on them, and perhaps creates an early market niche among technological enthusiasts. This in turn makes the idea more viable. The result is that people like Elon Musk can serve as standard bearers, playing a big role in shaping future technology regardless of their role in actually developing it.

While I would like to do some detailed research on this idea one day, it remains just a hypothesis at this stage. But as a hypothesis, it has some interesting and important implications. Most important, perhaps, is that it suggests that prominent entrepreneurs and inventors can be extremely powerful people. Politicians come and go and most powerful business leaders are restricted by regulations and market forces. But if people like Elon Musk truly do have this kind of influence over the direction of technological development, then it could be that a small handful of people, most of whom are white men, have a very large role to play in shaping the future of human societies. It’s hard to vote down a transportation system that already has infrastructure in place, regardless of whether your votes come in the form of ballots or dollars. That means that we need to be very critical of these kinds of people and the ideas they propose. We need to really get to grips with their motivations, and be willing to think seriously not just about the viability of their proposals, but also about their long-term social, political, economic, and environmental effects.

But the news isn’t all bad. The power of technological standard bearers can also be a force for good, if we find ways to influence the kinds of people who we give this technological credibility to. We need big technological changes to solve a whole host of very scary social, economic and environmental problems, and if it is possible for one prominent person to play a big role in pushing those kinds of changes, then so much the better. We should, of course, fight the tendency to put people on pedestals. But maybe there is a role for social activists in helping societies think critically about the people to whom they give technological power. And maybe if we can help boost the public exposure of the right kinds of people, then we can help push the kinds of technological change that will make the world a better place rather than a worse one.


Regulating the Future: A few thoughts on the dark side of moonshots.

Elon Musk is in the news again. He gave a talk at MIT recently, where he had a few words about artificial intelligence:

“With artificial intelligence we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and it’s like yeah, he’s sure he can control the demon. Doesn’t work out…I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish”

This isn’t the first time Musk has gone on record expressing his fears about artificial intelligence. And indeed, he isn’t alone in his concerns about the demonic technologies we might summon in the future. Stephen Hawking agrees, and some prominent AI researchers have said that his concerns are “not completely crazy.” But what is new, at least as far as I know, is Musk’s proposal for regulatory oversight. It’s also an extremely challenging and intriguing proposal. How do you regulate something that doesn’t exist yet?


There might be a good case for Musk’s proposal. If the choice is between a bit of government oversight today, and a future in which we are chased from our homes by squadrons of malicious quad-rotors with the brains of malfunctioning supermarket checkout machines, then I’ll go for the oversight. But regulation, at least ideally, proceeds with some knowledge of the thing that is being regulated. Imagine trying to create a highway code with no detailed knowledge about the capabilities of cars or the people handling them. They tried exactly that in nineteenth-century Britain, and the result was the infamous Red Flag Act, which required a man to walk in front of every car carrying a red flag to warn people of its approach. Regardless of its future potential, AI currently exists only in our imagination. So it is certainly possible to go too far.

But this problem of regulating the future is going to be increasingly important during the coming century. Moonshot projects are a very big thing right now. On this blog alone I’ve covered such utopian projects as hyperloops, space mining, self-driving cars, and delivery drones. But utopian visions are always unrealistic. New technology brings upheaval with it. And the upheavals we can expect if even one of the technologies I just cited becomes reality are absolutely massive. And so, it is entirely reasonable to want some kind of oversight to deal with the potentially very nasty consequences of these innovations.

As soon as you start thinking through the practicalities of such regulation, however, you run into dozens of unanswered questions. How do you make sound predictions about future technologies to base your regulations on? How do you get research and development labs, many of which are small and mobile, to abide by your regulations rather than just packing up and moving to another country? How do you strike the balance between the permissiveness necessary to allow radical technological change, and the cautiousness necessary to make it safe? How do you even define the technology in question? What, exactly, counts as AI? And so on.

I have a somewhat radical proposal here: let’s get the public sector involved in moonshot developments. That means government-funded research into the (peaceful) development of things like space commercialisation, AI, virtual reality, and self-driving cars. This could be done directly through a government research body, such as the National Research Council of Canada, or through funding given out to private companies.

There are a few reasons why I think that this is a good idea. Firstly, there is a strong precedent for it. Contrary to popular belief, governments have virtually always been involved in innovation. Think of all the new technologies that came out of the space program, or the two world wars. And think of all the cutting-edge technology that comes out of universities. There is a good reason for us to want our governments to be ambitious and entrepreneurial, and to work on new technology for the public good, rather than just for private profit. Secondly, by giving governments a stake in the actual development of moonshot technologies, rather than merely regulating them, we give them an incentive not to hamper those technologies too much with regulations. Thirdly, this gets over the problem of jurisdiction. If we offer funding and assistance to firms working on radical new technologies, rather than merely rules, then we give them less incentive to pack up and move to a foreign country if they don’t like the regulations we hand down to them. Lastly, this places these questions in the political realm, which explicitly invites the public to comment on the competing visions of the future, both utopian and dystopian, that are involved in various moonshot projects. We still won’t know exactly what to expect from things like AI or space commercialisation, but we can debate the possibilities more honestly if the debate takes place in the political, rather than the commercial, realm. Assessing competing visions of the future is what politics is fundamentally meant to do.

More fundamentally, however, this raises the question of who really has the right to decide which risks, and which benefits, we pursue with things like AI. Why should this be left entirely up to the private sector? If we are headed into a future in which artificial intelligence makes all our lives easier, with the slight risk of us winding up in a Terminator movie, then shouldn’t we have some kind of democratic say in that? When we’re dealing with technologies with the potential to permanently change the human condition, we should really ask whether a few boards of directors should get to call the shots. Especially when, as Musk has pointed out, not all science fiction futures are particularly desirable. Let’s all fight for a say on which demons we choose to summon.