L-R: Jason Tanz with Elon Musk and Jonathan Nolan. Photo by Samantha Burkardt

Our Last Innovation? Technologists Weigh Risks and Rewards of AI

Discussions of artificial intelligence can't avoid the dark side


By Rob Preliasco

05/22/2018

SXSWorld


Elon Musk fears artificial intelligence more than climate change, warfare or extremist ideology. Many AI experts believe that children born today will live to see the creation of digital superintelligence: an AI that thoroughly outperforms humans in every area of problem solving. When that happens, it could advance our science and our quality of life in unimaginable ways, or it could be the last thing we ever invent.

“I’m really quite close to… the cutting edge in AI, and it scares the hell out of me,” Musk said at this year’s SXSW. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

Musk’s first SXSW 2018 appearance was at a Featured Session for HBO’s Westworld, where he unveiled a video by the show’s co-creators, Jonathan Nolan and Lisa Joy, commemorating the launch of SpaceX’s Falcon Heavy rocket. The next day, Nolan interviewed Musk in a surprise session called “Elon Musk Answers Your Questions!”

Westworld is about sentient AIs that are abused by humans, with calamitous results, so Nolan was a fitting interlocutor as Musk discussed the real-world risks posed by AI. Machine learning is making tremendous advances, he said, citing Google’s AlphaGo. This AI learned to play the ancient Chinese game of Go, which demands such abstract thinking and complex strategy that proficiency long eluded AI programs. AlphaGo improved through extensive play and eventually defeated the world’s top champions using unprecedented moves and strategies. Its successor, AlphaGo Zero, learned the game simply by playing against itself and reached a superhuman level after a mere 70 hours. It trounced AlphaGo and can teach itself other games the same way.

Featured Session: Westworld Showrunners Jonathan Nolan & Lisa Joy with Cast. Photo by Samantha Burkardt

For Musk and many dedicated AI researchers, it is this exponential improvement that is frightening, not humanity’s dethronement as board game champion. What if machine learning were unleashed not so that an AI could perform a narrow task like playing board games or piloting a car, but for general problem solving? An AI that solved such problems far better than we can would be a superintelligence, and with an intellect superior to ours, it could then improve itself further, in ways we could never imagine.

No one knows what a superintelligence would do, but many computer scientists, philosophers, neuroscientists and other experts who focus on AI fear that it would do us harm, whether deliberately or through misinterpreting a command. It could also meddle in human affairs.

For example, in 2016, lowly Twitter bots may have played a role in swaying the U.S. presidential election. What if, instead of bots, it had been the smartest intellect the Earth had ever seen, pursuing its own agenda?

AI safety researcher Allison Duettmann of the Palo Alto-based Foresight Institute says that the invention of advanced AI is inevitable despite these risks, and in large part because of them.

“We are moving forward so quickly [with AI] because… if you have more intelligent systems, you have a multiplying factor in every other domain,” she explains. “Given that certain players or agents are already developing AI… every other player has an immediate incentive to develop artificial general intelligence as well, because of the big advantages it would confer on whoever gets that right.”

Duettmann conducted a workshop at SXSW called “AI Philosophy: Why it’s Hard & State of the Art.” In it, she outlined cybersecurity concerns for AI, the risks of an AI arms race, the need for “social coordination” among those developing AI to ensure safety, and the challenge of instilling AIs with an ethical code. Given the potentially huge military, economic, social and scientific payoffs, it will be impossible to stop states and corporations from developing AI, she said, so it becomes crucial to make sure they do so responsibly.

Jonathan Nolan with Elon Musk. Photo by Chris Saucedo

In a rare stance for a Silicon Valley titan, Musk said in his own talk that this task will require government oversight. “This is a case where you have a very serious danger to the public, and therefore there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely,” he said.

If an AI is truly intelligent, and even has a conscious mind, then we will be faced with the challenge of adapting our society to it. In a perfect future world, Musk said, “…there’s a benign AI and… we’re able to achieve a symbiosis with that AI.”

He added that this may take the form of an uplink between computers and their operators, possibly a neural one. “We’re already a cyborg in the sense that your phone and your computer are kind of extensions of you,” he said.

David Chalmers, co-director of NYU’s Center for Mind, Brain and Consciousness, was one of several experts to address human/AI interaction at an SXSW panel called “Can We Create Consciousness in a Machine?” Like Duettmann, he is cautiously optimistic about the potential of AI: he believes that a human-like consciousness in a machine is possible, but that we need to be ready for its implications.

“I believe that once we have those machines we will rapidly believe they are conscious,” he said. “Whether that’s correct or not is another question.”
