Former U.S. CTO: The 'Robot Apocalypse' Could Happen. Here's How You Stop It

In July, Tesla CEO Elon Musk said artificial intelligence posed an “existential risk for human civilization,” sparking a war of words with the far more optimistic Facebook CEO Mark Zuckerberg.

But when it comes down to it, former U.S. Chief Technology Officer Megan Smith, who was appointed to the role by President Barack Obama in 2014 and served until January, says Musk may be right.

Though she also has a plan to — fingers crossed — make sure that doesn’t happen.

“My hope, so that we can avoid the robot apocalypse that [Stephen Hawking] and Elon and everybody are talking about — which could happen — is if we broaden participation and we also think a lot about what values we want,” she said at Fortune’s Most Powerful Women Next Gen Summit in Laguna Niguel, Calif., on Tuesday, sitting side by side with her co-founder Puneet Kaur Ahira.

After serving as U.S. CTO, Smith became CEO at shift7, a company focused on increasing nationwide participation in technology and tech education, whether in elementary schools or in coal plants. The company’s mission is to tackle socioeconomic issues such as trafficking and violence with the added firepower of technology. And artificial intelligence can be part of that solution.

But as it stands, the people actually creating AI are far too homogeneous. According to Ahira, while the world is 7 billion strong, only about 10,000 people in roughly seven countries are writing nearly all of the code behind AI. Most of those coders are also white and male, a demographic similar to that of the wider tech industry.

“We’re training on datasets that are very biased,” she said.

And that’s the danger, says Smith, especially if we plan to shape AI in our image.

“Do we want to train the [AI] data sets on what we do? Because we do some bad things. Technology gets weaponized, just like it gets used for good,” Smith said. “And so we’ve got to become much more woke about, and scrub ourselves in, and have the hard conversations about ethics. We need to have discussions on weaponized AI. And see what control we can put on that.”

“Even then we might not be able to,” she said on the panel, before pointing once again to Stephen Hawking and Elon Musk’s AI warnings. “If you listen to those guys maybe they’re right, maybe they’re not. But let’s at least try.”