KURZWEIL: Human-Level AI Is Coming By 2029

"Her" or "Terminator": which will it be?

When artificial intelligence is as smart as humans, the world will change forever.

While technological change itself is neutral, neither good nor bad, AI's effects on society will be so powerful that they've been described in both utopian and apocalyptic terms.

And some futurists think those changes are just on the horizon.

That includes Ray Kurzweil, author of five books on AI, including the recent best seller "How to Create a Mind," and co-founder of the futurist organization Singularity University. He is currently working with Google to build more machine intelligence into its products.

AI: Coming Soon

In an article he wrote for Time, Kurzweil notes that while most people in the field think we're still several decades away from creating a human-level intelligence, he puts the date at 2029, less than 15 years away.

Kurzweil argues we are already a human-machine civilization. We already use lower-level AI technology to diagnose disease, provide education, and develop new technologies.

"A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago," he writes.

Continued development of AI technology could deliver better information and solutions to every person on the planet; it could even be the thing that designs cancer cures and medications that stop cognitive decline.

Something To Fear?

While Kurzweil thinks the development of human-level AI can happen safely — after all, so far this more informed world hasn't turned on us — not everyone is so sure.

Elon Musk said AI could be the human race's "biggest existential threat" at a recent symposium at MIT. "With artificial intelligence we're summoning the demon," Musk said.

Stephen Hawking agrees. "The development of artificial intelligence could spell the end of the human race," he recently told the BBC.

They fear — as has been theorized by science fiction authors for decades — that once we create something smarter than us we'll no longer be in control of what happens in the world.

So if that new intelligence doesn't like us or thinks we're harmful, it could decide to eliminate us. That wouldn't necessarily happen, but if it did, we probably couldn't stop it.

Of course, in the end, what Kurzweil estimates will happen in 2029 is the creation of a human-level intelligence, which isn't necessarily capable of becoming a force that takes over the world for good or ill.

But as Nick Bostrom, futurist and author of a recent book on AI titled "Superintelligence," notes, just a little bit past the creation of a human-level intelligence "is superhuman-level machine intelligence." Perhaps a machine with supercomputing processing power and human ability could even upgrade itself in a short period of time.
