Elon Musk discussed the risks of artificial intelligence (AI) at the “Great AI Debate” seminar during the four-day Abundance Summit. Acknowledging a small chance, roughly 10 to 20 percent, that AI could prove dangerous to humanity, he said the technology is still worth the risk. Without delving into specifics, he said, “I think there’s some chance that it will end humanity. I probably agree with Geoff Hinton that it’s about 10 percent or 20 percent or something like that. I think that the probable positive scenario outweighs the negative scenario.”
Last November, the billionaire said there is a chance AI could go wrong and that it therefore needs more regulation.
Musk also said at the summit that AI will be smarter than humans by 2030, which is why greater care must be taken to guard against the technology’s harmful consequences. Comparing the development of superintelligent AI to raising a gifted child, he said it is important to teach AI to always tell the truth and to be curious.
“You kind of grow an AGI. It’s almost like raising a kid, but one that’s like a super genius, like a God-like intelligence kid — and it matters how you raise the kid,” he said, adding, “One of the things I think that’s incredibly important for AI safety is to have a maximum sort of truth-seeking and curious AI.”
Musk said his plan for keeping AI safe is simple: make sure AI always tells the truth, because once an AI learns to lie, it is hard to stop.