By Gregory Piatetsky, KDnuggets.
Artificial General Intelligence (AGI) is defined as machine intelligence that could successfully perform any intellectual task a human can do. AGI could potentially bring enormous benefits - curing diseases, providing ample leisure time, eliminating car fatalities through safe self-driving cars, etc. AGI also poses enormous risks - eliminating most (or all) jobs, dramatically increasing inequality, and perhaps presenting an existential threat to humanity, as Elon Musk and Stephen Hawking warn. If AGI is achieved, the Singularity - the point at which computer intelligence increases exponentially - could follow soon after.
Andrew Y. Ng says it is too early to worry about AGI and the singularity - just as it is too early to worry about overpopulation on Mars.
With AlphaGo Zero (and later AlphaZero) achieving amazing successes and superhuman performance in Go, Chess, and other games, and with computers now able to recognize images, understand speech, drive cars, and interpret radiology images as well as or better than humans, AGI seems to be getting closer by the week.
On the other hand, some experts, like Francois Chollet, author of Keras and a Deep Learning expert, argue that an intelligence explosion is impossible.
There are a variety of predictions about AGI, but to avoid bias, please express your opinion first and only then look at the predictions - see the links below.
A new KDnuggets Poll is asking (please vote):
Here are some recent predictions on Artificial General Intelligence:
- AI is getting brainier: when will the machines leave us in the dust?, The Guardian, 2017
- When Will The First Machine Become Superintelligent?, Predictions from Top AI Experts, Medium, 2016
- "The wonderful and terrifying implications of computers that can learn", Jeremy Howard TED Talk, 2014.
- Exclusive: Interview with Rich Sutton, the Father of Reinforcement Learning
- Voices in AI – great conversations with leaders in AI, Machine Learning,