By Gregory Piatetsky, KDnuggets.
One of the main reasons for its success was the use of a novel form of reinforcement learning in which AlphaGo learned by playing against itself.
The system starts with a neural network that knows nothing about Go. It plays millions of games against itself, tuning the network to predict both the next move and the eventual winner of each game.
The updated neural network was then combined with the Monte Carlo Tree Search algorithm to create a new, stronger version of AlphaGo Zero, and the process repeated. Each iteration improved performance only by a small amount, but because the system could play millions of games a day, AlphaGo Zero surpassed thousands of years of accumulated human Go knowledge in just 3 days.
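The self-play loop described above can be sketched in miniature. To be clear, this is not DeepMind's algorithm: there is no neural network and no Monte Carlo Tree Search here, just a hypothetical tabular value function learned purely from self-play outcomes on a toy Nim game (take 1-3 stones; whoever takes the last stone wins). It only illustrates the core idea of starting with zero knowledge and improving by playing yourself:

```python
import random

def legal_moves(pile):
    """Moves in a toy Nim game: take 1-3 stones; taking the last stone wins."""
    return [m for m in (1, 2, 3) if m <= pile]

def choose_move(pile, value, eps):
    """Epsilon-greedy move choice using the current value table.

    value[s] estimates the win probability for the player about to move
    with s stones left, so moving to state s' is worth 1 - value[s'],
    and emptying the pile is an immediate win.
    """
    if random.random() < eps:
        return random.choice(legal_moves(pile))
    def score(m):
        left = pile - m
        return 1.0 if left == 0 else 1.0 - value[left]
    return max(legal_moves(pile), key=score)

def self_play_train(start=10, episodes=5000, alpha=0.1, eps=0.2, seed=0):
    """Tabular stand-in for the AlphaGo Zero loop: play against yourself,
    then nudge value estimates toward the observed game outcomes."""
    random.seed(seed)
    value = {s: 0.5 for s in range(1, start + 1)}  # no prior knowledge
    for _ in range(episodes):
        pile, player, history = start, 0, []
        while pile > 0:
            history.append((pile, player))   # record (state, player-to-move)
            pile -= choose_move(pile, value, eps)
            player = 1 - player
        winner = 1 - player  # the player who just emptied the pile
        for state, mover in history:
            target = 1.0 if mover == winner else 0.0
            value[state] += alpha * (target - value[state])
    return value
```

With no data at all, only the rules of the game and self-play, the table converges toward the known theory of this game: positions that are multiples of 4 are losses for the player to move, so `value[4]` and `value[8]` sink toward 0 while the others climb toward 1.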
Fig. 1: The progression of AlphaGo Zero, from DeepMind post.
This is a hugely significant advance for AI and Machine Learning research.
Here is a slightly shortened Quora answer by Xavier Amatriain which eloquently explains why this is so significant.
Xavier Amatriain: In my opinion, AlphaGo Zero represents the most significant research advance in AI in the last few years. It is more significant than the original AlphaGo.
Why? The key is not that any of the components is extremely innovative (although there is definitely some smart new stuff going on), but rather the formulation of the problem itself. This is not about supervised vs. unsupervised learning. It is not even about the fact that the network learns without human intervention or examples. It is about the fact that AlphaGo Zero learned without any data!
This is a feat whose importance cannot be overstated. We have all heard about the "Unreasonable effectiveness of data". We have all heard how data-hungry deep learning approaches are. Well, it turns out that (under some constraints) we don't need data: the system can generate its own and learn from it. You can stretch your mind to think about physical-world situations (e.g. biological systems) where you could describe the "rules of the game" and then allow the AI model to generate data and learn on its own. In fact, the whole system can also be seen as a synthetic data generator. I would be really curious to see how AlphaGo Lee (the previous version) would perform if trained on the data generated by AlphaGo Zero.
Just a few days after the paper was published, I am pretty sure there are many researchers thinking about how to apply this approach to practical problems. I am definitely thinking about ways to apply it to medicine.
As a final note, it is funny to think that, about 10 years ago, some were claiming we no longer needed smart algorithms and math. "All you need is data", they said. While data is obviously valuable in many cases, this breakthrough clearly represents a complete change of direction. I am excited to see where it takes us.