Xavier Amatriain’s Machine Learning and Artificial Intelligence Year-end Roundup

So much has happened in the world of AI that it is hard to fit in a couple of paragraphs. Here is my attempt.

By Xavier Amatriain, Cofounder/CTO at Curai


Hard to believe that it’s only been a year since I wrote the previous end-of-year roundup. So much has happened in the world of AI that it is hard to fit in a couple of paragraphs. Here is my attempt. Don’t expect too many details, but do expect a lot of links to follow up on them.

If I had to pick my main highlight of the year, it would have to be AlphaGo Zero (paper). Not only does this new approach improve on some of the most promising research directions (e.g. deep reinforcement learning), but it also represents a paradigm shift in which such models can learn from self-play alone, without human data. We have also learned very recently that AlphaGo Zero generalizes to other games such as chess. You can read more about my interpretation of this advance in my Quora answer.
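To make the "without human data" point concrete: a single network with a policy head and a value head is trained to match the move probabilities and game outcomes produced by the system's own self-play. Below is a minimal, self-contained sketch of that training signal; the toy network, shapes, and random stand-in batch are my own illustrative assumptions, not DeepMind's implementation.

```python
# Minimal sketch of the AlphaGo Zero training signal (a toy version, not
# DeepMind's code): one network with a policy head and a value head is
# trained to match MCTS search probabilities (pi) and game outcomes (z)
# generated by self-play. Here pi and z are random stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    def __init__(self, board_cells=19 * 19):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(board_cells, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, board_cells)   # move logits
        self.value_head = nn.Linear(128, 1)              # predicted outcome

    def forward(self, x):
        h = self.body(x)
        return self.policy_head(h), torch.tanh(self.value_head(h))

net = PolicyValueNet()
# weight_decay provides the L2 regularization term the paper adds to the loss
opt = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)

# Dummy self-play batch: board states, MCTS visit-count distributions,
# and final game results in [-1, 1].
states = torch.randn(32, 19 * 19)
pi = F.softmax(torch.randn(32, 19 * 19), dim=1)
z = torch.empty(32).uniform_(-1, 1)

logits, v = net(states)
policy_loss = -(pi * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
value_loss = F.mse_loss(v.squeeze(-1), z)
(policy_loss + value_loss).backward()
opt.step()
```

In the real system, `states`, `pi`, and `z` come from MCTS-guided self-play games rather than random tensors; that loop is the "data" the model learns from.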
A recent meta-study found systematic mistakes in how evaluation metrics are reported in GAN research papers. Despite this, it is undeniable that GANs have continued to produce impressive results, especially when it comes to their applications to the image space (https://arxiv.org/abs/1710.10742), for example Progressive GANs, conditional GANs in pix2pix, or CycleGANs.

Another area that has seen very impressive advances due to Deep Learning this year is NLP, and, in particular, translation. Salesforce presented an interesting non-autoregressive approach that can tackle full-sentence translation. Perhaps even more groundbreaking are the unsupervised approaches presented by Facebook and UPV.

Deep Learning is also having a huge impact on an area that hits close to home: recommender systems. However, a recent paper questioned some of those advances by showing that much simpler methods, such as kNN, were competitive with Deep Learning (see the sketch below).

Let me point out that while it is true that many or most AI advances are coming from the Deep Learning field, there is continuous innovation in many other directions in AI and ML. It is also not a surprise that, as the GAN research above illustrates, the incredibly fast pace of AI research can lead to some loss of scientific rigor.
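Since the recommender-systems result may sound surprising, here is a minimal sketch of the kind of item-based kNN baseline that paper found competitive. The function, parameters, and dummy ratings matrix are illustrative assumptions on my part, not the paper's exact algorithm.

```python
# Illustrative item-based kNN recommender: score unseen items by cosine
# similarity to the items a user has already rated, keeping only each
# item's k nearest neighbours.
import numpy as np

def recommend(ratings, user, k=10, top_n=5):
    """ratings: (n_users, n_items) matrix with 0 for unrated items."""
    norms = np.linalg.norm(ratings, axis=0, keepdims=True) + 1e-9
    normalized = ratings / norms
    sim = normalized.T @ normalized              # (n_items, n_items) cosine sim
    seen = ratings[user] > 0
    # Keep only the k most similar items per item (column 0 is the item itself).
    topk = np.argsort(-sim, axis=1)[:, 1:k + 1]
    mask = np.zeros_like(sim)
    np.put_along_axis(mask, topk, 1.0, axis=1)
    # Similarity-weighted sum over the user's own ratings.
    scores = (sim * mask) @ ratings[user]
    scores[seen] = -np.inf                        # never re-recommend seen items
    return np.argsort(-scores)[:top_n]

ratings = np.random.randint(0, 6, size=(100, 50)).astype(float)  # dummy data
print(recommend(ratings, user=0))
```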

As a matter of fact, many criticize this lack of rigor and the limited investment in setting the theoretical foundations of these methods. Just this week, Ali Rahimi described modern AI as “alchemy” in his NIPS 2017 Test of Time talk. Yann LeCun quickly responded, in a debate that is unlikely to be resolved any time soon. You might agree, though, that this year has seen many interesting efforts to advance the foundations of Deep Learning. For example, researchers are trying to understand how deep neural networks generalize. Tishby’s Information Bottleneck theory was also debated at length this year as a plausible explanation for some of Deep Learning’s properties (its objective is sketched below). Hinton, whose career was celebrated this year, also keeps questioning foundational issues such as the use of backpropagation. Renowned researchers such as Pedro Domingos soon picked up the gauntlet and developed Deep Learning methods that use different optimization techniques. A final, and very recent, fundamental change proposed by Hinton is the use of capsules (see original paper) as an alternative to Convolutional Networks.
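For reference, the objective at the heart of the Information Bottleneck theory can be written compactly: find a compressed representation T of the input X that remains predictive of the target Y, with β trading off compression against prediction.

```latex
% Information Bottleneck objective: compress X into T while keeping T
% informative about Y; beta controls the compression/prediction trade-off.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```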
If we look at the engineering side of AI, the year started with PyTorch picking up steam and becoming a real challenger to TensorFlow, especially in research. TensorFlow quickly reacted by releasing support for dynamic networks in TensorFlow Fold. The “AI war” between the big players has many other battles, though, with the most heated one happening around the cloud. All the main providers have stepped up and increased their AI support in the cloud. Amazon has introduced major innovations in AWS, such as the recent presentation of SageMaker for building and deploying ML models. It is also worth mentioning that smaller players are jumping in too: Nvidia recently introduced its GPU Cloud, which promises to be another interesting alternative for training Deep Learning models. Despite all these battles, it is good to see that the industry can come together when necessary. The new ONNX standard for representing neural networks is an important and necessary step toward interoperability.
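As a minimal illustration of the interoperability ONNX is after, here is how a PyTorch model can be exported into the shared format, which other frameworks and runtimes can then import. The toy model and file name are my own assumptions.

```python
# Export a (toy, illustrative) PyTorch model to ONNX. The resulting file
# describes the network graph in a framework-neutral format.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

dummy_input = torch.randn(1, 10)   # example input that fixes the graph's shapes
torch.onnx.export(model, dummy_input, "classifier.onnx")
```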
2017 has also seen the continuation (escalation?) of social issues around AI. Elon Musk continues to fuel the idea that we are getting closer and closer to killer AIs, to many people’s dismay. There has also been a lot of discussion about how AI will affect jobs in the next few years, and much more focus has been put on the transparency and bias of AI algorithms.
Finally, for the past few months I have been working on AI for medicine and healthcare, and I am happy to see that the rate of innovation in less “traditional” domains like healthcare is quickly picking up. AI and ML have been applied to medicine for years, starting with expert systems and Bayesian systems in the 60s and 70s; however, I often find myself citing papers that are only a few months old. Some of the innovations presented this year include the use of Deep RL, GANs, or autoencoders to represent patient phenotypes. A lot of recent AI advances have also focused on precision medicine (highly personalized medical diagnosis and treatment) and genomics. For example, David Blei’s latest paper addresses causality in neural network models by using Bayesian inference to predict whether an individual has a genetic predisposition to a disease.
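To give a flavor of the autoencoder idea behind learned patient phenotypes, here is a minimal sketch: compress a high-dimensional record of medical codes into a dense embedding that can serve as the phenotype representation. The sizes and dummy data are my own assumptions, not any specific paper’s setup.

```python
# Toy phenotype autoencoder: multi-hot vectors of medical codes are encoded
# into a dense embedding and reconstructed; the embedding is the learned
# patient representation.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_codes, embed_dim = 2000, 64   # illustrative vocabulary and embedding sizes
encoder = nn.Sequential(nn.Linear(n_codes, 256), nn.ReLU(), nn.Linear(256, embed_dim))
decoder = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, n_codes))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

# Dummy multi-hot patient records (1 where a code is present).
records = (torch.rand(128, n_codes) < 0.01).float()
for _ in range(10):
    opt.zero_grad()
    z = encoder(records)                      # z is the phenotype embedding
    loss = F.binary_cross_entropy_with_logits(decoder(z), records)
    loss.backward()
    opt.step()
```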
I look forward to a 2018 full of practical and theoretical advances. I am especially excited to see how those advances will impact important social areas such as healthcare.

Bio: Xavier Amatriain is Cofounder/CTO at Curai.

