At KDnuggets, we try to keep our finger on the pulse of main events and developments in industry, academia, and technology. We also do our best to look forward to key trends on the horizon.
To close out 2017, we recently asked some of the leading experts in Big Data, Data Science, Artificial Intelligence, and Machine Learning for their opinions on the most important developments of 2017 and the key trends they foresee in 2018. This post considers what happened in Machine Learning & Artificial Intelligence this year, and what may be on the horizon for 2018.
Specifically, we asked experts in this area:
"What were the main machine learning & artificial intelligence related developments in 2017, and what key trends do you see in 2018?"
We solicited responses from numerous individuals, and asked them to keep their answers to under approximately 200 words, though we were not overly strict and allowed interesting responses to go longer.
As a quick review, last year's trends and predictions centered on the major themes of:
- The successes of AlphaGo
- Deep learning mania
- Self-driving cars
- TensorFlow's influence on the commoditization of neural network technology
To see what developments are recognized as the year's most important, and find out where our experts think Machine Learning & Artificial Intelligence is headed in 2018, read the contributions below.
A final note: We would not be able to present these posts without the gracious participation of the experts upon whom we have called. While not everyone we requested was able to participate, we are grateful for those who were. Enjoy their insights.
Xavier Amatriain is Cofounder/CTO at Curai, and formerly VP Engineering at Quora and Research/Engineering Director at Netflix.
If I have to pick my main highlight of the year, it has to be AlphaGo Zero (paper). Not only does this new approach advance some of the most promising research directions (e.g. deep reinforcement learning), but it also represents a paradigm shift in which models can learn without human data. We have also learned very recently that AlphaGo Zero's approach generalizes to other games such as chess. You can read more about my interpretation of this advance in my Quora answer.
If we look at the engineering side of AI, the year started with PyTorch picking up steam and becoming a real challenge to TensorFlow, especially in research. TensorFlow quickly reacted by releasing support for dynamic networks in TensorFlow Fold. The "AI war" between the big players has many other battles, though, with the most heated one happening around the cloud. All the main providers have stepped up and increased their AI support in the cloud. Amazon has introduced major innovations in AWS, such as the recently presented SageMaker for building and deploying ML models. It is also worth mentioning that smaller players are jumping in: Nvidia recently introduced its GPU Cloud, which promises to be another interesting alternative for training deep learning models. Despite all these battles, it is good to see that the industry can come together when necessary. The new ONNX standard for neural network representations is an important and necessary step toward interoperability.
2017 has also seen the continuation (escalation?) of social issues around AI. Elon Musk continues to fuel the idea that we are getting closer and closer to killer AIs, to many people’s dismay. There has also been a lot of discussion about how AI will affect jobs in the next few years. Finally, we have seen a lot more focus being put on transparency and bias of AI algorithms.
For more on Xavier's views on this year's key trends and what he is looking forward to next year, see his year-end roundup here.
Georgina Cosma is Senior Lecturer at the School of Science and Technology, Nottingham Trent University.
Machine learning models, and in particular deep learning models, are having a big impact on critical areas such as health care, legal systems, engineering and the financial industry. However, the majority of machine learning models are not easily interpretable. Understanding how a model achieved a prediction is particularly crucial in profiling and diagnostic models, where the human must be confident enough to trust the prediction presented by the model. Importantly, the decisions made by some machine learning models must be aligned with laws and regulations. Now is the time to create deep learning models which are transparent enough to explain their predictions, especially when the outcomes of these models are used to influence or inform human decisions.
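One model-agnostic way to probe this opacity is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. The sketch below is a generic illustration (not drawn from Georgina's work), with a hand-written toy "model" standing in for a trained one and purely synthetic data:

```python
import random

# Toy stand-in for a trained model: feature 0 dominates, feature 2 is ignored.
def predict(row):
    return 1 if 0.8 * row[0] + 0.1 * row[1] + 0.0 * row[2] > 0.5 else 0

def accuracy(rows, labels):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, seed=0):
    """Accuracy drop after shuffling one feature column across rows."""
    rng = random.Random(seed)
    col = [r[feature] for r in rows]
    rng.shuffle(col)
    perturbed = [list(r) for r in rows]
    for r, v in zip(perturbed, col):
        r[feature] = v
    return accuracy(rows, labels) - accuracy(perturbed, labels)

# Synthetic data whose label is driven by feature 0 alone.
rows = [[i % 2, (i // 2) % 2, (i // 4) % 2] for i in range(40)]
labels = [r[0] for r in rows]
```

Shuffling feature 0 should produce a large accuracy drop, while shuffling the ignored feature 2 produces none; the relative drops serve as a crude, human-readable explanation of what the model actually relies on.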
Pedro Domingos is a Professor in the Dept. of Computer Science & Engineering, University of Washington.
- Libratus's poker victory, extending AI's dominance to imperfect information games (www.cmu.edu/news/stories/archives/2017/january/AI-beats-poker-pros.html)
- The increasingly intense races in self-driving cars and virtual assistants, with Alexa standing out in the latter.
- The developing contest for cloud AI between Google, Amazon, Microsoft and IBM.
- AlphaGo Zero is great, but not a breakthrough. Self-play is one of the oldest ideas in ML, and humans take far less than 5 million games to master Go.
Ajit Jaokar is Principal Data Scientist and Creator of University of Oxford Data Science for Internet of Things Course.
2017 was the year of AI. 2018 will be the year of AI's maturing. We are already seeing this in a more 'systems engineering/cloud native' perspective on AI. AI is becoming complex, with companies like h2o.ai working to simplify the deployment of AI.
I see AI being used increasingly for competitive advantage especially in Industrial IoT, Retail and Healthcare leading to even greater disruption. I also see AI being rapidly deployed at all levels of the Enterprise (again leading to new opportunities but more roles being lost). Thus, we have moved beyond the discussion of Python vs R and cats!
I see AI as merging the traditional Enterprise and the wider supply chain through Embedded AI (i.e. Data Science models that span Enterprise and IoT).
Finally, the shortage of Data Scientists who know AI / Deep learning techniques will continue outside the traditional sectors like Banking (especially in Industrial IoT).
Nikita Johnson is the Founder of RE.WORK.
2017 has seen huge advancements in ML & AI, notably the recent general reinforcement learning algorithm from DeepMind, beating the world’s best chess-playing computer program after teaching itself how to play in under four hours.
In 2018 I expect to see the infiltration of smart automation into a wide variety of companies from traditional manufacturing organizations to retail, to utilities. With the continued increase in data collection and analysis, the need for an enterprise-wide automation system strategy will be vital. This will allow companies to invest in a longer-term plan for AI and to ensure it is a priority for future growth and efficiency.
We will also see automated machine learning helping to make the technology more accessible to non-AI researchers and enable more companies to implement machine learning methods into their workplaces.
Hugo Larochelle is a Research Scientist at Google, and Associate Director, Learning in Machines and Brains Program, Canadian Institute for Advanced Research.
One trend in machine learning that I've been most excited by and following most closely is meta-learning. Meta-learning is a particularly wide-ranging umbrella term, but this year the progress most exciting to me has been on few-shot learning, which addresses the problem of discovering learning algorithms that generalize well from very few examples. Chelsea Finn did a great job summarizing early progress on this topic in this blog post: bair.berkeley.edu/blog/2017/07/18/learning-to-learn/. Of note, among the many amazing PhD students in machine learning right now, Chelsea Finn has certainly been one of the most productive and impressive this year.
Later in the year, more research on meta-learning for few-shot learning was published, using deep temporal convolutional networks (arxiv.org/abs/1707.03141), graph neural networks (arxiv.org/abs/1711.04043) and others. We're also now seeing meta-learning approaches that learn to do active learning (arxiv.org/abs/1708.00088), cold-start item recommendation (papers.nips.cc/paper/7266-a-meta-learning-perspective-on-cold-start-recommendations-for-items), few-shot distribution estimation (arxiv.org/abs/1710.10304), reinforcement learning (arxiv.org/abs/1611.05763), hierarchical RL (arxiv.org/abs/1710.09767), imitation learning (arxiv.org/abs/1709.04905), and many more.
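To make the few-shot setting concrete, here is a minimal nearest-class-centroid classifier in the spirit of (but far simpler than) prototypical networks; the feature vectors below are synthetic stand-ins for learned embeddings, not output of any of the systems cited above:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def few_shot_classify(support, query):
    """support maps label -> the few labelled examples ('shots') per class.
    Classify the query by its nearest class centroid (prototype)."""
    prototypes = {label: centroid(vs) for label, vs in support.items()}
    return min(prototypes, key=lambda label: sq_dist(prototypes[label], query))

# A 2-way, 2-shot episode with synthetic "embedding" vectors.
support = {
    "cat": [[1.0, 0.0], [0.9, 0.1]],
    "dog": [[0.0, 1.0], [0.1, 0.9]],
}
```

In a real meta-learning system the embedding that makes centroids meaningful is itself learned across many such episodes; this sketch only shows the per-episode classification step.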
This is an exciting area that I'll definitely be paying close attention to across 2018 as well.
Charles Martin is a Data Scientist & Machine Learning AI Consultant.
2017 saw a huge uptick in deep learning platforms and applications. The year kicked off with Facebook releasing PyTorch, their TensorFlow competitor. Gluon, Alexa, AlphaGo... the advances kept coming. ML evolved from feature engineering and logistic regression to reading papers, implementing neural nets, and optimizing training performance. In my consulting practice, clients have sought custom object detection, advanced NLP, and reinforcement learning. While markets and bitcoin surged, AI has been a silent revolution, and the retail apocalypse has stoked real fears that AI will devastate industries. Companies want to transform themselves. We have seen great interest in AI mentoring, both technical and strategic.
2018 will surely be a breakthrough year on the way to a global AI-first economy. We see demand from Europe, Asia, India, and even Saudi Arabia. Global demand will continue to grow, with AI advances from China and Canada, and countries like India retooling from IT to AI. Demand for corporate training is large, both in the US and abroad. AI will enable massive efficiencies, with traditional industries such as manufacturing, health care, and finance benefitting. AI startups will bring new products to market and add ROI across the board. And new technologies, from robotics to self-driving cars, will amaze with more astounding progress.
It is going to be a great year for innovation, if you are on board.
Sebastian Raschka is an applied machine learning and deep learning researcher and computational biologist at Michigan State University, and the author of Python Machine Learning.
During the last few years, there've been lots of discussions in the open source community regarding all the new deep learning frameworks that emerged. Now that these tools have somewhat matured, I hope and expect to see a less tool-centric approach, with more energy devoted to the development and implementation of novel ideas and applications utilizing deep learning. In particular, I expect to see many more exciting problems being solved using generative adversarial networks and Hinton's capsules, which have been a hot topic of discussion this year.
Also, as you might guess based on our recent semi-adversarial neural nets paper on imparting privacy to face images, user privacy in deep learning applications is an issue that is very dear to my heart, and I hope and expect that this topic gains more attention in 2018.
Brandon Rohrer is a Data Scientist at Facebook.
2017 is notable for yet more machines-beat-humans achievements. Last year AlphaGo passed a longstanding milestone on the road to intelligence by beating the world's best humans. This year AlphaGo Zero outdid its older brother by teaching itself from scratch (deepmind.com/blog/alphago-zero-learning-scratch). It didn't just beat a human; it beat the collective Go-playing experience of all humanity. Of more practical interest, a machine now transcribes telephone conversations from the Switchboard benchmark as well as humans do (arxiv.org/abs/1708.06073).
However, AI achievements remain narrow and brittle. Changing even a single pixel in an image can defeat a state-of-the-art classifier. (arxiv.org/pdf/1710.08864.pdf) I predict 2018 will bring more general and robust AI solutions into the limelight. Almost every major technology company already has an Artificial General Intelligence effort. These groups and their early results will catch headlines. At the very least, “AGI” will move to replace “AI” as the buzzword of the year.
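The single-pixel fragility cited above (the one-pixel attack paper uses differential evolution against real convolutional networks) can be illustrated with a much cruder brute-force search on a toy linear classifier; everything below is a fabricated stand-in, not the paper's method:

```python
# Toy linear "classifier" over a flattened 3x3 image (pixel values in [0, 1]).
WEIGHTS = [0.5, -0.2, 0.1, 0.0, 0.4, -0.3, 0.2, 0.1, -0.1]

def classify(img):
    return 1 if sum(w * p for w, p in zip(WEIGHTS, img)) > 0.5 else 0

def one_pixel_attack(img, low=0.0, high=1.0):
    """Brute-force search for a single pixel whose change flips the prediction.

    Returns an adversarial image differing from `img` in one pixel, or None.
    """
    original = classify(img)
    for i in range(len(img)):
        for value in (low, high):
            candidate = list(img)
            candidate[i] = value
            if classify(candidate) != original:
                return candidate
    return None
```

Even this trivial search usually finds a flip because the decision depends heavily on a few high-weight inputs, which is the brittleness the real attack exploits at scale.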
Elena Sharova is a Data Scientist at an investment bank.
What were the main Machine Learning/AI related developments in 2017?
I see an increase in the number of companies and individuals moving their data and analytics to cloud-based solutions, as well as a sharp increase in awareness of how important data security is.
The largest and most successful technology companies have been competing to become your data storage and analytics platform. To the data scientist, this means that the toolboxes and solutions they develop are being shaped by what these platforms can deliver, and how well.
2017 has seen an explosion of high profile data security breaches across the world. This is a development that should not be ignored. As more and more data moves to third party storage, the need for stronger security that can adapt to new threats will continue to grow.
What key trends do you see in 2018?
Ensuring compliance with the General Data Protection Regulation (GDPR) and increasingly having to navigate the 'hidden' technical debt of machine learning systems are my predictions for the key trends of 2018. GDPR, though an EU regulation, has global reach, and all data scientists should be fully aware of how it affects their work. As per Google's NIPS'15 paper, data dependencies are costly, and as firms create complex data-driven models, they will have to think carefully about how to address this cost.
Tamara Sipes is the Director of Commercial Data Science at Optum/UnitedHealth Group.
Main Developments in 2017 and Key Trends in 2018:
- Deep learning and ensemble modeling methods continued to demonstrate their value and superiority over the rest of the machine learning toolbox in 2017. Deep learning, specifically, has seen wider adoption across a variety of fields and industries.
- As for trends in 2018, deep learning will likely be used to generate new features and concepts from the original inputs, replacing the need to engineer novel variables by hand. Deep nets are incredibly powerful at detecting features and structure in data, and data scientists are realizing the value that unsupervised deep learning can unleash for this purpose.
- Effective anomaly detection will likely be a focus in the near future as well. Anomalies and other rare events are the focus of data science efforts in many industries: intrusion detection, financial fraud detection, fraud, waste, abuse, and error in healthcare, and equipment malfunction are just a few examples. Detecting these rare events provides a competitive edge, and keeping up with their evolving nature will be an intriguing and difficult challenge in this area.
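A minimal baseline for this kind of rare-event detection is a z-score filter: flag any point far from the bulk of the data in standard-deviation units. Real systems use far richer models, and the claim amounts below are fabricated, but the sketch shows the shape of the problem:

```python
def zscore_anomalies(values, threshold=3.0):
    """Indices of points more than `threshold` standard deviations from the mean."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    if std == 0:
        return []  # perfectly uniform data has no outliers
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Fabricated claim amounts: 50 routine claims and one wildly out-of-range claim.
claims = [100.0] * 50 + [10000.0]
```

The hard part in practice is exactly what Tamara notes: the "normal" distribution drifts and adversaries adapt, so thresholds and features must be continually re-learned.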
Rachel Thomas is fast.ai Founder, and a USF Assistant Professor.
While not as flashy as AlphaGo or a backflipping robot, the 2017 AI trend I'm most excited about is that deep learning frameworks are getting dramatically more user-friendly and accessible. PyTorch (released this year) is friendly to anyone who knows Python (largely due to dynamic computation and OOP design). Even TensorFlow is moving in that direction, by adopting Keras into its core codebase and announcing eager (dynamic) execution. The barriers for coders to use deep learning are getting lower all the time. I expect to see this trend of increased developer usability continue in 2018.
A second trend is increased media coverage of the capabilities of AI for surveillance by authoritarian governments. This privacy threat is not new to 2017, but it’s only recently begun getting widespread attention. Research on using deep learning to identify protesters wearing scarves and hats, or to identify someone’s sexual orientation by their picture, brought more media attention to AI privacy risks this year. Hopefully in 2018 we can continue to broaden the conversation about AI ethics beyond just Elon Musk’s fears of evil super-intelligence, and to address surveillance, privacy, and the encoding of sexist & racist biases.
Daniel Tunkelang is Chief Search Evangelist at Twiggle, and a consultant for a number of high profile organizations.
2017 has been a big year for autonomous vehicles and conversational digital assistants. These two applications epitomize how deep learning is turning science fiction into facts on the ground.
But the most important development for machine learning and AI this year has been the focus on ethics, accountability, and explainability. Elon Musk lit up the press with his apocalyptic warnings about AI triggering a world war, which folks like Oren Etzioni and Rodney Brooks thoughtfully rebutted. Nonetheless, we're facing clear and present dangers from biases in our machine learning models, such as sexism in word2vec, racism in algorithmic criminal sentencing, and deliberate manipulation of the scoring models for social media feeds. None of these issues are new, but the accelerated adoption of machine learning -- and particularly of deep learning -- has surfaced these issues to the general public.
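The word2vec sexism mentioned above is usually demonstrated by projecting occupation vectors onto a he-she direction, as in Bolukbasi et al.'s debiasing work. The sketch below uses fabricated 2-d vectors, not real word2vec embeddings, purely to show the mechanics:

```python
def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cos(a, b):
    return dot(a, b) / (dot(a, a) ** 0.5 * dot(b, b) ** 0.5)

# Fabricated 2-d "embeddings" -- NOT real word2vec output.
# Dimension 0 loosely encodes gender; dimension 1, "occupation-ness".
emb = {
    "he":       [ 1.0, 0.1],
    "she":      [-1.0, 0.1],
    "engineer": [ 0.6, 0.8],
    "nurse":    [-0.6, 0.8],
}

# The he-she difference vector approximates a gender direction.
gender_axis = sub(emb["he"], emb["she"])

# Cosine with that axis reveals the gender association a downstream
# system would silently inherit from these vectors.
bias = {w: cos(emb[w], gender_axis) for w in ("engineer", "nurse")}
```

In real embeddings trained on web text, occupation words show exactly this kind of asymmetric projection, which is how stereotypes leak into systems built on top of them.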
We are finally seeing Explainable AI emerge as a discipline, bringing together academics, industry practitioners, and policy makers. The coming year will amplify the pressure to shine light into the black boxes of our deep learning models.