Key Trends and Takeaways from RE•WORK Deep Learning Summit Montreal – Part 2: The Pioneers

The most anticipated aspect of the RE•WORK Deep Learning Summit Montreal was the assembly of deep learning pioneers Yoshua Bengio, Yann LeCun, and Geoff Hinton on stage separately and together for the first time at such an event.
By Matthew Mayo, KDnuggets.

Last week I was fortunate enough to attend the RE•WORK Deep Learning Summit Montreal (October 10 & 11), where I was able to take in a number of quality talks and meet with other attendees.

Beyond the quality talks of the conference, some of which are outlined in my previous summary post, the most anticipated aspect of the summit was the assembly of deep learning pioneers Yoshua Bengio, Yann LeCun, and Geoff Hinton on stage separately and together for the first time at such an event. It felt a bit like seeing Wu-Tang Clan live in concert.

What follows is a short summary of the talks given by the 3 pioneers, as well as of their discussion panel.

On the afternoon of the summit's first day, Yoshua Bengio, of the Université de Montréal, kicked off the pioneer block with a talk on Deep Learning and Cognition. Covering a favorite topic of his, Bengio touched on concepts such as deep understanding, invariance, and disentanglement. 'How do babies learn?' he asked. His answer: unsupervised and autonomous learning.

Bengio is concerned with the discovery of good representations, and recognizes problems with our current unsupervised training objectives. He discussed independently controllable features: latent factors that explain variations in observed data, identified by isolating what an agent in an interactive environment can itself control.
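To make that a bit more concrete, here is a toy numpy sketch of the "selectivity" signal behind independently controllable features: a feature is well controlled by a policy if that policy's actions change it a lot relative to the other features. The feature values and shapes below are invented for illustration; the actual objective in the paper jointly learns the features and their paired policies.

```python
import numpy as np

def selectivity(f_before, f_after, k):
    """Relative change of feature k versus the total change across all
    features -- high when only feature k moved under the agent's action."""
    deltas = np.abs(f_after - f_before)       # per-feature change |f(s') - f(s)|
    return deltas[k] / (deltas.sum() + 1e-8)  # feature k's share of the change

# Hypothetical example: 3 latent features before and after an action taken
# by the policy paired with feature 0.
f_before = np.array([0.2, 0.5, 0.9])
f_after  = np.array([0.8, 0.5, 0.9])          # only feature 0 moved

print(selectivity(f_before, f_after, k=0))    # ~1.0: feature 0 is well "controlled"
```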


He also briefly touched on the consciousness prior, a newly proposed prior for representation learning, introduced to help disentangle abstract factors from one another. You can read more about the consciousness prior here, and further discussion can be found here.
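As a loose illustration of the core intuition only (this is my own toy construction, not the formulation in Bengio's paper): the "conscious" state is a very low-dimensional summary of a rich internal representation, attending to just a few abstract factors at a time.

```python
import numpy as np

def conscious_state(h, k=3):
    """Keep only the k most salient elements of a high-dimensional
    representation h, zeroing out the rest -- a crude stand-in for
    attention selecting a handful of abstract factors."""
    c = np.zeros_like(h)
    top = np.argsort(np.abs(h))[-k:]   # indices of the k largest-magnitude factors
    c[top] = h[top]                    # the sparse, "conscious" summary
    return c

h = np.array([0.1, -2.3, 0.05, 1.7, -0.2, 0.9])   # hypothetical full state
print(conscious_state(h, k=2))                     # only the two salient factors survive
```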

Next up was Geoffrey Hinton of the University of Toronto, speaking on one of his favorite conceptual topics, Dynamic Routing With Capsules. Capsules are an abstraction Hinton has been formulating for a number of years, an earlier iteration of which is outlined here. The need for such an abstraction, says Hinton, stems from the design shortcomings of CNNs, and capsules are an attempted design alteration aimed at replacing, or at least supplementing, targeted neurons.


If you're interested in knowing more about this "radical proposal" (Hinton's words), I encourage you to look at the slides linked above, or check out this video of a related talk, as I won't mangle the concept by trying (and failing) to do it justice in a few sentences. Paired with Geoff's recent comments on the limitations of backpropagation, this is confirmation that he is not one to rest on his laurels, nor does he believe that neural networks have reached their pinnacle.
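For readers who want at least a rough feel for the mechanics anyway, below is a toy numpy sketch of the routing-by-agreement loop at the heart of the proposal. The capsule counts, dimensions, and iteration count are made up for illustration, and a real implementation also learns the transformation matrices that produce the prediction vectors.

```python
import numpy as np

def squash(s, axis=-1):
    """Capsule nonlinearity: keeps a vector's orientation while squashing
    its length into [0, 1)."""
    norm_sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + 1e-9)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement over prediction vectors u_hat[i, j, :]
    (lower capsule i predicting the output of capsule j)."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                              # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True) # coupling coefficients
        s = np.einsum('ij,ijk->jk', c, u_hat)                # weighted sum of predictions
        v = squash(s)                                        # output capsule vectors
        b += np.einsum('ijk,jk->ij', u_hat, v)               # agreement updates the logits
    return v

# Hypothetical sizes: 6 input capsules predicting 2 output capsules of dim 4.
u_hat = np.random.randn(6, 2, 4)
print(dynamic_routing(u_hat).shape)                          # (2, 4)
```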

While perhaps not the most important point, at least as far as the research is concerned, Hinton's razor-sharp wit is one of his greatest presentation assets, and it was on full display during this talk.

The third deep learning pioneer on hand was Yann LeCun of Facebook and the NYU Center for Data Science, whose talk was titled How Could Machines Learn as Efficiently as Animals and Humans? Yann spent his talk discussing what I saw as 2 major concepts: the current obstacles to artificial intelligence, and the architecture of an intelligent system.


As LeCun sees it, contemporary obstacles to AI include the broad abilities to model the real world and to reason and plan. The overarching, connecting concept is the ability of an AI to employ common sense; computers just don't have it. He went on to say that common sense is predictive learning, or what we think of as unsupervised learning. He also made 2 tangentially related points regarding AI's current drawbacks: pure reinforcement learning is not practical for the real world (we can't let cars drive until they figure out how to drive themselves -- dangerous, expensive, time-consuming), and such systems are required to experience reality in real time (we can't speed up reality the way we can a simulated environment).
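As a minimal, self-contained illustration of that "predictive learning" framing (my own toy example, not from the talk): there are no labels here; the supervision signal is the data itself, with the model learning to predict each next value of a sequence from its recent past.

```python
import numpy as np

# Unlabeled signal: a noisy sine wave standing in for raw sensory data.
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

# Self-supervision: the "label" for each window of past values is simply
# the next value in the sequence.
window = 5
X = np.stack([x[i:i + window] for i in range(len(x) - window)])  # past context
y = x[window:]                                                    # next value

# Plain gradient descent on squared prediction error.
w = np.zeros(window)
lr = 0.01
for _ in range(2000):
    grad = X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print(np.mean((X @ w - y) ** 2))   # small: the model learned to predict its input
```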

Yann's discussion of the architecture of an intelligent system noted that such a system differs from reinforcement learning in that there is no explicit reward. He also posed the question of how to train world simulators given the above constraints, and launched into short overviews of a series of neural network architectures from the past several years and how they fit into his idea of intelligent systems. Yann's stated key takeaway? Unsupervised learning is the key to advances in neural networks and artificial intelligence, and it should be pursued more aggressively moving forward, as it has taken a proverbial backseat to supervised learning and classification problems for quite some time (and with great results!). In fact, LeCun stresses: The revolution will not be supervised.

The three pioneers then appeared onstage together alongside Joelle Pineau, who guided a conversation among the four researchers. Full of as much comedy as insight, the discussion really served more as a peek into their personalities, friendships, and professional relationships than anything else. Interestingly, Joelle had them start off by introducing each other, which provided some laughs.

We finally heard from Geoff, who joked that he might have been the supervisor for Bengio's thesis but couldn't actually remember.

You can read a transcript of the panel discussion here, courtesy of the RE•WORK team.

 
Related:

  • RE•WORK Deep Learning Summit Montreal Panel of Pioneers Interview: Yoshua Bengio, Yann LeCun, Geoffrey Hinton
  • Key Trends and Takeaways from RE•WORK Deep Learning Summit Montreal – Part 1: Computer Vision
  • Top 10 Quora Machine Learning Writers and Their Best Advice, Updated