A Survey of Selected Papers on Deep Learning at ICML 2016

A Two Sigma research scientist provides an overview of some of the most interesting research presented at ICML 2016.

Machine learning offers powerful techniques to find patterns in data for solving challenging predictive problems. The dominant track at the International Conference on Machine Learning (ICML) in New York this year was deep learning, which uses artificial neural networks to solve problems by learning feature representations from large amounts of data.

Significant recent successes in applications such as image and speech recognition and natural language processing [14,22] have helped fuel an explosion of interest in deep learning. And new research in the field continues to push the boundaries of applications, techniques, and theory. Below, Two Sigma research scientist Vinod Valsalam provides an overview of some of the most interesting research presented at ICML 2016, covering recurrent neural networks, unsupervised learning, supervised training methods, and deep reinforcement learning.

Recurrent Neural Networks

Unlike feed-forward networks, the outputs of recurrent neural networks (RNNs) can depend on past inputs, providing a natural framework for learning from time series and sequential data. But training them for tasks that require long-term memory is especially difficult due to the vanishing and exploding gradients problem [10]: the error signals used to adapt network weights either decay or blow up as they are propagated back through the network. Specialized network architectures such as Long Short-Term Memory (LSTM) [11] and the Gated Recurrent Unit (GRU) [5] mitigate this problem by utilizing gating units, an approach that has been very successful in tasks such as speech recognition and language modeling [6]. An alternative approach, now attracting more attention, is to constrain the weight matrices in ways that are more conducive to gradient propagation, as explored in the following papers.
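The mechanics of the problem are easy to see in a toy experiment. The numpy sketch below (purely illustrative; the activation-function Jacobian is ignored) repeatedly backpropagates an error signal through a random recurrent weight matrix: when the spectral radius is below 1 the gradient vanishes, and when it is above 1 the gradient explodes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 64, 100  # hidden size, sequence length

for scale, label in [(0.9, "contracting"), (1.1, "expanding")]:
    # Rescale a random recurrent matrix so its spectral radius
    # (largest eigenvalue magnitude) equals `scale`.
    W = rng.standard_normal((n, n))
    W *= scale / np.max(np.abs(np.linalg.eigvals(W)))

    # A backpropagated error signal picks up a factor of W^T per time
    # step (the activation-function Jacobian is ignored for simplicity).
    g = rng.standard_normal(n)
    for _ in range(T):
        g = W.T @ g
    print(f"{label}: gradient norm after {T} steps = {np.linalg.norm(g):.3e}")
```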

Unitary Evolution Recurrent Neural Networks

Arjovsky, M., Shah, A., & Bengio, Y. (2016) [1]

The problem of vanishing and exploding gradients occurs when the magnitudes of the eigenvalues of the weight matrices deviate from 1. Therefore, the authors use unitary weight matrices, which guarantee that the eigenvalues have magnitude 1. The challenge with this constraint is to ensure that the matrices remain unitary when updating them during training, without performing excessive computations. Their strategy is to decompose each unitary weight matrix into the product of several simple unitary matrices. The resulting parameterization makes it possible to learn the weights efficiently while providing sufficient expressiveness. They demonstrate state-of-the-art performance on standard benchmark problems such as the copy and addition tasks. An additional benefit of their approach is that it is relatively insensitive to parameter initialization, since unitary matrices preserve norms.
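A rough numpy sketch of this parameterization idea follows, composing diagonal phase matrices, Householder reflections, a fixed permutation, and the discrete Fourier transform (a composition along the lines described in the paper; the factor ordering and sizes here are illustrative). Each factor needs only O(n) parameters, yet the product is exactly unitary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hidden-state size (illustrative)

def diag_phase(theta):
    # Diagonal matrix with unit-modulus complex entries: always unitary.
    return np.diag(np.exp(1j * theta))

def reflection(v):
    # Complex Householder reflection: unitary for any nonzero v.
    v = v / np.linalg.norm(v)
    return np.eye(n) - 2.0 * np.outer(v, v.conj())

F = np.fft.fft(np.eye(n)) / np.sqrt(n)  # unitary DFT matrix
Finv = F.conj().T                        # its inverse
P = np.eye(n)[rng.permutation(n)]        # fixed permutation

D1, D2, D3 = (diag_phase(rng.uniform(-np.pi, np.pi, n)) for _ in range(3))
R1, R2 = (reflection(rng.standard_normal(n) + 1j * rng.standard_normal(n))
          for _ in range(2))

# A product of simple unitary factors is itself unitary, with O(n)
# parameters instead of the O(n^2) of an unconstrained matrix.
W = D3 @ R2 @ Finv @ D2 @ P @ R1 @ F @ D1
assert np.allclose(W.conj().T @ W, np.eye(n))
print("eigenvalue magnitudes:", np.round(np.abs(np.linalg.eigvals(W)), 6))
```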

Recurrent Orthogonal Networks and Long-Memory Tasks

Henaff, M., Szlam, A., & LeCun, Y. (2016) [9]

In this paper, the authors construct explicit solutions based on orthogonal weight matrices for the copy and addition benchmark tasks. Orthogonal matrices avoid the vanishing and exploding gradients problem in the same way as unitary matrices, but they have real-valued entries instead of complex-valued entries. The authors show that their hand-designed networks work well when applied to the task for which they are designed, but produce poor results when applied to other tasks. These experiments illustrate the difficulty of designing general networks that perform well on a range of tasks.

Strongly-Typed Recurrent Neural Networks

Balduzzi, D., & Ghifary, M. (2016) [3]

Physics has the notion of dimensional homogeneity, i.e., it is only meaningful to add quantities expressed in the same physical units. Types in programming languages express a similar idea. The authors extend these ideas to constrain RNN design. They define a type as an inner product space with an orthonormal basis. The operations and transformations that a neural network performs can then be expressed in terms of types. For example, applying an activation function to a vector preserves its type, whereas applying an orthogonal weight matrix to a vector transforms its type. The authors argue that the feedback loop of an RNN produces vectors whose type is inconsistent with that of the feed-forward vectors, making their addition ill-typed. While symmetric weight matrices are one way to preserve types in feedback loops, the authors instead tweak the LSTM [11] and GRU [5] architectures to produce strongly-typed variants. Experiments were inconclusive in showing better generalization for typed networks, but they remain an interesting avenue for further research.

Unsupervised Learning

The resurgence of deep learning in the mid-2000s was made possible to a large extent by using unsupervised learning to pre-train deep neural networks to establish good initial weights for later supervised training [14,22]. Later, using large labeled data sets for supervised training was found to obviate the need for unsupervised pre-training. But more recently, there has been renewed interest in utilizing unsupervised learning to improve the performance of supervised training, particularly by combining both into the same training phase.

Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification

Zhang, Y., Lee, K., & Lee, H. (2016) [29]

This paper starts out with a brief history of unsupervised and semi-supervised methods in deep learning. The authors then show how such methods can be scaled to solve large-scale problems. Using their approach, existing neural network architectures for image classification can be augmented with unsupervised decoding pathways for image reconstruction. The decoding pathway consists of a deconvolutional network that mirrors the encoding pathway, so that together the two form an autoencoder. The weights of the encoding pathway are initialized from the original network, and those of the decoding pathway with random values. Training proceeds in two stages: first only the decoding pathway is trained, with the encoding pathway kept fixed; then the full network is fine-tuned with a reduced learning rate. Applying this method to a state-of-the-art image classification network boosted its performance significantly.
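A minimal PyTorch sketch of the two training stages follows. The tiny encoder/decoder pair and the plain reconstruction loss are illustrative stand-ins for the paper's much larger networks; only the freeze-then-fine-tune schedule is the point.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: a small convolutional encoder plays the role of
# the pretrained classifier's feature extractor, mirrored by a
# deconvolutional decoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(  # mirrors the encoder layer by layer
    nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1),
)

x = torch.randn(8, 3, 32, 32)  # dummy image batch

# Stage 1: train only the randomly initialized decoder; the pretrained
# encoder stays fixed.
for p in encoder.parameters():
    p.requires_grad = False
opt = torch.optim.SGD(decoder.parameters(), lr=1e-2)
loss = nn.functional.mse_loss(decoder(encoder(x)), x)
loss.backward()
opt.step()

# Stage 2: unfreeze everything and fine-tune the full network at a
# reduced learning rate (the paper combines reconstruction and
# classification objectives here).
for p in encoder.parameters():
    p.requires_grad = True
opt = torch.optim.SGD(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
```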

Deconstructing the Ladder Network Architecture

Pezeshki, M., Fan, L., Brakel, P., Courville, A., & Bengio, Y. (2016) [20]

A different approach for combining supervised and unsupervised training of deep neural networks is the Ladder Network architecture [21]. It also improves the performance of an existing classifier network by augmenting it with an auxiliary decoder network, but it adds lateral connections between the original and decoder networks. The resulting network forms a deep stack of denoising autoencoders [26] that is trained to reconstruct each layer from a noisy version. In this paper, the authors study the ladder architecture systematically by removing its components one at a time to see how much each contributes to performance. They find that the lateral connections are the most important, followed by the injection of noise, and finally the choice of the combinator function that combines the vertical and lateral connections. They also introduce a new combinator function that improves the already impressive performance of the ladder network on the Permutation-Invariant MNIST handwritten digit recognition task [15], in both the supervised and semi-supervised settings.
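For concreteness, here is a rough numpy sketch of an MLP-style combinator augmented with a multiplicative term, in the spirit of the combinator the authors propose (the exact parameterization, nonlinearity, and per-layer details in the paper differ; all names below are illustrative).

```python
import numpy as np

def amlp_combinator(z_tilde, u, params):
    """Sketch of an MLP combinator with a multiplicative term: reconstruct
    a layer from the noisy lateral signal z_tilde and the top-down
    (vertical) signal u by feeding [z_tilde, u, z_tilde * u] through a
    small MLP."""
    W1, b1, W2, b2 = params
    h = np.concatenate([z_tilde, u, z_tilde * u], axis=-1)
    h = np.maximum(0.0, h @ W1 + b1)  # hidden nonlinearity (illustrative)
    return h @ W2 + b2                # reconstruction of the clean layer

d, hidden = 4, 8
rng = np.random.default_rng(0)
params = (0.1 * rng.standard_normal((3 * d, hidden)), np.zeros(hidden),
          0.1 * rng.standard_normal((hidden, d)), np.zeros(d))
z_hat = amlp_combinator(rng.standard_normal(d), rng.standard_normal(d), params)
print(z_hat.shape)  # (4,)
```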

Supervised Training Methods

Historically, deep neural networks were known to be difficult to train using standard random initialization and gradient descent [7]. However, new algorithms for initializing and training deep neural networks proposed in the last decade have produced remarkable successes [24]. Research continues in this area to better understand existing training methods and to improve them.

Dropout distillation

Rota Bulò, S., Porzi, L., & Kontschieder, P. (2016) [4]

Dropout is a regularization technique that was proposed to prevent neural networks from overfitting [23]. It drops units from the network randomly during training by setting their outputs to zero, thus reducing co-adaptation of the units. This procedure implicitly trains an ensemble of exponentially many smaller networks sharing the same parametrization. The predictions of these networks must then be averaged at test time, which is unfortunately intractable to compute precisely. The averaging can be approximated cheaply by scaling the weights of a single network, but this approximation may not be sufficiently accurate in all cases. The authors introduce a better approximation method called dropout distillation that finds a predictor with minimal divergence from the ideal predictor by applying stochastic gradient descent. The distillation procedure can even be applied to networks already trained using dropout by utilizing unlabeled data. Their results on benchmark problems show consistent improvements over standard dropout.
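The sketch below illustrates the distillation idea in PyTorch under simplifying assumptions: the teacher's dropout ensemble is approximated by averaging over a few sampled masks, and a dropout-free student is fit to those soft predictions on unlabeled inputs. The loss and sampling scheme are simplified relative to the paper.

```python
import torch
import torch.nn.functional as F

def distill(teacher, student, unlabeled_loader, n_masks=8, lr=1e-3):
    """Fit a dropout-free student to the Monte-Carlo average of the
    teacher's dropout ensemble (details simplified relative to the paper)."""
    opt = torch.optim.SGD(student.parameters(), lr=lr)
    teacher.train()  # keep dropout active: each pass samples a sub-network
    for x in unlabeled_loader:
        with torch.no_grad():
            # Stochastic estimate of the intractable ensemble prediction.
            p = torch.stack([F.softmax(teacher(x), dim=1)
                             for _ in range(n_masks)]).mean(dim=0)
        # Move the student toward the ensemble prediction with SGD on the
        # KL divergence (the student is assumed to have no dropout layers).
        loss = F.kl_div(F.log_softmax(student(x), dim=1), p,
                        reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
```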

Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks

Arpit, D., Zhou, Y., Kota, B., & Govindaraju, V. (2016) [2]

One of the difficulties of training deep neural networks is that the distribution of input activations to each hidden layer may shift during training. One way to address this problem, known as internal covariate shift, is to normalize the input activations to each hidden layer using the Batch Normalization (BN) technique [12]. However, BN has a couple of drawbacks: (1) its estimates of the mean and standard deviation of input activations are inaccurate, especially during initial iterations, because they are based on mini-batches of training data, and (2) it cannot be used with a batch size of one. To address these drawbacks, the authors introduce normalization propagation, which uses a data-independent, closed-form estimate of the mean and standard deviation for every layer. It relies on the observation that the pre-activation values of ReLUs in deep networks approximately follow a Gaussian distribution. The normalization property can then be forward-propagated to all hidden layers during training. The authors show that their method achieves better convergence stability than BN during training. It is also faster because it doesn't have to compute a running estimate of the mean and standard deviation of the hidden layer activations.
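The closed forms at the heart of the method are easy to verify numerically: for a standard-normal pre-activation x, E[ReLU(x)] = 1/√(2π) and Var[ReLU(x)] = 1/2 − 1/(2π). The numpy sketch below checks these constants and uses them in a simplified one-layer normalization step (the paper's full method, including trainable scale and shift parameters, is more involved).

```python
import numpy as np

# Closed-form post-ReLU statistics for a standard-normal pre-activation x:
# E[ReLU(x)] = 1/sqrt(2*pi), Var[ReLU(x)] = 1/2 - 1/(2*pi).
mean_relu = 1.0 / np.sqrt(2.0 * np.pi)
std_relu = np.sqrt(0.5 - 1.0 / (2.0 * np.pi))

x = np.random.default_rng(0).standard_normal(1_000_000)
y = np.maximum(0.0, x)
print(y.mean(), mean_relu)  # both ~0.3989
print(y.std(), std_relu)    # both ~0.5838

def normprop_layer(h, W):
    """One simplified normalization-propagation step: unit-norm weight
    rows keep pre-activations approximately N(0, 1), and the constants
    above standardize the ReLU output without any mini-batch statistics."""
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    pre = h @ W.T
    return (np.maximum(0.0, pre) - mean_relu) / std_relu
```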

Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters

Luketina, J., Raiko, T., Berglund, M., & Greff, K. (2016) [16]

Tuning hyperparameters is often necessary to get good results with deep neural networks. Typically, the tuning is performed either by manual trial-and-error or by systematic search, guided by performance on a validation set. The authors propose a gradient-based method for finding good regularization hyperparameters that is less tedious and computationally cheaper than previous methods: it updates both the hyperparameters and the regular parameters using stochastic gradient descent in the same training run. The gradient of the hyperparameters is obtained from the cost of the unregularized model on the validation set. Although the authors show that their method is effective in finding good regularization hyperparameters, they haven't extended it to common training techniques such as dropout regularization and learning rate adaptation.
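A toy numpy illustration of the idea follows, using ridge regression so the chain rule is explicit: the weights take SGD steps on the regularized training loss, while the penalty strength λ takes steps along the gradient of the unregularized validation loss, chained through the parameter update. This is a simplified illustration of the approach, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: ridge regression with lambda as the regularization
# hyperparameter to be tuned online.
Xtr, ytr = rng.standard_normal((100, 10)), rng.standard_normal(100)
Xval, yval = rng.standard_normal((50, 10)), rng.standard_normal(50)

w = np.zeros(10)
lam, eta, eta_hyper = 0.1, 1e-2, 1e-3

for step in range(500):
    # Regular parameters: SGD on the regularized training loss.
    g_train = Xtr.T @ (Xtr @ w - ytr) / len(ytr) + lam * w
    w_new = w - eta * g_train

    # Hyperparameter: gradient of the *unregularized* validation loss,
    # chained through the update. For an L2 penalty,
    # d(w_new)/d(lam) = -eta * w, so the hypergradient is -eta * g_val . w.
    g_val = Xval.T @ (Xval @ w_new - yval) / len(yval)
    lam -= eta_hyper * (g_val @ (-eta * w))
    lam = max(lam, 0.0)  # keep the penalty non-negative
    w = w_new

print("tuned lambda:", lam)
```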

Deep Reinforcement Learning

The researchers at DeepMind extended the breakthrough successes of deep learning in supervised tasks to the challenging reinforcement learning domain of playing Atari 2600 games [19]. Their basic idea was to leverage the demonstrated ability of deep learning to extract high-level features from raw high-dimensional data by training a deep convolutional network. However, reinforcement learning tasks such as playing games do not come with training data that are labeled with the correct move for each turn. Instead, they are characterized by sparse, noisy, and delayed reward signals. Furthermore, training data are typically correlated and non-stationary. They overcame these challenges using stochastic gradient descent and experience replay to stabilize learning [17], essentially jump-starting the field of deep reinforcement learning.
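Experience replay itself is simple to sketch. The Python snippet below shows the core data structure (an illustrative sketch, not DeepMind's implementation): transitions go into a bounded buffer, and training draws uniform random minibatches from it.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal sketch of experience replay: store transitions and sample
    random minibatches, breaking the temporal correlation of the data."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest experience falls off

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling decorrelates consecutive transitions.
        return random.sample(self.buffer, batch_size)
```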

Asynchronous Methods for Deep Reinforcement Learning

Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D., & Kavukcuoglu, K. (2016) [18]

The experience replay technique stabilizes learning by making it possible to batch or sample the training data randomly. However, it requires more memory and computation and applies only to off-policy learning algorithms such as Q-learning [28]. In this paper, the authors introduce a new method based on asynchronously executing multiple agents on different instances of the environment. The resulting parallel algorithm effectively decorrelates the training data and makes its distribution more stationary. Moreover, it makes it possible to extend deep learning to on-policy reinforcement learning algorithms such as SARSA and actor-critic methods [25]. Their method, combined with the actor-critic algorithm, improved upon previous results on the Atari domain while using far fewer computational resources.
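Structurally, the approach resembles the Hogwild-style sketch below: several threads step through independent streams of experience and apply lock-free updates to shared parameters. Everything here (the fake gradient, sizes, learning rate) is a placeholder meant only to show the shape of the computation, not the paper's algorithm.

```python
import threading
import numpy as np

shared_params = np.zeros(10)  # parameters shared by all workers

def worker(seed, steps=1000, lr=1e-3):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        # Each worker sees a different stream of experience, so the
        # combined updates are far less correlated than those of a
        # single sequentially acting agent.
        fake_gradient = rng.standard_normal(shared_params.shape)
        shared_params[:] -= lr * fake_gradient  # lock-free shared update

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("parameter norm after training:", np.linalg.norm(shared_params))
```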

Dueling Network Architectures for Deep Reinforcement Learning

Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2016) [27]

This work, which won the Best Paper award, introduces a new neural network architecture that complements the algorithmic advances in deep Q-learning networks (DQN) and experience replay. The authors point out that the value of an action choice from a given state needs to be estimated only if that action affects what happens next. The dueling network architecture leverages this observation by inserting two parallel streams of fully connected layers after the final convolutional layer of a regular DQN. One stream estimates the state-value function while the other estimates the state-dependent advantage of taking each action. The output module of the network combines the activations of these two streams to produce the Q-values for each action. This architecture learns state-value functions more efficiently and produces better policy evaluations when actions have similar values or the number of actions is large.
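The aggregation step can be written down directly. The PyTorch module below sketches the two streams and the mean-subtracted combination Q(s, a) = V(s) + A(s, a) − meanₐ A(s, a) used to keep the decomposition identifiable (layer sizes here are illustrative).

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Sketch of the dueling aggregation: after the shared convolutional
    trunk, one stream estimates the state value V(s) and the other the
    action advantages A(s, a)."""

    def __init__(self, in_features, n_actions, hidden=512):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, features):
        v = self.value(features)      # (batch, 1)
        a = self.advantage(features)  # (batch, n_actions)
        # Subtracting the mean advantage makes the V/A decomposition
        # identifiable, as in the paper's aggregation module.
        return v + a - a.mean(dim=1, keepdim=True)
```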

Opponent Modeling in Deep Reinforcement Learning

He, H., Boyd-Graber, J., Kwok, K., & Daumé III, H. (2016) [8]

The authors introduce an extension of the deep Q-network (DQN) called the Deep Reinforcement Opponent Network (DRON) for multi-agent settings, where the outcome of the controlled agent's actions depends on the actions of the other agents (opponents). If the opponents use fixed policies, then standard Q-learning is sufficient. However, opponents' policies become non-stationary when they learn and adapt their strategies over time. In this scenario, treating the opponents as part of the world in a standard Q-learning setup masks changes in opponent behavior. Therefore, the joint policy of the opponents must be taken into consideration when defining the Q-function. The DRON architecture implements this idea by employing an opponent network to learn opponent policies and a Q-network to evaluate actions for a state. The outputs of the two networks are combined using a Mixture-of-Experts network [13] to obtain the expected Q-value. DRON outperformed DQN in simulated soccer and a trivia game by discovering different strategy patterns of opponents.
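A rough PyTorch sketch of the mixture-of-experts combination follows: a gating network computed from the opponent representation mixes several expert Q-heads computed from the state representation. The encoders and sizes are illustrative stand-ins, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DRONMoE(nn.Module):
    """Sketch of a mixture-of-experts Q-network: opponent features gate a
    set of expert Q-heads to produce the expected Q-value."""

    def __init__(self, state_dim, opp_dim, n_actions, n_experts=4, hidden=64):
        super().__init__()
        self.state_enc = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.opp_enc = nn.Sequential(nn.Linear(opp_dim, hidden), nn.ReLU())
        self.experts = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_experts)])
        self.gate = nn.Linear(hidden, n_experts)

    def forward(self, state, opp_features):
        hs = self.state_enc(state)
        gates = torch.softmax(self.gate(self.opp_enc(opp_features)), dim=-1)
        q_experts = torch.stack([e(hs) for e in self.experts], dim=1)
        # Expected Q-value: gate-weighted combination of the expert heads.
        return (gates.unsqueeze(-1) * q_experts).sum(dim=1)
```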

Conclusions

Deep learning is experiencing a phase of rapid growth due to its strong performance in a number of domains, producing state-of-the-art results and winning machine learning competitions. However, these successes have also contributed to a fair amount of hype. The papers presented at ICML 2016 provided an unvarnished view of a vibrant field in which researchers are working actively to overcome challenges in making deep learning techniques more powerful, and in extending their successes to other domains and larger problems.

References

[1] Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary Evolution Recurrent Neural Networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1120--1128, 2016.

[2] Devansh Arpit, Yingbo Zhou, Bhargava Kota, and Venu Govindaraju. Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1168--1176, 2016.

[3] David Balduzzi and Muhammad Ghifary. Strongly-Typed Recurrent Neural Networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1292--1300, 2016.

[4] Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. Dropout distillation. In Proceedings of The 33rd International Conference on Machine Learning, pages 99--107, 2016.

[5] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. In Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, 2014.

[6] George E. Dahl. Deep learning approaches to problems in speech recognition, computational chemistry, and natural language text processing. PhD thesis, University of Toronto, 2015.

[7] Xavier Glorot and Yoshua Bengio. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS'10), 2010.

[8] He He, Jordan Boyd-Graber, Kevin Kwok, and Hal Daumé III. Opponent Modeling in Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1804--1813, 2016.

[9] Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent Orthogonal Networks and Long-Memory Tasks. In Proceedings of The 33rd International Conference on Machine Learning, pages 2034--2042, 2016.

[10] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, and Jürgen Schmidhuber. Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies. In Field Guide to Dynamical Recurrent Networks. IEEE Press, 2001.

[11] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735--1780, November 1997.

[12] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448--456, 2015.

[13] Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive Mixtures of Local Experts. Neural Computation, 3(1):79--87, March 1991.

[14] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436--444, May 2015.

[15] Yann LeCun, Corinna Cortes, and Christopher Burges. The MNIST Handwritten Digit Database, 1998.

[16] Jelena Luketina, Tapani Raiko, Mathias Berglund, and Klaus Greff. Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters. In Proceedings of The 33rd International Conference on Machine Learning, pages 2952--2960, 2016.

[17] Tambet Matiisen. Guest Post (Part I): Demystifying Deep Reinforcement Learning, December 2015.

[18] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1928--1937, 2016.

[19] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv:1312.5602 [cs], December 2013.

[20] Mohammad Pezeshki, Linxi Fan, Philemon Brakel, Aaron Courville, and Yoshua Bengio. Deconstructing the Ladder Network Architecture. In Proceedings of The 33rd International Conference on Machine Learning, pages 2368--2376, 2016.

[21] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised Learning with Ladder Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3546--3554. Curran Associates, Inc., 2015.

[22] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85--117, January 2015.

[23] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15:1929--1958, 2014.

[24] Ilya Sutskever. Random Ponderings (Guest Post): A Brief Overview of Deep Learning, January 2015.

[25] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.

[26] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research, 11:3371--3408, December 2010.

[27] Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. Dueling Network Architectures for Deep Reinforcement Learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1995--2003, 2016.

[28] Christopher J. C. H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3--4):279--292, 1992.

[29] Yuting Zhang, Kibok Lee, and Honglak Lee. Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification. In Proceedings of The 33rd International Conference on Machine Learning, pages 612--621, 2016.
