Deep Learning


Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks.

If you are just starting out in the field of deep learning, or you had some experience with neural networks a while ago, you may be confused. Many people learned and used neural networks in the 1990s and early 2000s. The leaders and experts in the field have ideas of what deep learning is, and these specific and nuanced perspectives shed a lot of light on what deep learning is all about.

Let’s dive in.

Deep Learning is Large Neural Networks

Andrew Ng from Coursera and Chief Scientist at Baidu Research formally founded Google Brain, which eventually resulted in the productization of deep learning technologies across a large number of Google services. He has spoken and written a lot about what deep learning is, and is a good place to start. In early talks on deep learning, Andrew described deep learning in the context of traditional artificial neural networks. In the 2013 talk titled “Deep Learning, Self-Taught Learning and Unsupervised Feature Learning” Andrew described the idea of deep learning as:

Using brain simulations, hope to:
  • Make learning algorithms much better and easier to use.
  • Make revolutionary advances in machine learning and AI.
I believe this is our best shot at progress towards real AI.
Later his comments became more nuanced.

The core of deep learning according to Andrew is that we now have fast enough computers and enough data to actually train large neural networks. When discussing why now is the time that deep learning is taking off, at ExtractConf 2015 in a talk titled “What data scientists should know about deep learning”, he commented: “very large neural networks we can now have and … huge amounts of data that we have access to”.

He also commented on the important point that it is all about scale: as we construct larger neural networks and train them with more and more data, their performance continues to increase. This is generally different from other machine learning techniques, which reach a plateau in performance.

“for most flavors of the old generations of learning algorithms … performance will plateau. … deep learning … is the first class of algorithms … that is scalable. Performance just keeps getting better as you feed them more data”

Finally, he is clear to point out that the benefits from deep learning that we are seeing in practice come from supervised learning. From the 2015 ExtractConf talk, he commented: “almost all the value today of deep learning is through supervised learning or learning from labeled data”

Earlier, in a 2014 talk at Stanford University titled “Deep Learning”, he made a similar comment: “one reason that deep learning has taken off like crazy is that it is fantastic at supervised learning”

Deep learning is really all about large neural networks.

When you hear the term deep learning, just think of a large deep neural net. Deep typically refers to the number of layers, and so this is the sense of the popular term that has been adopted in the press. You can think of them as deep neural networks generally.
The scalability of neural networks indicates that results get better with more data and bigger models, which in turn require more computation to train.
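
As a rough illustration of this, the NumPy sketch below (my own example, with arbitrary layer sizes) treats a network as a stack of layers: “deep” simply means the stack is long, and making the model deeper or wider grows the parameter count, and with it the computation needed for training.

    import numpy as np

    rng = np.random.default_rng(0)

    def init_network(layer_sizes):
        """One (weights, bias) pair per layer; depth = number of layers."""
        return [(rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out))
                for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x, network):
        """Pass the input through every layer in turn."""
        for W, b in network:
            x = np.tanh(x @ W + b)
        return x

    # A "deep" net is simply one with many entries in this list.
    network = init_network([784, 256, 128, 64, 10])  # arbitrary example sizes

    x = rng.standard_normal((32, 784))   # a batch of 32 fake inputs
    print(forward(x, network).shape)     # (32, 10)

    # Bigger models mean more parameters, and more computation to train.
    print(sum(W.size + b.size for W, b in network), "parameters")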

Deep Learning is Hierarchical Feature Learning

In addition to scalability, another often-cited benefit of deep learning models is their ability to perform automatic feature extraction from raw data, also called feature learning.

Deep learning has also been described in terms of an algorithm's ability to discover and learn good representations using feature learning.
“Deep learning algorithms seek to exploit the unknown structure in the input distribution in order to discover good representations, often at multiple levels, with higher-level learned features defined in terms of lower-level features”

An elaborated perspective of deep learning along these lines is provided in the technical report titled “Learning deep architectures for AI”, which emphasizes the importance of hierarchy in feature learning.

“Deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower-level features. Automatically learning features at multiple levels of abstraction allow a system to learn complex functions mapping the input to the output directly from data, without depending completely on human-crafted features.”
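
To make this concrete, here is a minimal, self-contained sketch of feature learning (my own illustration, not code from the sources quoted above): a NumPy autoencoder that learns a compressed representation of its input directly from data, with no hand-crafted features. Stacking such layers, each trained on the output of the one below, is what yields a feature hierarchy.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, h = 200, 20, 5             # samples, input size, learned-feature size
    x = rng.standard_normal((n, d))

    # Encoder/decoder weights: learned from data rather than hand-designed.
    W1, b1 = rng.standard_normal((d, h)) * 0.1, np.zeros(h)
    W2, b2 = rng.standard_normal((h, d)) * 0.1, np.zeros(d)

    lr = 0.1
    for _ in range(500):
        code = np.tanh(x @ W1 + b1)   # the learned representation
        x_hat = code @ W2 + b2        # reconstruction from the features
        err = x_hat - x
        loss = (err ** 2).mean()

        # Backpropagate the reconstruction error.
        d_out = 2 * err / err.size
        d_code = d_out @ W2.T * (1 - code ** 2)   # tanh derivative
        W2 -= lr * (code.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (x.T @ d_code);   b1 -= lr * d_code.sum(0)

    print(f"final reconstruction loss: {loss:.4f}")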

Why Call it “Deep Learning”? Why Not Just “Artificial Neural Networks”?

Geoff Hinton may have started the introduction of the phrasing “deep” to describe the development of large artificial neural networks.
He co-authored a paper in 2006 titled “A Fast Learning Algorithm for Deep Belief Nets”, in which they describe an approach to training a “deep” (as in a many-layered) network of restricted Boltzmann machines.
Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.

It has been obvious since the 1980s that backpropagation through deep autoencoders would be very effective for nonlinear dimensionality reduction, provided that computers were fast enough, data sets were big enough, and the initial weights were close enough to a good solution.
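
The “one layer at a time” recipe can be sketched roughly as follows. This is a loose illustration under simplifying assumptions: it substitutes small gradient-trained autoencoder layers for the restricted Boltzmann machines used in the paper, and each layer is trained on the features produced by the layer below it.

    import numpy as np

    rng = np.random.default_rng(0)

    def pretrain_layer(x, n_hidden, lr=0.1, steps=300):
        """Fit one autoencoder layer to x; return its encoder weights."""
        d = x.shape[1]
        W1, b1 = rng.standard_normal((d, n_hidden)) * 0.1, np.zeros(n_hidden)
        W2, b2 = rng.standard_normal((n_hidden, d)) * 0.1, np.zeros(d)
        for _ in range(steps):
            h = np.tanh(x @ W1 + b1)
            err = (h @ W2 + b2) - x
            d_out = 2 * err / err.size
            d_h = d_out @ W2.T * (1 - h ** 2)
            W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
            W1 -= lr * (x.T @ d_h);   b1 -= lr * d_h.sum(0)
        return W1, b1

    # Greedy layer-wise pretraining: train each layer on the previous
    # layer's features, one layer at a time.
    x = rng.standard_normal((200, 30))
    stack, inputs = [], x
    for n_hidden in (16, 8, 4):
        W, b = pretrain_layer(inputs, n_hidden)
        stack.append((W, b))
        inputs = np.tanh(inputs @ W + b)   # features for the next layer

    print([W.shape for W, b in stack])     # [(30, 16), (16, 8), (8, 4)]

In the full recipe, the pretrained weights are then used to initialize a deep network that is fine-tuned with backpropagation.
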
In a talk to the Royal Society in 2016 titled “Deep Learning”, Geoff commented that Deep Belief Networks were the start of deep learning in 2006, and that the first successful application of this new wave of deep learning was to speech recognition in 2009, in work titled “Acoustic Modelling using Deep Belief Networks”, which achieved state-of-the-art results.

It was these results that made the speech recognition and neural network communities take notice; the use of “deep” as a differentiator from previous neural network techniques probably resulted in the name change.

Deep Learning Algorithms

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

These methods are concerned with building much larger and more complex neural networks, and many methods are concerned with very large datasets of labeled analog data, such as images, text, audio, and video.

The most popular deep learning algorithms are:
  • Convolutional Neural Networks (CNNs)
  • Recurrent Neural Networks (RNNs)
  • Long Short-Term Memory Networks (LSTMs)
  • Stacked Auto-Encoders
  • Deep Boltzmann Machines (DBMs)
