An Introduction to the World of Artificial Intelligence, Machine Learning and Deep Learning

Team Pecan

These days, we hear a lot about artificial intelligence (AI), machine learning (ML), and deep learning (neural networks). Let's look at what each of them means.

Artificial intelligence, a term first coined in 1956 by John McCarthy, involves machines that can perform tasks characteristic of human intelligence, such as planning, understanding language, recognizing objects, and solving problems.

Machine learning goes beyond that. It involves, as Arthur Samuel put it in 1959, “the ability to learn without being explicitly programmed”. Learning means "feeding" algorithms historical data; through categorization, analysis, and probability estimates, the algorithms extract implicit patterns from that data. Many algorithms are even able to learn from their mistakes and improve themselves.

Thus, at its core, machine learning is simply a way of achieving AI.
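To make "learning without being explicitly programmed" concrete, here is a minimal sketch in pure Python. Instead of hard-coding the rule y = 2x + 1, we give an algorithm example data generated by that rule and let it infer the weight and bias by gradient descent (the data, learning rate, and step count are illustrative choices, not from any particular library):

```python
# Learn w and b such that y ≈ w * x + b from (x, y) example pairs,
# rather than programming the rule y = 2x + 1 explicitly.
def fit_line(examples, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in examples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Historical data": points produced by the hidden rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = fit_line(data)
print(round(w, 2), round(b, 2))  # → 2.0 1.0, the hidden rule recovered from data
```

The program was never told the rule; it extracted the pattern from the data, which is the essence of Samuel's definition.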

An Introduction to Deep Learning

Deep learning is one of many approaches to machine learning. The expression “deep learning” was first used in the context of Artificial Neural Networks (ANNs) by Igor Aizenberg and colleagues around the year 2000. Deep learning was inspired by the structure and function of the brain, namely the interconnection of many neurons arranged in layers.
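That layered structure can be sketched as a toy forward pass in pure Python. Each "neuron" computes a weighted sum of its inputs and applies a nonlinear activation, and layers of neurons are stacked; the weights below are arbitrary illustrative values, whereas in a real network they are learned from data:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, squashed by a sigmoid activation.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    # A layer is just several neurons reading the same inputs.
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A tiny network: 2 inputs -> hidden layer of 3 neurons -> 1 output neuron.
x = [0.5, -1.2]
hidden = layer(x, [[0.4, 0.9], [-0.7, 0.2], [0.1, 0.5]], [0.0, 0.1, -0.2])
output = layer(hidden, [[1.0, -0.5, 0.3]], [0.05])
print(output)  # a single value between 0 and 1
```

"Deep" networks simply stack many such layers, letting each layer build on the representations computed by the one before it.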

There have been many developments and advances in the fields of AI, ML, and deep learning over the past 60 years. In the 1980s, neural network models were proposed and attracted much interest, with some empirical success. In the 1990s, interest turned to Support Vector Machines (SVMs), a different family of machine-learning algorithms. Since 2012, neural networks have re-emerged and shown remarkable success on a variety of tasks. Today, deep learning is present in our lives in ways we may not even realize: Google’s voice and image recognition, Netflix’s and Amazon’s recommendation engines, Apple’s Siri, automatic email and text replies, chatbots, and much more.

Why is deep learning taking off now? Why are we seeing these dramatic improvements?

As we mentioned, machine learning algorithms, and neural network models in particular, require vast amounts of training data. Beyond that, effective neural networks require considerable computing resources.

Today, training data can be collected at that scale, and vast computing resources are widely available, allowing these models to be trained effectively and to keep improving.

Why deep learning?

According to Andrew Ng (from Coursera and Chief Scientist at Baidu Research), deep learning is the first class of algorithms that is scalable: as we construct larger neural networks and train them with more and more data, their performance continues to increase. This is generally different from other machine-learning techniques, whose performance reaches a plateau.


Jeff Dean is a Senior Fellow in Google's Systems and Infrastructure Group and has been involved in, and is perhaps partially responsible for, the scaling and adoption of deep learning within Google. In a 2016 talk titled “Deep Learning for Building Intelligent Computer Systems”, he highlighted the scalability of neural networks, noting that results get better with more data and larger models, which in turn require more computation to train.


Generally, deep learning:

·      Delivers best-in-class performance, outperforming other solutions in multiple domains

·      Reduces the need for feature engineering, one of the most time-consuming parts of machine-learning practice. (This is covered in more detail in an upcoming post: "An explanation of Automated Feature Engineering for Predictive Modeling")
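To see what feature engineering means in practice, here is a hypothetical sketch of the kind of manual work that point refers to: hand-crafting per-customer features from raw transaction records so a classical model can use them. The records, field names, and derived features below are all illustrative assumptions:

```python
from datetime import date

# Hypothetical raw records: (customer_id, purchase_date, amount).
transactions = [
    ("c1", date(2023, 1, 5), 30.0),
    ("c1", date(2023, 1, 20), 45.0),
    ("c2", date(2023, 1, 7), 12.5),
]

def build_features(rows, today=date(2023, 2, 1)):
    """Manually aggregate raw data into per-customer features."""
    feats = {}
    for cid, day, amount in rows:
        f = feats.setdefault(cid, {"n_orders": 0, "total": 0.0, "last": None})
        f["n_orders"] += 1
        f["total"] += amount
        f["last"] = day if f["last"] is None else max(f["last"], day)
    for f in feats.values():
        f["avg_order"] = f["total"] / f["n_orders"]
        f["days_since_last"] = (today - f["last"]).days
    return feats

features = build_features(transactions)
print(features["c1"]["avg_order"], features["c1"]["days_since_last"])  # → 37.5 12
```

Every derived column here (order counts, averages, recency) is a human design decision; deep networks can often learn useful representations like these directly from rawer inputs, which is why they reduce this workload.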

Too good to be true? Why not use only deep learning? Why are people still using other algorithms?

·      Deep learning is extremely time-consuming to train. The most complex models can take months to train, using hundreds of machines equipped with GPUs.

·      Determining a network's topology, training method, hyperparameters, and configuration is a black art with no underlying theory to guide you.

·      What the network has learned is not easy to comprehend (the "black box" problem).

However, with Pecan:

·      You'll get your predictions in only a few days, with no limits on resources.

·      You'll get local root-cause analysis, making the models easier to interpret.

Pecan solves all of these problems for you. Sounds like magic?

Come and try it with zero risk: you only pay after the model is trained and you've evaluated its accuracy.



It's time to plug your organization into its future.

Start your free demo