The Data Daily

What’s inside AI? Machine Learning and Deep Learning Fundamentals

Last updated: 11-13-2020



What really fascinates you about AI? Most of the responses I get run along the lines of, “How accurate, how complex and how miraculous the results produced by neural networks are, and their ability to learn, always awe me.”

AI has been a hot topic for a considerable amount of time now. But for the novices out there, it is common to wonder how to set off down the AI avenue and what the prerequisites are for really understanding it. Not finding a proper direction can intimidate you before you even start. Today, we are breaking down what AI actually is, the terminology that comes with it, and how to start a learning path.

To give a little background, let’s start with history. The term Artificial Intelligence came onto the scene in the 1950s, proposed by a handful of pioneers from the nascent field of computer science. People started wondering whether they could build a machine that can actually reason, comprehend, learn and make rational decisions like a human being. Alan Turing introduced a framework, famously known as the Turing test, which proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. In the 1980s, systems with far greater computational power brought a wave of interest in neural networks, and, fueled by mathematics and statistics, machine learning gained momentum. Around 2010, with the arrival of cloud computing and GPUs, the field shifted toward deep learning with deep neural networks.

Beginners are often confused by the terms Artificial Intelligence, Machine Learning and Deep Learning themselves. To describe them in a few sentences: AI is the bigger picture, encompassing any technique that mimics human intelligence. Machine learning is a subset of AI techniques that enables machines to improve with experience using statistical methods. Deep learning is a subfield of machine learning that makes multi-layer neural networks feasible. If you don’t know what a neural network is, we will get to that in a later part of this blog.

Now let’s decode what is inside machine learning by understanding the different approaches to it. Machine learning is broadly classified into two types, namely supervised learning and unsupervised learning. In supervised learning, the data we have is labelled; in other words, we know exactly what output we expect from the system. This paradigm is further divided into classification and regression algorithms. A classification example is a system that labels an image as a cat or a dog; for regression, consider predicting a house price.
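To make the regression example concrete, here is a minimal sketch that fits a straight line, price = w * size + b, to labelled (size, price) pairs using ordinary least squares. The data points are made up purely for illustration.

```python
# Labelled training data: inputs (square metres) and labels ($1000s).
sizes = [50, 70, 100, 120, 150]
prices = [110, 150, 210, 250, 310]

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(prices) / n

# Closed-form least-squares estimates for slope and intercept.
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices)) / \
    sum((x - mean_x) ** 2 for x in sizes)
b = mean_y - w * mean_x

def predict(size):
    """Predict the price of an unseen house from its size."""
    return w * size + b

print(predict(80))   # → 170.0, the model's estimate for an 80 m^2 house
```

The "learning" here is just estimating w and b from labelled examples; real libraries do the same thing at a much larger scale.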

In unsupervised learning, the data we have is unlabeled. The system tries to understand the data in order to find patterns based on similarities and differences. Under this paradigm comes clustering, where the unlabeled data you have is divided into mutually exclusive clusters: inputs within the same cluster resemble each other, while inputs in different clusters differ markedly.
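A minimal clustering sketch: one-dimensional k-means with k=2 on unlabeled numbers. Points are repeatedly assigned to the nearest centroid, and each centroid is then moved to the mean of its cluster; the data values are invented for illustration.

```python
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]   # unlabeled inputs
centroids = [points[0], points[-1]]           # crude initialisation

for _ in range(10):                           # a few refinement rounds
    clusters = [[], []]
    for p in points:
        # assign p to the closer of the two centroids
        idx = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[idx].append(p)
    # move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)   # → [1.5, 10.5], one centre per group of similar points
```

No labels were used: the algorithm discovered the two groups from the structure of the data alone.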

Reinforcement learning is another paradigm: learning from experience and rewards. RL is goal oriented; the agent (or learner) is encouraged to explore and discover the best action to reach a goal instead of being explicitly told what to do. It constantly balances exploitation (using experience already gained to obtain reward) against exploration (trying something new that may yield better or worse rewards). Here you provide the model with partial information, but not the correct answer or action. RL is an increasingly popular technique for organizations that deal regularly with large, complex problem spaces. Because RL models learn through a continuous process of receiving rewards and punishments for every action taken, they can train systems to respond to unforeseen environments.
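The exploration/exploitation trade-off can be sketched with an epsilon-greedy agent on a two-armed bandit: with probability epsilon it explores (pulls a random arm), otherwise it exploits (pulls the arm with the best average reward so far). The payout probabilities are made up for illustration.

```python
import random

random.seed(0)
true_reward = [0.3, 0.8]    # hidden payout probability of each arm
counts = [0, 0]             # pulls per arm
values = [0.0, 0.0]         # running average reward per arm
epsilon = 0.1               # fraction of steps spent exploring

for step in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)            # explore: try a random arm
    else:
        arm = values.index(max(values))      # exploit: best arm so far
    reward = 1.0 if random.random() < true_reward[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(values.index(max(values)))   # the agent settles on the better arm
```

No one tells the agent which arm is better; it discovers that purely from the rewards its own actions produce.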

That was a brief introduction to machine learning approaches; many algorithms exist to implement them, which raises the question of which algorithm suits which problem. The first thing to do is figure out whether it is a classification or regression problem. Next, consider the complexity and amount of data you have. Then evaluate how well the model you have built fits the data, using an appropriate accuracy metric. You may have to test a couple of algorithms to find the one that suits your data.
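The evaluation step above can be sketched in a few lines: score two hypothetical models on the same held-out labels with an accuracy metric and keep whichever fits better. The model outputs here are invented for illustration.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

labels  = ["cat", "dog", "dog", "cat", "dog"]    # held-out ground truth
model_a = ["cat", "dog", "cat", "cat", "dog"]    # made-up outputs, model A
model_b = ["dog", "dog", "dog", "dog", "dog"]    # made-up outputs, model B

scores = {"model_a": accuracy(model_a, labels),
          "model_b": accuracy(model_b, labels)}
best = max(scores, key=scores.get)
print(best, scores[best])   # → model_a 0.8
```

In practice you would compare models on data they never saw during training, and often with metrics beyond plain accuracy, but the selection logic is the same.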

Another machine learning paradigm is the Artificial Neural Network (ANN), which uses the processing of the brain as the basis for algorithms that can model complex patterns and prediction problems. Just as we can look at an object in front of us and recognise it because neurons in our brain work together to process what our eyes capture, neural networks are built from artificial neurons. The data (input) is fed to the input layer, whose output passes through the hidden layer and finally propagates to the output layer. Neural networks turn out to be a much better way than other learning approaches to learn complex, nonlinear hypotheses, even when your input feature space and dataset are large.
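The input-to-hidden-to-output flow described above can be written out as a tiny forward pass: a network with two inputs, one hidden layer of two sigmoid neurons, and a single output neuron. The weights are arbitrary illustration values, not trained ones.

```python
import math

def sigmoid(z):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each neuron weighs the inputs, adds its bias,
    # and applies the sigmoid activation.
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    # Output layer: the same computation over the hidden activations.
    return sigmoid(sum(wi * hi for wi, hi in zip(w_out, hidden)) + b_out)

x = [0.5, -1.0]                         # input layer values
w_hidden = [[0.4, 0.6], [-0.3, 0.8]]    # one weight row per hidden neuron
b_hidden = [0.1, -0.2]
w_out = [1.2, -0.7]
b_out = 0.05

print(forward(x, w_hidden, b_hidden, w_out, b_out))  # a value in (0, 1)
```

Training consists of adjusting those weights and biases so the output matches the labels, but the forward computation is exactly this stack of weighted sums and activations.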

Now that we have a certain heads-up on the subject, let’s discuss where to start.

For this, we can begin by learning either R or Python programming. Python is the more popular choice because it is easy to learn and comes with a number of very attractive machine learning libraries. To continue with Python, there are many popular packages; common starting points include NumPy, pandas, Matplotlib and scikit-learn.

Popular frameworks to learn for deep learning include Keras, TensorFlow and PyTorch.

For the IDE/environment, you can use Jupyter Notebook, which is powerful, versatile, shareable, and lets you perform data visualization in the same environment. Google Colab is another important tool: a free cloud service that also provides free GPU access. With it you can improve your Python coding skills and develop deep learning applications using popular libraries such as Keras, TensorFlow, PyTorch, and OpenCV.

The learning curve of artificial intelligence and machine learning is never smooth and not always upward, but a basic foundation in mathematics and programming, along with the will to learn, always comes in handy. The more challenging it seems, the more fun it is, as it unlocks doors to countless opportunities. So my final suggestion would be: even if you do not yet know what’s happening under the hood, just run your first model and see the results. You will gain confidence once you start breaking down the underlying details, and once the dots start connecting, you will feel all the struggle was totally worth it.
