Topic 2 | AI Concepts and Applications

Let us start with a basic example. Yesterday we discussed the basic idea that tasks which used to require human intelligence are now being done, or attempted, by AI and machine learning. A few basic building blocks are reused across these systems.

For example, a Large Language Model (LLM) is a language model that takes text as input and returns text as output.

This model has established an entire industry. Everyone is building their own language model with their own algorithms. Some are good language models and some are bad ones; some are more popular and some are less popular. The point is that everyone is working in their own niche, yet the basic building block is the same.

For example, in healthcare, systems have been built that diagnose your disease after asking you for symptoms as input, such as your blood pressure, sleep timings, etc. With the passage of time, these systems are developing to the point of prescribing medicine as well.

There are a few capabilities in AI that are reused across many areas. For example, in object detection a vehicle is recognized by the system, and this recognition is then utilized in many places.

Another example is a weapon and movement detection system developed to detect any unusual movement in a shopping mall. The same system is being utilized in other areas, such as traffic monitoring, space applications, and many other places.

To assist blind people, walking sticks have been developed that detect objects before the stick strikes them. There are also cameras that announce the name of an object when pointed towards it.

These devices are ultimately linked with the IoT (Internet of Things). This is a message for those who are thinking about coding AI: you do not have to code the AI itself, but rather do something remarkable by combining these small building blocks.



Classification helps us sort input data into categories. In the example given above, it would sort emails into spam and non-spam.

For example, suppose we want to know how many students in a class want to learn AI and how many are not interested in learning AI.

In this case the system would generate a few questions to be answered by the students, and on the basis of their answers it would classify each student as interested or not interested in learning AI.

The same applies in the medical field: we can sort diseased and healthy persons and diagnose a specific disease.
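As a minimal sketch of how such a classifier might look in code (assuming Python with scikit-learn; the survey answers and labels below are invented for illustration):

from sklearn.tree import DecisionTreeClassifier

# Each row is one student's answers:
# [coding hours per week, attended an AI talk (0/1), likes math (0/1)]
X = [[10, 1, 1], [0, 0, 0], [5, 1, 0], [1, 0, 1], [8, 1, 1], [0, 0, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = interested in learning AI, 0 = not interested

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[6, 1, 1]]))  # classify a new student from their answers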



In the image, AI has detected both the human and the horse, just as a human would. Among the senses that help humans make decisions, vision is the most important (although the others matter as well). Do you think AI has learnt to detect things through a visual ability like a human's? In that sense, AI is developing a thinking ability.

AI is trained like a human baby. A child is told "this is a horse", so data is fed into the child's brain. Similarly, AI is trained by feeding it data about the horse.

So when we feed data into an AI model through IoT hardware, we are teaching it how to watch, how to taste, and how to smell. In the future, IoT hardware and AI models are going to work together.

It was explained that in generative models we give data to the system, and the system goes somewhat beyond the given data. This does not mean the system goes far beyond the given information.

It produces synthetic data that resembles the original data. ChatGPT works on a generative model.

Have you ever noticed that ChatGPT does not give the same answer to a question asked multiple times? It changes the words and the arrangement of sentences every time.
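One reason for this variation is sampling: instead of always picking the single most likely next word, a generative model draws from a probability distribution. A toy sketch in Python (the vocabulary and probabilities here are made up for illustration):

import random

# Toy next-word distribution for the prompt "The horse is" (probabilities invented).
next_word_probs = {"fast": 0.5, "strong": 0.3, "brown": 0.2}

def sample_next_word():
    # Draw a word according to its probability rather than always taking the top one.
    return random.choices(list(next_word_probs), weights=next_word_probs.values())[0]

# Asking "the same question" twice can yield different continuations.
print("The horse is", sample_next_word())
print("The horse is", sample_next_word())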

In the image given above, the model generates a new image by combining two sources, A and B. However, keep in mind that it cannot generate every conceivable image, for example a picture of a monkey on the Moon.

We will discuss this at the end.

AI is the branch of computer science in which we apply methods by which computers learn without explicit programming.

The advancement achieved so far is in the field of ANI (Artificial Narrow Intelligence), and we are far away from General AI. Some people say that large language models and other generative models are examples of a "baby" generalized AI, which we are not going to discuss here. However, it is food for thought for you to explore further.

We also discussed in our last lecture that the value of AI is projected to be in the trillions, as stated in McKinsey's report.

There are many approaches to AI, but the most popular is machine learning, which is a subfield of AI.

In supervised learning, there is a misconception that the model is trained while we use it.

A model operates in two modes:

·       The first mode is called Training Mode. In this mode we train the model on previous data. In supervised learning, an input is given to the model, the model predicts the output, this predicted output is compared with the actual output, and feedback about the accuracy of the prediction is given to the model. On the basis of this feedback the model updates itself.

·       Prediction / Inference Mode. On the basis of the training done in the first mode, when unknown data is given to the model, the model converts this input to an output. If the model is well trained, there is a greater chance that its output will be good. (A minimal sketch of both modes follows.)
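A minimal sketch of the two modes, assuming a scikit-learn-style API and made-up data (x = hours studied, y = passed or failed):

from sklearn.linear_model import LogisticRegression

X_train = [[1], [2], [3], [8], [9], [10]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)   # training mode: learn from known input/output pairs
print(model.predict([[7]]))   # prediction/inference mode: unseen input -> output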

It was asked whether a model that generates 100% accurate results should be acceptable. In other words, is 100% accuracy a good thing or a bad thing?

It is not acceptable, because in the real world such a model would fail. The system has crammed (memorized) the input data to such a level that it gives a 100% result.

·       A rule of thumb is to use 70% of the data to train the model; the remaining 30% should remain unknown to the model and is used for testing.


If the results on the training data and the test data are close to each other, it means we have done a good job, for example when both give 99% accuracy.

If the results on the training data and the test data are not close to each other, the predictions are doubtful. For example, if the training data gives 99% while the test data gives 46%, this creates doubts that need to be addressed.
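A minimal sketch of the 70/30 rule of thumb, assuming scikit-learn and toy data:

from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X = [[i] for i in range(100)]
y = [0] * 50 + [1] * 50   # made-up labels for illustration

# Hold out 30% of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

model = DecisionTreeClassifier().fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))  # close to train accuracy = good job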

A real-life example was given: when an incompetent student scores 99%, we compare his result with what he learnt during the session. If he did not attend the lectures and missed the important sessions, then his score of 99% is doubtful and not acceptable.

Another example was given, of a clerk at a bank counter. He works so efficiently on the keyboard that we think he is an expert in computers. But when we change the software, the same person's performance tells a different story.

It is pertinent to mention that in some cases 100% accuracy simply cannot be achieved, for example when forecasting stock for summer and winter clothes.


It was asked that realistic, attainable targets be set for the completion of the course.


UNSUPERVISED LEARNING

The topic was started with the proverb "A man is known by the company he keeps".

A question arises here: why is there a need for unsupervised learning when supervised learning exists?

As we discussed, supervised learning needs pairs of input and output data. Suppose we have 50,000 images: who will annotate these 50,000 images? Who will label them? A lot of data would remain unlabeled, so the outputs would simply not be available.

To solve this problem we turn to the concept of unsupervised learning.

The concept of unsupervised learning is: take a large amount of data and group similar items together. This echoes the proverb "A person is known by the company he keeps".

Suppose we have 1,000 images and we give this data to an unsupervised learning model. It identifies groups of similar items and tells us how many such groups there are. This grouping of similar items is called clustering, and the groups are called clusters.

Similarly, if we have a lot of PDF documents, an unsupervised learning algorithm can form clusters based on different topics, like marketing, accounting, etc.

For example, in one case we have to separate fruits from vegetables, and in another case we have to sort tomatoes of different qualities out of a basket of tomatoes. In each case, a number of categories would emerge.
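A minimal clustering sketch using KMeans from scikit-learn; the two-dimensional points below are invented (imagine size and colour measurements of tomatoes):

from sklearn.cluster import KMeans

X = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],   # one quality grade
     [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]]   # another quality grade

kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
print(kmeans.labels_)   # cluster assignment for each tomato, e.g. [0 0 0 1 1 1]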

Anomaly Detection

Anomaly detection is another technique of unsupervised learning. In anomaly detection we show the model what normal inputs look like; when something different comes in front of the model, it flags it as not normal.

Clustering + Anomaly

In anomaly detection, the process amounts to distinguishing between normal and abnormal.

An example of bike engine manufacturing was given. The model checks each engine against normal parameters, and where an engine is not normal it detects this; detecting such abnormality is called anomaly detection.
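A minimal anomaly-detection sketch using scikit-learn's IsolationForest; the engine readings (vibration, temperature) are invented for illustration:

from sklearn.ensemble import IsolationForest

# Readings from engines known to be normal: [vibration, temperature].
normal_engines = [[0.5, 70], [0.6, 72], [0.4, 69], [0.5, 71], [0.6, 70]]

detector = IsolationForest(contamination=0.1).fit(normal_engines)
print(detector.predict([[0.5, 70]]))   # 1  -> looks normal
print(detector.predict([[3.0, 95]]))   # -1 -> flagged as an anomaly (faulty engine)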

In the Internet of Things, consumer electronics generate data, and the devices also communicate with each other. IoT and AI are thus related to each other.

In classification, the model is learning a function, or an approximation of a function: the model is a function that converts input to output. Detecting duplicate images in a phone gallery, on the other hand, is an example of clustering.

Remember that there are multiple ways to solve a problem. If a problem can be solved with supervised learning, that does not mean it cannot be solved with unsupervised learning. It is the nature of the problem that determines which technique should be chosen.

Every model implements an input-to-output function. The better the function, the better the output, and vice versa.

We need to analyze the training accuracy and the test accuracy. If the two are close to each other, the model is working properly.

If the training accuracy is high but the test accuracy is much lower, the model is overfitted.

If even the training accuracy is low (say below 60%), the model is underfitted. In this case the model needs to be made more complex, perhaps by using a different algorithm.
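As a rough sketch of this diagnosis in code (the thresholds are illustrative rules of thumb, not fixed standards):

def diagnose(train_acc, test_acc):
    # Compare training and test accuracy to spot over/underfitting.
    if train_acc < 0.60:
        return "underfitted: make the model more complex or try another algorithm"
    if train_acc - test_acc > 0.10:
        return "overfitted: the model has crammed the training data"
    return "looks fine: train and test accuracy are close"

print(diagnose(0.99, 0.46))   # overfitted (the 99% vs 46% case above)
print(diagnose(0.55, 0.53))   # underfitted
print(diagnose(0.90, 0.88))   # looks fine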

 

DIMENSIONALITY REDUCTION

The number of input features, variables, or columns present in a given dataset is known as dimensionality, and the process to reduce these features is called dimensionality reduction.

In many cases a dataset contains a huge number of input features, which makes the predictive modelling task more complicated. Because it is very difficult to visualize or make predictions for a training dataset with a high number of features, dimensionality reduction techniques are required in such cases.

A dimensionality reduction technique can be defined as "a way of converting a higher-dimensional dataset into a lower-dimensional dataset while ensuring that it provides similar information." These techniques are widely used in machine learning to obtain a better-fitting predictive model when solving classification and regression problems.

It is commonly used in fields that deal with high-dimensional data, such as speech recognition, signal processing, bioinformatics, etc. It can also be used for data visualization, noise reduction, cluster analysis, etc.
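A minimal sketch using PCA (Principal Component Analysis) from scikit-learn; the four-feature rows are invented for illustration:

from sklearn.decomposition import PCA

X = [[2.5, 2.4, 0.5, 0.7],
     [0.5, 0.7, 2.2, 2.9],
     [2.2, 2.9, 1.9, 2.2],
     [1.9, 2.2, 3.1, 3.0]]

pca = PCA(n_components=2)      # reduce 4 input features to 2
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)         # (4, 2): same rows, fewer dimensions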

REINFORCEMENT LEARNING


In childhood, while learning to ride a bicycle, you started with the help of a friend. At first you could not ride properly; you fell and got wounded. After the wounds healed, and after a break of a few days, you tried again and again until the muscles needed for riding a bicycle automatically started working. During this stage of learning, if you apply the correct techniques, like keeping the handlebar straight and pedalling, you are rewarded and enjoy a long ride on the bicycle. But if you turn the handlebar in the wrong direction or do not push the pedals with the right power, you fall and are punished by getting hurt again.

Similarly, reinforcement learning is the art of practicing: it is the branch of machine learning that deals with learning through practice.

Reinforcement learning is a machine learning training method based on rewarding desired behaviors and/or punishing undesired ones. In general, a reinforcement learning agent is able to perceive and interpret its environment, take actions and learn through trial and error.

Agent: the one who wants to learn the skill.

Reward: the enjoyment during learning. The agent makes an effort to maximize the reward.

Penalty: a negative reward.

The agent has a finite set of actions that result in a reward or a penalty. In case of a penalty, the agent learns not to repeat that action, as shown in the sketch below.
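A toy sketch of this reward-driven learning in Python; the two actions and their rewards are invented to mirror the bicycle example:

import random

actions = ["straight", "wobble"]
rewards = {"straight": +1, "wobble": -1}   # +1 = enjoyable ride, -1 = fall
value = {a: 0.0 for a in actions}          # the agent's estimate of each action
alpha = 0.5                                # learning rate

for _ in range(20):
    action = random.choice(actions)        # trial and error: try an action
    r = rewards[action]                    # environment returns reward or penalty
    value[action] += alpha * (r - value[action])  # nudge estimate toward the reward

print(max(value, key=value.get))           # "straight": the learned behaviour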


DIFFERENCE BETWEEN MACHINE LEARNING – AI – DEEP LEARNING

Artificial Intelligence is the concept of creating smart intelligent machines.

Machine Learning is a subset of artificial intelligence that helps you build AI-driven applications.

Deep Learning is a subset of machine learning that uses vast volumes of data and complex algorithms to train a model.

DEEP LEARNING

The human brain works with the help of neurons. What if a lot of neurons were artificially produced to perform various functions? Although we do not yet fully know how biological neurons work, we use approximation techniques to build artificial neurons that perform various functions.

Interestingly, it is not difficult to express the function that an artificial neuron computes.

ReLU is the "max function" neuron: it outputs max(0, x).

ARTIFICIAL NEURAL NETWORK

An artificial neural network is formed when a number of artificial neurons are combined in such a way that they collaborate to take an input, perform some function, and convert it to an output.

In a neural network, the activation function is responsible for transforming the summed weighted input of a node into the node's activation, or output, for that input.

The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise. It has become the default activation function for many types of neural networks because a model that uses it is easier to train and often achieves better performance.
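A minimal sketch of ReLU and a single artificial neuron in Python; the weights, bias, and inputs are invented for illustration:

def relu(x):
    return max(0.0, x)   # the "max function": positive passes through, negative -> 0

def neuron(inputs, weights, bias):
    s = sum(i * w for i, w in zip(inputs, weights)) + bias  # summed weighted input
    return relu(s)                                          # activation of the node

print(relu(3.5))                              # 3.5
print(relu(-2.0))                             # 0.0
print(neuron([1.0, 2.0], [0.4, -0.1], 0.05))  # one neuron's output: 0.25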



