Lecture 10 - Object Recognition & Hugging Face |

 


 Hugging Face is a marketplace where a large number of pre-trained models are available to solve your problems.

 Agenda

● How to Explore Hugging Face

○ Pipelines

○ Models

○ Datasets

● Named Entity Recognition

● Image Classification

 

 Recap

● What is Hugging Face?

● Why Hugging Face?

● What are Hugging Face pipelines?

● What is Inference?

Models from many big companies are hosted here. Highly paid engineers spent years of effort refining data and building these models, and that precious work is presented to you. And what are we doing? We are sleeping. How many of us are even aware that Meta AI has 655 models here, Google 587 models, and so on?

Do you know this is a treasure for a smart freelancer?

Next, Hugging Face explains what types of problems can be solved with its models.



 



Image classification assigns a label or class to an image. Unlike text or audio classification, the inputs are the pixel values that comprise an image.
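As a quick sketch, an image-classification pipeline can be tried in a few lines. The image URL below is just an illustrative example (a COCO sample image), and with no model argument the pipeline falls back to a default model for the task:

```python
from transformers import pipeline

# Image-classification pipeline; with no model argument, Hugging Face
# selects a default model for the task and warns you which one it chose.
classifier = pipeline("image-classification")

# Classify an image given as a local path or a URL (example URL).
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")

# Each prediction is a dict with a class label and a confidence score.
for p in preds:
    print(p["label"], round(p["score"], 3))
```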


Object Detection models allow users to identify objects of certain defined classes. Object detection models receive an image as input and output the images with bounding boxes and labels on detected objects.
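A minimal object-detection sketch looks like this; `facebook/detr-resnet-50` is a commonly used detection model on the Hub, picked here only as an illustrative choice:

```python
from transformers import pipeline

# Object-detection pipeline with an example model from the Hub.
detector = pipeline("object-detection", model="facebook/detr-resnet-50")

# The output is a list of detected objects, each with a label,
# a confidence score, and a bounding box (xmin/ymin/xmax/ymax).
results = detector("http://images.cocodataset.org/val2017/000000039769.jpg")
for obj in results:
    print(obj["label"], round(obj["score"], 3), obj["box"])
```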


Question answering tasks return an answer given a question. If you’ve ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you’ve used a question answering model before. There are two common types of question answering tasks:

  • Extractive: extract the answer from the given context.
  • Abstractive: generate an answer from the context that correctly answers the question.
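As a sketch, extractive question answering can be tried with the `deepset/roberta-base-squad2` model that comes up later in this lecture (the question and context below are made-up examples):

```python
from transformers import pipeline

# Extractive QA: the answer is a span copied out of the given context.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What does Hugging Face host?",
    context="Hugging Face hosts thousands of pre-trained models and datasets.",
)
# The result contains the answer text, a confidence score,
# and the character positions of the span in the context.
print(result["answer"], round(result["score"], 3))
```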

Summarization creates a shorter version of a document or an article that captures all the important information. Along with translation, it is another example of a task that can be formulated as a sequence-to-sequence task. Summarization can be:

  • Extractive: extract the most relevant information from a document.
  • Abstractive: generate new text that captures the most relevant information.

This guide will show you how to:

1. Finetune T5 on the California state bill subset of the BillSum dataset for abstractive summarization.

2. Use your finetuned model for inference.
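The inference step above can be sketched with a summarization pipeline. `t5-small` is an illustrative stand-in here; the guide itself fine-tunes T5 on BillSum:

```python
from transformers import pipeline

# Abstractive summarization with a small T5 checkpoint (example choice).
summarizer = pipeline("summarization", model="t5-small")

text = (
    "Summarization creates a shorter version of a document that captures "
    "all the important information. It can be extractive or abstractive. "
    "Abstractive summarization generates new text instead of copying it."
)
summary = summarizer(text, max_length=30, min_length=5)
print(summary[0]["summary_text"])
```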

 

Text classification is a common NLP task that assigns a label or class to text. Some of the largest companies run text classification in production for a wide range of practical applications. One of the most popular forms of text classification is sentiment analysis, which assigns a label like 🙂 positive, 🙁 negative, or 😐 neutral to a sequence of text.

This guide will show you how to:

1. Finetune DistilBERT on the IMDb dataset to determine whether a movie review is positive or negative.

2. Use your finetuned model for inference.
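For a quick taste of the inference step, a DistilBERT checkpoint fine-tuned for sentiment (here the SST-2 variant, as an illustrative choice; the guide itself fine-tunes on IMDb) can be used directly:

```python
from transformers import pipeline

# Sentiment analysis with a DistilBERT sentiment checkpoint (example choice).
sentiment = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Returns a list with one dict per input, holding a label and a score.
result = sentiment("This movie was a complete waste of time.")
print(result)
```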

 

Translation converts a sequence of text from one language to another. It is one of several tasks you can formulate as a sequence-to-sequence problem, a powerful framework for returning some output from an input, like translation or summarization. Translation systems are commonly used for translation between texts in different languages, but they can also be used for speech or some combination in between, like text-to-speech or speech-to-text.

This guide will show you how to:

1. Finetune T5 on the English-French subset of the OPUS Books dataset to translate English text to French.

2. Use your finetuned model for inference.
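The inference step can be sketched with a translation pipeline; `t5-small` is again an illustrative choice (the translation pipeline adds T5's task prefix for you):

```python
from transformers import pipeline

# English-to-French translation with a small T5 checkpoint (example choice).
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("The house is wonderful.")
print(result[0]["translation_text"])
```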

It is said that 70% to 80% of jobs on Fiverr can be solved with these models, at rates averaging 10,000 to 30,000 dollars.
Now click on the Models menu on the Hugging Face home page.

bert-base-multilingual-uncased-sentiment
This is a bert-base-multilingual-uncased model fine-tuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5).
This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further fine-tuning on related sentiment analysis tasks.
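Direct use is a one-liner with the pipeline; the Hub id for this model card is `nlptown/bert-base-multilingual-uncased-sentiment`, and the review texts below are made-up examples:

```python
from transformers import pipeline

# The model from this card: predicts 1 to 5 stars for a product review.
stars = pipeline(
    "text-classification",
    model="nlptown/bert-base-multilingual-uncased-sentiment",
)

low = stars("This product broke after two days.")   # expect a low star label
fr = stars("Ce produit est excellent !")            # French reviews also work
print(low, fr)
```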
Training data
Here is the number of product reviews we used for fine-tuning the model:

Language    Number of reviews
English     150k
Dutch       80k
German      137k
French      140k
Italian     72k
Spanish     50k

Accuracy

The fine-tuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages:

  • Accuracy (exact) is the exact match on the number of stars.
  • Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by at most 1 from the number given by the human reviewer.
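The two accuracy measures above can be computed with a few lines of plain Python (the star ratings below are made-up examples, not from the model card):

```python
def exact_and_off_by_1(predicted, actual):
    """Return (exact-match accuracy, off-by-1 accuracy) for star ratings."""
    n = len(predicted)
    # Exact: prediction equals the human rating.
    exact = sum(p == a for p, a in zip(predicted, actual)) / n
    # Off-by-1: prediction within one star of the human rating.
    off_by_1 = sum(abs(p - a) <= 1 for p, a in zip(predicted, actual)) / n
    return exact, off_by_1

# Made-up example: model predictions vs. human star ratings.
pred = [5, 4, 2, 1, 3]
gold = [5, 5, 2, 3, 3]
print(exact_and_off_by_1(pred, gold))  # (0.6, 0.8)
```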

Language    Accuracy (exact)    Accuracy (off-by-1)
English     67%                 95%
Dutch       57%                 93%
German      61%                 94%
French      59%                 94%
Italian     59%                 95%
Spanish     58%                 95%

Contact

In addition to this model, NLP Town offers custom, monolingual sentiment models for many languages and an improved multilingual model through RapidAPI.

Feel free to contact us for questions, feedback and/or requests for similar models.


1. Each task page has a YouTube video which explains how to use that problem solver.

2. Each task has its variants, with examples given along with their code. The interesting thing is that you don’t have to re-type the code: it can be copied easily with the copy icon at the top right.

Task Variants

Natural Language Inference (NLI)

In NLI the model determines the relationship between two given texts. Concretely, the model takes a premise and a hypothesis and returns a class that can either be:

  • entailment, which means the hypothesis is true.
  • contradiction, which means the hypothesis is false.
  • neutral, which means there's no relation between the hypothesis and the premise.

The benchmark dataset for this task is GLUE (General Language Understanding Evaluation). NLI models have different variants, such as Multi-Genre NLI, Question NLI and Winograd NLI.

Multi-Genre NLI (MNLI)

MNLI is used for general NLI. Here are some examples:

 

Example 1:
    Premise: A man inspects the uniform of a figure in some East Asian country.
    Hypothesis: The man is sleeping.
    Label: Contradiction
Example 2:
    Premise: Soccer game with multiple males playing.
    Hypothesis: Some men are playing a sport.
    Label: Entailment

Inference

You can use the 🤗 Transformers library text-classification pipeline to infer with NLI models.

from transformers import pipeline
classifier = pipeline("text-classification", model="roberta-large-mnli")
classifier("A soccer game with multiple males playing. Some men are playing a sport.")
## [{'label': 'ENTAILMENT', 'score': 0.98}]

Previously we used only the pipeline. Now we can also instruct the pipeline which model it should use.

Notice the model parameter given in the following code:

classifier = pipeline("text-classification", model="roberta-large-mnli")

Now when we click on the model, the page explains the model, and its name is given next to it. You need not worry about where to get it from.

If the model does not work well, don’t worry: two replacements are given at the top right of the page.

Once you have used a model, you will want to evaluate it. There are many metrics for evaluating a model, as listed there.

Now we go to the Hugging Face landing page and click Models again.

There are 237,339 models available on Hugging Face at the moment. The question now is how we could select the model of our choice.

Hugging Face helps us find the model that matches our requirements by categorizing its models into the broad categories given below.

In the Natural Language Processing category, a Question Answering model will answer a question even when it is asked from a book of 700 pages or fewer. When we click on it, it lists the available models along with their count.

When we click any model, it directs us to another page. This page, in Hugging Face terminology, is called a model card.

The most important place on the model card is its title. You can copy the model name with the icon given next to it.

deepset/roberta-base-squad2

Each AI model is in one of two modes:

1. Inference Mode

2. Training Mode

When a model is in training mode, it is still being updated: it learns from the data provided to it and adjusts its parameters.

When a model is in inference mode, it is no longer being trained. It just takes data from you and makes predictions.

So the models given here are inference models, and you can generate predictions from them rather than train them yourself.

The following models should be focused on for the time being:

Zero-shot classification means we can classify text into labels of our choosing without training a model on those labels.
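A minimal zero-shot sketch: we supply the candidate labels at inference time, with no training at all. `facebook/bart-large-mnli` is a common choice for this task, and the text and labels below are made-up examples:

```python
from transformers import pipeline

# Zero-shot classification: labels are provided at inference time.
zero_shot = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli"
)

out = zero_shot(
    "The battery drains within two hours.",
    candidate_labels=["hardware issue", "billing question", "shipping delay"],
)
# Labels come back sorted by score, highest first.
print(out["labels"][0], round(out["scores"][0], 3))
```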

A text-to-text model generates new text when given text as input.

A sentence similarity model tells whether two sentences are similar to each other or not.

Those who are self-motivated may explore all of these models.


Object Detection differentiates among different objects in an image, like a person, a football, etc.

Image Segmentation goes further and highlights the specific pixels that belong to each object.

Snapchat uses computer vision to detect your face; it identifies your face so it can apply a filter accordingly.

This is Hugging Face, and since it is a treasure, go and explore it. The more you practice with it, the more perfect a practitioner of AI you will become.

Now we open Colab and see the difference between when a model is given and when it is not.

Now we explore how to specify a model.

Open Hugging Face, go to the tasks section, and click Text Classification.

In the inference code, the model name is separated by a comma. At first, suppose I don’t know anything about models and simply copy the model suggested by Hugging Face into my code.

Here the result is inaccurate, as it gives a neutral result for a negative text.

Now I go back to Hugging Face and try the replacement models given on the model card page. I simply copy the name and paste it in place of the model.

Now the result is the same as our text classification without a model. Let us understand this scenario.

When we don’t give a model name, Hugging Face in the back end uses a default model to generate the result of text classification. We can say Hugging Face is responsible for choosing a good model.
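The two cases can be seen side by side in a short sketch (the input sentence is a made-up example; the default pipeline prints a warning naming the model it chose):

```python
from transformers import pipeline

# Case 1: no model given; Hugging Face falls back to a default model
# for the task and warns you which one it picked.
default_clf = pipeline("text-classification")
a = default_clf("I really did not enjoy this.")
print(a)

# Case 2: model given explicitly; the choice (and its suitability
# for our data) is now our responsibility.
our_clf = pipeline("text-classification", model="roberta-large-mnli")
b = our_clf("I really did not enjoy this.")
print(b)
```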

When we give the model name ourselves, it becomes our responsibility to judge the capability of the model: how well it is trained and how compatible it is with the data we provide. Try to understand that a model can be a good one in general and yet perform poorly on the data you give it.

On the model page, in the model details, every type of information is given.

When the listed dataset is small, the model has been trained on little data; a medium or large dataset likewise means a medium or large amount of training data.

It must be remembered that when a model is given a type of data that was not used during its training, it will start giving wrong results. In that case, even a model trained on large data will fail, so it depends on a case-to-case basis.

So you need a lot of experimentation while using pipelines, and keep the user’s expectations in mind.
