Topic 3 | ChatGPT & Prompt Engineering



SEMANTICS

What is semantics in artificial intelligence?

What is Semantic AI? Semantic AI, which is also related to natural language processing (NLP) and natural language understanding, is a branch of artificial intelligence focusing on how computers understand and process human language.

 

The idea is that you have a large amount of data. The old method was that when a question was asked, you started matching its words. But as we discussed under Stemming and Lemmatization, the same word can appear in different forms.

We apply the tokenization technique, that is, breaking sentences into small words (tokens). The magic happens here: we convert all our data into a specific numeric form called vectors and store them in a vector database. Whenever a question arrives, it is also converted into vectors. Semantic search means searching for words that are the same in meaning, in the context of meaning. So when I take a query and match it against my database for entries with the same meaning as the query, that is called semantic search.
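The idea above can be sketched in a few lines. This is a toy illustration, not a real system: the vectors are hand-made, whereas a real pipeline would get them from an embedding model, and the "database" is just a Python dict.

```python
import numpy as np

# Toy "vector database": each document is stored alongside a vector.
# The vectors here are invented for illustration; a real system would
# produce them with an embedding model.
documents = {
    "The capital of France is Paris.":       np.array([0.9, 0.1, 0.0]),
    "Stemming reduces words to their root.": np.array([0.1, 0.9, 0.2]),
    "Vectors store the meaning of text.":    np.array([0.2, 0.8, 0.5]),
}

def cosine_similarity(a, b):
    """Similarity of direction between two vectors, ignoring their length."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query_vector):
    """Return the document whose vector is closest in meaning to the query."""
    return max(documents, key=lambda doc: cosine_similarity(documents[doc], query_vector))

# A query about lemmatization would be embedded near the stemming document,
# even though the two texts share no exact words.
query = np.array([0.1, 0.95, 0.25])
print(semantic_search(query))
```

Notice that the match is found by comparing meanings (vector directions), not by matching literal words, which is exactly the difference between semantic search and old keyword matching.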

The same has been done in Irfan ChatGPT: the data given by the team, together with data gathered through brainstorming and time spent with Sir Irfan, was stored in a model. When a query is asked of Irfan ChatGPT, it is semantically matched, and the results matching the query come out as the output.

The question arises: where is ChatGPT in all this? After the query has been matched with similar data in the database, it is ChatGPT that arranges these matching results into meaningful sentences.

Now we continue with the demo of Irfan ChatGPT.










A chat with ChatGPT can be explained with the help of an example. Suppose you are sitting with an old man who has a lot of knowledge and wisdom. Because of his wisdom and knowledge, you respect him and hesitate to talk to him freely. So while asking him a question, you break your sentences into small chunks and ask your questions slowly. ChatGPT is that old man.

Simulate Persona

Suppose you ask a question like: what degree should I pursue, what career should I choose, or even, what should I do with my life?

Without context and with a poor prompt, it would be difficult to answer and ChatGPT would fail. Instead, we can build context by answering questions like:

Where do I live?

What are my interests?

What degree have I completed?

What is my financial position?

And so on…

After getting answers to these questions, I would have the context in mind to suggest a career choice to you.

Similarly, in the case of abdominal pain, instead of directly telling the doctor that you have pain in your abdomen, if you elaborate on your age, previous history, etc., it becomes easier for the doctor to deal with your abdominal pain.

So the first part of the prompt is to create the context (Who am I?); this is called Simulate Persona. The other parts of a good prompt are:

Task

Steps to complete the prompt

Constraints

Goal

Format Output

Number of paragraphs, image, or text.
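Putting these components together, a prompt might be structured like the sketch below. The persona, task, constraints, goal, and output format are all illustrative examples, not a fixed template.

```python
# A sketch of a prompt built from the components above: Simulate Persona,
# Task, Steps, Constraints, Goal, and Format Output. All details are invented.
prompt = """
Simulate Persona: I am a 22-year-old BSc Computer Science graduate
living in Lahore, interested in data science, with a modest budget.

Task: Suggest a career path for me.
Steps: First list suitable fields, then compare them, then recommend one.
Constraints: Only consider options that do not need another full-time degree.
Goal: A clear, actionable career recommendation.
Format Output: Two short paragraphs, text only.
""".strip()

print(prompt)
```

A prompt assembled this way gives ChatGPT the same context a human advisor would need before answering "what career should I choose?".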



An easy approach: when you are raising a query to ChatGPT, imagine you are talking with a human. Giving more than the required, or less than the required, information will create confusion.

If you learn the art of selecting words and putting them in the right sequence, you will be successful in every field of life. The same goes for ChatGPT: if you select the proper words in the proper sequence, you will consume fewer tokens.

You pay according to the accumulated tokens of input and output.
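The billing arithmetic is simple: count input tokens and output tokens separately, and multiply each by its per-token rate. The prices below are made-up placeholders, not real OpenAI rates; always check the official pricing page.

```python
# Illustrative cost calculation. The per-1,000-token prices are invented
# for this example and do NOT reflect real OpenAI pricing.
input_tokens = 1200    # tokens in your prompt
output_tokens = 800    # tokens in the model's reply

price_per_1k_input = 0.0005    # assumed USD per 1,000 input tokens
price_per_1k_output = 0.0015   # assumed USD per 1,000 output tokens

cost = (input_tokens / 1000) * price_per_1k_input \
     + (output_tokens / 1000) * price_per_1k_output
print(f"Estimated cost: ${cost:.4f}")
```

This is why careful word choice matters: a shorter, better-sequenced prompt directly lowers the input side of this bill.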

PLAYGROUND

Playground AI is a web app that allows users to create AI art and share it with the community. It offers free DALL-E 2 image generation, automated art-style prompts, free image upscaling, image saving in the cloud, and a social feed for AI-generated images.

Let us explain it with an example. You instruct your child, servant, or junior to be careful while conversing with you. We often correct them and advise them not to repeat a mistake.

The Playground is free.







Frequency penalty works by lowering the chances of a word being selected again, the more times that word has already been used. Presence penalty does not consider how frequently a word has been used, only whether the word already exists in the text.

The OpenAI frequency penalty setting adjusts how much the frequency of tokens already generated influences the output of the model.

Frequency_penalty and presence_penalty are two parameters that can be used when generating text with language models, such as GPT-3.

·        Frequency_penalty: This parameter discourages the model from repeating the same words or phrases too frequently within the generated text. It is a value that is subtracted from the log-probability of a token in proportion to how many times that token has already occurred in the generated text. A higher frequency_penalty value will result in the model being more conservative in its use of repeated tokens.

·        Presence_penalty: This parameter encourages the model to include a diverse range of tokens in the generated text. It is a value that is subtracted from the log-probability of any token that has already appeared at least once, regardless of how often. A higher presence_penalty value will result in the model being more likely to generate tokens that have not yet been included in the generated text.

Both of these parameters can be adjusted to influence the overall quality and diversity of the generated text. The optimal values for these parameters may vary depending on the specific use case and desired output.
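The two penalties can be written as a tiny formula. The sketch below follows the general form described in the OpenAI API documentation (penalty subtracted from a token's logit); the numbers are illustrative only.

```python
# Sketch of how frequency and presence penalties adjust a token's logit.
# Both penalties are SUBTRACTED, so higher values make repetition less likely.
def penalized_logit(logit, count, frequency_penalty=0.0, presence_penalty=0.0):
    """count = how many times this token already appeared in the output so far."""
    return (logit
            - count * frequency_penalty                        # grows with every repeat
            - (1.0 if count > 0 else 0.0) * presence_penalty)  # flat hit if present at all

# A token seen 3 times is pushed down more by frequency_penalty (2.0 - 3*0.5) ...
print(penalized_logit(2.0, count=3, frequency_penalty=0.5))
# ... while presence_penalty applies the same hit whether seen once or many times.
print(penalized_logit(2.0, count=1, presence_penalty=0.6))
```

This makes the difference concrete: frequency_penalty scales with the repeat count, while presence_penalty is a one-time deduction for any token that has appeared at all.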

Temperature controls randomness, so a low temperature is less random (deterministic), while a high temperature is more random.

More technically, a low temperature makes the model more confident in its top choices, while temperatures greater than 1 decrease that confidence. An even higher temperature corresponds to more uniform sampling (total randomness). A temperature of 0 is equivalent to argmax/maximum likelihood: always picking the highest-probability token.
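This behaviour comes from dividing the logits by the temperature before the softmax. A minimal sketch, with invented logit values:

```python
import math

# Temperature rescales logits before softmax: low temperature sharpens the
# distribution toward the top token; high temperature flattens it toward uniform.
def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # illustrative raw scores for 3 tokens
cold = softmax_with_temperature(logits, 0.2) # near-deterministic: top token dominates
hot = softmax_with_temperature(logits, 5.0)  # near-uniform: choices almost equal
print(cold)
print(hot)
```

Running this, the low-temperature distribution puts almost all the probability on the first token, while the high-temperature one spreads it nearly evenly, which is exactly the "confident vs. random" trade-off described above.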

top_p computes the cumulative probability distribution and cuts off as soon as that distribution exceeds the value of top_p. For example, a top_p of 0.3 means that only the tokens comprising the top 30% of probability mass are considered.
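The cutoff can be sketched as follows; the token probabilities are invented for illustration:

```python
# Sketch of nucleus (top_p) filtering: walk tokens from most to least probable,
# keeping them until their cumulative probability reaches top_p. Sampling then
# happens only among the kept tokens.
def top_p_filter(token_probs, top_p):
    """token_probs: dict of token -> probability. Returns the kept tokens."""
    kept, cumulative = [], 0.0
    for token, prob in sorted(token_probs.items(), key=lambda kv: -kv[1]):
        kept.append(token)
        cumulative += prob
        if cumulative >= top_p:      # stop once the probability-mass threshold is met
            break
    return kept

probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "ant": 0.05}
print(top_p_filter(probs, 0.3))     # "cat" alone already covers the top 30%
print(top_p_filter(probs, 0.75))    # "cat" and "dog" together cover 75%+
```

With a low top_p only the single most likely token survives, so the output is highly deterministic; raising top_p admits more tokens and therefore more variety.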

