Prompt engineering-LLMs

The world is going through a revolution in art (DALL-E, MidJourney, Imagine, etc.), science (AlphaFold), medicine, and other key areas, and large language models (LLMs) are playing a role in this revolution. However, LLMs are complex and demand considerable cost and skill to train and to produce accurate results. Only big companies are able to train them with billions of parameters, because only they can afford the resources needed. Given the impact LLMs have already shown on society and business in general, knowing how to use these giant AI models for a range of business and social problems is vital.

It is predicted that more organizations will start centering their businesses on LLMs and similar products such as DALL-E 2, MidJourney, and Bloom. Demand for people specialized in prompt engineering will therefore grow fast. It is time to start getting familiar with these technologies.

Concepts to understand when using LLMs

Word Embedding

In the word-embedding approach, individual words are represented as real-valued vectors in a predefined vector space. Word embeddings provide an efficient, dense representation in which similar words have a similar encoding. The vectors are not computed manually; their weights are learned by the model during training, in the same way a model learns the weights of a dense layer.
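A minimal sketch of the idea in NumPy (the vocabulary, dimensions, and vector values below are made up for illustration; real embedding weights are learned, not set by hand):

    import numpy as np

    # Toy embedding matrix: one 4-dimensional vector per vocabulary word.
    # In a real model these weights are learned during training.
    vocab = {"king": 0, "queen": 1, "apple": 2}
    embeddings = np.array([
        [0.9, 0.8, 0.1, 0.3],   # "king"
        [0.8, 0.9, 0.1, 0.4],   # "queen"
        [0.1, 0.2, 0.9, 0.7],   # "apple"
    ])

    def cosine_similarity(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    king, queen, apple = (embeddings[vocab[w]] for w in ("king", "queen", "apple"))
    print(cosine_similarity(king, queen))   # high: similar words, similar encoding
    print(cosine_similarity(king, apple))   # lower: dissimilar words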

Attention Mechanism

Attention is a powerful mechanism developed to enhance the performance of the encoder-decoder architecture on neural network-based machine translation tasks. In the Transformer, the attention module repeats its computations multiple times in parallel; each of these is called an “attention head.” The attention module splits its query, key, and value parameters N ways and passes each split independently through a separate head. The mechanism provides context for any position in the input sequence: if the input is a natural-language sentence, the Transformer does not have to process it one word at a time. This allows for more parallelization than RNNs and therefore reduces training times.
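A rough NumPy sketch of that N-way split (the shapes and the random inputs are illustrative only; in a real model q, k, and v come from learned projections):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    seq_len, d_model, n_heads = 5, 16, 4
    d_head = d_model // n_heads                 # each head works on a slice

    rng = np.random.default_rng(0)
    q = rng.normal(size=(seq_len, d_model))     # stand-ins for the projected
    k = rng.normal(size=(seq_len, d_model))     # query, key, and value matrices
    v = rng.normal(size=(seq_len, d_model))

    # Split q, k, v N ways: (seq_len, n_heads, d_head) -> (n_heads, seq_len, d_head)
    split = lambda m: m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    qh, kh, vh = split(q), split(k), split(v)

    # Each head attends independently, and all heads run in parallel.
    scores = softmax(qh @ kh.transpose(0, 2, 1) / np.sqrt(d_head))
    heads = scores @ vh                         # (n_heads, seq_len, d_head)

    # Concatenate the heads back into a single (seq_len, d_model) output.
    out = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    print(out.shape)                            # (5, 16)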

Self-attention Algorithm

Transformer models use a different strategy to memorize the whole sequence: self-attention! In the self-attention layer, an input x (represented as a vector) is turned into a vector z via three representational vectors of the input: queries, keys, and values. The self-attention mechanism allows the inputs to interact with each other (“self”) and find out whom they should pay more attention to (“attention”). The outputs are aggregates of these interactions, weighted by the attention scores.
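A minimal single-head version of that computation (the weight matrices here are random stand-ins for learned parameters):

    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(42)
    seq_len, d = 3, 8
    x = rng.normal(size=(seq_len, d))     # input vectors, one per token

    # Three learned projections turn x into queries, keys, and values.
    W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
    q, k, v = x @ W_q, x @ W_k, x @ W_v

    # Attention scores: how much each token should attend to every other token.
    scores = softmax(q @ k.T / np.sqrt(d))

    # z aggregates the value vectors, weighted by the attention scores.
    z = scores @ v
    print(scores.shape, z.shape)          # (3, 3) (3, 8)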

Transformer Algorithms

A transformer is a deep learning model that relies on self-attention to weigh the significance of each part of its input data. Transformers are primarily applied in computer vision and NLP, and are also used in machine translation, conversational chatbots, and search engines. They quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text, and they led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates coherent new text to an unsettling degree.

Named entity recognition using the Cohere API

The preprocessing and model training can be found in the following folder: Notebook

  • How to connect to the Cohere API:

      import cohere

      co = cohere.Client(api_key)  # api_key is your Cohere API key

News scoring using the Cohere API

The preprocessing and model training can be found in the following folder: Notebook

      response = co.classify(
          model='medium',
          inputs=[text],        # the string to be classified
          examples=examples     # a few labeled examples - the training set
      )
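Putting the pieces together, a full call might look like the sketch below. The labels, the example texts, and the Example import path are assumptions based on the early Cohere Python SDK, not the repo's exact code:

    import cohere
    from cohere.classify import Example  # import path in early Cohere SDK versions (assumption)

    co = cohere.Client("YOUR_API_KEY")

    # A few labeled examples act as the in-context "training set".
    examples = [
        Example("Stocks rallied after the earnings report", "business"),
        Example("The striker scored twice in the final", "sport"),
        Example("New vaccine shows promise in early trials", "health"),
    ]

    text = "Central bank raises interest rates again"
    response = co.classify(model='medium', inputs=[text], examples=examples)
    print(response.classifications[0].prediction)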

Streamlit Dashboard

Frontend

Flask API backend
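A minimal sketch of how the Streamlit frontend and the Flask backend could fit together; the route, port, and payload shape are assumptions for illustration, not the repo's actual code:

    # backend.py - Flask API wrapping the classifier
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/classify", methods=["POST"])
    def classify():
        text = request.get_json()["text"]
        # ... call the Cohere classifier here and return its prediction ...
        return jsonify({"text": text, "label": "placeholder"})

    if __name__ == "__main__":
        app.run(port=5000)

    # app.py - Streamlit frontend calling the Flask API
    import requests
    import streamlit as st

    st.title("News scoring dashboard")
    text = st.text_area("Paste a news item")
    if st.button("Classify") and text:
        resp = requests.post("http://localhost:5000/classify", json={"text": text})
        st.write(resp.json())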

Author

@Niyomukiza