Download Oreilly – Quick Guide to ChatGPT, Embeddings, and Other Large Language Models (LLMs) 2023-8

Quick Guide to ChatGPT Embeddings and Other Large Language Models (LLMs)

Description

Course: Quick Guide to ChatGPT, Embeddings, and Other Large Language Models (LLMs). This training course provides 9 hours of video training on how to use and set up Large Language Models (LLMs) such as GPT, T5, and BERT in real projects. With a step-by-step approach and real-world examples, the course teaches you how to build and run large language models so the concepts are easier to understand. Using BERT Siamese architectures, an information retrieval system built on OpenAI and GPT-3 embeddings, and an image description system built with a vision transformer and GPT-J, the topics covered in this course are:

  • Building recommendation engines
  • Setting up information retrieval systems
  • Building an image description system

The course also fills a gap in the field by providing clear guidelines and best practices for using large language models, making it a valuable resource for anyone looking to use LLMs in their projects.
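The information retrieval use case mentioned above comes down to embedding documents and queries, then ranking documents by similarity to the query. Here is a minimal sketch of that idea, assuming the openai Python package (v1+) and an OPENAI_API_KEY in the environment; the model name and documents are illustrative placeholders, not material from the course.

```python
# Minimal sketch of embedding-based retrieval for question answering.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable;
# model name and documents are placeholders, not taken from the course.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "The T5 model frames every NLP task as text-to-text.",
    "BERT produces contextual embeddings via bidirectional attention.",
    "GPT-3 is an autoregressive decoder-only language model.",
]

def embed(texts):
    """Return one embedding vector per input string."""
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(documents)

def search(query, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = embed([query])[0]
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [(documents[i], float(sims[i])) for i in np.argsort(-sims)[:k]]

print(search("Which model is decoder-only?"))
```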

What you will learn:

  • Launching an application using proprietary models, for example an information retrieval system for question answering built with OpenAI and GPT-3 embeddings.
  • Improving GPT-3's performance with custom examples via its API.
  • Learning the basics of prompt engineering with GPT-3 to achieve more nuanced results, by building a chatbot that adapts its speaking style to its audience on top of the information retrieval system (see the sketch after this list).
  • Deploying customized LLMs to the cloud.
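As a taste of the prompt engineering mentioned above, the sketch below shows how a system prompt can set a chatbot's speaking style while a retrieved passage is stitched into the user prompt. It assumes the openai Python package (v1+); the model name, style string, and retrieval step are illustrative placeholders rather than the course's own code.

```python
# Hedged sketch: a chatbot whose system prompt sets a speaking style and whose
# user prompt is augmented with a retrieved passage. Model name and context
# are illustrative placeholders, not from the course materials.
from openai import OpenAI

client = OpenAI()

def answer(question, retrieved_context, style="plain English for beginners"):
    """Answer a question in a given style, grounded in the retrieved context."""
    messages = [
        {"role": "system",
         "content": f"You are a helpful assistant. Answer in {style}. "
                    "Use only the provided context; otherwise say you don't know."},
        {"role": "user",
         "content": f"Context:\n{retrieved_context}\n\nQuestion: {question}"},
    ]
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

context = "GPT-J is an open-source 6B-parameter autoregressive language model."
print(answer("What is GPT-J?", context, style="the tone of a friendly tutor"))
```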

This course is suitable for people who:

  • Machine learning engineers with experience in machine learning, neural networks, and natural language processing (NLP).
  • Developers, data scientists, and engineers interested in using Large Language Models (LLM) in their projects.
  • People who want to get the best output from GPT-3 or ChatGPT models.
  • Those interested in advanced NLP architectures
  • Those interested in generating text with LLMs and improving their performance
  • People who are comfortable with libraries like TensorFlow or PyTorch.
  • People who are comfortable with linear algebra and vector/matrix operations.

Course details

  • Publisher: Oreilly
  • Instructor: Sinan Ozdemir
  • Training level: beginner to advanced
  • Training duration: 2 hours 43 minutes

Course outline as of 9/2023

  • Introduction
    1. Quick Guide to Large Language Models: Introduction
  • Module 1: Introduction to Large Language Models
    1. Module Introduction
  • Lesson 1: Overview of Large Language Models
    1. Topics
    2. 1.1 What Are Language Models?
    3. 1.2 Popular Modern LLMs
    4. 1.3 Applications of LLMs
  • Lesson 2: Semantic Search with LLMs
    1. Topics
    2. 2.1 Introduction to Semantic Search
    3. 2.2 Building a Semantic Search System
    4. 2.3 Optimizing Semantic Search with Cross-Encoders and Fine-Tuning
  • Lesson 3: First Steps with Prompt Engineering
    1. Topics
    2. 3.1 Introduction to Prompt Engineering
    3. 3.2 Working with Prompts Across Models
    4. 3.3 Building a Retrieval-Augmented Generation Bot with ChatGPT and GPT-4
  • Module 2: Getting the Most Out of LLMs
    1. Module Introduction
  • Lesson 4: Optimizing LLMs with Fine-Tuning
    1. Topics
    2. 4.1 Transfer Learning—A Primer
    3. 4.2 The OpenAI Fine-Tuning API
    4. 4.3 Case Study: Predicting with Amazon Reviews–Part 1
    5. 4.4 Case Study: Predicting with Amazon Reviews–Part 2
  • Lesson 5: Advanced Prompt Engineering
    1. Topics
    2. 5.1 Input/Output Validation
    3. 5.2 Batch Prompting + Prompt Chaining
    4. 5.3 Chain-of-Thought Prompting
    5. 5.4 Preventing Prompt Injection Attacks
    6. 5.5 Assessing an LLM’s Encoded Knowledge Level
  • Lesson 6: Customizing Embeddings + Model Architectures
    1. Topics
    2. 6.1 Case Study—Building an Anime Recommendation System
    3. 6.2 Using OpenAI’s Embedding Models
    4. 6.3 Fine-Tuning an Embedding Model to Capture User Behavior
  • Module 3: Advanced LLM Usage
    1. Module Introduction
  • Lesson 7: Moving Beyond Foundation Models
    1. Topics
    2. 7.1 The Vision Transformer
    3. 7.2 Using Cross Attention to Mix Data Modalities
    4. 7.3 Case Study—Visual QA: Setting Up Our Model
    5. 7.4 Case Study—Visual QA: Setting Up Our Parameters and Data
    6. 7.5 Introduction to Reinforcement Learning from Feedback
    7. 7.6 Aligning FLAN-T5 with Reinforcement Learning from Feedback
  • Lesson 8: Advanced Open-source LLM Fine-Tuning
    1. Topics
    2. 8.1 BERT for Multi-label Classification–Part 1
    3. 8.2 BERT for Multi-label Classification–Part 2
    4. 8.3 Writing LaTeX with GPT-2
    5. 8.4 Case Study: Sinan’s Attempt at Wise Yet Engaging Responses – SAWYER
    6. 8.5 Instruction Alignment of LLMs: Supervised Fine-Tuning
    7. 8.6 Instruction Alignment of LLMs: Reward Modeling
    8. 8.7 Instruction Alignment of LLMs: RLHF
    9. 8.8 Instruction Alignment of LLMs: Using Our Instruction Aligned LLM
  • Lesson 9: Moving LLMs into Production
    1. Topics
    2. 9.1 Cost Projecting and Deploying LLMs to Production
    3. 9.2 Knowledge Distillation
  • Summary
    1. Quick Guide to Large Language Models: Summary

Course prerequisites

  • Python 3 proficiency with some experience working in interactive Python environments including Notebooks (Jupyter/Google Colab/Kaggle Kernels)
  • Comfortable using the Pandas library and either TensorFlow or PyTorch
  • Understanding of ML/deep learning fundamentals including train/test splits, loss/cost functions, and gradient descent

Installation guide

After extracting, watch the files with your favorite player.

Subtitle: None

Quality: 720p

Download link

Download file – 721 MB

File(s) password: www.downloadly.ir

Size

721 MB
