Description
Quick Guide to ChatGPT, Embeddings, and Other Large Language Models (LLMs) is a training course that provides 9 hours of video training on how to use and set up Large Language Models (LLMs) such as GPT, T5, and BERT in real projects. With a step-by-step approach and real-world examples, the course teaches you how to build and run large language models. Using BERT Siamese architectures, an information retrieval system built on OpenAI GPT-3 embeddings, and an image description system built with a vision transformer and GPT-J, the topics covered are:
- Building recommendation engines
- Setting up information retrieval systems
- Building an image description system
The course also fills a gap in the field by providing clear guidelines and best practices for using large language models, and it will be a valuable resource for anyone looking to use LLMs in their projects.
What you will learn:
- Launching an application with proprietary models, illustrated by a question-answering information retrieval system built with OpenAI GPT-3 embeddings
- Improving GPT-3's performance with custom fine-tuned models via its API
- Learning the basics of prompt engineering with GPT-3 to achieve more nuanced results, by building a chatbot that adapts its speaking style to its audience (using the information retrieval system)
- Deploying customized LLM models in the cloud
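As a taste of the embeddings-based retrieval topic listed above, here is a minimal sketch of similarity search over embedding vectors. The toy 2-D vectors below are hypothetical stand-ins for the high-dimensional embeddings you would normally fetch from OpenAI's embedding API; this is an illustrative sketch, not course code.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs):
    # Return the index of the document most similar to the query,
    # plus all similarity scores.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return int(np.argmax(scores)), scores

# Hypothetical toy "embeddings" for three documents and one query.
docs = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.7, 0.7]])
query = np.array([0.9, 0.1])

best, scores = retrieve(query, docs)
```

In a real system the vectors would come from an embedding model and be stored in a vector index; the ranking logic, however, is the same cosine-similarity comparison shown here.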
This course is suitable for people who:
- Machine learning engineers with experience in machine learning, neural networks, and natural language processing (NLP)
- Developers, data scientists, and engineers interested in using Large Language Models (LLM) in their projects.
- People who want to get the best output from GPT-3 or ChatGPT models.
- Those interested in advanced NLP architectures
- Those interested in building LLM models and improving their performance
- People who are comfortable with libraries like TensorFlow or PyTorch.
- People who are comfortable with linear algebra and vector/matrix operations.
Course details
- Publisher: O'Reilly
- Instructor: Sinan Ozdemir
- Training level: beginner to advanced
- Training duration: 2 hours 43 minutes
Course contents, as of 9/2023
- Introduction
- Quick Guide to Large Language Models: Introduction
- Module 1: Introduction to Large Language Models
- Module Introduction
- Lesson 1: Overview of Large Language Models
- Topics
- 1.1 What Are Language Models?
- 1.2 Popular Modern LLMs
- 1.3 Applications of LLMs
- Lesson 2: Semantic Search with LLMs
- Topics
- 2.1 Introduction to Semantic Search
- 2.2 Building a Semantic Search System
- 2.3 Optimizing Semantic Search with Cross-Encoders and Fine-Tuning
- Lesson 3: First Steps with Prompt Engineering
- Topics
- 3.1 Introduction to Prompt Engineering
- 3.2 Working with Prompts Across Models
- 3.3 Building a Retrieval-Augmented Generation Bot with ChatGPT and GPT-4
- Module 2: Getting the Most Out of LLMs
- Module Introduction
- Lesson 4: Optimizing LLMs with Fine-Tuning
- Topics
- 4.1 Transfer Learning—A Primer
- 4.2 The OpenAI Fine-Tuning API
- 4.3 Case Study: Predicting with Amazon Reviews–Part 1
- 4.4 Case Study: Predicting with Amazon Reviews–Part 2
- Lesson 5: Advanced Prompt Engineering
- Topics
- 5.1 Input/Output Validation
- 5.2 Batch Prompting + Prompt Chaining
- 5.3 Chain-of-Thought Prompting
- 5.4 Preventing Prompt Injection Attacks
- 5.5 Assessing an LLM’s Encoded Knowledge Level
- Lesson 6: Customizing Embeddings + Model Architectures
- Topics
- 6.1 Case Study—Building an Anime Recommendation System
- 6.2 Using OpenAI’s Embedding Models
- 6.3 Fine-Tuning an Embedding Model to Capture User Behavior
- Module 3: Advanced LLM Usage
- Module Introduction
- Lesson 7: Moving Beyond Foundation Models
- Topics
- 7.1 The Vision Transformer
- 7.2 Using Cross Attention to Mix Data Modalities
- 7.3 Case Study—Visual QA: Setting Up Our Model
- 7.4 Case Study—Visual QA: Setting Up Our Parameters and Data
- 7.5 Introduction to Reinforcement Learning from Feedback
- 7.6 Aligning FLAN-T5 with Reinforcement Learning from Feedback
- Lesson 8: Advanced Open-source LLM Fine-Tuning
- Topics
- 8.1 BERT for Multi-label Classification–Part 1
- 8.2 BERT for Multi-label Classification–Part 2
- 8.3 Writing LaTeX with GPT-2
- 8.4 Case Study: Sinan’s Attempt at Wise Yet Engaging Responses – SAWYER
- 8.5 Instruction Alignment of LLMs: Supervised Fine-Tuning
- 8.6 Instruction Alignment of LLMs: Reward Modeling
- 8.7 Instruction Alignment of LLMs: RLHF
- 8.8 Instruction Alignment of LLMs: Using Our Instruction Aligned LLM
- Lesson 9: Moving LLMs into Production
- Topics
- 9.1 Cost Projecting and Deploying LLMs to Production
- 9.2 Knowledge Distillation
- Summary
- Quick Guide to Large Language Models: Summary
Course prerequisites
- Python 3 proficiency with some experience working in interactive Python environments including Notebooks (Jupyter/Google Colab/Kaggle Kernels)
- Comfortable using the Pandas library and either TensorFlow or PyTorch
- Understanding of ML/deep learning fundamentals including train/test splits, loss/cost functions, and gradient descent
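The prerequisites above include gradient descent and loss functions; as a quick self-check, here is a minimal sketch (not course material) of gradient descent minimizing mean squared error for a one-parameter model `y = w * x`:

```python
# Toy data generated from y = 2 * x, so the optimal weight is w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0    # initial weight
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of MSE = (1/n) * sum((w*x - y)^2) with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient descent update
```

If the loop above converging to `w ≈ 2` feels obvious, you are comfortable enough with the fundamentals this course assumes.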
Pictures of the Quick Guide to ChatGPT, Embeddings, and Other Large Language Models (LLMs) course
Sample video of the course
Installation guide
After extracting, watch with your preferred player.
Subtitle: None
Quality: 720p
Download link
File(s) password: www.downloadly.ir
Size
721 MB