Prompt engineering and working with large language models (LLMs) have emerged as some of the most valuable and versatile skills you can add to your toolkit.
With AI advancing at an incredible pace and showing no signs of slowing down, it's clear that those who know how to work effectively with large language models have a serious edge in their careers.
Master the art and science of prompt engineering and harness large language models to build powerful AI applications.
In this Prompt Engineering track, you'll learn essential skills, tools, and techniques for working with and building LLM-enabled applications.
We'll start with the basics of generative AI & LLMs, discuss key concepts in prompt engineering, and build real-world applications using GPT-3, GPT-4, Whisper, LangChain, Pinecone, ChatGPT Plugins, and more.
Project-based learning
A few of the projects we build in the track include:
- Summarizing YouTube videos with OpenAI's Whisper & GPT-3 (a quick code sketch follows this list)
- Building a PDF to Q&A Assistant Using Embeddings & GPT-3
- Building a GPT-3 Enabled Research Assistant with LangChain & Pinecone
- GPT-4 & Pinecone: Turning Any Website into an AI Assistant
- Building an AI News Assistant ChatGPT Plugin
- Building a Stock Screener ChatGPT Plugin
- AutoGPT & LangChain: Building an Automated Research Assistant
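To give a concrete taste of the first project above, here's a minimal sketch of the Whisper-to-GPT workflow: transcribe an audio file, then ask GPT-3 to summarize the transcript. It assumes the pre-1.0 `openai` Python package, a placeholder API key, and an already-downloaded audio file (`video_audio.mp3` is just an illustrative name); the full tutorial covers pulling the audio from YouTube and handling long transcripts.

```python
# Minimal sketch (assumes openai < 1.0 and an already-downloaded audio file)
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Step 1: transcribe the audio with Whisper
with open("video_audio.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

# Step 2: summarize the transcript with a GPT-3 completion
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Summarize the following video transcript in a few bullet points:\n\n"
        + transcript["text"]
    ),
    max_tokens=300,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```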
Throughout this Prompt Engineering Track, you'll explore various aspects of working with large language models, including:
- Working with popular foundational models such as GPT-3, GPT-4, and ChatGPT
- Using LangChain, a framework for developing applications powered by language models
- Working with external documents and long-term memory using vector databases like Pinecone for augmented queries & semantic search (see the sketch after this list)
- Experimenting with autonomous agents like AutoGPT
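To preview how the vector-database pieces fit together, here's a rough sketch of the embed-and-retrieve pattern behind augmented queries and semantic search. It assumes the pre-1.0 `openai` package, the v2 `pinecone-client`, and an existing 1536-dimension Pinecone index named `docs` (the index name, environment, and keys are placeholders); the tutorials walk through index creation, document chunking, and prompt construction in detail.

```python
# Rough sketch of embedding documents and querying them
# (assumed setup: openai < 1.0, pinecone-client v2, existing 1536-dim index "docs")
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east-1-aws")
index = pinecone.Index("docs")

def embed(text: str) -> list[float]:
    # text-embedding-ada-002 returns 1536-dimensional vectors
    result = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return result["data"][0]["embedding"]

# Index a few document chunks
chunks = [
    "LLMs are trained on large text corpora.",
    "Pinecone stores and searches vector embeddings.",
]
index.upsert([(f"chunk-{i}", embed(c), {"text": c}) for i, c in enumerate(chunks)])

# Retrieve the most relevant chunk, then use it to augment the prompt
question = "What does Pinecone do?"
matches = index.query(vector=embed(question), top_k=1, include_metadata=True)
context = matches["matches"][0]["metadata"]["text"]

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Answer using this context:\n{context}\n\nQuestion: {question}",
    }],
)
print(answer["choices"][0]["message"]["content"])
```

This retrieve-then-prompt pattern is the backbone of the document assistants, research assistants, and website Q&A projects later in the track.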
By the end of this track, you'll have a portfolio of notebooks, starter applications, and ChatGPT Plugins that you can use to launch or advance your career in AI.
This track is divided into eight sections that will guide you through beginner and advanced topics, including:
- Introduction to Large Language Models and Generative AI
- Introduction to Prompt Engineering
- GPT-3 Fine-Tuning & Embeddings
- Building Applications with GPT-3 & Whisper
- Building Applications with ChatGPT & GPT-4
- Building Applications with LangChain & Pinecone
- Building ChatGPT Plugins
- Exploring Autonomous Agents like AutoGPT
While this isn't a "course" in the traditional sense, each article and video tutorial in the track is meant to be a standalone guide to help you build essential skills and master this rapidly advancing field.
Note that our premium tutorials assume you have some familiarity with Python, although we do provide all the necessary code and have plenty of free tutorials to help you get started.
Whether you're looking to build AI assistants to augment your existing work, launch a new LLM-enabled app, or just stay up-to-date with the latest advances in AI, we hope these guides help you in your journey to becoming a highly-skilled AI practitioner.
Section 1: Introduction to LLMs & Generative AI
- What is a Large Language Model (LLM)?
- What is Generative AI? Key Concepts & Use Cases
- What is ChatGPT? Key Concepts & Use Cases
Section 2: Introduction to Prompt Engineering
- Introduction to Prompt Engineering: Key Concepts & Use Cases
- Prompt Engineering: Improving Responses & Reliability
- Prompt Engineering: Advanced Techniques
Section 3: GPT-3 Fine-Tuning & Embeddings
- GPT-3 Fine Tuning: Key Concepts & Use Cases
- Making Recommendations Using GPT-3 & Embeddings
- Building a Custom GPT-3 Q&A Bot Using Embeddings
Section 4: Building Applications with GPT-3
- OpenAI Whisper & GPT-3: Summarizing YouTube Videos
- MLQ Academy: Building a YouTube Video Assistant Using GPT-3 & Whisper
- MLQ Academy: Create a Custom Q&A Bot with GPT-3 & Embeddings
- MLQ Academy: PDF to Q&A Assistant Using Embeddings & GPT-3
- MLQ Academy: Building a GPT-3 Enabled App with Streamlit
- MLQ Academy: Building an Earnings Call Assistant
Section 5: Building Applications with ChatGPT & GPT-4
- ChatGPT API: Getting Started with OpenAI’s New Model
- GPT-4 & Pinecone: Turning Any Website into an AI Assistant
- GPT-4 & Pinecone - Ask Questions About Any Website: MLQ Academy
- GPT-4 & Streamlit - Building a Frontend for an AI/ML Tutor: MLQ Academy
Section 6: Developing Applications with LangChain & Pinecone
- Prompt Engineering: Getting Started with LangChain
- Getting Started with LangChain: MLQ Academy
- Building a GPT-3 Enabled Document Assistant with LangChain
- Building a GPT-3 Enabled Research Assistant with LangChain & Pinecone
- Building a GPT-3 Enabled Document Assistant with LangChain: MLQ Academy
- Building an AI Research Assistant with LangChain & Pinecone: MLQ Academy
Section 7: Developing ChatGPT Plugins
- ChatGPT Plugins: Key Concepts & Use Cases
- ChatGPT Plugins: How to Build a To-Do List Plugin
- MLQ Academy: Building a To-Do List Plugin
- ChatGPT Plugins: Building an AI News Assistant
- MLQ Academy: AI News Assistant Plugin
- ChatGPT Plugins: Building a Stock Screener Assistant
- MLQ Academy: Stock Screener ChatGPT Plugin
- ChatGPT Plugins: Building an Educational AI/ML Tutor
- MLQ Academy: Building an AI/ML Tutor
Section 8: Exploring Autonomous Systems
- Getting Started with Auto-GPT: an Autonomous GPT-4 Experiment
- MLQ Academy: Getting Started with AutoGPT
- Adding Long Term Memory to AutoGPT with Pinecone
- AutoGPT & LangChain: Building an Automated Research Assistant
- AutoGPT & LangChain: Building an Automated Research Assistant - MLQ Academy
That's it for now! We'll continue to add more guides to the prompt engineering track to help you stay up-to-date with this rapidly advancing field.