Teach Your Kids Generative AI with GPT and Large Language Models

ChatGPT is a game changer. Give your kids the competitive edge by harnessing this transformational technology!

Jumpstart their journey in AI and watch them create Python apps powered by large language models in this hands-on course.

Course designed by Google and OpenAI engineers.

Now enrolling, limited seats available.

Accredited
Earn high school credits
Earn full course credit for an AI course on your transcript.
Hands-on
Build real AI apps
Learn with a live instructor and create real-world applications.
Connected
Like-minded peers
Make lasting connections when you learn in a live cohort.
Sample Applications

Grades: 8-12


See Our Students' Work


Picasso didn't learn by watching lectures. Programming skills are no different.

Active programming is the best way to learn computer science. In this course, students learn concepts by working on curated projects, and get active support from our expert teachers. These are some sample applications that students built within a few weeks using Python and GPT.

Students will learn the skills to build complex standalone projects or to enhance existing projects with powerful AI capabilities.

Quotes from Students and Parents

We couldn't have said it better.

Introduction to Generative AI

  Grades 9-12
  Credits awarded on transcript
  Prerequisite: Python completed with a B- or better
  UC A-G approved for [C] Mathematics credits

  Self-Paced
    Self-paced, instructor-guided
    Personalized 1:1 support
    Office hours: 1 hour per week
    $1,669 per student, per semester

  School-Year Cohort
    90 minutes per class
    4-8 students per class
    Twice per week over 36 weeks
    $1,669 per student, per semester

  Summer Cohort
    2 hours per day
    8-10 students per class
    4 days per week, for 2, 4, or 6 weeks
    $649 per student, per week

This is a comprehensive course designed to teach high school students how to build real-world applications in Python using Generative Pre-trained Transformers (GPT), Large Language Models (LLMs), and other generative AI models. Students will learn prompt engineering and model fine-tuning, and will build several applications during the course. The skills they learn will enable them to build complex projects for high school science fairs, hackathons, and other competitions.

Students must have taken an introductory course in Python programming, such as 2Sigma School's Introduction to Computer Science, before enrolling in this course. Prior experience with AI is not required.

Large Language Models (LLMs) have revolutionized the field of Natural Language Processing (NLP) and ChatGPT is just the tip of the iceberg. To leverage the full potential of LLMs, students need to understand how these models work and how to get the best out of them.

Large Language Models are not magic, and they are not deterministic. Unlike with traditional computer programs, how best to use these models is not obvious, even to their designers. Prompting techniques are ways of guiding a model to produce the desired output, and they remain a cutting-edge, still-evolving field of study.

In this course, students will start by learning to program in Jupyter Notebooks, an essential tool for data scientists and machine learning engineers. They will receive access to the latest APIs from OpenAI and other leading generative AI platforms. They will learn how large language models work, how to use them, and how to get the best out of them. They will learn numerous prompting techniques used by professionals to guide a language model to produce the desired output, and will integrate the output of these models into their Python code to build their own generative AI applications, such as interactive chatbots with unique personalities.
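
To make this concrete, below is a minimal sketch of what calling a chat model from Python looks like, using the openai client library. The model name and the persona prompt are illustrative placeholders, and the exact client interface may differ between library versions.

```python
# A minimal sketch: one call to a chat model, with a custom personality.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You are a cheerful pirate who answers in one sentence."},
        {"role": "user", "content": "What is a large language model?"},
    ],
)
print(response.choices[0].message.content)
```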

After building a few generative AI applications using prompt engineering, students will learn about concepts such as tokenization, embeddings, K-means clustering, t-SNE visualization, and techniques to fine-tune their own large language models from Hugging Face.

By the end of the course, students will be fluent in prompt engineering, fine-tuning, and building applications for custom use cases using the latest large language models and generative AI.


   University of California A-G approved for [C] Mathematics credits.

Course Outline

  1. Python and Jupyter Notebooks Review
    Python's fundamental concepts are reviewed within Jupyter Notebooks, alongside a comparison with Google Colab. The unit reviews the Python basics required for this course, including strings, f-strings, and the formatting of extensive text blocks. It covers functions and core data structures such as lists and dictionaries, as well as more complex structures like a list of dictionaries and a dictionary of lists. Various methods for representing tabular data in Python are discussed, focusing on formats like JSON, Markdown, CSV, and HTML, along with their respective syntax (a short sketch appears after this outline). Students will learn to parse and display formatted data structures and to install required Python libraries.
  2. Prompting Basics
    Students are introduced to the fundamentals of prompting, focusing primarily on the distinction between base Large Language Models (LLMs) and instruction-tuned LLMs, and providing insight into how LLMs work and the supervised learning process behind them. We will explore the prompt-based AI process, highlighting the differences between models like GPT and ChatGPT. Students will learn about model hallucinations, their implications, and why they occur. Essential aspects of prompting are covered: using delimiters, structured input and output formats in LLM completions, and the methodology of teaching by example, known as few-shot prompting (a short sketch appears after this outline). The unit covers designing effective prompts to achieve desired outputs, with illustrative examples. Students will explore strategies for approaching mathematical questions with LLMs and understand why these models often struggle with mathematical concepts.
  3. Iterative Prompt Development
    Students learn to generate product descriptions from factual product sheets, focusing on refining details, adhering to word limits, aligning with the audience, emphasizing key points, and maintaining the appropriate tone. They learn the limitations of Large Language Models (LLMs) in tasks such as counting words in a sentence, along with strategies to extract and systematically organize technical information. The unit demonstrates how to present LLM outputs in structured formats like HTML, JSON, and Markdown, and explores the synergy between Python and LLMs, leveraging the strengths and mitigating the limitations of each. Practice exercises include generating a modified chocolate cake recipe for diabetic needs, creating a marketing description for a toy based on its specifications, and writing a succinct trailer for a Pixar movie from its plot summary, allowing students to apply iterative prompt development in diverse contexts.
  4. Summarizing
    Students will learn to concisely summarize product reviews while adhering to constraints on words, sentences, or characters. The unit explores techniques for focusing summaries on specific attributes like shipping, delivery, price, and value. It distinguishes between extracting specific information and creating a summary, offering insights into summarizing multiple product reviews in a structured manner. Sample exercises include writing programs that download news from websites and generate personalized summaries, and developing applications that search movie reviews from online databases to create personalized recommendations based on user preferences, reinforcing the summarization techniques through real-world implementations.
  5. Inferring
    Students learn techniques to extract and analyze sentiments from text, categorizing them as positive, negative, or neutral. Next, they extract more specific emotions and compare the process with traditional machine learning for this task. Students will learn to extract specific entities, such as product and company names, from product reviews and create strategies to infer multiple attributes simultaneously. Using these concepts, students will generate email subjects derived from product reviews, create news alerts tailored to track topics of interest, formulate explanations for emotion classifications attributed to renowned quotes, and more.
  6. Transforming
    Students will explore the intricacies of language translation and build a universal translator, learning to adeptly transform language, tone, and data formats such as HTML and JSON. The unit emphasizes accuracy in language use through spellcheck and grammar-check tools, and teaches rendering redlined text in Markdown to check compliance with the APA style guide. Hands-on work guides students to generate automated responses to customer emails, emphasizing the importance of reminding the model to use details from the customer's email, thus applying transformation techniques to create contextually apt, detail-oriented responses.
  7. Managing Conversations
    This unit focuses on comprehending the roles of multiple users in Large Language Model (LLM) chat completions and understanding the components of a stateful, conversational chatbot (the core pattern is sketched after this outline). It provides a detailed walkthrough on constructing a specialized 'Orderbot' that simulates a restaurant waiter, and guidance on developing a Graphical User Interface (GUI) for web applications, including creating a web-based chat application. Students will then extend the 'Orderbot' with additional features and create their own stateful chatbot as a web application, enabling them to apply the theoretical knowledge in real-world, interactive scenarios.
  8. Tokenization
    Tokenization demystifies how Large Language Models (LLMs) perceive and process text, elucidating the conversion of text to tokens and offering visualization techniques (a short sketch appears after this outline). It delves into the limitations of LLMs, explaining why they can't count words in a sentence and discussing the relevance and implications of token limits. It provides insights on determining token counts and compares using token limits versus prompts to regulate output. The unit also presents strategies to manage token limits effectively in extended conversations, giving students a comprehensive understanding of tokenization and its role in optimizing interactions with LLMs.
  9. Embeddings, t-SNE visualization, Clustering
    This unit introduces the intricacies of embeddings, t-SNE visualization, and clustering, beginning with what embeddings are and how to use them effectively. Students will use Pandas and GPT to analyze a substantial dataset of 10,000 tweets from a celebrity, generate embeddings for each tweet, and cluster them. They will visualize high-dimensional vector embeddings by converting them to 2D or 3D space with t-SNE, employing Python visualization libraries for richer representations. Students will learn the concept and application of K-means clustering to organize high-dimensional vectors and visualize the clusters in lower-dimensional spaces using t-SNE (the pipeline is sketched after this outline). Students will then summarize each cluster using GPT and determine the optimal number of clusters. Lastly, comparing word clouds and GPT explanations for each cluster will enable a multifaceted understanding of the topics covered.
  10. Image generation using DALL-E, Stable Diffusion
    Students explore the fascinating domain of generating images from textual prompts. They will use APIs for image generation (sketched after this outline) and methods for saving and displaying the created images. They will learn to write effective prompts specifically for image generation and gain an intuitive understanding of the mechanisms behind stable diffusion. Students will get hands-on experience with Stable Diffusion to generate images directly within a Jupyter Notebook environment, consolidating their knowledge and skills in image-generation techniques.
  11. Application development using LLMs
    In this unit, students are introduced to various strategies to identify and manage model errors effectively. They will learn methodologies like exponential backoff, retries, and batch processing to handle overload situations efficiently (a plain-Python sketch of backoff appears after this outline). Students will be introduced to LangChain as a crucial tool for developing applications with LLMs and will learn how to use LangChain Agents and Tools to construct a comprehensive application. The unit culminates in a capstone project in which students apply their acquired knowledge and skills to create a full application using LangChain, further reinforcing their understanding of application development with LLMs.
  12. Fine Tuning an LLM
    This advanced topic is designed as a springboard to a more advanced course on artificial intelligence. In this final unit, students will gain insight into fine-tuning, learning about its role and impact on model performance. They will delve into the differences and applications of prompt engineering versus fine-tuning and learn about the benefits of fine-tuning (a compressed sketch appears after this outline). The unit offers an overview of the fine-tuning process and discusses the generalization capabilities of instruction fine-tuning. It draws comparisons between fine-tuning and training a neural network, allowing for a deeper understanding of the underlying principles of model enhancement. Lastly, it covers crucial aspects of model evaluation and error analysis, equipping students with the knowledge to optimize model performance through fine-tuning effectively.
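
For a flavor of the material, the short sketches below illustrate a few of the techniques named in the outline. They are simplified illustrations rather than course materials; model names, dataset names, and helper functions in them are placeholders chosen for the example.

From Unit 1, the same small dataset rendered as JSON, CSV, and Markdown:

```python
# One dataset, three text formats.
import csv
import io
import json

students = [
    {"name": "Ada", "grade": 9},
    {"name": "Alan", "grade": 11},
]

# JSON preserves nesting and types.
print(json.dumps(students, indent=2))

# CSV gives flat rows for spreadsheets.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["name", "grade"])
writer.writeheader()
writer.writerows(students)
print(buffer.getvalue())

# Markdown gives a human-readable table for notebooks and prompts.
print("| name | grade |")
print("| --- | --- |")
for row in students:
    print(f"| {row['name']} | {row['grade']} |")
```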
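
From Unit 2, two prompting basics in one prompt: wrapping untrusted text in delimiters, and teaching by example (few-shot prompting). The string built below is what would be sent to the model as the user message.

```python
# Delimiters (<review> tags) separate instructions from untrusted text;
# the two worked examples make this a few-shot prompt.
review = "The battery died after two days. Very disappointed."

prompt = f"""Classify the sentiment of the review between <review> tags.

<review>I love this phone, the camera is amazing.</review>
Sentiment: positive

<review>It stopped working after a week.</review>
Sentiment: negative

<review>{review}</review>
Sentiment:"""

print(prompt)
```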
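
From Unit 7, the core pattern behind a stateful chatbot such as the 'Orderbot': the growing message history is resent on every turn, which is what gives the bot its memory. The client call mirrors the openai library, and the model name is illustrative.

```python
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system",
             "content": "You are OrderBot, a waiter at a pizza restaurant."}]

def chat(user_text: str) -> str:
    # Append the user's turn, send the whole history, remember the reply.
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(chat("Hi! What pizzas do you have?"))
print(chat("I'll take the first one."))  # the bot still has the context
```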
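
From Unit 8, a way to see the tokens a model actually processes, using the tiktoken library; cl100k_base is the encoding used by recent GPT models.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Large language models read tokens, not words."

tokens = enc.encode(text)
print(len(tokens), "tokens:", tokens)
print([enc.decode([t]) for t in tokens])  # the text piece behind each token
```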
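
From Unit 9, the clustering-and-visualization pipeline in miniature. Random vectors stand in for real tweet embeddings, which would normally come from an embeddings API.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 256))  # 200 "tweets", 256-dim vectors

# Group the high-dimensional vectors, then project them to 2D for plotting.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)
points = TSNE(n_components=2, random_state=0).fit_transform(embeddings)

plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE projection of clustered embeddings")
plt.show()
```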
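
From Unit 10, a sketch of generating and saving an image from a text prompt via the openai client's images endpoint. The model name and size are illustrative, and the interface may vary between library versions.

```python
import urllib.request

from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt="A watercolor robot painting a sunset",
    size="1024x1024",
    n=1,
)
urllib.request.urlretrieve(result.data[0].url, "robot.png")  # save locally
```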
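
From Unit 11, a plain-Python version of exponential backoff with jitter for riding out transient API errors; call_model below is a hypothetical stand-in for any client call.

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Wait 1s, 2s, 4s, ... plus jitter to avoid synchronized retries.
            time.sleep(base_delay * 2 ** attempt + random.random())

# Usage (call_model is hypothetical): with_backoff(lambda: call_model(prompt))
```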
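
From Unit 12, a compressed sketch of fine-tuning a small Hugging Face model for sentiment classification. The model and dataset names are illustrative, and a real run requires the transformers and datasets libraries plus substantial compute.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb")  # illustrative sentiment dataset
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1),
    train_dataset=dataset["train"].shuffle(seed=0).select(range(2000)),
)
trainer.train()  # updates the model's weights on the labeled examples
```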

Summer of Code
    see detailed summer schedule

To take any of our courses, students must be familiar with opening a browser, navigating to a website, and joining a Zoom meeting.

Students must have a quiet place to study and participate in the class for the duration of the class. Some students may prefer a headset to isolate any background noise and help them focus in class.

Most course lectures and content may be viewed on mobile devices but programming assignments and certain quizzes require a desktop or laptop computer.

Students are required to have their camera on at all times during the class, unless they have an explicit exception approved by their parent or legal guardian.

Our technology requirements are similar to those of most online classes.

A desktop or laptop computer running Windows (PC), Mac OS (Mac), or Chrome OS (Chromebook).
Students must be able to run a Zoom Client.
A working microphone, speaker, webcam, and an external mouse.
A high-speed internet connection with at least 15 Mbps download speed (check your Internet speed).

This course includes several timed tests in which you will be asked to complete a given number of questions within a 1-3 hour time limit. These tests are designed to keep you competitively prepared, and you can take them as often as you like. We do not proctor these exams, nor do we require that you install a special lockdown browser.

In today's environment, when students have access to multiple devices, most attempts to prevent cheating in online exams are symbolic. Our exams are meant to encourage you to learn and push yourself, using an honor system.

We do assign a grade at the end of the year based on a number of criteria, including class participation, completion of assignments, and performance on the tests. We do not reveal the exact formula, to minimize students' incentive to optimize for a higher grade.

We believe that your grade in the course should reflect how well you have learned the skills, and a couple of timed tests, while traditional, aren't the best way to evaluate your learning.