Stanford CS236: Deep Generative Models I 2023 I Lecture 2 - Background
06 May 2024

Generative vs. Discriminative Models
- Generative models aim to learn the joint distribution of the input data, P(X, Y), while discriminative models aim to predict the output variable given the input data, P(Y|X).
- Generative models can be used for tasks such as sampling new data, density estimation, anomaly detection, and unsupervised feature learning.
- Discriminative models are often simpler and more efficient to train, but they may not provide as much information about the underlying data distribution.
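The relationship between the two views can be made concrete with a toy example: a generative model that learns the full joint P(X, Y) already contains the discriminative quantity P(Y|X), which falls out by normalization. The sketch below uses a hypothetical joint table over a binary feature and a binary label (the numbers are made up for illustration).

```python
import numpy as np

# Hypothetical joint distribution P(X, Y) over a binary feature X and
# binary label Y. Rows index X, columns index Y; entries sum to 1.
joint = np.array([[0.30, 0.10],   # P(X=0, Y=0), P(X=0, Y=1)
                  [0.15, 0.45]])  # P(X=1, Y=0), P(X=1, Y=1)

# A generative model learns the joint; the discriminative quantity
# P(Y | X) follows by normalizing each row: P(Y|X) = P(X, Y) / P(X).
p_x = joint.sum(axis=1, keepdims=True)  # marginal P(X)
p_y_given_x = joint / p_x               # conditional P(Y | X), rows sum to 1

print(p_y_given_x)
```

The reverse does not hold: a model of P(Y|X) alone cannot recover P(X, Y), which is why discriminative models give up density estimation and sampling.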
Challenges in Generative Modeling
- The curse of dimensionality: the number of possible outcomes grows exponentially with the number of random variables.
- Conditional independence assumptions: simplifying assumptions are often necessary to make the problem tractable.
- Complex conditional distributions: both generative and discriminative models involve dealing with complex conditional distributions.
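The curse of dimensionality can be quantified directly: a full joint distribution table over n binary variables needs 2^n − 1 free parameters, which is why tabular representations break down almost immediately.

```python
# Curse of dimensionality: a full joint table over n binary variables
# has 2**n outcomes, hence 2**n - 1 free parameters (they must sum to 1).
def joint_table_params(n_binary_vars: int) -> int:
    return 2 ** n_binary_vars - 1

print(joint_table_params(10))  # 1023 parameters for just 10 variables
print(2 ** (28 * 28))          # possible 28x28 binary images: ~10^236
```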
Deep Generative Models
- Deep generative models use neural networks to represent conditional distributions and can be used to generate new data.
- Autoregressive models factor the joint distribution via the chain rule and use a neural network to predict each element of a sequence given the previous elements.
- Mixture models can be used to model data that comes from multiple distributions.
- Variational autoencoders (VAEs) introduce a latent variable and use a neural network to map it to the parameters of a conditional distribution over the data, yielding a flexible mixture-like model.
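The autoregressive idea above can be sketched in a few lines: sample each variable in turn from a conditional that depends only on the prefix. The logistic conditional and its weights here are hypothetical stand-ins; a deep autoregressive model would replace them with a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_autoregressive(n_vars: int, weights: np.ndarray) -> np.ndarray:
    """Sample a binary vector via the chain rule:
    p(x_1,...,x_n) = prod_i p(x_i | x_1,...,x_{i-1}).
    Each conditional is a toy logistic model (hypothetical weights)."""
    x = np.zeros(n_vars)
    for i in range(n_vars):
        logit = weights[i, :i] @ x[:i]      # depends only on the prefix x_{<i}
        p_i = 1.0 / (1.0 + np.exp(-logit))  # p(x_i = 1 | x_{<i})
        x[i] = rng.random() < p_i
    return x

weights = rng.normal(size=(5, 5))
print(sample_autoregressive(5, weights))  # a binary vector of length 5
```

Sampling is inherently sequential here, which is one reason autoregressive generation is slow relative to models that emit all variables at once.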
