Generative Models
- Objective: Generative models aim to model the joint probability distribution P(x,y) of the input features x and the output labels y. Essentially, they learn how the data is generated by understanding both the input and output distributions.
- Capabilities:
  - Sample Generation: They can generate new data points that resemble the training data. For example, given a set of images, a generative model can create new, similar images.
  - Density Estimation: They can estimate the probability of a given data point under the learned distribution.
  - Missing Data Imputation: They can predict missing parts of the data by generating plausible replacements. (The first two capabilities are illustrated in the sketch after this list.)
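As a quick illustration of sample generation and density estimation, here is a minimal sketch using scikit-learn's GaussianMixture on synthetic toy data: `sample()` draws new points that resemble the training data, and `score_samples()` returns the log-density of given points. The data and parameters are assumptions chosen for illustration.

```python
# Minimal sketch: sample generation and density estimation with a GMM.
# The two-cluster toy data below is purely illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),  # cluster around (0, 0)
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2)),  # cluster around (3, 3)
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# Sample generation: draw new points that resemble the training data
new_points, _ = gmm.sample(5)
print(new_points)

# Density estimation: log P(x) for given data points
print(gmm.score_samples(np.array([[0.0, 0.0], [3.0, 3.0]])))
```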
- Examples:
  - Naive Bayes
  - Hidden Markov Models (HMMs)
  - Gaussian Mixture Models (GMMs)
  - Variational Autoencoders (VAEs)
  - Generative Adversarial Networks (GANs)
- Approach: They work by explicitly modeling the data distribution. For example, they might model P(x∣y) and P(y), then use Bayes’ theorem to compute P(y∣x), as in the sketch below.
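To make this recipe concrete, here is a minimal sketch (NumPy only, toy 1-D data; all names and numbers are illustrative assumptions) that fits a Gaussian P(x∣y) per class along with a class prior P(y), then applies Bayes’ theorem to recover P(y∣x):

```python
# Minimal sketch of the generative recipe: model P(x|y) and P(y),
# then invert with Bayes' theorem to get P(y|x).
# Toy 1-D data; all names and numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one Gaussian cluster per class
x0 = rng.normal(loc=-2.0, scale=1.0, size=100)  # class 0
x1 = rng.normal(loc=2.0, scale=1.0, size=100)   # class 1
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros(100, dtype=int), np.ones(100, dtype=int)])

def gaussian_pdf(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Fit P(x|y) as one Gaussian per class and P(y) as class frequencies
params = {c: (x[y == c].mean(), x[y == c].std()) for c in (0, 1)}
priors = {c: (y == c).mean() for c in (0, 1)}

def posterior(v):
    """P(y=c|x) = P(x|y=c) P(y=c) / sum over classes of the same product."""
    joint = np.array([gaussian_pdf(v, *params[c]) * priors[c] for c in (0, 1)])
    return joint / joint.sum()

print(posterior(1.5))  # mostly class 1, since 1.5 is near its mean
```

Naive Bayes and Gaussian mixture classifiers follow this same pattern, just with different class-conditional models of P(x∣y).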
Discriminative Models
- Objective: Discriminative models aim to model the conditional probability distribution P(y∣x) directly. They focus on the boundary between different classes rather than how the data is generated.
- Capabilities:
  - Classification: They are primarily used for classifying data points into categories.
  - Regression: They can predict continuous output values.
- Examples:
  - Logistic Regression
  - Support Vector Machines (SVMs)
  - Decision Trees
  - Random Forests
  - Neural Networks (including CNNs and RNNs)
- Approach: They work by learning a decision boundary or a direct mapping from inputs to outputs. For example, logistic regression models P(y∣x) directly without considering the underlying distribution of x, as in the sketch below.
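For contrast with the generative sketch above, here is a minimal sketch using scikit-learn's LogisticRegression on illustrative toy data: the model is fit to (x, y) pairs, and `predict_proba` returns P(y∣x) directly, without any model of P(x) ever being built.

```python
# Minimal sketch: logistic regression models P(y|x) directly.
# The 1-D toy data below is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y = (x[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # noisy threshold labels

clf = LogisticRegression().fit(x, y)

# predict_proba returns P(y|x) for each class; P(x) is never modeled
print(clf.predict_proba(np.array([[-1.0], [1.0]])))
```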