Ensemble learning works by generating a set of base models and then combining their predictions, typically by voting, averaging, or training a combiner model. The most common methods are bagging (Bootstrap Aggregating), boosting, and stacking. Bagging reduces variance by training each model on a different bootstrap sample of the data (drawn with replacement) and averaging their predictions. Boosting reduces bias by training models sequentially, with each new model focusing on the errors of its predecessors. Stacking trains a meta-model that learns how to combine the predictions of several base models, usually using their cross-validated predictions as inputs.
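The following sketch illustrates all three approaches using scikit-learn. The dataset is synthetic and the estimator choices and hyperparameters are illustrative assumptions rather than recommendations; the intent is only to show how each ensemble is constructed.

```python
# Minimal sketch of bagging, boosting, and stacking with scikit-learn.
# Dataset and hyperparameters are purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    BaggingClassifier,
    GradientBoostingClassifier,
    StackingClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many base estimators (decision trees by default), each fit on a
# bootstrap sample of the training data; predictions are averaged/voted.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: trees are fit sequentially, each one correcting the errors of
# the ensemble built so far.
boosting = GradientBoostingClassifier(n_estimators=100, random_state=0)

# Stacking: a logistic-regression meta-model is trained on the
# cross-validated predictions of the base models.
stacking = StackingClassifier(
    estimators=[("bag", bagging), ("boost", boosting)],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

Note that bagging parallelizes naturally because its base models are independent, whereas boosting is inherently sequential; stacking adds the cost of fitting the base models under cross-validation before the meta-model is trained.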