Machine Learning Algorithms

The focus of this course is on the theory behind machine learning algorithms for regression and classification problems. We first present the general probabilistic approach to prediction problems, and then introduce elements of Vapnik-Chervonenkis theory, which provides theoretical guarantees for empirical risk minimization in the context of binary classification. We also discuss linear techniques for regression and classification, including regularization methods such as ridge regression and the lasso. The last section of the course focuses on non-linear algorithms, such as trees, ensemble methods (bagging, random forests), support vector machines, and neural networks, and introduces reproducing kernel Hilbert spaces and their applications in machine learning.
  1. Foundations
  2. Linear Regression
  3. Ridge Regression and Lasso
  4. Linear Classifiers
  5. Vapnik-Chervonenkis Theory
  6. Trees
  7. Bagging and Random Forests
  8. Convex Relaxation
  9. Boosting
  10. Support Vector Machines
  11. Reproducing Kernel Hilbert Spaces
  12. Neural Networks
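
As a small taste of the regularization topic in section 3, the sketch below computes the ridge regression estimator from its closed-form solution, w = (XᵀX + λI)⁻¹Xᵀy, on synthetic data. This is an illustrative example only (the data, the coefficient values, and the symbol `lam` for the regularization strength are assumptions, not course material):

```python
import numpy as np

# Synthetic regression data: 50 samples, 3 features (assumed for illustration)
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=n)

lam = 1.0  # regularization strength (hypothetical choice)

# Ridge estimator: solve (X^T X + lam * I) w = X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Setting lam = 0 recovers ordinary least squares
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

print("ridge:", w_ridge)
print("ols:  ", w_ols)
```

The ridge solution has a smaller Euclidean norm than the least-squares solution, which is the shrinkage effect the penalty is designed to produce; the lasso replaces the squared-norm penalty with an ℓ1 penalty and has no closed form, which is why it is typically solved by coordinate descent.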