CSV 884 - Supervised Learning with Kernels


Instructor: Manik Varma
Co-ordinator: K. K. Biswas
Student Volunteer: Rahul Kumar
Credits: 1
Classroom: 201, IIA, Bharti Building
Timings: 2:00 - 3:30 PM on Tuesdays and Fridays
Kaggle Competitions: Data Set 1 | Data Set 2 | Data Set 3 | Data Set 4 | Data Set 5 | Data Set 6


Support Vector Machines (SVMs) are one of the most popular tools for tackling supervised machine learning problems. They have defined the state-of-the-art on multiple benchmark tasks and are easy to use, requiring relatively little machine learning expertise. In this course, we will take a detailed look at the area of supervised machine learning from an SVM perspective. We will start with machine learning and SVM basics and will then go into four current areas of SVM research: (a) formulations; (b) kernels and kernel learning; (c) optimization; and (d) prediction. Emphasis will be placed both on the mathematical treatment of SVMs and on the practical aspects of optimization and implementation. Students are expected to be comfortable with linear algebra, probability/statistics and coding in C and Matlab. By the end of the course, students should be able to use SVMs to solve real-world machine learning problems as well as take up research projects extending SVMs in novel directions.

Lectures

Lecture 1 (30-07-2013)
Introduction to ML, features, overfitting and generalization, noise and prior knowledge
Lecture 1 notes
Chapter 1 of  DHS
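
As a toy illustration of overfitting (the data, polynomial degrees and noise level below are arbitrary choices for this sketch, not taken from the lecture): fitting polynomials of increasing degree to a few noisy samples typically drives the training error down while the held-out error goes back up.

import numpy as np

# Toy regression data: a sine curve corrupted by Gaussian noise.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 15))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

# Noise-free held-out points to measure generalization.
x_test = np.linspace(0.0, 1.0, 100)
y_test = np.sin(2 * np.pi * x_test)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)        # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
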
Lecture 2 (02-08-2013)
Bayesian vs MAP vs ML approaches  
Lecture 2 notes
Chapter 1 of  PRML
Chapter 3 of  DHS
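
For reference, the three estimation strategies contrasted in this lecture can be summarized in standard textbook notation (not copied from the lecture notes). Given data D, a likelihood p(D | theta) and a prior p(theta):

\hat{\theta}_{\mathrm{ML}} = \arg\max_{\theta} \; p(D \mid \theta)

\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \; p(D \mid \theta)\, p(\theta)

p(x \mid D) = \int p(x \mid \theta)\, p(\theta \mid D)\, d\theta,
\qquad
p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}

ML maximizes the likelihood alone, MAP additionally weighs in the prior, and the fully Bayesian approach keeps the entire posterior and averages predictions over it.
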
Lectures 3 (06-08-2013) and 4 (13-08-2013)
Generative vs Discriminative approaches  
Lecture 3 notes
Lecture 4 notes
Chapter from Mitchell
Ng & Jordan NIPS 01
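
In a nutshell (the standard formulation, following the Ng & Jordan comparison of naive Bayes and logistic regression): a generative model learns the joint distribution and classifies through Bayes' rule, while a discriminative model learns the conditional directly:

p(y \mid x) \;\propto\; p(x \mid y)\, p(y)
\qquad \text{(generative, e.g. naive Bayes)}

p(y = 1 \mid x) \;=\; \frac{1}{1 + \exp(-w^{\top} x - b)}
\qquad \text{(discriminative, e.g. logistic regression)}
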
Lectures 5 (16-08-2013) and 6 (23-08-2013)
Linear SVMs
Primal optimization - gradient descent and stochastic gradient descent 
Lecture 5 notes
Lecture 6 notes
Tutorial by Chris Burges
Pegasos
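
A minimal Pegasos-style sketch in Python/NumPy (the hyperparameters and toy data are illustrative assumptions, not values from the lectures): at each step it samples one example, shrinks w according to the regularizer, and takes a subgradient step on the hinge loss if the margin is violated.

import numpy as np

def pegasos(X, y, lam=0.1, n_iter=20000, seed=0):
    """Stochastic subgradient descent on the primal SVM objective
    lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i w.x_i), no bias term."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iter + 1):
        i = rng.integers(n)                  # sample one training example
        eta = 1.0 / (lam * t)                # Pegasos step-size schedule
        margin = y[i] * (X[i] @ w)           # margin under the current w
        w *= (1.0 - eta * lam)               # shrink: regularizer gradient
        if margin < 1.0:                     # margin violated...
            w += eta * y[i] * X[i]           # ...add the hinge subgradient term
    return w

# Usage on a separable toy problem; labels must be in {-1, +1}.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2.0, 1.0, (100, 2)), rng.normal(-2.0, 1.0, (100, 2))])
y = np.hstack([np.ones(100), -np.ones(100)])
w = pegasos(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
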

Lecture 7 (15-10-2013)
Linear SVMs continued
Dual optimization - dual co-ordinate ascent 
Lecture 7 notes
Liblinear
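
A sketch of single-coordinate dual ascent for the L1-loss linear SVM in the style of LIBLINEAR (Python/NumPy for readability; the parameter names are mine). The dual, max over 0 <= a_i <= C of sum_i a_i - 1/2 ||sum_i a_i y_i x_i||^2, is optimized one a_i at a time with a clipped Newton step, keeping w = sum_i a_i y_i x_i up to date:

import numpy as np

def dual_cd_svm(X, y, C=1.0, n_epochs=20, seed=0):
    """One-variable-at-a-time ascent on the L1-loss SVM dual (no bias)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum('ij,ij->i', X, X)        # diagonal of the Gram matrix
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0.0:
                continue
            G = y[i] * (X[i] @ w) - 1.0      # minus dD/da_i for the dual D
            a_new = np.clip(alpha[i] - G / Qii[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w, alpha

Each update costs O(d) because w is maintained incrementally rather than recomputed, which is what makes this approach attractive for large sparse problems.
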

Lecture 8 (18-10-2013)
Linear SVMs continued
Cutting plane optimization 
Lecture 8 notes
Joachims KDD 06
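
For reference, the 1-slack formulation that the cutting-plane method solves in Joachims's paper replaces the n slack variables with a single xi and one constraint per vector c in {0,1}^n:

\min_{w,\,\xi \ge 0} \;\; \frac{1}{2}\|w\|^2 + C\,\xi
\quad \text{s.t.} \quad
\frac{1}{n}\, w^{\top} \sum_{i=1}^{n} c_i y_i x_i \;\ge\; \frac{1}{n} \sum_{i=1}^{n} c_i - \xi
\qquad \forall\, c \in \{0,1\}^{n}

The algorithm keeps only a small working set of these constraints; the most violated one for the current w is found in a single pass by setting c_i = 1 exactly when y_i w^T x_i < 1.
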

Lectures 9 (29-10-2013) and 10 (01-11-2013)
Kernels
Block co-ordinate ascent optimization 
Lecture 9 notes
Lecture 10 notes
SMO - Fan, Chen and Lin JMLR 05
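
A kernelized version of the same coordinate-ascent idea (a sketch under simplifying assumptions: the bias term is dropped, which removes the equality constraint sum_i a_i y_i = 0 that forces SMO to update pairs of variables; the kernel and hyperparameters below are illustrative):

import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_svm_ca(X, y, kernel, C=1.0, n_epochs=20, seed=0):
    """Single-coordinate ascent on the kernel SVM dual, no bias term."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = kernel(X, X)                         # n x n Gram matrix
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            f_i = (alpha * y) @ K[:, i]      # f(x_i) = sum_j a_j y_j K(x_j, x_i)
            G = y[i] * f_i - 1.0             # minus dD/da_i for the dual D
            alpha[i] = np.clip(alpha[i] - G / K[i, i], 0.0, C)
    return alpha

# New points are classified by sign(sum_j a_j y_j K(x_j, x)):
#   scores = (alpha * y) @ kernel(X, X_new)
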

Code

Most code was written five minutes before the start of each lecture and comes with no guarantees, comments or documentation. In particular, no attempt has been made to bulletproof the code. For instance, if there is no feasible solution for your parameter settings, the figure-plotting subroutines will crash (the LR and SVM learning routines should be stable). In any case, run the code at your own peril.

Recommended Reading

