Instructor | Manik Varma |
Co-ordinator | M. Balakrishnan |
Teaching Assistants | Saurabh Goyal and Dilpreet Kaur |
Credits | 1 |
Classroom | SIT Seminar Room |
Timings | 12:30 - 2:00 PM on Tuesdays and Fridays |
Conservative estimates put the number of IoT devices at around 50 billion by the year 2020. Most of these devices will continuously sense their environment and make decisions. Such decisions will often need to be made on the device itself, rather than deferred to the cloud, due to latency, bandwidth, privacy and security concerns. However, most of these devices will be severely constrained in terms of processing, storage and power, and will therefore be unable to run state-of-the-art algorithms for intelligent decision making.
This course will introduce students to the area of resource-constrained machine learning, with the objective of studying algorithms that are orders of magnitude more efficient at run time while keeping prediction accuracy above an acceptable threshold. The course will cover pruning and compression techniques for machine learning models such as trees, neural networks, support vector machines and k-nearest neighbour classifiers, as well as hybrid models for greater efficiency. Students are expected to be familiar with introductory machine learning (trees, neural networks, k-NN, SVMs, etc.), linear algebra, and probability and statistics. Some familiarity with optimization and signal processing techniques will be helpful.
This will be a discussion-based course with a significant self-study component. Students will be expected to read a research paper before each lecture and come to class prepared to discuss the paper and related topics. Students will be assessed on how well their code performs on benchmark machine learning tasks. The course will have an optional lab component in which students will build an end-to-end, real-world machine learning application using an Arduino/Raspberry Pi + sensors.
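To make the idea of pruning for run-time efficiency concrete, here is a minimal illustrative sketch (not taken from the course materials): magnitude-based pruning of a linear classifier on synthetic data where only a few of many features are informative. All names, thresholds and data in it are assumptions chosen for illustration.

```python
# Illustrative sketch: magnitude-based pruning of a linear classifier.
# Synthetic data: 100 features, but only the first 5 carry signal.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 100
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = [3.0, -2.0, 1.5, -1.0, 2.5]  # only 5 informative features
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

# Train logistic regression with plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / n         # gradient step on the log loss

def accuracy(w):
    return np.mean((X @ w > 0) == y)

# Prune: zero out every weight whose magnitude falls below a threshold.
# A sparse weight vector needs far less storage and fewer multiplies
# at prediction time -- the kind of saving that matters on a microcontroller.
w_pruned = np.where(np.abs(w) > 0.1, w, 0.0)

print(f"dense:  {np.count_nonzero(w)} weights, acc {accuracy(w):.3f}")
print(f"pruned: {np.count_nonzero(w_pruned)} weights, acc {accuracy(w_pruned):.3f}")
```

On data like this, most of the near-zero noise weights can be dropped with little or no loss of accuracy; the techniques covered in the course apply the same principle, far more carefully, to trees, neural networks, SVMs and k-NN.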
Lecture 1 (31-08-2016) Introduction | An SVM tutorial by Chris Burges |
Lecture 2 (06-09-2016) Local Deep Kernel Learning | Paper, Slides, Lecture 2 notes |
Lecture 3 (09-09-2016) ML algorithms on the Arduino | LDKL training code, LDKL Arduino sketch for prediction, Neural Network training code and Arduino prediction sketch |
Lecture 4 (16-09-2016) Optimization basics | Lecture 4 notes |
Lecture 5 (20-09-2016) Compressing deep neural networks | Deep Compression paper, Lecture 5 notes |
Lecture 6 (27-09-2016) L0 and L1 regularized linear classifiers | An analysis of L0 methods, An L1 tutorial by Mark Schmidt, An L1 tutorial by Chih-Jen Lin, Lecture 6 notes |
Lecture 7 (04-10-2016) Stochastic Neighbor Compression | Paper, Lecture 7 notes |
Lecture 8 (14-10-2016) Cost Sensitive Feature Selection | CSTC, Greedy Miser, Lecture 8 notes |
Lecture 9 (18-10-2016) Model Compression | Model Compression, Do Deep Nets Need to be Deep?, Lecture 9 notes |
Lecture 10 (21-10-2016) Face detection | Viola and Jones, Lecture 10 notes |