The NIPS Workshop on Multi-class and Multi-label Learning in Extremely Large Label Spaces
Friday, 8th December 2017, Long Beach, California
Extreme Delights
The workshop's videos are now available on YouTube as a playlist. You can also watch individual talks by clicking on the talk title.

The workshop venue is the Hyatt Beacon Ballroom D+E+F+H.


Introduction & Applications

09:00 - 09:05 Manik Varma (MSR) Introduction
09:05 - 09:35 John Langford (MSR) Dreaming Contextual Memory
09:35 - 10:05 Ed Chi (Google) Learned Deep Retrieval for Recommenders

Deep & Representation Learning

10:05 - 10:35 David Sontag (MIT) Representation Learning for Extreme Multi-class Classification & Density Estimation
10:35 - 11:00 Coffee Break
11:00 - 11:30 Inderjit Dhillon (UT Austin & Amazon) Stabilizing Gradients for Deep Neural Networks with Applications to Extreme Classification
11:30 - 12:00 Wei-cheng Chang (CMU) Deep Learning Approach for Extreme Multi-label Text Classification
12:00 - 13:30 Lunch (unofficial Vowpal Wabbit tutorial by John Langford, 12:30 - 13:20)


Algorithms

13:30 - 14:00 Pradeep Ravikumar (CMU) A Parallel Primal-Dual Sparse Method for Extreme Classification
14:00 - 14:15 Maxim Grechkin (UW) EZLearn: Exploiting Organic Supervision in Large-Scale Data Annotation
14:15 - 14:30 Sayantan Dasgupta (Michigan) Multi-label Learning for Large Text Corpora using Latent Variable Model
14:30 - 15:00 Yukihiro Tagami (Yahoo) Extreme Multi-label Learning via Nearest Neighbor Graph Partitioning and Embedding
15:00 - 15:15 Coffee Break


Theory

15:15 - 15:45 Mehryar Mohri (NYU) Tight Learning Bounds for Multi-Class Classification
15:45 - 16:00 Ravi Ganti (Walmart Labs) Exploiting Structure in Large Scale Bandit Problems
16:00 - 16:15 Hai S. Le (WUSTL) Precision-Recall versus Accuracy and the Role of Large Data Sets
16:15 - 16:30 Loubna Benabbou (EMI) A Reduction Principle for Generalizing Bona Fide Risk Bounds in Multi-class Setting
16:30 - 17:00 Marius Kloft (Kaiserslautern) Generalization Error Bounds for Extreme Multi-class Classification

Call for papers

Extreme classification is a rapidly growing research area focusing on multi-class and multi-label problems involving an extremely large number of labels. It has found applications in diverse areas, ranging from language modelling and document tagging in NLP, to face recognition and learning universal feature representations in computer vision, to gene function prediction in bioinformatics. Extreme classification has also opened up a new paradigm for ranking and recommendation by reformulating them as multi-label learning tasks where each item to be ranked or recommended is treated as a separate label. Such reformulations have led to significant gains over traditional collaborative filtering and content-based recommendation techniques. Consequently, extreme classifiers have been deployed in many real-world applications in industry.
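As a toy sketch of this reformulation (the names, data and linear scorers below are hypothetical illustrations, not any deployed system): each catalogue item becomes one label, and recommendation reduces to predicting the top-k highest-scoring labels for a user's feature vector.

```python
import numpy as np

# Hypothetical example: recommendation recast as multi-label classification.
# Each of the L catalogue items is treated as a separate label; a user's
# feature vector x is scored against every label, and the k highest-scoring
# labels become the recommendations.

rng = np.random.default_rng(0)

n_features, n_labels = 16, 1000                  # L = 1000 "items" as labels
W = rng.standard_normal((n_labels, n_features))  # one linear scorer per label

def recommend(x, k=5):
    """Return the indices of the k labels (items) with the highest scores."""
    scores = W @ x                                # one score per label
    top_k = np.argpartition(-scores, k)[:k]      # unordered top-k indices
    return top_k[np.argsort(-scores[top_k])]     # sorted by descending score

x = rng.standard_normal(n_features)              # a hypothetical user
print(recommend(x, k=5))
```

With a label space this large, real systems avoid the dense score over all L labels at prediction time — which is exactly where the log-time prediction, label-embedding and tree-based approaches listed below come in.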

Extreme classification raises a number of interesting research questions including those related to:

  • Large scale learning and distributed and parallel training
  • Log-time and log-space prediction and prediction on a test-time budget
  • Label embedding and tree based approaches
  • Crowd sourcing, preference elicitation and other data gathering techniques
  • Bandits, semi-supervised learning and other approaches for dealing with training set biases and label noise
  • Bandits with an extremely large number of arms
  • Fine-grained classification
  • Zero shot learning and extensible output spaces
  • Tackling label polysemy, synonymy and correlations
  • Structured output prediction and multi-task learning
  • Learning from highly imbalanced data
  • Dealing with tail labels and learning from very few data points per label
  • PU learning and learning from missing and incorrect labels
  • Feature extraction, feature sharing, lazy feature evaluation, etc.
  • Performance evaluation
  • Statistical analysis and generalization bounds
  • Applications to new domains

The workshop aims to bring together researchers interested in these areas to encourage discussion and improve upon the state of the art in extreme classification. In particular, we aim to bring together researchers from the natural language processing, computer vision and core machine learning communities to foster interaction and collaboration. Several leading researchers will present invited talks detailing the latest advances in the area. We also seek extended abstracts presenting work in progress, which will be reviewed for acceptance as a spotlight + poster or a talk. The workshop should be of interest to researchers in core supervised learning as well as application domains such as recommender systems, computer vision, computational advertising, information retrieval and natural language processing. We expect a healthy participation from both industry and academia.

Submission information

Please submit extended abstracts/full papers at XC17 by 15th October 2017.


Organizers

Manik Varma (Microsoft Research)
Marius Kloft (Humboldt University)
Krzysztof Dembczyński (Poznan University of Technology)


Extreme Classification Resources

The Extreme Classification Repository: Multi-label Datasets and Code

Previous Events

NIPS 2016 Extreme Classification Workshop
NIPS 2015 Extreme Classification Workshop
ICML 2015 Extreme Classification Workshop
ECML/PKDD Workshop on Big Targets
NIPS 2013 Extreme Classification Workshop
Machine Learning Summit 2013
Large-Scale Hierarchical Classification Workshop 2010