The CVPR 2020 Workshop on Multi-class & Multi-label Learning in Extremely Large Output Spaces
19 June 2020, Seattle, Washington
Extreme Delights
As all of you know, CVPR20 has recently decided to become a purely virtual meeting due to the ongoing pandemic. Our workshop on extreme classification was designed to bring together researchers from across computer vision and machine learning to share insights and experience in a small-group setting. Unfortunately, the experience we hoped to provide does not scale well to a virtual meeting. After careful consideration, the organizers have made the difficult decision to cancel the workshop at CVPR20. We hope to run this workshop at CVPR21, and we thank everyone for their interest in this topic.


Coming soon ...

Invited speakers

Trevor Darrell (Berkeley)
Jia Deng (Princeton)
Dhruv Mahajan (Facebook)
Deva Ramanan (CMU & Argo)
Chuck Rosenberg (Pinterest)
Olga Russakovsky (Princeton)

Call for participants

Extreme classification is a rapidly growing research area in computer vision focusing on multi-class and multi-label problems with extremely large label spaces, ranging from thousands to billions of labels. It has found applications in diverse areas including face, retail-product and landmark recognition, and image and video tagging. Extreme classification reformulations have led to significant gains over traditional ranking and recommendation techniques in both machine learning and computer vision, and have been deployed in several popular products used by millions of people worldwide. These gains rest on recent key advances: modeling structural relations among labels, sub-linear time algorithms for training and inference, and loss functions that are unbiased with respect to missing labels and reward the accurate prediction of rare labels.
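The missing-label correction mentioned above is commonly formalized via propensity scoring. As a minimal sketch (the function name, shapes, and the uniform-vector propensities are illustrative, not taken from any particular paper's code), propensity-scored precision@k divides each correctly ranked label by its estimated observation probability, so rare and frequently-missing labels count for more:

```python
import numpy as np

def psp_at_k(y_true, scores, propensities, k=5):
    """Propensity-scored precision@k (illustrative sketch).

    y_true:       0/1 vector of observed labels (positives may be missing)
    scores:       predicted relevance score per label
    propensities: estimated probability p_l that a true label l is observed

    Dividing each hit by p_l corrects, in expectation, for positives that
    are missing from the ground truth and rewards rare labels (small p_l).
    """
    top_k = np.argsort(-scores)[:k]              # k highest-scoring labels
    return sum(y_true[l] / propensities[l] for l in top_k) / k
```

With all propensities equal to 1 this reduces to ordinary precision@k; smaller propensities on tail labels shift credit toward ranking them correctly.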

Extreme classification raises a number of interesting research questions including but not limited to:

  • Large-scale fine-grained classification and embeddings
  • Cross-modality modeling of visual and label spaces
  • Distributed and parallel learning in extremely large output spaces
  • Learning from highly imbalanced data
  • Dealing with tail labels and learning from very few data points per label
  • Zero-shot learning and extensible output spaces
  • Transfer learning and domain adaptation
  • Modeling structural relations among labels
  • Structured output prediction and multi-task learning
  • Log-time and log-space training and prediction, and prediction on a test-time budget
  • Statistical analysis and generalization bounds
  • Applications to new domains
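To make the log-time prediction topic above concrete, one standard device is a tree over the labels: inference descends from the root to a leaf, evaluating O(log L) node routers instead of scoring all L labels. A toy sketch under assumed conventions (complete binary tree, linear routers; all names here are hypothetical):

```python
import numpy as np

def predict_label(x, routers, num_labels):
    """Greedy descent through a complete binary tree over labels.

    Nodes are numbered level by level: 0 is the root, node n has children
    2n+1 and 2n+2, and the L leaves (one per label) follow the L-1
    internal nodes. routers[n] is a linear router at internal node n;
    a positive dot product sends the example to the right child.
    Inference touches only O(log L) routers rather than all L labels.
    """
    num_internal = num_labels - 1
    node = 0
    while node < num_internal:
        node = 2 * node + (2 if x @ routers[node] > 0 else 1)
    return node - num_internal  # leaf position == label id
```

Real extreme classifiers refine this basic idea in many ways (learned or clustered trees, beam search over several paths, shared deep representations), but the logarithmic inference cost is the common thread.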

The workshop aims to bring together researchers interested in these areas to encourage discussion and improve upon the state-of-the-art in extreme classification. Several leading researchers will present invited talks detailing the latest advances in the area. The workshop should be of interest to researchers in core supervised learning as well as application domains such as visual recognition, search, and recommender systems. We expect healthy participation from both industry and academia.


Organizers

Zhen Li (Google)
Manik Varma (Microsoft Research)
Ramin Zabih (Google & Cornell Tech)


Extreme classification resources

The Extreme Classification Repository: Multi-label Datasets and Code

Previous events (not at vision conferences)

The 2018 Dagstuhl Seminar on Extreme Classification
The WWW 2018 Workshop on Extreme Multilabel Classification for Social Media
The NIPS 2017 Extreme Classification Workshop
The NIPS 2016 Extreme Classification Workshop
The NIPS 2015 Extreme Classification Workshop
The ICML 2015 Extreme Classification Workshop
The ECML/PKDD 2015 Workshop on Big Targets
The NIPS 2013 Extreme Classification Workshop
The 2013 Machine Learning Summit