Workshop description

A central challenge for any machine learning system is to remain reliable and robust even when deployment conditions differ from those seen at training time. Existing general-purpose approaches to domain generalization (DG), a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time, have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: what do we need for successful domain generalization? We conjecture that additional information of some form is required for general-purpose learning methods to succeed in the DG setting. The purpose of this workshop is to identify possible sources of such information, and to demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift. Specific topics of interest include, but are not limited to:

Information for the day of the workshop

TBD

Important dates

DG Workshop at ICLR2023
Submission deadline: February 3, 2023 at 12:00 AM UTC (anywhere on earth) via OpenReview
Author notifications: March 3, 2023
Meeting: May 5, 2023