Abstract:
A basic assumption of traditional machine learning models is that the training and testing data are independently and identically distributed (i.i.d.). However, this assumption does not always hold in practice, resulting in performance deterioration. Domain generalization, which aims to predict well on unseen test domains different from those seen during training, has attracted increasing interest in recent years.
In this talk, we will start with the domain generalization formulation and present brief theoretical analyses. We will then focus on recent progress in domain generalization. In general, a prediction model consists of a feature extractor and a classifier; accordingly, existing methods can be categorized by whether they place their emphasis on data samples, features, or classifiers.