Files
Dony_06621900_2024.pdf (Open access, Adobe PDF, 6.59 MB)
Details
- Abstract
- In Machine Learning, ensuring that the training data and the data we want to perform inference on follow similar distributions is of prime importance. This work examines cases where this assumption does not hold, i.e., where we face a distribution shift. More precisely, it focuses on the Unsupervised Domain Adaptation framework. Our goal is to better interpret the methods that have been designed to address this problem, as well as the different shift situations. To do so, we define criteria, and associated metrics, that must be controlled to tackle the distribution shift, and we examine how existing methods encompass these aspects. Examples are implemented and analyzed for image classification and imaging inverse problems. In these experiments, we observe the success or failure of Domain Adaptation depending on the conditions defined throughout this work.