I was wondering about the differences between "multi-task learning" and "domain generalization". It seems to me that both of them are types of inductive transfer learning but I'm not sure of their differences.
Domain generalization: Aims to train a model on multi-domain source data such that it generalizes directly to new domains without retraining. In short: multiple domains, same task.
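As a hedged illustration of this setup (the data and model here are invented toy choices, not a specific method from the literature): fit one model for a single task on pooled data from several source domains, then evaluate it directly on a domain never seen in training.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(x_low, x_high, n=200):
    # Same underlying task in every domain (y = 2x + 1 plus noise),
    # but each domain draws inputs from a different range (a domain shift).
    x = rng.uniform(x_low, x_high, size=n)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, size=n)
    return x, y

# Multi-domain source data: two training domains, one shared task.
source_domains = [make_domain(0, 1), make_domain(5, 6)]
x_train = np.concatenate([x for x, _ in source_domains])
y_train = np.concatenate([y for _, y in source_domains])

# Fit a single model (least squares) on the pooled source domains.
X = np.stack([x_train, np.ones_like(x_train)], axis=1)
w, b = np.linalg.lstsq(X, y_train, rcond=None)[0]

# Evaluate directly on an unseen target domain -- no retraining.
x_test, y_test = make_domain(10, 11)
mse = float(np.mean((w * x_test + b - y_test) ** 2))
print(f"w={w:.2f}, b={b:.2f}, unseen-domain MSE={mse:.4f}")
```

The key point is that there is one task throughout; only the input distribution changes between training and test.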
Multi-task learning (MTL): MTL is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help the other tasks be learned better. In other words: same domain, multiple tasks.
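A minimal sketch of the shared-representation idea (all sizes, targets, and hyperparameters below are arbitrary assumptions for illustration): one input distribution, two related regression tasks, a shared hidden layer that both tasks update in parallel, and a small head per task.

```python
import numpy as np

rng = np.random.default_rng(1)

# One domain: a single input distribution shared by both tasks.
X = rng.normal(size=(256, 2))
y1 = X[:, 0] + X[:, 1]      # task 1: sum of the features
y2 = X[:, 0] - X[:, 1]      # task 2: difference of the features

# Shared representation plus one head per task.
W = rng.normal(scale=0.1, size=(2, 8))   # shared layer, trained by both tasks
h1 = rng.normal(scale=0.1, size=8)       # task-1 head
h2 = rng.normal(scale=0.1, size=8)       # task-2 head
lr = 0.05

def losses():
    Z = np.tanh(X @ W)
    return (float(np.mean((Z @ h1 - y1) ** 2)),
            float(np.mean((Z @ h2 - y2) ** 2)))

before = losses()
for _ in range(500):
    Z = np.tanh(X @ W)                  # shared representation
    e1 = Z @ h1 - y1                    # task-1 residual
    e2 = Z @ h2 - y2                    # task-2 residual
    g1 = Z.T @ e1 / len(X)              # head gradients
    g2 = Z.T @ e2 / len(X)
    # Both tasks' gradients flow into the shared layer in parallel:
    # this is where one task's training signal biases the other.
    dZ = (np.outer(e1, h1) + np.outer(e2, h2)) * (1 - Z ** 2)
    gW = X.T @ dZ / len(X)
    h1 -= lr * g1
    h2 -= lr * g2
    W -= lr * gW
after = losses()
print(f"task losses before={before}, after={after}")
```

Because the shared layer `W` receives gradients from both heads on every step, the representation is shaped by both training signals at once, which is the inductive-transfer mechanism the definition describes.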
Main Difference:
| Domain generalization | Multi-task learning |
|---|---|
| Multiple domain dataset on same task | Same domain dataset on multiple tasks |
| As it is a single task, there is no need for parallel execution | Multiple tasks are trained in parallel |
Archana David