Multi-task learning (MTL) aims to improve the generalization performance of the resulting classifiers by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic relatedness among tasks, through which informative domain knowledge from each task can be shared across the tasks, thereby facilitating the learning of each individual task. Sharing domain knowledge among tasks is particularly desirable when there are many related tasks but only limited training data available for each task.
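The idea of sharing knowledge across related tasks can be illustrated with a minimal sketch (a hypothetical toy setup, not the method developed in this thesis): each task's weight vector is modeled as a shared component plus a small task-specific deviation, and penalizing the deviations encourages the tasks to share.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two related regression tasks whose true weights are small
# perturbations of a common underlying weight vector w_true.
d, n = 5, 20
w_true = rng.normal(size=d)
tasks = []
for _ in range(2):
    X = rng.normal(size=(n, d))
    w_t = w_true + 0.1 * rng.normal(size=d)  # task-specific deviation
    tasks.append((X, X @ w_t))

# Joint objective (one common MTL formulation, assumed here for
# illustration): sum_t ||X_t (w_shared + v_t) - y_t||^2 + lam ||v_t||^2.
# The ridge penalty on v_t pushes knowledge into the shared part.
w_shared = np.zeros(d)
v = [np.zeros(d) for _ in tasks]
lam, lr = 1.0, 0.01
for _ in range(500):
    grad_shared = np.zeros(d)
    for t, (X, y) in enumerate(tasks):
        r = X @ (w_shared + v[t]) - y      # residual for task t
        grad_shared += X.T @ r             # shared part sees all tasks
        v[t] -= lr * (X.T @ r + lam * v[t])
    w_shared -= lr * grad_shared

# The shared component should approximate the common weight vector.
err = np.linalg.norm(w_shared - w_true) / np.linalg.norm(w_true)
print(round(err, 3))
```

Because the shared component is updated with gradients from every task, tasks with little data still benefit from the others, which is the intuition behind the limited-training-data setting described above.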
- Partial requirement for: Ph.D., Arizona State University, 2011
- Includes bibliographical references (p. 122-131)
- Field of study: Computer science