Authors
Bingyi Kang, Jiashi Feng
Publication date
2018/8
Conference
The Conference on Uncertainty in Artificial Intelligence (UAI), 2018
Description
Meta learning algorithms are effective at obtaining meta models capable of solving new tasks quickly. However, they critically require sufficient tasks for meta model training, and the resulting model can only solve new tasks similar to the training ones. These limitations cause performance to decline when training tasks in the target domain are insufficient and when tasks are heterogeneous, i.e., when the source (model training) tasks present different characteristics from the target (model application) tasks. To overcome these two significant limitations of existing meta learning algorithms, we introduce the cross-domain meta learning framework and propose a new transferable meta learning (TML) algorithm. TML performs meta task adaptation jointly with meta model learning, which effectively narrows the divergence between source and target tasks and enables transferring source meta-knowledge to solve target tasks. The resulting transferable meta model can thus quickly solve new learning tasks in new domains. We apply the proposed TML to cross-domain few-shot classification problems and evaluate its performance on multiple benchmarks. It performs significantly better and faster than well-established meta learning algorithms and fine-tuned domain-adapted models.
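The abstract describes meta task adaptation performed jointly with meta model learning so that the divergence between source and target tasks shrinks during meta-training. The sketch below is NOT the paper's TML algorithm; it is a hypothetical first-order illustration of the general idea on toy linear-regression tasks: an inner loop adapts a task-specific head, while the outer loop updates a shared feature map with both a task-loss gradient and a simple linear-MMD penalty that aligns source and target feature means. All names, learning rates, and the choice of divergence are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(shift):
    # Hypothetical linear-regression task; `shift` moves the input
    # distribution to mimic a source/target domain gap.
    X = rng.normal(shift, 1.0, size=(32, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.1 * rng.normal(size=32)
    return X, y

W = np.eye(5)                          # shared meta feature map
alpha, beta, lam = 0.02, 0.005, 0.05   # inner lr, outer lr, divergence weight

for step in range(200):
    Xs, ys = sample_task(shift=0.0)    # labeled source-domain task
    Xt, _ = sample_task(shift=1.5)     # target-domain task (labels unused)

    # Inner loop: adapt a task-specific head v on source features.
    v = np.zeros(5)
    for _ in range(3):
        r = Xs @ W @ v - ys
        v -= alpha * 2 * (Xs @ W).T @ r / len(ys)

    # Outer loop: first-order task-loss gradient w.r.t. W ...
    r = Xs @ W @ v - ys
    gW_task = 2 * np.outer(Xs.T @ r, v) / len(ys)
    # ... plus a linear-MMD penalty aligning source/target feature means,
    # standing in for the divergence-narrowing term described in the abstract.
    d = Xs.mean(axis=0) - Xt.mean(axis=0)
    gW_mmd = 2 * np.outer(d, d @ W)
    W -= beta * (gW_task + lam * gW_mmd)

# After meta-training, a few inner steps adapt to a new target-domain task.
Xn, yn = sample_task(shift=1.5)
v = np.zeros(5)
for _ in range(3):
    r = Xn @ W @ v - yn
    v -= alpha * 2 * (Xn @ W).T @ r / len(yn)
adapted_loss = float(np.mean((Xn @ W @ v - yn) ** 2))
```

A real few-shot classifier would replace the linear model with a deep feature extractor and backpropagate the divergence term through it; the structure of the update (inner adaptation, outer update with an added alignment penalty) is the point of the sketch.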