Speech Recognition using Biologically-Inspired Neural Networks
Thomas Bohnstingl, Ayush Garg, et al.
ICASSP 2022
Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms, achieving remarkable success across a variety of learning problems. Yet, with its nested design of inner-loop and outer-loop updates, which govern task-specific adaptation and meta-model learning respectively, the underlying learning objective of MAML remains implicit, impeding a more straightforward understanding of the algorithm. In this paper, we provide a new perspective on the working mechanism of MAML. We show that MAML is analogous to a meta-learner trained with a supervised contrastive objective: query features are pulled towards the support features of the same class and pushed away from those of different classes. This contrastiveness is verified experimentally through an analysis based on cosine similarity. Moreover, we reveal that the vanilla MAML algorithm suffers from an undesirable interference term originating from random initialization and cross-task interaction. We therefore propose a simple but effective technique, the zeroing trick, to alleviate this interference. Extensive experiments on the miniImageNet and Omniglot datasets demonstrate the consistent improvement brought by the proposed technique, validating its effectiveness.
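The nested inner-loop/outer-loop structure described in the abstract can be sketched as follows. This is a minimal first-order MAML sketch on toy linear-regression tasks with hand-written gradients, under assumptions of my own (the function names, task format, and learning rates are illustrative); it is not the paper's implementation, and it drops the second-derivative term of full MAML.

```python
import numpy as np

def inner_update(w, x_s, y_s, lr_in):
    # inner loop: one SGD step on the support set (task-specific adaptation)
    grad = 2 * x_s.T @ (x_s @ w - y_s) / len(x_s)
    return w - lr_in * grad

def maml_meta_step(w, tasks, lr_in=0.01, lr_out=0.001):
    # outer loop: adapt per task, then average the query-set gradients
    # evaluated at the adapted parameters (first-order approximation)
    meta_grad = np.zeros_like(w)
    for x_s, y_s, x_q, y_q in tasks:
        w_task = inner_update(w, x_s, y_s, lr_in)
        meta_grad += 2 * x_q.T @ (x_q @ w_task - y_q) / len(x_q)
    return w - lr_out * meta_grad / len(tasks)
```

The outer-loop update acts on the meta-initialization through the query loss of the adapted parameters, which is the mechanism the paper reinterprets as a contrastive pull of query features towards same-class support features.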