MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations

Published in InterSpeech, 2023

Recommended citation: Heggan, Calum, et al. "MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations." arXiv preprint arXiv:2305.17191 (2023).

Abstract: Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled data sets. A key reason that these powerful features enable data-efficient learning of downstream tasks is that they provide augmentation invariance, which is often a useful inductive bias. However, the amount and type of invariances preferred are not known a priori, and vary across different downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong and flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
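To illustrate the multi-task idea from the abstract, below is a minimal PyTorch sketch: a shared encoder feeds both a contrastive head (encouraging augmentation-invariant features) and an augmentation-prediction head (encouraging augmentation-variant features), with the two losses summed. The module names, sizes, and the specific losses are illustrative assumptions, not the paper's exact architecture; see the linked code for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MTSLVRSketch(nn.Module):
    """Shared backbone with one invariant (contrastive) and one variant
    (augmentation-prediction) branch. Dimensions are placeholders."""

    def __init__(self, in_dim=1024, feat_dim=512, proj_dim=128, n_aug_classes=4):
        super().__init__()
        # Shared backbone (stand-in for the audio encoder)
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Invariant branch: projection head used for contrastive learning
        self.contrastive_head = nn.Linear(feat_dim, proj_dim)
        # Variant branch: predicts which augmentation was applied
        self.augment_head = nn.Linear(feat_dim, n_aug_classes)

    def forward(self, x):
        h = self.encoder(x)
        return self.contrastive_head(h), self.augment_head(h)


def multi_task_loss(model, view_a, view_b, aug_labels, temperature=0.1):
    """Sum of an in-batch contrastive loss on two augmented views and a
    cross-entropy loss on the augmentation label of the first view."""
    z_a, logits_a = model(view_a)
    z_b, _ = model(view_b)
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)

    # Contrastive (invariance) term: matching views in the batch are positives
    sim = z_a @ z_b.t() / temperature
    targets = torch.arange(sim.size(0))
    contrastive = F.cross_entropy(sim, targets)

    # Predictive (variance) term: classify the augmentation applied to view_a
    predictive = F.cross_entropy(logits_a, aug_labels)
    return contrastive + predictive
```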

Paper on arXiv
Code
Papers With Code Entry

Recommended citation (BibTeX):

@misc{heggan2023mtslvr,
  title={MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations},
  author={Calum Heggan and Tim Hospedales and Sam Budgett and Mehrdad Yaghoobi},
  year={2023},
  eprint={2305.17191},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

The InterSpeech 2023 citation will be added over Summer 2023.