Computer Science and Engineering
Smartphones, wearables, and Internet-of-Things (IoT) sensors produce a wealth of data. However, annotating this ever-growing volume of data is not feasible due to privacy, background-knowledge, time, and cost issues. Hence, self-supervised learning (SSL) approaches are becoming increasingly important. The main purpose of SSL models is to extract informative representations of data and present them effectively to upper-level aggregator models for downstream analysis. In this regard, we define multiple projects to advance multimodal self-supervised learning in the following directions:
Project 1: Enabling On-Edge Self-Supervised Learning
Recent years have witnessed a significant increase in the deployment of lightweight machine learning and deep learning models on edge devices. After a model is deployed, it is desirable for these devices to learn from newly arriving data to continuously improve accuracy. Since this data will be collected in the wild, it will be unlabelled. Hence, it is important to investigate how to conduct self-supervised learning on edge devices to leverage the collected data in a timely manner. In this project, we will explore techniques for conducting self-supervised learning (e.g., contrastive learning) on the edge.
Specifically, we aim to learn from new data with no or as few labels as possible, and to address the following issues:
• Multimodal data: data is collected from various sensors with distinct characteristics
• Device limitations (e.g., memory, processing, network bandwidth)
• Robustness against:
o Missing data/modality
o Noise
• Modality synchronization
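To make the contrastive-learning direction concrete, below is a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective used by SimCLR-style frameworks, written in plain NumPy. This is an illustrative example only, not a description of the project's actual method; the function name, shapes, and temperature value are our assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (SimCLR-style).

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Each sample's positive is its other view; all remaining 2N - 2
    embeddings in the batch act as negatives.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    # The positive for row i is row i+N (and vice versa).
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * N), pos].mean()
```

Because the loss needs only a similarity matrix over one batch, it is a natural candidate for memory-constrained on-edge training, where batch size directly bounds the compute and memory footprint.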
Project 2: Continual self-supervised learning with multimodal data
Recent advances show that self-supervised learning frameworks are remarkably effective at learning high-quality representations of data, providing comparable or even better results than their supervised counterparts when trained offline on unlabelled data at scale. In practice, however, we may initially have access to only some tasks and classes and gradually encounter new ones. Hence, the ability of the model to learn tasks and new data sequentially/incrementally is essential. In this project, we aim to determine the effectiveness of applying self-supervised learning approaches in a continual learning framework and to tackle issues such as catastrophic forgetting and the scarcity of labelled data.
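One common baseline for mitigating catastrophic forgetting, well suited to unlabelled streams, is experience replay with reservoir sampling: keep a bounded, uniformly sampled subset of all inputs seen so far and mix it into each new training batch. The sketch below is a generic illustration under our own naming assumptions, not the project's prescribed approach.

```python
import random

class ReservoirBuffer:
    """Reservoir-sampling replay buffer for continual (self-supervised)
    learning. After n items have streamed past, every item has an equal
    capacity/n chance of residing in the buffer, so replayed batches
    approximate a uniform sample of the full history."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, x):
        """Observe one unlabelled sample from the stream."""
        self.n_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(x)
        else:
            # Replace a stored item with probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.buffer[j] = x

    def sample(self, k):
        """Draw k stored samples to mix into the current training batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

In a continual SSL loop, each incoming batch would be augmented with `sample(k)` outputs before computing the self-supervised loss, so the encoder keeps rehearsing earlier tasks while adapting to new ones.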
Scholarship
- $40,500 per annum
Eligibility
- Domestic applicants only
- PhD only
How to apply
Email a copy of your CV, transcripts, and publications (if any) to flora.salim@unsw.edu.au.