Description of field of research:

As a cornerstone of modern artificial intelligence (AI), deep learning (DL) has achieved state-of-the-art performance in various applications, such as visual understanding for autonomous driving, speech recognition, and medical diagnosis. However, most methods follow the traditional paradigm of a static learning process on fixed datasets.

Lifelong learning (LL) or continual learning (CL) aims to incrementally learn and update deep learning models on streaming data, allowing AI to evolve in a lifelong cycle even after deployment, as humans do. The model continually learns new knowledge from new datasets/tasks in the stream while maintaining performance on all previously learned tasks. To avoid forgetting during this long-term, dynamic learning process, LL/CL methods can maintain a memory that stores representative data samples seen during training.
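For illustration only, the sketch below shows one common way such a rehearsal memory can be maintained, using reservoir sampling over the data stream; the class name and interface are hypothetical and do not correspond to any specific method in this project.

```python
import random

class ReplayMemory:
    """Fixed-size episodic memory for rehearsal-based continual learning.

    Reservoir sampling gives every example seen in the stream an equal
    chance of being retained, regardless of when it arrived.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []       # stored (example, label) pairs
        self.num_seen = 0      # total examples observed in the stream

    def add(self, example, label):
        self.num_seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append((example, label))
        else:
            # Replace a random slot with probability capacity / num_seen
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.buffer[idx] = (example, label)

    def sample(self, batch_size):
        # Draw a rehearsal batch to mix with the current task's batch
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k)
```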

With the widespread application of AI and DL in people's daily lives, privacy has become a major challenge for responsible AI and DL. In this project, we will investigate how to build a privacy-preserving memory for lifelong learning from the perspectives of data anonymization and compression. The developed method will maintain a memory buffer in which privacy-sensitive information is suppressed while the semantic information essential for DL (in LL/CL) is preserved.
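As a rough, illustrative sketch of the idea (not the method to be developed), an image could be passed through a simple anonymizing and compressing transform before entering the memory buffer; the function below and its parameters are hypothetical, assuming block averaging to suppress fine, privacy-sensitive detail and coarse quantization to reduce storage.

```python
import numpy as np

def anonymize_and_compress(image, block=8, levels=16):
    """Illustrative transform applied before an image enters the memory:
    block averaging removes fine detail (e.g. faces, licence plates) and
    quantization compresses storage, while coarse semantics are retained."""
    h, w = image.shape[:2]
    h, w = h - h % block, w - w % block
    img = image[:h, :w].astype(np.float32)
    # Average each non-overlapping block x block patch
    pooled = img.reshape(h // block, block, w // block, block, -1).mean(axis=(1, 3))
    # Quantize to a small number of intensity levels (lossy compression)
    quantized = np.round(pooled / 255.0 * (levels - 1)).astype(np.uint8)
    return quantized  # compact, anonymized sample to store in the memory buffer
```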

School

Computer Science and Engineering

Research areas

Artificial intelligence | Machine learning | Deep learning | Computer vision

The research will be conducted at the School of CSE together with the supervision team and potential external collaborators.

The outcomes by the end of the project will include a technical report, a code package, and a video demo (explaining the project, technologies, and achievements). We plan to extend these outcomes into a technical paper targeting publication at a top venue.