Traditional static biometrics, such as face recognition, operate in three settings: (i) user identification, (ii) user verification, and (iii) open-world user recognition. User identification, the most prominent form, involves determining a user's identity from a biometric sample. In practice, user identification typically operates in an open-set setting, where the model either identifies the user or outputs 'unknown user'. In user verification, the user provides identity information (e.g., an ID or RFID tag), and the model compares the biometric sample against the stored biometric signatures corresponding to the claimed identity. Open-world user recognition generalises user identification by also allowing user re-identification.
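The distinction between open-set identification (match against all enrolled users, or reject) and verification (match against one claimed identity) can be sketched as follows. This is a minimal illustrative sketch: the gallery, user names, embedding values, and threshold are hypothetical, and a real system would use learned feature extractors rather than fixed vectors.

```python
import numpy as np

# Hypothetical gallery of enrolled users: user ID -> stored embedding.
# The names, vectors, and threshold below are illustrative only.
GALLERY = {
    "alice": np.array([0.9, 0.1, 0.0]),
    "bob":   np.array([0.0, 0.8, 0.6]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(sample, threshold=0.8):
    """Open-set identification: return the best-matching enrolled user,
    or 'unknown user' if no similarity clears the threshold."""
    best_user, best_score = None, -1.0
    for user, emb in GALLERY.items():
        score = cosine(sample, emb)
        if score > best_score:
            best_user, best_score = user, score
    return best_user if best_score >= threshold else "unknown user"

def verify(claimed_id, sample, threshold=0.8):
    """Verification: compare the sample against the stored signature
    for the claimed identity only."""
    return cosine(sample, GALLERY[claimed_id]) >= threshold
```

Note that verification is a one-to-one comparison and so scales independently of gallery size, while open-set identification is one-to-many and its error rate depends on both the threshold and the number of enrolled users.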

Behavioural biometrics are still emerging in all three of these areas, and progress requires solving two fundamental research challenges: (a) developing robust learning models that can learn from noisy, incomplete, and predominantly unlabelled data from disparate sources, and (b) developing methods for user re-identification without knowing the true identity of the users. This project aims to develop self-supervised learning-based feature extractors for multi-modal biometric data streams, supporting user identification and user verification in open-world settings.
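One common way to learn feature extractors from unlabelled multi-modal streams is a contrastive objective such as NT-Xent (used in SimCLR-style training), where two views of the same user window (e.g., two augmentations, or two sensor modalities) are pulled together in embedding space. The sketch below is an illustrative NumPy implementation of that loss, not the project's prescribed method; batch shapes and the temperature value are assumptions.

```python
import numpy as np

def nt_xent(z_a, z_b, temperature=0.5):
    """NT-Xent (normalised temperature-scaled cross-entropy) loss.

    z_a[i] and z_b[i] are embeddings of two views of the same user
    window (shape: N x d each). Every other embedding in the batch
    serves as a negative. Returns the mean loss over all 2N anchors.
    """
    # L2-normalise so dot products are cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    z = np.concatenate([z_a, z_b], axis=0)       # (2N, d)
    sim = z @ z.T / temperature                  # pairwise similarities
    np.fill_diagonal(sim, -np.inf)               # exclude self-comparisons
    n = z_a.shape[0]
    # Positive for anchor i is its paired view: i <-> i + n.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), targets].mean())
```

A lower loss for correctly paired views than for mismatched ones is what drives the encoder to produce user-discriminative features without identity labels.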

School

Computer Science and Engineering

Research Area

Machine learning | Behavioural biometrics | Cyber security

  • In this project, the student is expected to communicate with the CSE and EE supervisors for continuous guidance and supervision. The selected student will also have the opportunity to collaborate with the team at the University of Sydney. Weekly meetings will be held with the supervisors to discuss project progress. This project offers the student a chance to engage directly with cutting-edge and emerging areas of cybersecurity, such as behavioural biometrics, multimodal sensors, and machine learning, and to learn research methods applicable to defence-related applications.
  • From this project, the student, with the help of the supervisors, is expected to design and develop self-supervised learning-based feature extractors for multi-modal biometric data streams for user identification and user verification, and to produce a survey report on existing self-supervised learning methods and techniques for behavioural biometrics.
Associate Professor Gustavo Batista
Associate Lecturer Rahat Masood
Professor Aruna Seneviratne
  1. H. Chen, S. Munir, and S. Lin, “RFCam: Uncertainty-aware fusion of camera and Wi-Fi for real-time human identification with mobile devices,” Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6, no. 2, pp. 1–29, 2021.
  2. M. Singh, R. Singh, and A. Ross, “A comprehensive overview of biometric fusion,” Information Fusion, vol. 52, pp. 187–205, 2019.
  3. P. Zhang, T. Li, G. Wang, C. Luo, H. Chen, J. Zhang, D. Wang, and Z. Yu, “Multi-source information fusion based on rough set theory: A review,” Information Fusion, vol. 68, pp. 85–117, 2021.
  4. T. Meng, X. Jing, Z. Yan, and W. Pedrycz, “A survey on machine learning for data fusion,” Information Fusion, vol. 57, pp. 115–129, 2020.
  5. H. Alwassel, D. Mahajan, B. Korbar, L. Torresani, B. Ghanem, and D. Tran, “Self-supervised learning by cross-modal audio-video clustering,” Advances in Neural Information Processing Systems, vol. 33, pp. 9758–9770, 2020.