AIMS Lab Background

AIMS Lab

Human-Centered AI for Health Modeling and Sensing

Advancing AI for speech, audio, and physiological sensing in mobile and ubiquitous health.

Lead: Ting Dang, Senior Lecturer, The University of Melbourne

ting.dang@unimelb.edu.au
Google Scholar | ResearchGate | LinkedIn | Twitter | University Profile
Mobile Health - Audio and Speech Processing - Deep Learning - Affective Computing - Time Series Modelling
CV

About the Lab

The AIMS (Human-Centered AI for Health Modeling and Sensing) Lab is based at the University of Melbourne and is led by Dr. Ting Dang. Our team focuses on human-centered AI and sensing for health delivery. We develop AI models that leverage audio signals (such as speech and breathing) and physiological signals (like PPG and EEG), together with other ubiquitous and wearable sensing data, to detect and monitor health conditions. The lab collaborates closely with clinicians, industry, and interdisciplinary partners to translate algorithms into real-world impact.

Ting Dang is currently a Senior Lecturer at the University of Melbourne, actively leading research at the intersection of human-centered artificial intelligence and health sensing. Previously, she was a Senior Research Scientist at Nokia Bell Labs in the UK, a Senior Research Associate at the University of Cambridge, and a Research Associate at the University of New South Wales (UNSW), where she also earned her Ph.D.

Research Interests

  • Machine Learning in Mobile Health: Developing machine learning algorithms tailored to diverse health applications, with the aim of improving the reliability and effectiveness of ML in screening, diagnosis, and monitoring.
  • Speech and Audio Processing: Investigating advanced signal processing and machine learning techniques for speech and related applications.
  • Time Series Modelling: Advancing representation learning for time series data to address real-world challenges.
  • Trustworthy Deep Learning (DL): Improving the interpretability and generalization of DL models for more reliable health outcome predictions.
  • Wearable Sensing: Examining novel sensing opportunities for health monitoring using new forms of resource-constrained IoT wearable devices.

Join the Team

Now Recruiting
Seeking motivated PhD students to join our team!
  • Applicants with backgrounds in computer science, electrical engineering, or related areas are especially encouraged to apply. (Note: purely health-focused backgrounds are not the best fit for our current team needs.)
  • We have an exciting opening for candidates passionate about AI-driven modelling of sensor signals and sequential data. Topics include deep learning for time series, interpretable AI for health and sensing, and multimodal sensor data integration.
  • Send your CV, transcripts, and a short research proposal that aligns with our expertise.
  • CSC students and visiting scholars with relevant backgrounds are also welcome.

News

2026/03: One paper titled "Multi-Scale Diffusion for Bio-topological Representation Learning on Multimodal Brain Graphs" accepted by ACM Transactions on Intelligent Systems and Technology.

2026/01: Five papers accepted at ICASSP 2026.

2025/11: Awarded Google Fund for "Benchmarking Auditory Cognitive Reasoning in Audio-Language Models".

2025/11: Joined Editorial Board of IEEE Transactions on Affective Computing.

2025/09: Two papers and two workshop papers accepted at NeurIPS 2025.

2025/09: One paper accepted by SenSys 2026, titled 'From Cheap to Chic: Enhancing Music Playback Quality of Budget Earphones via Hardware-Aware Learning'.

2025/09: One paper accepted by IEEE Transactions on Affective Computing, titled 'How many raters do we need? Analyses of uncertainty in estimating ambiguity-aware emotion labels'.

2025/09: One paper accepted by ACM Transactions on Computing for Healthcare, titled 'Data-Efficient Psychiatric Disorder Detection via Self-supervised Learning on Frequency-enhanced Brain Networks'.

2025/08: Shortlisted as the Finalist for the Rising Star (Academics) STEM Women in Color Award 2025.

2025/08: Two papers accepted at APSIPA ASC 2025.

2025/07: Senior PC of AAAI 2026.

2025/07: Two papers accepted at UbiComp/ISWC 2025 workshops.

2025/05: Two papers accepted at INTERSPEECH 2025.

2025/04: Joined the Editorial Board of Computer Speech and Language.

2025/04: One paper titled 'Speech Emotion Recognition Via CNN Transformer and Multidimensional Attention Mechanism' is accepted by Speech Communication.

2025/03: Two US patents are granted.

2025/02: One paper titled 'SQUIREDL: Sparse Sequence-to-Sequence Uncertainty Estimation in Evidential Deep Learning' is accepted by ACM Transactions on Computing for Healthcare.

2024/12: Two papers are accepted by IEEE ICASSP 2025.

2024/12: One US patent titled 'Cancellation of Ultrasonic Signals' is granted.

2024/11: One paper titled 'Multimodal Large Language Models in Human-centered Health: Practical Insights' is accepted by IEEE Pervasive Computing.

2024/10: Served as the Area Chair for ICASSP 2024.

2024/09: One paper titled 'TinyTTA: Efficient Test-time Adaptation via Early-exit Ensembles on Edge Devices' is accepted by NeurIPS 2024.

2024/09: One paper titled 'Efficient and Personalized Mobile Health Event Prediction via Small Language Models' is accepted by MobiCom Workshop EIFCom 2024.

2024/07: One paper titled 'Emotion Recognition Systems Must Embrace Ambiguity' is accepted by ACII Satellite Workshop EASE 2024.

2024/07: One paper titled 'Exploring Large-Scale Language Models to Evaluate EEG-Based Multimodal Data for Mental Health' is accepted by UbiComp Workshop WellComp 2024.

2024/07: Invited talks at the University of New South Wales and the University of Sydney.

2024/06: One paper titled 'Dual-Constrained Dynamical Neural ODEs for Ambiguity-aware Continuous Emotion Prediction' is accepted by Interspeech 2024.

2024/05: Joined the Editorial Board of IEEE Pervasive Computing.

2024/04: Co-organizing WellComp workshop at UbiComp 2024.

2024/03: Co-chairing industry perspectives at MobileHCI 2024.

2024/03: One paper titled "An evaluation of heart rate monitoring with in-ear microphones under motion" is accepted by Pervasive and Mobile Computing.

2024/01: One paper titled "Uncertainty-aware Health Diagnostics via Class-balanced Evidential Deep Learning" is accepted by IEEE Journal of Biomedical and Health Informatics (J-BHI).

2023/10: One review paper titled "Human-centered AI for mobile health sensing: challenges and opportunities" is accepted by Royal Society Open Science!

2023/09: Best paper award from ACII 2023!

2023: Papers accepted by INTERSPEECH, ICASSP, ACII, KDD, IMWUT, JMIR, Speech Communication, etc!