Ting Dang

Senior Lecturer at University of Melbourne, Australia
ting.dang@unimelb.edu.au
Google Scholar - ResearchGate - LinkedIn - Twitter
Mobile Health - Audio and Speech Processing
Deep Learning - Affective Computing - Time Series Modelling

I am a Senior Lecturer at the University of Melbourne and a Visiting Fellow at the University of New South Wales (UNSW). Prior to this, I worked as a Senior Research Scientist at Nokia Bell Labs (UK), a Senior Research Associate at the University of Cambridge, and a Research Associate at UNSW, Australia, where I received my Ph.D. My primary research interests lie in exploring the potential of audio signals (e.g., speech) via mobile and wearable sensing for automatic mental state (e.g., emotion, depression) prediction and disease (e.g., COVID-19) detection and monitoring. Further, my work aims to develop generalised, interpretable, and robust machine learning models to improve healthcare delivery. I have served as a program committee member and reviewer for over 30 top-tier journals and conferences, including AAAI, IJCAI, UbiComp, ICASSP, IEEE TAC, IEEE TASLP, JASA, and JMIR. I have won an ACII Best Paper Award, an ICASSP Top 3% Paper Award, the Asian Dean's Forum Rising Star Women in Engineering Award 2022, and an IEEE Early Career Writing Retreat Grant 2019.

Hiring

I am looking for proactive PhD students with a solid foundation in Computer Science or Electrical Engineering. If you are interested in exploring Mobile Health, Artificial Intelligence, and Speech/Audio Processing, feel free to contact me and include your CV. Full scholarships are available.

Research Interests

My research interests lie in human-centred audio sensing and machine learning for mobile health monitoring. This work explores the potential of audio signals (e.g., speech, cough) via mobile and wearable sensing for automatic mental state prediction and disease detection and monitoring (e.g., emotion, depression, COVID-19), and develops generalised, interpretable, and robust deep learning models to improve healthcare delivery. Specifically, it includes:
Machine learning in mobile health: exploring the potential and challenges of mobile technologies for health monitoring.
Speech and Audio Processing: investigating advanced signal processing techniques and potential novel applications using speech and audio signals.
Trustworthy Deep Learning (DL): improving the interpretability and generalization in DL for more reliable health outcome predictions.
Wearable Sensing: examining novel sensing opportunities for fitness and well-being monitoring with new forms of resource-constrained IoT wearable devices.

News

2024/07: One paper titled 'Emotion Recognition Systems Must Embrace Ambiguity' is accepted by EASE co-located with ACII 2024.

2024/07: One paper titled 'StatioCL: Contrastive Learning for Time Series via Non-Stationary and Temporal Contrast' is accepted by CIKM 2024.

2024/07: One paper titled 'Exploring Large-Scale Language Models to Evaluate EEG-Based Multimodal Data for Mental Health' is accepted by WellComp 2024 in conjunction with UbiComp 2024.

2024/07: Served on the program committee for the EASE Satellite Workshop at ACII 2024.

2024/07: Invited talks at the University of New South Wales and the University of Sydney.

2024/06: One paper titled 'Dual-Constrained Dynamical Neural ODEs for Ambiguity-aware Continuous Emotion Prediction' is accepted by Interspeech 2024.

2024/05: Joined the Editorial Board of IEEE Pervasive Computing.

2024/04: Co-organizing WellComp workshop at UbiComp 2024.

2024/03: Co-chairing Industry Perspectives at MobileHCI 2024.

2024/03: One paper titled "An evaluation of heart rate monitoring with in-ear microphones under motion" is accepted by Pervasive and Mobile Computing.

2024/01: One paper titled "Uncertainty-aware Health Diagnostics via Class-balanced Evidential Deep Learning" is accepted by IEEE Journal of Biomedical and Health Informatics (J-BHI).

2023/12: One paper is accepted by HotMobile 2024!

2023/12: Two papers accepted by ICASSP 2024!

2023/10: One review paper titled "Human-centered AI for mobile health sensing: challenges and opportunities" is accepted by Royal Society Open Science!

2023/10: Two papers are accepted by IMWUT!

2023/09: Best paper award from ACII 2023!

2023/08: One paper is accepted by Speech Communication!

2023/07: We will be giving a tutorial on "Multimodal wearable eye and audio for affect analysis" at ACII at the MIT Media Lab this September and at ICMI in Paris this October!

2023/06: Our paper has been recognized as Top 3% at ICASSP 2023!

2023/06: Papers accepted by INTERSPEECH, ICASSP, ACII, KDD, JMIR, etc!

2023/04: Co-organizing WellComp Workshop 2023 in conjunction with UbiComp!

2023/03: Social media co-chair for INTERSPEECH 2026!

2022/08: Coverage of our recent JMIR paper by the University of Cambridge Department of Computer Science and Technology!

2022: Papers accepted by JMIR, ICASSP, INTERSPEECH, TSRML2022 in NeurIPS, HotMobile, PerCom, etc!

2021: Papers accepted by NeurIPS, npj Digital Medicine, Frontiers in Computer Science, INTERSPEECH, etc!


Research Projects

Audio-based Mobile Health Diagnosis
Machine learning (ML) for respiratory disease tracking

This project explores the potential of audio signals for respiratory disease detection and tracking, specifically COVID-19 detection and progression prediction. By analysing changes in audio over time, our system can reliably predict and forecast an individual's disease progression over the following weeks. This opens a new pathway for audio-based remote monitoring of respiratory disease.
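To make the sequential-modelling idea concrete, here is a minimal, hypothetical sketch (my own simplified example, not the published pipeline): per-recording audio embeddings are passed through a GRU, and the final hidden state is mapped to a progression score for the following time step. The feature dimension, hidden size, and output head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ProgressionForecaster(nn.Module):
    """Toy longitudinal model: one audio embedding per recording session,
    a GRU over the sequence, and a score forecast for the next session.
    All dimensions are illustrative, not values from the published work."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, num_recordings, feat_dim), ordered by time
        _, h = self.encoder(x)
        return torch.sigmoid(self.head(h[-1]))  # e.g., probability of worsening

# Toy usage: 4 participants, 5 longitudinal recordings each, 128-d features
model = ProgressionForecaster()
scores = model(torch.randn(4, 5, 128))
```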

Speech-based Emotion/Mental Health Recognition
Designing and implementing machine learning (ML) algorithms for speech emotion/mental health recognition

This project aims to develop systems that automatically detect human emotional or mental states, such as distress or depression levels, from speech signals. Such systems could serve as diagnostic aids in clinical settings and be applied in diverse scenarios such as customer service, autonomous driving, and international negotiations.

Earable sensing
Continuous monitoring of vital signs

With the rapid adoption of in-ear wearables in daily life, earables offer a new platform for non-invasive and continuous measurement of biomarkers associated with an individual's health status. One line of work focuses on using earables for heart rate monitoring under motion, which shows advantages over conventional photoplethysmography (PPG)-based heart rate estimation during physical activity (specifically while walking and running). We further aim to explore new possibilities for earable sensing and modelling in healthcare.
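As a simplified illustration (my own sketch, not the method from the Pervasive and Mobile Computing paper), heart rate can be roughly estimated from an in-ear audio segment by band-pass filtering around the low-frequency heart-sound band, taking the envelope, and locating the dominant periodicity; the cut-off frequencies and search range below are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(audio, fs, lo=20.0, hi=80.0):
    """Rough heart-rate estimate (bpm) from an in-ear audio segment.
    Band-pass the low-frequency heart-sound band, take the envelope,
    then pick the strongest autocorrelation peak in a plausible bpm range."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    envelope = np.abs(filtfilt(b, a, audio))
    envelope -= envelope.mean()
    ac = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]
    lags = np.arange(int(fs * 60 / 180), int(fs * 60 / 40))  # 40-180 bpm
    best_lag = lags[np.argmax(ac[lags])]
    return 60.0 * fs / best_lag

# e.g., bpm = estimate_heart_rate(in_ear_segment, fs=4000)  # a few seconds of audio
```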

Biologically-inspired auditory system modelling
Knowledge-inspired and data-driven DL integration

Mathematical models of the cochlea simulate how the human ear processes sound, which can deepen our understanding of the human auditory system. While most audio and speech tasks currently employ a black-box front-end for feature extraction, it remains unclear how to incorporate prior knowledge of the human auditory system into commonly used deep learning approaches. This project aims to model the human auditory system and integrate it with the data-driven DL paradigm, serving as a more generalised front-end for audio and speech tasks.
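The sketch below illustrates one possible realisation of this idea (an assumption-laden example of mine, not a published model): a fixed, gammatone-like filterbank, a simple stand-in for a cochlear model, is used as the first convolutional layer of a network and can optionally be fine-tuned with the rest of the model. Centre frequencies, filter order, and the back-end are all illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

def gammatone_kernels(num_filters=32, fs=16000, kernel_len=512):
    """4th-order gammatone impulse responses at linearly spaced centre
    frequencies (a more faithful cochlear model would use ERB spacing)."""
    t = np.arange(kernel_len) / fs
    cfs = np.linspace(100, 7000, num_filters)
    kernels = []
    for f in cfs:
        erb = 24.7 * (4.37 * f / 1000 + 1)          # equivalent rectangular bandwidth
        g = t**3 * np.exp(-2 * np.pi * 1.019 * erb * t) * np.cos(2 * np.pi * f * t)
        kernels.append(g / np.max(np.abs(g)))
    return torch.tensor(np.stack(kernels), dtype=torch.float32).unsqueeze(1)

class AuditoryFrontEnd(nn.Module):
    """Cochlea-inspired filterbank front-end feeding a small CNN classifier."""
    def __init__(self, num_filters=32, num_classes=2, trainable=False):
        super().__init__()
        self.fb = nn.Conv1d(1, num_filters, kernel_size=512, stride=160, bias=False)
        self.fb.weight = nn.Parameter(gammatone_kernels(num_filters),
                                      requires_grad=trainable)
        self.backend = nn.Sequential(
            nn.Conv1d(num_filters, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, wav):                        # wav: (batch, 1, samples)
        feats = torch.log1p(self.fb(wav).abs())    # compressive nonlinearity
        return self.backend(feats)

# logits = AuditoryFrontEnd()(torch.randn(2, 1, 16000))  # 1 s of 16 kHz audio
```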


Selected Publications

* represents equal contributions
2024
Y Hu, S Zhang, T Dang, H Jia, FD. Salim, W Hu, and AJ. Quigley
Exploring Large-Scale Language Models to Evaluate EEG-Based Multimodal Data for Mental Health, WellComp co-located with UbiComp 2024

J Wu, T Dang, V Sethu, and E Ambikairajah
Dual-Constrained Dynamical Neural ODEs for Ambiguity-aware Continuous Emotion Prediction, INTERSPEECH 2024

Y Wu, T Dang, D Spathis, H Jia, C Mascolo
StatioCL: Contrastive Learning for Time Series via Non-Stationary and Temporal Contrast, ACM International Conference on Information and Knowledge Management (CIKM) 2024

I Shahid, K Al-Naimi, T Dang, Y Liu, F Kawsar, A Montanari
Towards Enabling DPOAE Estimation on Single-Speaker Earbuds, ICASSP 2024

Z Nan, T Dang, V Sethu, B Ahmed
Variational connectionist temporal classification for order-preserving sequence modeling, ICASSP 2024

J Romero, A Ferlini, D Spathis, T Dang, K Farrahi, F Kawsar, A Montanari
OptiBreathe: An Earable-based PPG System for Continuous Respiration Rate, Breathing Phase, and Tidal Volume Monitoring, HotMobile 2024

T Xia, T Dang, J Han, L Qendro, C Mascolo
Uncertainty-aware Health Diagnostics via Class-balanced Evidential Deep Learning, IEEE Journal of Biomedical and Health Informatics

D Ma, T Dang, M Ding, R Balan
ClearSpeech: Improving Voice Quality of Earbuds Using Both In-Ear and Out-Ear Microphones, UbiComp, 2024

BU Demirel, T Dang, K Al-Naimi, F Kawsar, A Montanari
Unobtrusive Air Leakage Estimation for Earables with In-ear Microphones, UbiComp, 2024
2023

Wu, J., Dang, T., Sethu, V., and Ambikairajah, E.
Belief Mismatch Coefficient (BMC): A Novel Interpretable Measure of Prediction Accuracy for Ambiguous Emotion States, Affective Computing and Intelligent Interaction (ACII), 2023. 🏆 Best Paper Award

Dang, T., Han, J.*, Xia, T.*, Bondareva, E., Brown, C., Chauhan, J., Grammenos, A., Spathis, D., Cicuta, P., and Mascolo, C.
Conditional Neural ODE Processes for Individual Disease Progression Forecasting: A Case Study on COVID-19, ACM SIGKDD on Knowledge Discovery and Data Mining (KDD) 2023. [Promotion video]

B. Wickramasinghe, E. Ambikairajah, V. Sethu, J. Epps, H. Li, T. Dang
EDNN controlled adaptive front-end for replay attack detection systems, Speech Communication, 154, 102973, 2023.

J. Wu, T. Dang, V. Sethu, E. Ambikairajah.
From Interval to Ordinal: A HMM based Approach for Emotion Label Conversion, Interspeech 2023.

Dang, T., Dimitriadis, A., Wu, J., Sethu, V., and Ambikairajah, E.
Constrained dynamical neural ODE for time series modelling: A case study on continuous emotion prediction, ICASSP, 2023. [Poster]
🏆 Top 3% paper award

J. Han, M. Montagna, A. Grammenos, T. Xia, E. Bondareva, C. Brown, J. Chauhan, T. Dang, D. Spathis, A. Floto, P. Cicuta, and C. Mascolo.
Evaluating Listening Performance for COVID-19 Detection between Clinicians and Machine Learning: A Comparative Study, Journal of Medical Internet Research, 2023

J. Wu, T. Dang, V. Sethu, and E. Ambikairajah.
Multimodal Affect Models: An investigation of relative salience of audio and visual cues for emotion prediction, Frontiers in Computer Science, 2021.

C. Hu, X. Ma, D. Ma, T. Dang
Lightweight and Non-invasive User Authentication on Earables, HotMobile 2023.

2022 and before

T. Dang, J. Han, T. Xia, D. Spathis, E. Bondareva, C. Brown, J. Chauhan, A. Grammenos, A. Hasthanasombat, A. Floto, P. Cicuta, and C. Mascolo.
Exploring longitudinal cough, breath, and voice data for COVID-19 progression prediction via sequential deep learning: model development and validation, Journal of Medical Internet Research, 2022
🏆 Media Coverage

T. Xia, J. Han, L. Qendro, T. Dang, and C. Mascolo.
Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data, TSRML in NeurIPS, 2022.

T. Dang, T. Quinnell, and C. Mascolo.
Exploring Semi-supervised Learning for Audio-based COVID-19 Detection using FixMatch, INTERSPEECH 2022.

J. Wu, T. Dang, V. Sethu, J. Epps, E. Ambikairajah.
A Novel Sequential Monte Carlo Framework for Predicting Ambiguous Emotion States, ICASSP 2022.

Han, J., Xia, T., Spathis, D., Bondareva, E., Brown, C., Chauhan, J., Dang, T., Grammenos, A., Hasthanasombat, A., Floto, A., Cicuta, P., and Mascolo, C.
Sounds of COVID-19: exploring realistic performance of audio-based digital testing, npj Digital Medicine, 2022. 🏆 Media Coverage

T. Xia, J. Han, L. Qendro, T. Dang, and C. Mascolo
Uncertainty-Aware COVID-19 Detection from Imbalanced Sound Data, Interspeech 2021.

D. B., T. Dang, V. Sethu, E. Ambikairajah, and S. Fernando
A Novel Bag-of-Optimised-Clusters Front-End for Speech based Continuous Emotion Prediction, Affective Computing and Intelligent Interaction (ACII), 2019

A. Ouyang, T. Dang, V. Sethu, and E. Ambikairajah
Speech Based Emotion Prediction: Can a Linear Model Work?, Interspeech 2019

T. Dang, V. Sethu, and E. Ambikairajah.
Compensation techniques for speaker variability in continuous emotion prediction, IEEE Transactions on Affective Computing, 2018.

T. Dang, V. Sethu, and E. Ambikairajah
Dynamic multi-rater Gaussian Mixture Regression incorporating temporal dependencies of emotion uncertainty using Kalman filters, ICASSP 2018.

T. Dang, V. Sethu, J. Epps, and E. Ambikairajah
An investigation of Emotion Prediction Uncertainty Using Gaussian Mixture Regression, Interspeech 2017

T. Dang, B. Stasak, Z. Huang, S. Jayawardena, M. Atcheson, M. Hayat, P. Le, V. Sethu, R. Goecke, and J. Epps
Investigating Word affect Features and Fusion of Probabilistic Predictions Incorporating Uncertainty in AVEC 2017, the 7th Annual Workshop on Audio/Visual Emotion Challenge, ACM Multimedia, 2017

T. Dang, V. Sethu, and E. Ambikairajah
Factor Analysis Based Speaker Normalisation for Continuous Emotion Prediction, Interspeech 2016

Z. Huang, T. Dang, N. Cummins, B. Stasak, P. Le, V. Sethu, and J. Epps
An investigation of annotation delay compensation and output-associative fusion for multi-modal continuous emotion prediction, the 5th International Workshop on Audio/Visual Emotion Challenge, ACM Multimedia, 2015


People

Current students
PhD
  • Jingyao Wu (2020-), University of New South Wales (UNSW), co-supervision with Vidhyasaharan Sethu and Eliathamby Ambikairajah. [→ Postdoc Fellowship at MIT, US]
  • Nan Zheng (2021-), UNSW, joint-supervised with Vidhyasaharan Sethu and Beena Ahmed.
  • Yu Wu (2022-), University of Cambridge, mentoring with Prof. Cecilia Mascolo.
Masters
  • Feixiang Zheng (2024-), University of Melbourne.
  • Xuanang Li (2024-), University of Melbourne.
  • Jiaheng Dong (2024-), University of Melbourne.
  • Xin Hong (2024-), University of Melbourne.
  • Jule Valendo Halim (2024-), University of Melbourne.
Past students
PhD mentoring/internship
  • Xijia (Simon) Wei (2023), internship at Nokia Bell Labs UK, University College London
  • Tong Xia (2021-2023), University of Cambridge, mentoring with Prof. Cecilia Mascolo. [→ Postdoc at University of Cambridge]
  • Kayla Butkow (2021-2022), University of Cambridge, project mentoring with Prof. Cecilia Mascolo.
  • Sotirios Vavaroutas (2022-2023), University of Cambridge, project mentoring with Prof. Cecilia Mascolo.
Master and Bachelor
  • Thomas Quinnell (2021), University of Cambridge. [→ Software engineering at Avos]
  • Haobing Zhu (2020), UNSW
  • Yang Yu (2020), UNSW
  • Jinhao Gu (2020), UNSW [→ PhD at University of Liverpool]
  • Anubhuti Gupta (2020), UNSW.
  • Anda Ouyang (2018), UNSW. [→ PhD at UNSW ]
  • Mo Li (2018), UNSW.


Teaching

  • Introduction to Machine Learning (COMP90049), 2024 (~300 students, postgraduate course)
  • Speech Processing and Machine Learning, 2017-2019 (~25 students, postgraduate course)
  • Strategic Leadership & Ethics, 2019 (~20 students; postgraduate course)
  • Digital Signal Processing, 2017-2019 (~15-40 students)
  • Electrical Circuits, 2016-2017 (~80 students)
  • Design Proficiency, 2018-2019 (~60 students)


Achievements

Invited Talks

  • Talk on 'Machine learning for mobile health via audio' at School of Biomedical Engineering, University of Sydney, Australia, 2024.
  • Talk on 'AI for mobile health' at School of Computer Science and Engineering, University of New South Wales, Australia, 2024.
  • Talk on 'Machine learning for mobile health via audio' at South China Normal University, China, 2023.
  • Talk on 'COVID-19 Disease Progression Prediction and Forecasting via Audio: A Longitudinal Study' by Women@CL at the University of Cambridge, UK, 2022.
  • Talk on 'Computational modeling of ambiguous emotion' in AFAR Lab at the University of Cambridge, 2022.
  • Talk on 'Machine Learning in Mobile Health via Audio: bridging the gap between AI and healthcare' in UCLIC at the University College London, 2022.
  • Talk on 'Speech-based Emotion Prediction' at Tsinghua University, 2020

Selected Honors

  • Shortlisted candidate for the Asian Dean's Forum 2022 Rising Stars - Women in Engineering.
  • Distinguished reviewer award for IEEE Transactions on Affective Computing, 2019
  • Outstanding reviewer award for Expert Systems With Applications Elsevier, 2018
  • IEEE Early Career Writing Retreat Grant, 2019
  • Tuition Fee Scholarship (TFS) plus a Research Stipend from UNSW, 2014-2018
  • Top-up Scholarship from Data61, CSIRO, Australia, 2014-2018
  • ISCA (International Speech Communication Association) Grant, Interspeech, Stockholm, 2017
  • Highly Commended Presentation (final 6 presentations) of Postgraduate Research Symposium, UNSW, 2017
  • 2nd rank in the Audio/Visual Emotion Challenge (AVEC) workshop, ACM Multimedia, 2015
  • Excellent Bachelor Graduation Thesis Award of NWPU, 2012


Services

Workshop and tutorial organisation

  • Co-organiser, WellComp Workshop in conjunction with UbiComp, 2023 and 2024
  • Tutorial on multimodal wearable eye and audio for affect analysis, ACII 2023 and ICMI 2023

Editor

  • IEEE Pervasive Computing

Program Committee and Reviewer

  • NeurIPS, AAAI, IJCAI, ICASSP, INTERSPEECH, ACII, etc.
  • IEEE TAC, IEEE TASLP, IMWUT, JMIR, JASA, Scientific Reports, Computer Speech & Language, Expert Systems with Applications, etc.


Experience

Senior Lecturer, 2024-Present
University of Melbourne, Australia
Senior Research Scientist, 2023-2024
Nokia Bell Labs, UK
Senior Research Associate, 2021-2023
University of Cambridge, UK
Research Associate, 2018-2020
University of New South Wales, Australia

Education

PhD, 2014-2018
University of New South Wales, Australia
MEng, 2012-2015
Northwestern Polytechnical University, China
BEng, 2008-2012
Northwestern Polytechnical University, China

PhD Hiring

Join our School of Computing and Information Systems (CIS) at the University of Melbourne, Australia! We are seeking motivated and passionate individuals to embark on a challenging and rewarding PhD journey!

Research area
  • AI in health; speech and audio processing; deep learning; time series modelling; wearable sensing
University of Melbourne

The University of Melbourne is ranked 13th worldwide in the QS Ranking 2025 and is the top-ranked university in Australia. The School of CIS is an international leader in research and education, focused on four key areas: artificial intelligence, computer science, human-computer interaction, and information systems. You will have the opportunity to collaborate with many excellent minds within the school and to engage with prestigious universities such as the University of Cambridge, as well as industry partners including Bell Labs and Samsung.

Scholarship
  • Full scholarship is available: a tuition fee offset, living allowance ($37,000 per year) and relocation grant.
How to apply
  • The closest round of scholarship applications closes on 24 Oct 2024. Details can be found on this website.
  • The expected starting date is around the beginning of 2025.
Requirements
  • Master's or Bachelor's (Honours) degree with first-class honours.
  • Strong background in computer science, electrical engineering, or other related areas.
  • Relevant research experience or publications in well-known conferences or journals are essential.
  • GPA must be an overall H1 (80–100%) grade in your undergraduate or master's degree, or you must be in the top 5% of your graduating cohort.
  • Good communication and English language skills (IELTS 6.5/TOEFL 79/PTE 58/CAE 176).
  • Proficiency in deep learning and audio processing is crucial, while experience in wearable sensing and signal processing will be a plus.
  • Excellent analytical and software engineering skills with proficiency in Python.
  • Experience in machine learning algorithms using PyTorch/TensorFlow, etc.
  • Qualities of being self-motivated, a critical thinker, and a team player.
Contact
  • If you are interested, please email ting.dang@unimelb.edu.au with your CV, transcripts, and/or other related documents. Please be aware that due to the high volume of inquiries, I may not be able to respond to all emails.
  • Please take a look at my recent publications to identify any overlapping interests. Submitting a rough research proposal or project ideas that align with these interests would be very helpful.