I am a Senior Lecturer at the University of Melbourne and a Visiting Fellow at the University of New South Wales. Prior to this, I worked as a Senior Research Scientist at Nokia Bell Labs (UK), a Senior Research Associate at the University of Cambridge, and a Research Associate at the University of New South Wales (UNSW), Australia, where I received my Ph.D. My primary research interest is in exploring the potential of audio signals (e.g., speech) via mobile and wearable sensing for automatic mental state prediction (e.g., emotion, depression) and disease detection and monitoring (e.g., COVID-19). Further, my work aims to develop generalised, interpretable, and robust machine learning models to improve healthcare delivery. I have served on the program committees of, and as a reviewer for, over 30 top-tier journals and conferences, including AAAI, IJCAI, UbiComp, ICASSP, IEEE TAC, IEEE TASLP, JASA, and JMIR. My awards include the ACII Best Paper Award, an ICASSP Top 3% Paper recognition, the Asian Dean's Forum Rising Star Women in Engineering Award 2022, and an IEEE Early Career Writing Retreat Grant 2019.
I am looking for proactive PhD students with a solid foundation in Computer Science or Electrical Engineering. If you are interested in exploring the expansive fields of Mobile Health, Artificial Intelligence, and Speech/Audio Processing, feel free to contact me with your CV. Full scholarships are available.
My research focuses on human-centred audio sensing and machine learning for mobile health monitoring. It explores the potential of audio signals (e.g., speech, cough) via mobile and wearable sensing for automatic mental state prediction (e.g., emotion, depression) and disease detection and monitoring (e.g., COVID-19), and develops generalised, interpretable, and robust deep learning models to improve healthcare delivery. Specifically, it includes:
Machine learning in mobile health: exploring the potential and challenges of mobile technologies for health monitoring.
Speech and Audio Processing: investigating advanced signal processing techniques and potential novel applications using speech and audio signals.
Trustworthy Deep Learning (DL): improving interpretability and generalisation in DL for more reliable health outcome predictions.
Wearable Sensing: examining novel sensing opportunities for fitness and well-being monitoring with new forms of resource-constrained IoT wearable devices.
2024/07: One paper titled 'Emotion Recognition Systems Must Embrace Ambiguity' is accepted by EASE co-located with ACII 2024.
2024/07: One paper titled 'StatioCL: Contrastive Learning for Time Series via Non-Stationary and Temporal Contrast' is accepted by CIKM 2024.
2024/07: One paper titled 'Exploring Large-Scale Language Models to Evaluate EEG-Based Multimodal Data for Mental Health' is accepted by WellComp 2024 in conjunction with UbiComp 2024.
2024/07: Served on the program committee for the EASE Satellite Workshop at ACII 2024.
2024/07: Invited talks at the University of New South Wales and the University of Sydney.
2024/06: One paper titled 'Dual-Constrained Dynamical Neural ODEs for Ambiguity-aware Continuous Emotion Prediction' is accepted by Interspeech 2024.
2024/05: Joined the Editorial Board of IEEE Pervasive Computing.
2024/04: Co-organizing WellComp workshop at UbiComp 2024.
2024/03: Co-chairing the industry perspectives track at MobileHCI 2024.
2024/03: One paper titled "An evaluation of heart rate monitoring with in-ear microphones under motion" is accepted by Pervasive and Mobile Computing.
2024/01: One paper titled "Uncertainty-aware Health Diagnostics via Class-balanced Evidential Deep Learning" is accepted by IEEE Journal of Biomedical and Health Informatics (J-BHI).
2023/12: One paper is accepted by HotMobile 2024!
2023/12: Two papers accepted by ICASSP 2024!
2023/10: One review paper titled "Human-centered AI for mobile health sensing: challenges and opportunities" is accepted by Royal Society Open Science!
2023/10: Two papers are accepted by IMWUT!
2023/09: Best paper award from ACII 2023!
2023/08: One paper is accepted by Speech Communication!
2023/07: We will be giving a tutorial on "Multimodal wearable eye and audio for affect analysis" at ACII at the MIT Media Lab this September and at ICMI in Paris this October!
2023/06: Our paper has been recognized as Top 3% at ICASSP 2023!
2023/06: Papers accepted by INTERSPEECH, ICASSP, ACII, KDD, JMIR, etc!
2023/04: Co-organizing WellComp Workshop 2023 in conjunction with UbiComp!
2023/03: Social media co-chair for INTERSPEECH 2026!
2022/08: Our recent JMIR paper was covered by the University of Cambridge Department of Computer Science and Technology!
2022: Papers accepted by JMIR, ICASSP, INTERSPEECH, TSRML2022 in NeurIPS, HotMobile, PerCom, etc!
2021: Papers accepted by NeurIPS, npj Digital Medicine, Frontiers in Computer Science, INTERSPEECH, etc!
Wu, J., Dang, T., Sethu, V., and Ambikairajah, E.
Belief Mismatch Coefficient (BMC): A Novel Interpretable Measure of Prediction Accuracy for Ambiguous Emotion States,
Affective Computing and Intelligent Interaction (ACII), 2023.
🏆 Best paper award
Dang, T., Han, J.*, Xia, T.*, Bondareva, E., Brown, C., Chauhan, J., Grammenos, A., Spathis, D., Cicuta, P., and Mascolo, C.
Conditional Neural ODE Processes for Individual Disease Progression Forecasting: A Case Study on COVID-19,
ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2023.
[Promotion video]
B. Wickramasinghe, E. Ambikairajah, V. Sethu, J. Epps, H. Li, T. Dang
EDNN controlled adaptive front-end for replay attack detection systems,
Speech Communication, 154, 102973, 2023.
J. Wu, T. Dang, V. Sethu, E. Ambikairajah.
From Interval to Ordinal: An HMM-based Approach for Emotion Label Conversion, Interspeech 2023.
Dang, T., Dimitriadis, A., Wu, J., Sethu, V., and Ambikairajah, E.
Constrained Dynamical Neural ODE for Time Series Modelling: A Case Study on Continuous Emotion Prediction,
ICASSP, 2023.
[Poster]
🏆 Top 3% paper award
J. Han, M. Montagna, A. Grammenos, T. Xia, E. Bondareva, C. Brown, J. Chauhan, T. Dang, D. Spathis, A. Floto, P. Cicuta, and C. Mascolo.
Evaluating Listening Performance for COVID-19 Detection between Clinicians and Machine Learning: A Comparative Study,
Journal of Medical Internet Research, 2023.
J. Wu, T. Dang, V. Sethu, and E. Ambikairajah.
Multimodal Affect Models: An investigation of relative salience of audio and visual cues for emotion prediction, Frontiers in Computer Science, 2021.
C. Hu, X. Ma, D. Ma, T. Dang
Lightweight and Non-invasive User Authentication on Earables, HotMobile 2023.
T. Dang, J. Han, T. Xia, D. Spathis, E. Bondareva, C. Brown, J. Chauhan, A. Grammenos, A. Hasthanasombat, A. Floto, P. Cicuta, and C. Mascolo.
Exploring longitudinal cough, breath, and voice data for COVID-19 progression prediction via sequential deep learning: model development and validation,
Journal of Medical Internet Research, 2023.
🏆 Media Coverage
T. Xia, J. Han, L. Qendro, T. Dang, and C. Mascolo.
Hybrid-EDL: Improving Evidential Deep Learning for Uncertainty Quantification on Imbalanced Data, TSRML in NeurIPS, 2022.
T. Dang, T. Quinnell, and C. Mascolo.
Exploring Semi-supervised Learning for Audio-based COVID-19 Detection using FixMatch, INTERSPEECH 2022.
J. Wu, T. Dang, V. Sethu, J. Epps, E. Ambikairajah.
A Novel Sequential Monte Carlo Framework for Predicting Ambiguous Emotion States, ICASSP 2022.
Han, J., Xia, T., Spathis, D., Bondareva, E., Brown, C., Chauhan, J., Dang, T., Grammenos, A., Hasthanasombat, A., Floto, A. and Cicuta, P., Mascolo, C.
Sounds of COVID-19: exploring realistic performance of audio-based digital testing, npj Digital Medicine, 2021.
🏆 Media Coverage
T. Xia, J. Han, L. Qendro, T. Dang, and C. Mascolo
Uncertainty-Aware COVID-19 Detection from Imbalanced Sound Data, Interspeech 2021.
D. B., T. Dang, V. Sethu, E. Ambikairajah, and S. Fernando
A Novel Bag-of-Optimised-Clusters Front-End for Speech-based Continuous Emotion Prediction, Affective Computing and Intelligent Interaction (ACII), 2019.
A. Ouyang, T. Dang, V. Sethu, and E. Ambikairajah
Speech Based Emotion Prediction: Can a Linear Model Work?, Interspeech 2019
T. Dang, V. Sethu, and E. Ambikairajah.
Compensation techniques for speaker variability in continuous emotion prediction, IEEE Transactions on Affective Computing, 2018.
T. Dang, V. Sethu, and E. Ambikairajah
Dynamic multi-rater Gaussian Mixture Regression incorporating temporal dependencies of emotion uncertainty using Kalman filters, ICASSP 2018.
T. Dang, V. Sethu, J. Epps, and E. Ambikairajah
An investigation of Emotion Prediction Uncertainty Using Gaussian Mixture Regression, Interspeech 2017
T. Dang, B. Stasak, Z. Huang, S. Jayawardena, M. Atcheson, M. Hayat, P. Le, V. Sethu, R. Goecke, and J. Epps
Investigating Word Affect Features and Fusion of Probabilistic Predictions Incorporating Uncertainty in AVEC 2017, the 7th Annual Workshop on Audio/Visual Emotion Challenge, ACM Multimedia, 2017.
T. Dang, V. Sethu, and E. Ambikairajah
Factor Analysis Based Speaker Normalisation for Continuous Emotion Prediction, Interspeech 2016
Z. Huang, T. Dang, N. Cummins, B. Stasak, P. Le, V. Sethu, and J. Epps
An investigation of annotation delay compensation and output-associative fusion for multi-modal continuous emotion prediction, the 5th International Workshop on Audio/Visual Emotion Challenge, ACM Multimedia, 2015
Join our School of Computing and Information Systems (CIS) at the University of Melbourne, Australia! We are seeking motivated and passionate individuals to embark on a challenging and rewarding PhD journey!
The University of Melbourne is ranked 13th worldwide in the QS World University Rankings 2025 and is the top-ranked university in Australia. The School of CIS is an international leader in research and education, focused on four key areas: artificial intelligence, computer science, human-computer interaction, and information systems. You will have the opportunity to collaborate with many excellent minds within the school and to engage with prestigious universities such as the University of Cambridge, as well as industry partners including Bell Labs and Samsung.