ETON Talks

ETON Talk # 1

Quantum Machine Learning: An Interplay Between Quantum Computing and Machine Learning

April 9, 2025 (Wednesday) | 12:00pm - 01:00pm

Samuel Yen-Chi Chen
Senior Research Scientist at Wells Fargo

Abstract:  Quantum Machine Learning (QML) represents an exciting frontier where the power of quantum computing meets the versatility of traditional machine learning. This talk will explore how QML leverages the unique principles of quantum physics to potentially revolutionize machine learning, while also using machine learning to push the boundaries of quantum computing research. We will introduce the role of variational quantum circuits (VQC) in designing QML architectures for noisy intermediate-scale quantum (NISQ) devices and share key insights from our recent research. Additionally, I will discuss the role of AI in quantum computing, highlighting the symbiotic relationship between these fields. Finally, we’ll look toward the future, considering both the opportunities and challenges that lie ahead for QML research.
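For readers new to the topic, the minimal sketch below illustrates the basic variational quantum circuit (VQC) pattern the abstract refers to: classical inputs encoded as rotation angles, a trainable rotation-plus-entanglement layer, and Pauli-Z expectation values read out as the layer's output. It is illustrative only, not code from the talk, and assumes the open-source PennyLane library with its default.qubit simulator; the names vqc, n_qubits, and the layer layout are arbitrary choices for the example.

    # Minimal VQC sketch (illustrative only; assumes PennyLane, not code from the talk).
    import pennylane as qml
    from pennylane import numpy as np

    n_qubits = 4
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def vqc(inputs, weights):
        # Angle encoding: classical features become single-qubit rotation angles.
        for i in range(n_qubits):
            qml.RY(inputs[i], wires=i)
        # Variational layer: trainable rotations followed by an entangling CNOT ring.
        for i in range(n_qubits):
            qml.Rot(weights[i, 0], weights[i, 1], weights[i, 2], wires=i)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
        # Pauli-Z expectation values serve as the quantum layer's output.
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    inputs = np.random.uniform(0, np.pi, n_qubits)
    weights = np.random.uniform(0, 2 * np.pi, (n_qubits, 3), requires_grad=True)
    print(vqc(inputs, weights))

In QML architectures for NISQ devices, a layer like this is typically trained by a classical optimizer that updates the rotation weights from the measured expectation values.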

Biography: Dr. Samuel Yen-Chi Chen received the B.S. and Ph.D. degrees in physics and the M.D. degree in medicine from National Taiwan University, Taipei City, Taiwan. He is now a senior research scientist at Wells Fargo Bank. Prior to that, he was an assistant computational scientist in the Computational Science Initiative at Brookhaven National Laboratory. He was the first to use variational quantum circuits to perform deep reinforcement learning and is the inventor of the quantum LSTM. His research interests include building quantum machine learning algorithms as well as applying classical machine learning techniques to quantum computing challenges such as quantum error correction and quantum architecture search. He is involved in multiple advanced privacy-preserving quantum AI research projects and is an experienced distributed computing researcher and developer. He won First Prize in the Software Competition (Research Category) from Xanadu Quantum Technologies in 2019. Dr. Chen is a seasoned speaker renowned for his expertise in delivering tutorials on quantum machine learning at prestigious conferences. Notably, he presented tutorial talks on leveraging quantum neural networks for speech and natural language processing at IJCAI 2021 and ICASSP 2022. At ICASSP 2024, IJCNN 2024, and IEEE QCE 2024, he expanded on this work with tutorials on integrating quantum tensor networks and quantum neural networks for signal processing in machine learning. He also shared insights into quantum machine learning and its applications in 6G communication at IEEE ICC 2024.

ETON Talk # 2

AI for Barrier-free Human-Computer Interaction: Latest Advances in Multi-modal Cued Speech Recognition and Generation

April 11, 2025 (Friday) | 12:00pm - 01:00pm

Dr. Li Liu
Assistant Professor at Hong Kong University of Science and Technology

Abstract: In today’s world, where AI technology is rapidly evolving, ensuring that everyone can communicate effectively is more important than ever. This is especially true for the deaf and hard-of-hearing community. Our research focuses on enhancing communication through the development of Automatic Cued Speech (CS) systems. In this talk, I will first introduce the simple yet effective CS system, and then discuss our cross-modal mutual-learning framework, which uses a low-rank Transformer for improved CS recognition; it strengthens language integration across modalities through modality-independent codebook representations. I will also highlight our thought-chain prompt-based framework for CS video generation, which leverages large language models to link textual descriptions with CS gesture features accurately and diversely. Our efforts have led to the creation of the first large-scale multilingual Chinese CS video dataset, setting new standards in CS recognition and generation across languages such as Chinese, French, and English. In addition, I will introduce personalized speech generation and face image synthesis aligned with speech and visual cues. This research paves the way for more inclusive and effective Human-Computer Interaction, ensuring that technology is truly accessible to everyone.
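The "low-rank Transformer" mentioned above generally refers to replacing a full d-by-d projection matrix inside the model with a factorization through a small rank r, cutting parameters and computation. The sketch below is illustrative only, not the speaker's actual architecture; it assumes PyTorch, and the class name LowRankLinear and the chosen dimensions are made up for the example.

    # Low-rank linear projection sketch (illustrative only; assumes PyTorch,
    # not the actual model presented in the talk).
    import torch
    import torch.nn as nn

    class LowRankLinear(nn.Module):
        """Approximates a d_in x d_out linear map with a rank-r factorization,
        reducing parameters from d_in*d_out to roughly r*(d_in + d_out)."""
        def __init__(self, d_in, d_out, rank):
            super().__init__()
            self.down = nn.Linear(d_in, rank, bias=False)  # d_in -> r
            self.up = nn.Linear(rank, d_out, bias=True)    # r -> d_out

        def forward(self, x):
            return self.up(self.down(x))

    x = torch.randn(8, 16, 512)              # (batch, sequence, feature)
    proj = LowRankLinear(512, 512, rank=64)
    print(proj(x).shape)                      # torch.Size([8, 16, 512])

Layers of this kind can stand in for the projection matrices of a Transformer block, which is what makes such models lighter for multi-modal recognition tasks.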

Biography: Dr. Li Liu is an assistant professor at the AI Thrust, Information Hub, Hong Kong University of Science and Technology (Guangzhou). She obtained her Ph.D. degree from Gipsa-lab, University Grenoble Alpes, France. Her main research interests include multi-modal audio-visual speech processing, AI robustness, and AI for healthcare. As first or corresponding author, she has published about 50 papers in top journals and conferences in related fields, including IEEE TPAMI, IEEE TMM, IEEE TMI, NeurIPS, ICCV, ACM MM, and ICASSP. She was Local Chair (China site) of ICASSP 2022 and Area Chair of ICASSP 2024 and 2025. In 2017, she won the French Sephora Berribi Award for Female Scientists in Mathematics and Computer Science. She has secured several research grants, including an NSFC General Project and a Guangdong Provincial Natural Science Foundation General Project. One of her papers received a Best Student Paper nomination at the International Conference on Social Robotics 2024, and four of her papers were selected as Shenzhen Excellent Science and Technology Academic Papers in 2022 and 2023.