
Dr. Ahmed H. Tewfik (Fellow, IEEE)

15:45 – 16:45 (IST) | April 09, 2025 (Wednesday)
Ahmed Tewfik is a Machine Learning Director at Apple. He earned a B.Sc. from Cairo University in Egypt, followed by M.Sc., E.E., and Sc.D. degrees from the Massachusetts Institute of Technology in Cambridge, MA. Before joining Apple, he was the Cockrell Family Regents Chair in Engineering at the University of Texas at Austin and chaired the Department of Electrical and Computer Engineering from October 2010 to November 2019. Previously, he was the E. F. Johnson Professor of Electronic Communications at the University of Minnesota, worked at Alphatech, Inc., and served as a consultant to several technology companies. From 1997 to 2001, he co-founded and led Cognicity, Inc. as President and CEO, a company that published entertainment marketing software tools.

A Fellow of the IEEE, Dr. Tewfik has received numerous distinctions, including the IEEE Third Millennium Award, the IEEE Signal Processing Society Technical Achievement Award, the Leo L. Beranek Meritorious Service Award, and the Norbert Wiener Society Award. He was a Distinguished Lecturer of the IEEE Signal Processing Society from 1997 to 1999, served as its President in 2020 and 2021, and held earlier roles as Vice President for Technical Directions and member of the Board of Governors. He also served as the inaugural Editor-in-Chief of IEEE Signal Processing Letters from 1993 to 1999.

Bridging Generative AI and Statistical Signal Processing

Abstract: Traditional signal processing relied on statistical methods, but recent generative models have greatly improved performance. While many of these gains stem from purely data-driven approaches, some solutions also blend mathematical and data-driven methods. After a brief overview of new developments in the open literature, this talk will highlight two popular classes of generative models—diffusion and structured state space models—and explain how late 20th-century breakthroughs in statistical signal processing and linear systems can reduce model size, training time, and inference time, as well as inspire novel architectures.
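The link the abstract draws between structured state space models and classical linear systems can be illustrated with a small sketch (not taken from the talk; all matrices here are hypothetical random values). The core of a structured state space layer is a discrete-time linear system x[k+1] = A x[k] + B u[k], y[k] = C x[k+1]; a standard linear-systems result says its output equals a convolution of the input with the impulse-response kernel K[j] = C Aʲ B, which is exactly the property these models exploit for efficient training:

```python
import numpy as np

# Illustrative sketch: a discrete-time linear state space model evaluated two
# ways -- as a sequential recurrence and as an equivalent convolution with the
# impulse-response kernel K[j] = C A^j B. The A, B, C matrices are arbitrary
# random placeholders, not parameters from any real model.

rng = np.random.default_rng(0)
n, L = 4, 16                                  # state dimension, sequence length
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # hypothetical state matrix
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
u = rng.standard_normal(L)                    # scalar input sequence

# 1) Sequential recurrence: x[k+1] = A x[k] + B u[k],  y[k] = C x[k+1]
x = np.zeros((n, 1))
y_rec = np.empty(L)
for k in range(L):
    x = A @ x + B * u[k]
    y_rec[k] = (C @ x).item()

# 2) Equivalent convolution view: y[k] = sum_j K[j] * u[k - j]
K = np.empty(L)
Aj_B = B.copy()                               # holds A^j B
for j in range(L):
    K[j] = (C @ Aj_B).item()
    Aj_B = A @ Aj_B
y_conv = np.array([sum(K[j] * u[k - j] for j in range(k + 1)) for k in range(L)])

print(np.allclose(y_rec, y_conv))             # the two views agree numerically
```

Because the convolution view removes the sequential dependency between time steps, the same layer can be evaluated in parallel over the whole sequence, which is one way linear-systems theory reduces training and inference cost.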