Plenary Talks - Stephane Mallat
Can Signal Classification Speak Mathematics?
Dr. Stephane Mallat
Professor, Ecole Polytechnique, France
Abstract
Huge amounts of data are acquired by cameras, audio recording devices and all types of new sensors. Technology is solving storage and wide-band access issues, but usage through automatic interpretation and classification lags well behind. The need is there, and everybody knows it. A core signal processing issue is to convert large signals into lower-dimensional descriptors that are both stable and sufficiently informative for state-of-the-art statistical classifiers. Surprisingly, signal processing has made few fundamental contributions to this problem over the last 20 years.
Signal processing was a pioneer in automatic speech recognition, beginning in the 1950s. Remarkable tools have been introduced, such as Mel-frequency spectrum coefficients integrated with hidden Markov models. Whereas core speech recognition tools have evolved little since the 1990s, important image classification breakthroughs came from computer science, with new feature vectors such as SIFT and efficient neural network architectures. Beautiful algorithms, but no clear mathematical framework.
Signal processing culture and methodology rely on close interactions between mathematics, algorithmic design and hardware technologies. For low-level problems such as signal restoration or compression, stable mathematical foundations are provided by Fourier analysis, Wiener filtering and Shannon information theory. For very high-dimensional classification, the lack of such mathematics has been a handicap.
Introducing higher-level geometrical structures seems necessary to speak mathematics, but what is geometry in very high dimensions? This lecture explains the emergence of simple mathematical concepts, including invariants, groups and manifolds, and their connection with signal processing tools and applications. Why is the Fourier transform doomed to failure, whereas the Mel-frequency spectrum or SIFT image features found an elegant way out? Why can neural networks be remarkably efficient architectures for solving such problems? Opening these mathematical windows gives a view of avenues where signal processing methodology can develop efficient solutions, beyond experimental neural networks and "feature" approaches.
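As a concrete illustration of the Fourier question raised above (a short sketch added for clarity, not part of the original abstract): translating a signal only changes the phase of its Fourier transform,
\[
x_c(t) = x(t-c) \;\Rightarrow\; \hat{x}_c(\omega) = e^{-ic\omega}\,\hat{x}(\omega) \;\Rightarrow\; |\hat{x}_c(\omega)| = |\hat{x}(\omega)| ,
\]
so the Fourier modulus is translation invariant. A small dilation, however, rescales the whole spectrum,
\[
x_s(t) = x\big((1-s)t\big) \;\Rightarrow\; \hat{x}_s(\omega) = \frac{1}{1-s}\,\hat{x}\Big(\frac{\omega}{1-s}\Big),
\]
displacing a component at frequency \(\omega_0\) by roughly \(s\,\omega_0\), which at high frequencies far exceeds its bandwidth, so the modulus is not stable to small deformations. Averaging \(|\hat{x}(\omega)|\) over frequency bands whose width grows with \(\omega\), as the Mel-frequency spectrum does, trades frequency resolution for this missing stability.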
Speaker Biography
Stephane Mallat received the Ph.D. degree in electrical engineering from the University of Pennsylvania, Philadelphia, in 1988. In 1988, he joined the Computer Science Department of the Courant Institute of Mathematical Sciences, where he was Associate Professor in 1994 and Professor in 1996. Since 1995, he has been a full Professor in the Applied Mathematics Department at Ecole Polytechnique, Paris. He co-founded a start-up company and was its CEO from 2001 to 2008.
Dr. Mallat is an IEEE and EURASIP Fellow. He received the 1990 IEEE Signal Processing Society's paper award, the 1993 Alfred Sloan Fellowship in Mathematics, the 1997 Outstanding Achievement Award from the SPIE Optical Engineering Society, the 1997 Blaise Pascal Prize in applied mathematics from the French Academy of Sciences, the 2004 European IST Grand Prize, the 2004 INIST-CNRS Prize for the most cited French researcher in engineering and computer science, and the 2007 EADS Prize of the French Academy of Sciences.
His research interests include signal processing, learning and harmonic analysis.