Tutorial 4: Learning in the Context of Set Theoretic Estimation: An Efficient and Unifying Framework for Adaptive Machine Learning and Signal Processing

Presented by

Sergios Theodoridis, Isao Yamada, and Kostas Slavakis


The goal of this proposal is to present, using simple geometric arguments and without delving into many mathematical details, the basic notions as well as the algorithmic rationale on which this powerful algorithmic family is built.

The concept of set theoretic estimation traces back to the seminal work of von Neumann and its nonlinear extensions in the so-called Projections Onto Convex Sets (POCS) theory, for locating points in the intersection of a given finite number of convex sets. The theory was only recently extended to accommodate an infinite number of convex sets, and it thus lends itself to adaptive, time-recursive modes of operation.

The power and beauty of this technique lie in its generality. A single recursive equation of the simplest form, reminiscent of the celebrated gradient descent family, covers all possible applications. The equation involves only projection operators. As the task of interest changes, all one has to do is adopt the projection operator appropriate for that particular task. From this point of view, the classical NLMS and Affine Projection algorithms come as special cases of this family.
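
To make the connection concrete: with unit step size, the NLMS update is precisely the projection of the current estimate onto the hyperplane defined by the newest input/output pair. The following minimal Python sketch (illustrative only; the function name is ours, not part of the tutorial material) verifies that the projected estimate satisfies the data equation exactly:

```python
import numpy as np

def project_hyperplane(w, x, d):
    """Project w onto the hyperplane {v : x.T v = d} (NLMS update with unit step)."""
    return w + (d - x @ w) / (x @ x) * x

rng = np.random.default_rng(0)
x = rng.standard_normal(5)   # input vector
w = rng.standard_normal(5)   # current estimate
d = 1.5                      # desired response

w_new = project_hyperplane(w, x, d)
print(np.isclose(x @ w_new, d))  # True: the new estimate is consistent with the data
```

Note that the correction w_new - w is along x, i.e., it is the minimum-norm perturbation that achieves consistency; this is the geometric reading of NLMS.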

Moreover, the methodology allows for a "straightforward" way to incorporate convex constraints. After all, a constraint can be satisfied by projecting an estimate onto the set associated with it.
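
As a simple illustration of this idea, consider a norm constraint of the form ||w|| <= r (constraints of this type appear, e.g., in robust designs). Enforcing it amounts to a single closed-form projection onto the corresponding l2 ball; the snippet below is a minimal sketch with names of our own choosing:

```python
import numpy as np

def project_l2_ball(w, radius):
    """Project w onto the closed l2 ball {v : ||v||_2 <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

w = np.array([3.0, 4.0])          # ||w||_2 = 5, violates the constraint
print(project_l2_ball(w, 1.0))    # [0.6 0.8]: rescaled onto the unit sphere
```

If the estimate already satisfies the constraint, the projection leaves it untouched, so appending such a projection to an update costs nothing when the constraint is inactive.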

After presenting the basic principles behind convex sets and projections onto convex sets, in simple geometric terms, the tutorial will focus on demonstrating how one can "change" the involved projection operator in order to cover a wide class of problems in Machine Learning and Signal Processing. In this tour we are going to visit the following problems.

  1. The unconstrained regression task, where the projection operator in our update formula becomes the projection onto a hyperslab or onto regions formed by quadratic surfaces. Typical Signal Processing tasks, such as channel estimation, channel equalization, and prediction, are cast under this framework. Classification can also be treated in a similar way; the only difference is that hyperslabs are replaced by halfspaces. In the sequel, extensions in Reproducing Kernel Hilbert Spaces will be discussed as a powerful tool to deal, in a unifying way, with nonlinear tasks, and the related simple projection operators will be presented. Finally, some key concepts of learning in sensor networks will be presented, and we will demonstrate how our generic scheme can easily be modified in the context of distributed learning.

    This part will be presented by Prof. S. Theodoridis.
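
The two projection operators appearing in the regression and classification tasks above admit simple closed forms. The following Python sketch is our own illustration (the helper names are assumed, not taken from the tutorial material):

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Project w onto the hyperslab {v : |x.T v - d| <= eps} (regression)."""
    r = x @ w - d
    if r > eps:
        return w - (r - eps) / (x @ x) * x
    if r < -eps:
        return w - (r + eps) / (x @ x) * x
    return w  # already inside the slab: the projection is the identity

def project_halfspace(w, x, d):
    """Project w onto the halfspace {v : x.T v >= d} (classification margin)."""
    r = x @ w - d
    return w if r >= 0 else w - r / (x @ x) * x

w = np.zeros(3)
x = np.array([1.0, 2.0, 2.0])
wn = project_hyperslab(w, x, d=2.0, eps=0.5)
print(abs(x @ wn - 2.0))  # ~0.5: the estimate lands on the slab boundary
```

The hyperslab replaces the hyperplane of NLMS with a "tolerance tube" around the data, which is the natural set to use when the measurements are noisy.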

  2. Constrained optimization tasks will be studied by means of two paradigms.

    Adaptive beamforming is an application area of major practical importance. We will demonstrate how our method can efficiently solve adaptive beamforming tasks under linear constraints. We will then proceed one step further: the task of robust beamforming will be presented, since robust convex constraints can easily be addressed by our methodology; all it takes is a few more projections.

    Sparsity-aware learning has recently attracted a lot of attention, mainly in the context of compressive sensing. However, most of the emphasis so far has been placed on batch algorithms. In this part of the tutorial, we will see how our theoretical framework can be used to attack adaptive learning under a sparsity constraint on the unknown parameter vector to be estimated. To this end, weighted l1-balls will be imposed as extra convex constraints in our design. Projections onto weighted l1-balls turn out to be intimately related to the soft thresholding rule. To enrich our sparsity-aware learning paradigm with statistical attributes, we will also demonstrate the way in which generalized thresholding rules, which are associated even with non-convex penalties, can easily be incorporated in our framework. To do so, we will have to depart from the classical theory of projection operators and move into the exciting field of quasi-nonexpansive mappings and their associated fixed point sets.

    This part will be presented by Dr. K. Slavakis.
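
The soft thresholding rule mentioned above has a one-line componentwise form; passing a vector of per-component thresholds plays the role of the weights in a weighted l1-ball formulation. A minimal illustrative sketch (our own naming, shown here only to fix ideas):

```python
import numpy as np

def soft_threshold(w, tau):
    """Componentwise soft thresholding: shrink each entry toward zero by tau.

    tau may be a scalar or a vector of per-component (weighted) thresholds.
    """
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

w = np.array([0.9, -0.2, 1.5, 0.05])
print(soft_threshold(w, 0.3))  # small entries are set exactly to zero
```

Setting entries exactly to zero, rather than merely making them small, is what promotes sparsity in the resulting estimates.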

  3. The final part of this tutorial will elaborate on future extensions of our framework. More specifically, we will present the basic principles of the powerful engine behind our methodology: the theory of quasi-nonexpansive mappings and their associated fixed point sets in general Hilbert spaces. We will demonstrate that this theory is the key ingredient behind numerous optimization algorithms in Signal Processing and Machine Learning, and that it provides simple solutions to involved optimization tasks, such as convexly constrained learning problems (even with infeasible a priori constraints), proximal algorithms, variational inequality problems, etc. The elegant theory will be presented in simple geometric terms, and several examples will be given in order to highlight its connection with the already presented framework.

    This part will be presented by Prof. I. Yamada.
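
As a brief preview of this machinery, the classical POCS iteration already illustrates the fixed-point viewpoint: alternating projections between two convex sets converge to a point of their intersection, i.e., to a fixed point of the composed (nonexpansive) mapping. The toy sets below are our own choice, for illustration only:

```python
import numpy as np

a = np.array([3.0, 4.0])  # two convex sets: the hyperplane {v : a.T v = 1} ...

def P_hyperplane(w):
    return w + (1.0 - a @ w) / (a @ a) * a

def P_ball(w):              # ... and the closed l2 unit ball
    n = np.linalg.norm(w)
    return w if n <= 1.0 else w / n

w = np.array([5.0, -5.0])   # start far from both sets
for _ in range(100):
    w = P_ball(P_hyperplane(w))  # POCS: alternate the two projections

# w now lies (numerically) in the intersection of the two sets
print(np.isclose(a @ w, 1.0), np.linalg.norm(w) <= 1.0 + 1e-9)
```

Any point of the intersection is left unchanged by both projections, which is exactly what makes it a fixed point of the composition; the tutorial's framework generalizes this picture from projections to quasi-nonexpansive mappings.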

Target Audience: The tutorial will be of interest to everybody interested in adaptive Signal Processing and Machine Learning techniques and algorithms. The participants should know the basics of statistical signal processing and adaptive algorithms. No prior knowledge of the specific techniques is required. The emphasis will be more on concepts and less on mathematical details.

Speaker Biographies

Sergios Theodoridis

He is currently Professor of Signal Processing and Communications in the Department of Informatics and Telecommunications of the University of Athens. His research interests lie in the areas of Adaptive Algorithms and Communications, Machine Learning and Pattern Recognition, and Signal Processing for Audio Processing and Retrieval. He is the co-editor of the book "Efficient Algorithms for Signal Processing and System Identification", Prentice Hall, 1993, the co-author of the best-selling book "Pattern Recognition", Academic Press, 4th ed., 2008, the co-author of the book "Introduction to Pattern Recognition: A MATLAB Approach", Academic Press, 2009, and the co-author of three books in Greek, two of them for the Greek Open University.

He is the co-author of six papers that have received best paper awards, including the 2009 IEEE Computational Intelligence Society Transactions on Neural Networks Outstanding Paper Award. He has served as an IEEE Signal Processing Society Distinguished Lecturer. He was the advisor of the Ph.D. thesis of Dr. Mavroforakis, which received both the IEEE Computational Intelligence Society and the EURASIP Best Ph.D. Thesis Awards. He has served as an Associate Editor for all major Signal Processing related journals, including the IEEE Signal Processing Magazine, the IEEE Transactions on Signal Processing, the IEEE Transactions on Neural Networks, and the EURASIP Signal Processing journal.

He was the general chairman of EUSIPCO-98, the Technical Program co-chair for ISCAS-2006 and co-chairman and co-founder of CIP-2008 and co-chairman of CIP-2010. He has served as President of the European Association for Signal Processing (EURASIP) and as member of the Board of Governors for the IEEE CAS Society. He currently serves as member of the Board of Governors (Member-at-Large) of the IEEE SP Society.

He has served as a member of the Greek National Council for Research and Technology and he was Chairman of the SP advisory committee for the Edinburgh Research Partnership (ERP). He has served as vice chairman of the Greek Pedagogical Institute and he was for four years member of the Board of Directors of COSMOTE (the Greek mobile phone operating company). He is Fellow of IET, a Corresponding Fellow of RSE, a Fellow of EURASIP and a Fellow of IEEE.

Isao Yamada

He is a Professor in the Department of Communications and Integrated Systems, Tokyo Institute of Technology. His current research interests are in mathematical signal processing, nonlinear inverse problems, and optimization theory.

He received the B.E. degree in computer science from the University of Tsukuba in 1985, and the M.E. and Ph.D. degrees in electrical and electronic engineering from the Tokyo Institute of Technology in 1987 and 1990, respectively.


Currently, he is an Associate Editor for the IEEE Transactions on Signal Processing as well as a member of the IEEE Signal Processing Theory and Methods Technical Committee. He is also the chairman of the IEICE Technical Group on Signal Processing (2011).

He received Excellent Paper Awards in 1991, 1995, 2006, 2009, the Young Researcher Award in 1992 and the Achievement Award in 2009 from the IEICE; the ICF Research Award from the International Communications Foundation in 2004; the Docomo Mobile Science Award (Fundamental Science Division) from Mobile Communication Fund in 2005; and the Fujino Prize from Tejima Foundation in 2008.

Kostas Slavakis

He was born in Thessaloniki, Greece. He received the M.Eng. and Ph.D. degrees in Electrical and Electronic Engineering from the Tokyo Institute of Technology (TokyoTech), Tokyo, Japan, in 1999 and 2002, respectively.

During his studies at TokyoTech (1996-2002), he was a Japanese Government (MEXT) Scholar and a member of the Sakaniwa-Yamada Lab. He returned to TokyoTech from April 2004 to April 2006 as a Japan Society for the Promotion of Science (JSPS) Postdoctoral Fellow. From July 2006 until August 2007, he was a Postdoctoral Fellow in the Department of Informatics and Telecommunications, University of Athens, Greece, under the ENTER Program.

He was a Visiting Scholar at the School of Computational Sciences, Korea Institute for Advanced Studies (KIAS), Seoul, Korea (Dec. 2005), at the Centre Tecnologic de Telecomunicacions de Catalunya (CTTC), Barcelona, Spain (Jan. 2009), and at the Tokyo Institute of Technology, Tokyo, Japan (July-Aug. 2009).

Since Sept. 2007, he has been an Assistant Professor in the Department of Telecommunications Science and Technology, University of Peloponnese, Tripolis, Greece.

He serves as an Associate Editor (since Jan. 2009) and as an Area Editor (since April 2010) of the IEEE Transactions on Signal Processing.

His current research interests include the applications of convex analysis and computational algebraic geometry to signal processing, machine learning, array, and multidimensional systems problems.