Symposium on Systems Theory in Data and Optimization

University of Stuttgart, Germany, Sep 30 – Oct 2, 2024



Plenary Talks

Anuradha Annaswamy

Anuradha Annaswamy, Massachusetts Institute of Technology

Interplay between Adaptive Control, Learning, and Optimization

Abstract: The design of complex dynamic systems involves the simultaneous realization of several performance metrics, ranging from mission goals and safety to the management of real-time uncertainties and optimality. Historically, adaptive control has excelled at real-time control of systems with specific model structures through adaptive rules that learn the underlying parameters while providing strict guarantees on stability, asymptotic performance, and learning. Optimality-centric methods such as reinforcement learning are applicable to a broad class of systems and are able to produce near-optimal policies for highly complex control tasks. This is often enabled by significant offline training via simulation or the collection of large input-state datasets. In both methods, the main approach used for updating the parameters is based on a parametrized policy and gradient descent-like algorithms. Related tools of analysis, convergence, and robustness in both fields have a tremendous amount of similarity as well. This talk will focus on the realization of various performance metrics using adaptation, learning, and optimization. The core building blocks of adaptation and parameter learning, and their integration with performance, safety, and optimality, form the heart of the talk.
This talk will also examine the similarities and interconnections between adaptive control and reinforcement learning-based control. Concepts in stability, performance, learning, and robustness that are common to both methods will be discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis will be explored. Two specific examples of dynamic systems are used to illustrate the details of the two methods, their advantages, and their deficiencies. We will explore how these methods can be leveraged and integrated to lead to provably correct methods for learning in real-time with guaranteed fast convergence. Examples will be drawn from a range of engineering applications.
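As a toy illustration of the gradient-descent-like parameter update the abstract says is shared by adaptive control and reinforcement learning, consider identifying a single unknown gain from streaming data. Everything concrete below (the scalar system, the adaptation gain, the data) is illustrative and not taken from the talk.

```python
# Sketch: estimate an unknown gain theta_star in y = theta_star * u from
# streaming (u, y) pairs with the gradient-type adaptive law
#   theta <- theta + gamma * e * u,   e = y - theta * u,
# i.e. a gradient step on the instantaneous squared prediction error.
# gamma is a (hypothetical) adaptation gain.
def adapt(theta, data, gamma=0.1):
    for u, y in data:
        e = y - theta * u          # prediction error
        theta += gamma * e * u     # gradient step on 0.5 * e**2
    return theta

# With persistently exciting data generated by theta_star = 2, the
# estimate converges geometrically toward 2.
theta_hat = adapt(0.0, [(1.0, 2.0)] * 100)
```

The same template, with theta a policy parameter and e a performance signal, is the shape of a policy-gradient update, which is the structural similarity the talk builds on.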

Bio: Dr. Anuradha Annaswamy is Founder and Director of the Active-Adaptive Control Laboratory in the Department of Mechanical Engineering at MIT. Her research interests span adaptive control theory and its applications to aerospace, automotive, propulsion, and energy systems, as well as cyber-physical systems such as Smart Grids and Smart Cities. She has received best paper awards (Axelby, 1986; CSM, 2010), Distinguished Member and Distinguished Lecturer awards from the IEEE Control Systems Society (CSS), the best paper award from the IFAC journal Annual Reviews in Control for 2021-23, and a Presidential Young Investigator award from NSF, 1991-97. She is a Fellow of the IEEE and of the International Federation of Automatic Control (IFAC). She is the recipient of the Distinguished Alumni award from the Indian Institute of Science.
With Karl Johansson and George Pappas, she edited the CSS report “Control for Societal-Scale Challenges: Road Map 2030,” published in 2023. She is currently serving as the President-elect of the American Automatic Control Council. She will serve as the Editor-in-Chief of IEEE Control Systems Magazine from January 2025.


Navid Azizan

Navid Azizan, Massachusetts Institute of Technology

Towards Reliable Machine Learning for Real Systems

Abstract: As machine learning models continue to deliver impressive results, the excitement around deploying them in real-world systems grows. However, the reliable deployment of these models in safety-critical systems is hindered by several factors: their opaque nature, their unpredictable behavior on data points significantly different from their training sets, and their inability to incorporate hard constraints efficiently. This talk presents recent advancements in enhancing the safety and reliability of machine learning models. Specifically, we will discuss: (1) run-time monitors for learning-enabled components, focusing on uncertainty estimation and anomaly detection mechanisms for pre-trained models and latent representations, which mitigate risks associated with unforeseen operational deviations; (2) a framework for provably training neural networks under hard constraints without compromising their universal approximation capabilities, achieved through a differentiable projection layer that enforces constraints by construction while allowing unconstrained optimization of network parameters; and (3) uncertainty-aware model adaptation techniques and control-oriented meta-learning, which enable efficient and robust model adaptation using new data points and in dynamic environments. Together, these advancements pave the way for safe and efficient data-driven control and decision-making systems with intrinsic safety and stability guarantees.
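The "projection layer" idea in item (2) can be sketched in miniature: compose any model with a map that sends its raw output to the nearest point of the constraint set, so the constraint holds by construction. The halfspace constraint below is an illustrative stand-in, not the construction from the talk.

```python
# Sketch: Euclidean projection onto the halfspace {y : <a, y> <= b}.
# The closed form is differentiable almost everywhere, so it can sit at
# the end of a network while the parameters before it are trained by
# ordinary unconstrained gradient descent.
def project_halfspace(y, a, b):
    dot = sum(ai * yi for ai, yi in zip(a, y))
    if dot <= b:
        return list(y)             # already feasible: identity map
    scale = (dot - b) / sum(ai * ai for ai in a)
    return [yi - scale * ai for yi, ai in zip(y, a)]

# An infeasible raw output [2, 0] is mapped onto the boundary of
# {y : y[0] <= 1}; a feasible output would pass through unchanged.
safe = project_halfspace([2.0, 0.0], [1.0, 0.0], 1.0)
```

The point of the construction is that feasibility never depends on how well the network was trained; the projection enforces it for every input.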

Bio: Navid Azizan is the Esther & Harold E. Edgerton (1927) Assistant Professor at MIT, where he is a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS) and holds dual appointments in the Department of Mechanical Engineering (Control, Instrumentation, & Robotics) and the Schwarzman College of Computing's Institute for Data, Systems, & Society (IDSS). His research interests broadly lie in machine learning, systems and control, mathematical optimization, and network science. His research lab focuses on various aspects of enabling large-scale intelligent systems, with an emphasis on principled learning and optimization algorithms for autonomous systems and societal networks. He obtained his PhD in Computing and Mathematical Sciences (CMS) from the California Institute of Technology (Caltech), co-advised by Babak Hassibi and Adam Wierman, in 2020, his MSc in electrical engineering from the University of Southern California in 2015, and his BSc in electrical engineering with a minor in physics from Sharif University of Technology in 2013. Prior to joining MIT, he completed a postdoc in the Autonomous Systems Laboratory (ASL) at Stanford University in 2021. Additionally, he was a research scientist intern at Google DeepMind in 2019. His work has been recognized by several awards, including Research Awards from Google, Amazon, and MathWorks, the 2020 Information Theory and Applications (ITA) Gold Graduation Award, and the 2016 ACM GREENMETRICS Best Student Paper Award. He was named in the list of Outstanding Academic Leaders in Data from the CDO Magazine in 2024 and 2023, named an Amazon Fellow in Artificial Intelligence in 2017, and a PIMCO Fellow in Data Science in 2018. He was also the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran.

 

Francesco Borrelli

Francesco Borrelli, UC Berkeley

Automatic Control in the Era of Artificial Intelligence

Abstract: In an era where Artificial Intelligence (AI) is often seen as a universal solution for any complex problem, this presentation offers a critical examination of its role in the field of automatic control. To be concrete, I will focus on Optimal Control techniques, navigating through its history and addressing the evolution from its traditional model-based roots to the emerging data-driven methodologies empowered by AI.
The presentation will delve into how the theoretical underpinnings of Optimal Control have been historically aligned with computational capabilities, and how this alignment has shifted over the years. This juxtaposition of theory and computation motivates a deeper investigation into the diminishing relevance of certain traditional control methods amidst the AI revolution. We will critically examine scenarios where AI-driven approaches could outperform classical methods, as well as cases where the hype surrounding AI overshadows its actual utility.
The talk will conclude with a view of state-of-the-art optimal control methods in practical applications including self-driving cars, advanced robotics and energy efficient systems. From this perspective, we will identify and explore future potential directions for the field, including the design of learning control architectures which seamlessly integrate predictive capabilities at every level, focusing on systems that can autonomously refine their performance over time through continuous learning and interaction with their environment.

Bio: Francesco Borrelli received his Laurea degree from the University of Naples Federico II, Italy, in 1998, and his PhD from the Automatic Control Laboratory at ETH Zurich, Switzerland, in 2002. He is currently a Professor in the Department of Mechanical Engineering at the University of California, Berkeley, USA, where he conducts research in the field of predictive control.
Professor Borrelli has authored over 200 publications in the field of predictive control and is the author of the book Predictive Control, published by Cambridge University Press. He has received several awards for his contributions to the predictive control field, including the 2009 NSF CAREER Award and the 2012 IEEE Control Systems Technology Award, and he was elected IEEE Fellow in 2016. In 2017, he was awarded the Industrial Achievement Award by the International Federation of Automatic Control (IFAC) Council.
Professor Borrelli has been a consultant for major international corporations since 2004, with his recent industrial activities focusing on the application of predictive control in self-driving vehicles, utility-scale solar power plants, automotive control systems, and building energy efficiency control. He was the founder and CTO of BrightBox Technologies Inc., a company focused on cloud-computing optimization for autonomous systems, and was the co-director of the Hyundai Center of Excellence in Integrated Vehicle Safety Systems and Control at UC Berkeley. He is also the founder of WideSense Inc., a company focused on E-Mobility.
Professor Borrelli's research interests include model predictive control, learning, and their application to robotics, transportation, and energy control systems.


Julien Hendrickx

Julien Hendrickx, UCLouvain

Computer-Aided Analysis and Design of Optimization Algorithms

Abstract: We present a method to automatically derive tight numerical worst-case efficiency guarantees for a wide class of first-order optimization algorithms, along with examples of problems on which these worst-case guarantees are achieved. The method can be used via MATLAB and Python toolboxes that allow for simple natural expressions of the algorithms and problem classes.
Our approach relies on two steps: (i) the lossless translation of conceptual assumptions on the problem classes (e.g., function smoothness properties) into finite tractable algebraic relations involving quantities appearing in the algorithm, and (ii) the optimal combination of these relations with the algorithm description to obtain the best possible guarantees via an optimization problem called the "Performance Estimation Problem" (PEP). We demonstrate how the numerical results can often be exploited to identify analytical expressions for the bounds and, in some cases, actual mathematical proofs.
The guarantees obtained outperform many of those available in the literature, sometimes by orders of magnitude, and allow for new recommendations on the optimal algorithmic hyperparameters, leading to improved efficiency, even in cases as simple as gradient descent.
This automated analysis can serve as a guide in the design of algorithms, allowing for rapid prototyping and iterative improvements. But one can go further, and formulate the design of optimization algorithms as a min-max optimization problem, which can be addressed numerically, or even sometimes solved analytically, leading to optimal methods.
We illustrate the PEP approach with some examples and briefly discuss extensions to settings such as decentralized optimization or higher-order methods.
The talk will present joint work with François Glineur, Adrien Taylor, Nizar Bousselmi, Yassine Kamri, Sébastien Colla and Anne Rubbens.
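One concrete flavor of the guarantees the abstract describes, for plain gradient descent with step size 1/L on an L-smooth convex function: the classical textbook rate versus the tight worst-case rate certified by the performance-estimation approach (Drori and Teboulle, 2014). The sketch below only tabulates the two closed-form bounds; the parameter values are illustrative.

```python
# After N steps of gradient descent with step 1/L on an L-smooth convex
# function, with R = ||x0 - x*||:
#   classical bound:  f(x_N) - f*  <=  L R^2 / (2 N)
#   tight PEP bound:  f(x_N) - f*  <=  L R^2 / (2 (2 N + 1))
def classical_bound(L, R, N):
    return L * R ** 2 / (2 * N)

def pep_bound(L, R, N):
    return L * R ** 2 / (2 * (2 * N + 1))

# The PEP-certified guarantee is roughly twice as strong for large N,
# an example of the order-of-constant improvements the abstract mentions.
for N in (1, 10, 100):
    print(N, classical_bound(1.0, 1.0, N), pep_bound(1.0, 1.0, N))
```

The same machinery, via the PESTO (MATLAB) and PEPit (Python) toolboxes mentioned in the abstract, automates such computations for far more elaborate methods and problem classes.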

Bio: Julien M. Hendrickx has been a professor of mathematical engineering at UCLouvain, in the École Polytechnique de Louvain, since 2010, and currently heads the ICTEAM institute.
He obtained an engineering degree in applied mathematics (2004) and a PhD in mathematical engineering (2008) from the same university. He has been a visiting researcher at the University of Illinois at Urbana-Champaign in 2003-2004, at National ICT Australia in 2005 and 2006, and at the Massachusetts Institute of Technology in 2006 and 2008. He was a postdoctoral fellow at the Laboratory for Information and Decision Systems at the Massachusetts Institute of Technology in 2009 and 2010, holding postdoctoral fellowships from the F.R.S.-FNRS (Fund for Scientific Research) and the Belgian American Educational Foundation. He was also a resident scholar at the Center for Information and Systems Engineering (Boston University) in 2018-2019, holding a WBI.World excellence fellowship.
Dr. Hendrickx is the recipient of the 2008 EECI award for the best PhD thesis in Europe in the field of Embedded and Networked Control, and the Alcatel-Lucent-Bell 2009 award for a PhD thesis on original new concepts or applications in the domain of information or communication technologies.


Daniel Kuhn

Daniel Kuhn, EPFL

Distributionally Robust Linear Quadratic Control

Abstract: Linear-Quadratic-Gaussian (LQG) control is a fundamental control paradigm that is studied in various fields such as engineering, computer science, economics, and neuroscience. It involves controlling a system with linear dynamics and imperfect observations, subject to additive noise, with the goal of minimizing a quadratic cost function for the state and control variables. In this work, we consider a generalization of the discrete-time, finite-horizon LQG problem, where the noise distributions are unknown and belong to Wasserstein ambiguity sets centered at nominal (Gaussian) distributions. The objective is to minimize a worst-case cost across all distributions in the ambiguity set, including non-Gaussian distributions. Despite the added complexity, we prove that a control policy that is linear in the observations is optimal for this problem, as in the classic LQG problem. We propose a numerical solution method that efficiently characterizes this optimal control policy. Our method uses the Frank-Wolfe algorithm to identify the least-favorable distributions within the Wasserstein ambiguity sets and computes the controller's optimal policy using Kalman filter estimation under these distributions.
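The Frank-Wolfe template named in the abstract can be illustrated on a deliberately simple stand-in problem; the quadratic objective and box feasible set below are hypothetical, whereas the talk applies the algorithm over Wasserstein ambiguity sets of distributions.

```python
# Frank-Wolfe (conditional gradient) template: at each iteration,
# minimize the linearized objective over the feasible set (the "linear
# minimization oracle") and step toward that extreme point. Illustrated
# on min ||x - c||^2 over the box [-1, 1]^2, whose optimum is c itself.
def grad(x, c):
    return [2 * (xi - ci) for xi, ci in zip(x, c)]

def lmo_box(g):
    # minimize <g, v> over the box: pick the sign-opposite vertex
    return [-1.0 if gi > 0 else 1.0 for gi in g]

def frank_wolfe(c, iters=2000):
    x = [0.0, 0.0]
    for k in range(iters):
        s = lmo_box(grad(x, c))
        gamma = 2.0 / (k + 2)      # standard diminishing step size
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

x_fw = frank_wolfe([0.5, -0.3])
```

The appeal of the template in the distributionally robust setting is that the oracle step, here a trivial vertex selection, becomes the identification of a least-favorable distribution in the ambiguity set.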

Bio: Daniel Kuhn is Professor of Operations Research at the College of Management of Technology at EPFL, where he holds the Chair of Risk Analytics and Optimization (RAO). His research interests are focused on data-driven, stochastic and robust optimization. Before joining EPFL, Daniel Kuhn was a faculty member in the Department of Computing at Imperial College London (2007-2013) and a postdoctoral research associate in the Department of Management Science and Engineering at Stanford University (2005-2006). He holds a PhD in Economics from the University of St. Gallen and an MSc in Theoretical Physics from ETH Zurich. He is the editor-in-chief of Mathematical Programming.


Necmiye Ozay

Necmiye Ozay, University of Michigan

Some Fundamental Limitations of Learning for Dynamics and Control

Abstract: Data-driven and learning-based methods have attracted considerable attention in recent years both for the analysis of dynamical systems and for control design. While there are many interesting and exciting results in this direction, our understanding of fundamental limitations of learning for control is lagging. This talk will focus on the question of when learning can be hard or impossible in the context of dynamical systems and control. In the first part of the talk, I will discuss a new observation on immersions and how it reveals some potential limitations in learning Koopman embeddings. In the second part of the talk, I will show what makes it hard to learn to stabilize linear systems from a sample-complexity perspective. While these results might seem negative, I will conclude the talk with some thoughts on how they can inspire interesting future directions.

Bio: Necmiye Ozay is the Chen-Luan Family Faculty Development Professor of Electrical and Computer Engineering, and an associate professor of Electrical Engineering and Computer Science, and Robotics at the University of Michigan, Ann Arbor. She received her PhD in Electrical Engineering from Northeastern University in 2010. After a postdoctoral position at Caltech in Computing and Mathematical Sciences, she joined Michigan in 2013. Her research interests include dynamical systems, control, optimization, and formal methods with applications in learning-enabled cyber-physical systems, system identification, verification and validation, and safe autonomy. She received the 1938E Award and a Henry Russel Award from the University of Michigan for her contributions to teaching and research. She has received five young investigator awards, including an NSF CAREER Award. She is also the recipient of the 2021 Antonio Ruberti Young Researcher Prize from the IEEE Control Systems Society for her fundamental contributions to the control and identification of hybrid and cyber-physical systems.


René Vidal

René Vidal, University of Pennsylvania

Learning Dynamics of Overparametrized Neural Networks

Abstract: This talk will provide a detailed analysis of the dynamics of gradient-based methods in overparameterized models. For linear networks, we show that the weights converge to an equilibrium at a linear rate that depends on the imbalance between input and output weights (which is fixed at initialization) and on the margin of the initial solution. For ReLU networks, we show that the dynamics have a feature-learning phase, in which neurons collapse to one of the class centers, and a classifier-learning phase, in which the loss converges to zero at a rate of 1/t.
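The "imbalance fixed at initialization" phenomenon can be checked numerically on the smallest possible linear network, a scalar two-layer model with weights u and v. The target, step size, and initialization below are made up for the sketch and are not from the talk.

```python
# Two-layer scalar linear model u * v fit to a target product of 1 by
# gradient descent on the squared loss. Gradient flow exactly conserves
# the imbalance u^2 - v^2; small-step gradient descent conserves it to
# high accuracy while the product u*v converges to the target at a
# linear rate governed by that imbalance.
def train(u, v, target=1.0, lr=1e-3, steps=20000):
    for _ in range(steps):
        r = u * v - target                 # residual
        u, v = u - lr * v * r, v - lr * u * r
    return u, v

u0, v0 = 1.5, 0.5                          # imbalance u0**2 - v0**2 = 2
u, v = train(u0, v0)
```

After training, u * v is essentially 1 while u ** 2 - v ** 2 is still essentially 2: the conserved quantity pins down which of the infinitely many factorizations of the target the dynamics selects.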

Bio: René Vidal is the Penn Integrates Knowledge and Rachleff University Professor of Electrical and Systems Engineering & Radiology and the Director of the Center for Innovation in Data Engineering and Science (IDEAS) at the University of Pennsylvania. He is also an Amazon Scholar, an Affiliated Chief Scientist at NORCE, and a former Associate Editor in Chief of TPAMI. His current research focuses on the foundations of deep learning and trustworthy AI and its applications in computer vision and biomedical data science. His lab has made seminal contributions to motion segmentation, action recognition, subspace clustering, matrix factorization, global optimality in deep learning, interpretable AI, and biomedical image analysis. He is an ACM Fellow, AIMBE Fellow, IEEE Fellow, IAPR Fellow and Sloan Fellow, and has received numerous awards for his work, including the IEEE Edward J. McCluskey Technical Achievement Award, D’Alembert Faculty Award, J.K. Aggarwal Prize, ONR Young Investigator Award, NSF CAREER Award as well as best paper awards in machine learning, computer vision, signal processing, controls, and medical robotics.

 

Melanie Zeilinger

Melanie Zeilinger, ETH Zurich

Learning in Optimization-based Control – Guarantees in the Unknown

Abstract: Advancing autonomous systems requires not only improving the control of complex dynamical systems but also achieving complex tasks in challenging environments. This presents numerous challenges for the underlying control algorithms, often making it difficult to provide rigorous guarantees due to the many arising uncertainties. Learning has emerged as a promising means to mitigate uncertainty; however, the recovery of guarantees, particularly concerning safety, is often still lacking.
In this talk, I will present an optimization-based approach to addressing this problem. I will begin by highlighting how safety can be effectively formulated as a planning problem, a concept that can be used both to include safety specifications in an optimization-based controller and to construct a safety filter for control. After establishing a notion of safety, the talk will address concepts of learning dynamics, objective functions, and constraint functions. In particular, it will discuss efficient optimization and learning with (approximate) Gaussian Process models. The concepts in this presentation will be illustrated with applications from autonomous racing and robotics.
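A toy one-step version of the safety-filter idea described above: minimally modify whatever input an arbitrary (for example, learned) controller proposes, so that the state constraint holds. The scalar system and bounds are illustrative; the actual filters discussed in the talk solve a predictive optimization over trajectories.

```python
# Toy safety filter for the scalar integrator x+ = x + u with state
# constraint x+ in [x_min, x_max]: return the admissible input closest
# to the proposed one (the one-step analogue of the minimal-intervention
# planning problem a predictive safety filter solves over a horizon).
def safety_filter(x, u_proposed, x_min=-1.0, x_max=1.0):
    u_lo, u_hi = x_min - x, x_max - x      # inputs keeping x+ feasible
    return min(max(u_proposed, u_lo), u_hi)

# A proposal that would overshoot the bound is clipped to the largest
# safe input; a safe proposal passes through unchanged.
u_safe = safety_filter(0.9, 0.5)    # clipped
u_ok = safety_filter(0.0, 0.5)      # unchanged
```

The key property is that the filter is agnostic to how the proposed input was generated, which is what lets learned controllers be wrapped with a safety guarantee.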

Bio: Melanie Zeilinger is an Associate Professor in the Department of Mechanical and Process Engineering at ETH Zurich, where she leads the Intelligent Control Systems group. She received her diploma in Engineering Cybernetics from the University of Stuttgart, Germany, in 2006 and her Ph.D. degree in Electrical Engineering from ETH Zurich in 2011. From 2011 to 2012 she was a postdoctoral fellow at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. From 2012 to 2015 she was a Postdoctoral Researcher and Marie Curie fellow in a joint program with the University of California at Berkeley, USA, and the Max Planck Institute for Intelligent Systems in Tuebingen, Germany. From 2018 to 2019 she was a professor at the University of Freiburg, Germany. Her awards include the ETH medal for her PhD thesis, an SNF Professorship, the Golden Owl for exceptional teaching at ETH Zurich in 2022, and the European Control Award in 2023. Her research interests include learning-based control with applications to robotics and biomedical systems.