Introduction to recursive machine learning algorithms
This workshop is a hands-on introduction to machine learning methods that use data as it arrives. The algorithms update their internal state with every new piece of information and produce estimates of unmeasured or derived quantities. Some of the algorithms also revise previously stored information to improve those estimates.
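To make the "update with every new piece of information" idea concrete, here is a minimal sketch (my own illustration, not part of the workshop materials) of a recursive estimator: a running mean that keeps only two numbers as internal state and folds in each measurement as it arrives, without storing the data.

```python
# A minimal recursive estimator: a running mean. Illustrative only;
# the names and structure are my own, not the workshop's.

class RunningMean:
    def __init__(self):
        self.n = 0        # number of samples seen so far
        self.mean = 0.0   # current estimate (the internal state)

    def update(self, x):
        """Fold one new measurement into the estimate."""
        self.n += 1
        self.mean += (x - self.mean) / self.n   # recursive update rule
        return self.mean

est = RunningMean()
for x in [2.0, 4.0, 6.0]:
    print(est.update(x))   # prints 2.0, 3.0, 4.0
```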
Duration: 3 days (24 h)
When: 2025
Where: OST Campus Rapperswil-Jona
Language: English
Participants: 10
Fee: CHF 1000.--
Topics: Machine learning, online algorithms, Kalman filters
Requirements: English, linear algebra (basic), probability theory (basic), programming experience (beginner), calculus (integrals and derivatives, basic)
Registration: The course will take place with a minimum of 4 participants. If you are interested, please contact juanpablo.carbajal@ost.ch
Due to its requirements, the workshop is suited for advanced undergraduate students, master's students, or professionals who need to understand this type of algorithm.
You can find information about the venue here: https://www.ost.ch/de/die-ost/campus/campus-rapperswil-jona
The workshop has an applied perspective, so the following topics are covered to the depth needed to understand the ideas and apply them to simple examples. Each topic comes with examples and interactive activities.
Workshop structure
The workshop splits the main loop of the Kalman filter algorithm (shaded in red in the image below) into three sessions. Each session takes a full day of training.
Session structure. Shaded in red is the main loop of the Kalman filter; each one-day session covers a part of it. Source: Kalman filter at Wikipedia.
The first session (S1 in the image) covers modeling and the prediction step of the algorithm. The second session (S2 in the image) deals with the update step, which involves concepts from probability and Bayes' rule. The last session (not shown in the figure) is about using the algorithm in different applications, some brought by the participants.
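For a concrete picture of that loop: it amounts to a prediction step followed by an update step. Below is a minimal NumPy sketch of one pass through the loop for a linear-Gaussian model. The notation (A, Q, H, R) follows standard textbook conventions such as Särkkä (2013); the code itself is my own illustration, not the workshop's material.

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One pass through the Kalman filter loop for the linear-Gaussian
    model x_{k+1} = A x_k + noise(Q), y_k = H x_k + noise(R)."""
    # --- Prediction step (covered in S1): propagate mean and covariance
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q

    # --- Update step (covered in S2): fold in the new measurement y
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)  # corrected mean
    P_new = P_pred - K @ S @ K.T           # corrected covariance
    return m_new, P_new

# Toy usage: track a scalar random walk from noisy observations.
A = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.5]])
m, P = np.array([0.0]), np.array([[1.0]])
for y in [0.9, 1.1, 1.0]:
    m, P = kalman_step(m, P, np.array([y]), A, Q, H, R)
    print(m, P)
```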
First day: modeling
Session 1.1: Overview. Filtering and smoothing.
Session 1.2: Iterated maps. The straight line as an iterated map (see the sketch after this list).
Session 1.3: Error propagation. Gaussian distribution.
Session 1.4: Stochastic modeling. Statistical dependence and causality.
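As announced in Session 1.2, here is a small Python sketch of two Day 1 ideas: the straight line written as an iterated map, and Gaussian error propagation through a linear model. The code and parameter values are illustrative assumptions of mine, not the workshop's materials.

```python
import numpy as np

# (1) The straight line y = a*x + b as an iterated map:
#     y_{k+1} = y_k + a*dx, starting from y_0 = b.
# (2) Error propagation through the linear model y = a*x + b:
#     if x ~ N(mu, sigma^2), then y ~ N(a*mu + b, a^2 * sigma^2).

a, b, dx = 2.0, 1.0, 0.5

# (1) Iterate the map; the states are exact samples of the line.
y = b
for k in range(4):
    print(k * dx, y)          # lies exactly on y = a*x + b
    y = y + a * dx            # one step of the iterated map

# (2) Analytic error propagation, checked by Monte Carlo.
mu, sigma = 1.0, 0.1
x = np.random.default_rng(0).normal(mu, sigma, 10_000)
y_samples = a * x + b
print(a * mu + b, y_samples.mean())     # propagated mean, ~3.0
print(abs(a) * sigma, y_samples.std())  # propagated std, ~0.2
```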
Second day: inference
Session 2.1: Recap. Q&A.
Session 2.2: Conditional probability. Bayesian models. Kalman-Bucy filter.
Session 2.3: Data-driven models. Recursive regression (see the sketch after this list). Delayed regression. Extreme learning machines.
Session 2.4: Parameter estimation.
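The recursive regression of Session 2.3 follows the same pattern as the filter loop: keep a small state (the parameter estimate and its uncertainty) and update it with each new data point instead of refitting on all the data. A minimal recursive least-squares sketch, with my own naming and a made-up toy problem:

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """One recursive least-squares step for the model y = phi @ theta + noise."""
    k = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct with the prediction error
    P = P - np.outer(k, phi @ P)           # shrink the uncertainty
    return theta, P

# Toy problem: fit y = 2*x + 1 one sample at a time.
rng = np.random.default_rng(0)
theta = np.zeros(2)              # [slope, intercept]
P = 1e3 * np.eye(2)              # large initial uncertainty
for x in rng.uniform(0, 1, 100):
    phi = np.array([x, 1.0])
    y = 2.0 * x + 1.0 + 0.05 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)                     # close to [2.0, 1.0]
```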
Third day: applications
Session 3.1: Recap. Q&A.
Session 3.2: Emptying water tank. Linearization. Inflow estimation (see the sketch after this list).
Session 3.3: Ordinary differential equations (ODEs). Discretization of ODEs. 2D object tracking.
Session 3.4: Interconnected tanks and participant-contributed models.
The original name of the workshop was “Introduction to online Bayesian learning”. However, after feedback from some prospective participants, I decided to change it to the current version: the word “online” triggered the wrong mental images (mainly “internet”), and “Bayesian” was meaningless to many. The current title is far less specific than the old one, and probably too general for the actual contents of the workshop.
Timeline
Below is the planned timeline for each day of the workshop. The specific contents of each session will be defined later, when I lay out the full contents in their final version.
Readings
Book for the workshop
Särkkä, S. (2013). Bayesian Filtering and Smoothing (Institute of Mathematical Statistics Textbooks). Cambridge: Cambridge University Press. doi:10.1017/CBO9781139344203
More details on the author’s webpage
Other readings
Simo Särkkä and Arno Solin (2019). Applied Stochastic Differential Equations. Cambridge University Press.
Strogatz, S. H. (2015). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC press. ISBN 978-0813349107
Carl Edward Rasmussen and Christopher K. I. Williams (2006). Gaussian Processes for Machine Learning. The MIT Press. ISBN 0-262-18253-X. A free PDF is available on the book's webpage.
Lennart Ljung (2010), Perspectives on system identification, Annual Reviews in Control, Volume 34, Issue 1, Pages 1-12, ISSN 1367-5788
Kalman, R. E. (1960). A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering, 82(1), 35–45. The famous paper describing a recursive solution to the discrete-data linear filtering problem.
Judea Pearl. 2019. The seven tools of causal inference, with reflections on machine learning. Commun. ACM 62, 3 (March 2019), 54–60. https://doi.org/10.1145/3241036
Peters, J., Janzing, D., & Schölkopf, B. (2017). Elements of Causal Inference: Foundations and Learning Algorithms. Cambridge, MA: MIT Press. ISBN 978-0-262-03731-0
Pillonetto, G., Chen, T., Chiuso, A., De Nicolao, G., & Ljung, L. (2022). Regularized system identification: Learning dynamic models from data. Springer.
Contact
Dr. Juan Pablo Carbajal
IET Institut für Energietechnik
Research Associate (Wissenschaftlicher Mitarbeiter)
+41 58 257 42 64
juanpablo.carbajal@ost.ch