Author: Storm, Ludvig
Date: 2025-09-29
ISBN: 978-91-8115-463-4 (print); 978-91-8115-464-1 (PDF)
URI: https://hdl.handle.net/2077/89665
Title: Understanding machine learning through dynamical-systems methods
Language: English
Type: Text

Abstract:
Machine learning has in the past decade been successfully applied to a vast range of tasks, including classification, time-series prediction, and optimal navigation. However, the internal mechanisms of many models remain difficult to interpret, and we lack a systematic understanding of when and why they perform successfully. Dynamical-systems theory has long been used to study complex, high-dimensional systems by focusing on their geometric and stability properties. In this thesis, methods from dynamical-systems theory are applied to machine-learning models to gain new insights into their behaviour, with particular emphasis on finite-time Lyapunov exponents (FTLE) and Lagrangian coherent structures (LCS).

In the first part, FTLEs are used to study how feed-forward neural networks organise sensitivity in input space, distinguishing regimes where networks align sensitivity with decision boundaries from regimes where embeddings appear random. In the second part, reservoir computing is analysed from a dynamical-systems perspective, and the maximal Lyapunov exponent of the driven reservoir is identified as the key parameter controlling prediction performance. In the third part, LCS theory is applied to the dynamics of active particles in flows, and it is shown how coherent structures determine when navigation strategies succeed or fail, in particular by explaining the trapping of swimmers in vortical regions.

Overall, the thesis demonstrates that concepts originally developed to analyse complex physical systems can be fruitfully applied to machine learning. The use of FTLEs and LCS provides systematic tools for quantifying sensitivity and stability, offering a complementary perspective to existing approaches for analysing when and how machine-learning algorithms are able to learn.
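
For reference, the central quantity of the thesis, the finite-time Lyapunov exponent, is conventionally defined as follows. This is the standard definition from the dynamical-systems literature, not a formula quoted from the thesis itself:

\[
  \lambda_T(\mathbf{x}_0) = \frac{1}{T} \ln \sigma_{\max}\!\left(\nabla \phi^{T}(\mathbf{x}_0)\right),
\]

where \(\phi^{T}\) is the flow map over the time horizon \(T\), \(\nabla \phi^{T}\) is its Jacobian with respect to the initial condition \(\mathbf{x}_0\), and \(\sigma_{\max}\) denotes the largest singular value. Ridges of the FTLE field are commonly used to delineate the Lagrangian coherent structures referred to in the abstract.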