MaML Seminar SoSe22

Logistics

Organizers: Valentina Disarlo, Marta Magnani, Diaaeldin Taha
Location: The seminar takes place in Seminar Raum C (Mathematikon) or online via Zoom. Contact us or join the HEGL Mailing List to receive the Zoom coordinates.

Schedule

01.08.2022 – Modeling Non-Euclidean Data via Fréchet Regression

Speaker: Danielle Tucker (Chicago)
Time: 16.00–17.00 CEST (over Zoom)
Abstract: Fréchet regression is an extension of classical regression to cover more general types of responses, such as distributions, networks, and manifolds. In these models, predictors are Euclidean while responses are metric space valued. Predictor selection is of major relevance for regression modeling in the presence of multiple predictors, but it has not yet been addressed for Fréchet regression. Because the responses take values in a metric space, Fréchet regression models have no model parameters, and this makes it a major challenge to extend existing variable selection methods for linear regression to Fréchet regression, and to global Fréchet regression in particular. In this talk, I will share my work, in collaboration with Yichao Wu (UIC) and Hans-Georg Mueller (UC Davis), which addresses this challenge and proposes a novel variable selection method with good practical performance. We provide theoretical support and demonstrate that the proposed method achieves selection consistency. We also explore its finite sample performance with numerical examples and real data illustrations. If time permits, I will briefly share more recent research that grew out of this paper and seeks to further generalize global and local Fréchet regression by allowing not only the responses but also the predictors to come from a general metric space.
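
For context, here is a brief sketch of the objects behind the abstract. The formulation follows Petersen and Müller's global Fréchet regression as we read it; the notation below is an assumption on our part and does not appear in the abstract.

```latex
% Sketch (assumed notation): classical regression characterizes the
% conditional mean as a minimizer,
%   m(x) = \arg\min_{y \in \mathbb{R}} \mathbb{E}[(Y - y)^2 \mid X = x],
% and Fréchet regression replaces the squared error with a squared metric.
% Global Fréchet regression, for a response Y in a metric space (\Omega, d)
% and a Euclidean predictor X with mean \mu and covariance \Sigma:
\[
  m_\oplus(x) = \arg\min_{\omega \in \Omega}
      \mathbb{E}\!\left[\, s(X, x)\, d^{2}(Y, \omega) \,\right],
  \qquad
  s(X, x) = 1 + (X - \mu)^{\top} \Sigma^{-1} (x - \mu).
\]
% No coefficient vector appears: the weight s(X, x) takes over the role of
% the regression parameters, which is why classical, parameter-based
% variable selection does not transfer directly.
```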

25.07.2022 – Topological Graph Neural Networks (POSTPONED)

Speaker: Edward de Brouwer (KU Leuven)
Time: 14.00–15.00 CEST (over Zoom) POSTPONED
Abstract: Graph neural networks (GNNs) are a powerful architecture for tackling graph learning tasks, yet have been shown to be oblivious to eminent substructures such as cycles. In this talk, we introduce TOGL, a novel layer that incorporates global topological information of a graph using persistent homology. TOGL can be easily integrated into any type of GNN and is strictly more expressive (in terms of the Weisfeiler–Lehman graph isomorphism test) than message-passing GNNs. Augmenting GNNs with TOGL leads to improved predictive performance for graph and node classification tasks, both on synthetic data sets, which can be classified by humans using their topology but not by ordinary GNNs, and on real-world data.
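
As a rough illustration of the topological signal TOGL taps into, the sketch below computes the 0-dimensional persistence pairs of a graph under a given node filtration via union-find. This is an illustrative reconstruction, not the authors' code: in TOGL the filtration values are produced by a learned network rather than fixed as they are here, and higher-dimensional features are handled as well.

```python
# Illustrative sketch, not the TOGL implementation: 0-dimensional
# persistent homology of a graph under a node filtration, via union-find.

def zero_dim_persistence(filtration, edges):
    """filtration: dict node -> float; edges: list of (u, v) pairs.
    Returns (birth, death) pairs of connected components; components
    that never merge away get death = inf."""
    parent = {v: v for v in filtration}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    # A node enters at its filtration value; an edge enters at the
    # larger of its endpoints' values (sublevel-set convention).
    pairs = []
    for u, v in sorted(edges, key=lambda e: max(filtration[e[0]], filtration[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # edge closes a cycle; no 0-dimensional event
        # Elder rule: the younger component (larger birth) dies; roots
        # always carry their component's minimum filtration value.
        if filtration[ru] > filtration[rv]:
            ru, rv = rv, ru
        pairs.append((filtration[rv], max(filtration[u], filtration[v])))
        parent[rv] = ru
    # Components that survive the whole filtration never die.
    pairs.extend((filtration[r], float("inf")) for r in {find(v) for v in filtration})
    return pairs

# Toy example: a path on three nodes with a high-valued middle node.
print(zero_dim_persistence({0: 0.1, 1: 0.9, 2: 0.2}, [(0, 1), (1, 2)]))
# -> [(0.9, 0.9), (0.2, 0.9), (0.1, inf)]
```

TOGL packages statistics of such persistence pairs into a differentiable layer, so the filtration itself can be trained jointly with the rest of the GNN.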

23.05.2022 – Towards Understanding Self-Supervised Representation Learning

Speaker: Nikunj Saunshi (Princeton)
Time: 14.15–15.15 CEST (over Zoom)
Abstract: While supervised learning sparked the deep learning boom, it has some critical pitfalls: (1) it requires an abundance of expensive labeled data, and (2) it solves tasks from scratch rather than taking the human-like approach of leveraging knowledge and skills acquired from prior experience. Pre-training has emerged as an alternative and effective paradigm, where a model is first trained using easily acquirable data and later used to solve downstream tasks of interest with much less labeled data than supervised learning requires. Pre-training on unlabeled data, a.k.a. self-supervised learning, has been especially revolutionary, with successes in diverse domains (text, vision, speech, etc.). This raises an interesting and challenging question: why should pre-training on unlabeled data help with seemingly unrelated downstream tasks? In this talk, I will present my work initiating and building theoretical frameworks to study self-supervised learning methods such as contrastive learning, language modeling, and self-prediction-based methods. Central to these frameworks is the idea that pre-training learns low-dimensional representations of data from which downstream tasks of interest can be solved with linear classifiers, thus requiring little labeled data. A common theme is to show mathematically how appropriate pre-training objectives can extract the downstream signal that is implicitly encoded in the unlabeled data distribution, and how this signal can be decoded from the learned representations using linear classifiers, providing a formalization for the transference of “skills and knowledge” across tasks.
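
For concreteness, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss, one instance of the contrastive learning methods the talk analyzes. The function name, temperature value, and toy data are illustrative choices, not taken from the talk.

```python
import numpy as np

def info_nce_loss(anchor, positive, temperature=0.1):
    """anchor, positive: (batch, dim) L2-normalized embeddings of two
    "views" of the same data points; other rows in the batch serve as
    negatives. Returns the mean cross-entropy of matching each anchor
    to its own positive."""
    logits = anchor @ positive.T / temperature        # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # correct pairs sit on the diagonal

# Toy usage: positives are lightly perturbed copies of the anchors.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
z_pos = z + 0.05 * rng.normal(size=z.shape)
z_pos /= np.linalg.norm(z_pos, axis=1, keepdims=True)
print(info_nce_loss(z, z_pos))  # small loss: matched views agree
```

Under the frameworks described in the abstract, representations trained with such an objective are then evaluated by fitting a linear classifier on a small labeled sample, which is where the saving in labeled data shows up.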

09.05.2022 – A Simple RL Setup to Find Counterexamples to Open Conjectures in Mathematics

Speaker: Adam Wagner (Tel Aviv)
Time: 14.15–15.15 CEST (over Zoom)
Abstract: In this talk, we will leverage a reinforcement learning method, specifically the cross-entropy method, to search for counterexamples to several conjectures in graph theory and combinatorics. We will present a very simple setup in which only minimal changes (namely, the reward function used for RL) are needed to attack a wide variety of problems successfully. As a result, we will resolve several open problems and find more elegant counterexamples to previously disproved conjectures.
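
To make the setup concrete, here is a minimal sketch of the cross-entropy method specialized to searching over graphs. The toy reward and the independent edge probabilities are illustrative simplifications on our part; they stand in for the problem-specific rewards and the learned sampling policy used in the talk.

```python
import numpy as np

# Minimal cross-entropy-method sketch: a graph on n vertices is encoded
# as a 0/1 vector over the n*(n-1)/2 possible edges, sampled from
# independent Bernoulli probabilities that are refit to the best samples.

def cross_entropy_search(reward, n_bits, iters=200, batch=256,
                         elite_frac=0.1, smooth=0.7, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                      # edge-inclusion probabilities
    k = max(1, int(batch * elite_frac))
    best, best_score = None, -np.inf
    for _ in range(iters):
        samples = (rng.random((batch, n_bits)) < p).astype(int)
        scores = np.array([reward(s) for s in samples])
        elite = samples[np.argsort(scores)[-k:]]  # keep the top elite_frac by reward
        p = smooth * p + (1 - smooth) * elite.mean(axis=0)  # smoothed refit
        if scores.max() > best_score:
            best_score, best = scores.max(), samples[scores.argmax()]
    return best, best_score

# Toy reward (purely illustrative, not a conjecture from the talk):
# prefer graphs on 8 vertices with exactly 10 edges.
n = 8
g, score = cross_entropy_search(lambda s: -abs(int(s.sum()) - 10), n * (n - 1) // 2)
print(score)  # 0 once a 10-edge graph is found
```

Attacking a different conjecture then amounts to swapping in a reward that scores the relevant graph invariant, which is the "only minimal changes" point of the abstract.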