Keynote Speakers

Learning on Graphs Conference, 2024

General Schedule

Day 1 – Nov 26, 2024

Size (OOD) Generalization of Neural Models via Algorithmic Alignment

Speaker: Yusu Wang

Abstract: Size (or length) generalization is a key challenge in neural algorithmic reasoning. Specifically, when can a neural model with bounded complexity generalize to problem instances of arbitrary size? This problem, known as size generalization, is a special case of out-of-distribution (OOD) generalization. In this talk, I will present three examples of achieving such size generalization by “aligning” the neural models with algorithmic structures. The first two (quick) examples concern the simpler “expressivity” question, where the goal is to design neural models that are capable of size generalization for the potentially hard problem at hand. The main part of the talk will focus on the third example, which goes beyond “expressivity” and tackles the much more challenging question of certifying, provably, that a trained model generalizes to inputs of arbitrary size. More precisely, we consider the problem of predicting the output of the K-step Bellman-Ford (BF) procedure for computing graph shortest paths. It has been observed in the literature that a special family of graph neural networks (which we refer to as BF-GNNs) has a natural alignment with the BF procedure. Surprisingly, we show that we can construct a set of only a constant number of small graphs such that, if the neural Bellman-Ford model (even when over-parameterized) has low loss over these graphs, then this model will provably generalize to arbitrary graphs with positive weights. To the best of our knowledge, this is the first provable (and practical) generalization certificate for neural approximation of complex tasks. This result also has interesting implications for training neural algorithmic modules.
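To make the “alignment” concrete, below is a minimal illustrative sketch (not code from the talk) of the K-step Bellman-Ford procedure written in the synchronous, message-passing form that BF-GNNs mirror: each step acts like one GNN layer, updating every node's distance estimate by a min-aggregation over messages from its neighbors. The function name and example graph are hypothetical.

    import math

    def bellman_ford_k_steps(n, edges, source, K):
        """Run K synchronous Bellman-Ford steps on a graph with n nodes.
        edges: list of (u, v, w) directed edges with positive weight w.
        Returns d, where d[v] is the shortest source-to-v distance using
        at most K edges (hops)."""
        d = [math.inf] * n
        d[source] = 0.0
        for _ in range(K):
            # One step = one "layer": message is d[u] + w, aggregation is min.
            new_d = list(d)
            for u, v, w in edges:
                new_d[v] = min(new_d[v], d[u] + w)
            d = new_d
        return d

    # After K = 2 steps, the 2-hop path 0 -> 1 -> 2 (cost 3) beats the
    # direct edge 0 -> 2 (cost 5).
    edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0)]
    print(bellman_ford_k_steps(3, edges, source=0, K=2))  # [0.0, 1.0, 3.0]

A GNN layer whose message and aggregation functions can represent d[u] + w and min can simulate one BF step on graphs of any size; this structural correspondence is what the alignment in the talk refers to.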

Bio: Yusu Wang is a Professor in the Halicioglu Data Science Institute (HDSI) at the University of California, San Diego, where she also serves as the Director of the NSF National AI Institute TILOS. Prior to joining UCSD, she was a Professor in the Computer Science and Engineering Department at the Ohio State University. She obtained her PhD from Duke University in 2004, where she received the Best PhD Dissertation Award in the CS Department. From 2004 to 2005, she was a postdoctoral fellow at Stanford University. Yusu Wang primarily works in geometric and topological data analysis (with a textbook on Computational Topology for Data Analysis), geometric deep learning, and representation learning. She received the DOE Early Career Principal Investigator Award in 2006 and the NSF CAREER Award in 2008. She is on the editorial boards of the SIAM Journal on Computing (SICOMP) and the Journal of Computational Geometry (JoCG). She is a member of the AATRN Advisory Committee and was on the Computational Geometry Steering Committee. She also serves on the SIGACT CATCS committee and the AWM Meetings Committee.

Day 2 – Nov 27, 2024

Title (TBD)

Speaker: Zachary Ulissi

Abstract: TBA

Day 3 – Nov 28, 2024

Integrating Large Language Models and Graph Neural Networks

Speaker: Xavier Bresson

Abstract: Language models pre-trained on large-scale datasets have revolutionized text-based applications, enabling new capabilities in natural language processing. When documents are connected, they form a text-attributed graph (TAG); examples include the Internet, Wikipedia, social networks, scientific literature networks, biological networks, scene graphs, and knowledge graphs. Key applications for TAGs include recommendation (web), classification (node, link, and graph), text- and visual-based reasoning, and retrieval-augmented generation (RAG). In this talk, I will introduce two approaches that integrate Large Language Models (LLMs) with Graph Neural Networks (GNNs). The first method demonstrates how the reasoning capabilities of LLMs can enhance TAG node features. The second approach introduces a pioneering technique called GraphRAG, which grounds LLM responses in a relevant sub-graph structure. This scalable technique regularizes the language model, significantly reducing incorrect responses, also known as hallucinations.
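As a rough, hypothetical sketch of the grounding idea (not the specific method presented in the talk), one can retrieve a small sub-graph around entities mentioned in a query and serialize its facts into the LLM prompt. The entity matching, hop radius, and prompt format below are all stand-in assumptions for illustration.

    import networkx as nx

    def retrieve_subgraph(kg, query, hops=1):
        """Keep the hops-hop neighborhood around entities named in the query."""
        seeds = [node for node in kg.nodes if str(node).lower() in query.lower()]
        nodes = set()
        for s in seeds:
            nodes |= set(nx.ego_graph(kg, s, radius=hops).nodes)
        return kg.subgraph(nodes)

    def build_grounded_prompt(kg, query):
        sub = retrieve_subgraph(kg, query)
        # Serialize retrieved edges as (head, relation, tail) facts.
        facts = [f"({u}, {data.get('relation', 'related_to')}, {v})"
                 for u, v, data in sub.edges(data=True)]
        return ("Answer using only these facts:\n" + "\n".join(facts)
                + f"\n\nQuestion: {query}")

    kg = nx.Graph()
    kg.add_edge("Aspirin", "COX-1", relation="inhibits")
    kg.add_edge("COX-1", "Prostaglandin", relation="produces")
    print(build_grounded_prompt(kg, "What does Aspirin inhibit?"))

Constraining the model to answer from retrieved graph facts is the regularizing effect the abstract describes: responses unsupported by the sub-graph can be avoided or rejected, reducing hallucinations.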

Bio: Xavier Bresson is an Associate Professor in the Department of Computer Science at the National University of Singapore (NUS). His research focuses on Graph Deep Learning, a framework that combines graph theory and neural networks to tackle complex data domains. He received the USD 2M NRF Fellowship, the largest individual grant in Singapore, to develop this framework, and was also awarded several research grants in the U.S. and Hong Kong. He co-authored one of the most cited works in this domain (the 10th most cited paper at NeurIPS) and has contributed significantly to maturing these emerging techniques. He has organized several conferences, workshops, and tutorials on graph deep learning, such as the IPAM'23 workshop on “Learning and Emergence in Molecular Systems”, the IPAM'23 and IPAM'21 workshops on “Deep Learning and Combinatorial Optimization”, the MLSys'21 workshop on “Graph Neural Networks and Systems”, the IPAM'19 and IPAM'18 workshops on “New Deep Learning Techniques”, and the NeurIPS'17, CVPR'17, and SIAM'18 tutorials on “Geometric Deep Learning on Graphs and Manifolds”. He has been a regular invited speaker at universities and companies. He has also spoken at the NeurIPS'22, KDD'21, KDD'23, AAAI'21, and ICML'20 workshops on “Graph Representation Learning”, and the ICLR'20 workshop on “Deep Neural Models and Differential Equations”. He has taught undergraduate and graduate courses on Deep Learning and Graph Neural Networks since 2014.

Day 4 – Nov 29, 2024

Towards Rational Drug Design with AlphaFold 3

Speaker: Alden Hung

Abstract: TBA

Bio: Dr. Alden Hung, a native of Taiwan, developed a passion for computer programming in high school, achieving recognition through multiple programming competition wins. After earning his medical degree in Taiwan, he pursued his interest in the brain by completing a degree in systems neuroscience at Johns Hopkins University and a postdoctoral fellowship at the National Institute of Mental Health, NIH. Nine years ago, Dr. Hung transitioned from academia to Google DeepMind, applying his neuroscience expertise to advance deep learning and reinforcement learning. Currently, he is a principal ML research scientist at Isomorphic Labs, focusing on leveraging AI’s potential to revolutionise drug discovery. Notably, he was a core contributor to AlphaFold 3.