Learning on Graphs Conference, 2025
Organizers: Vijay Prakash Dwivedi, Charilaos I. Kanatsoulis, Shenyang Huang, Michael Bronstein, Jure Leskovec
Date: December 10, 2025
| US West Coast | Arizona (MST) | US East Coast | Europe (London) | Asia (Beijing) |
|---|---|---|---|---|
| 14:00 | 15:00 | 17:00 | 22:00 | 06:00 (Dec 11) |
Length: 3 hours
Abstract:
We introduce the emerging field of relational deep learning (RDL), in which relational databases are represented as graph-structured data by treating each table row as a node and each primary-foreign key relationship as an edge. We begin by highlighting the key challenges in formulating relational entity graphs, as well as different aspects of data modeling such as temporality, heterogeneity, and scale. Next, we discuss the core methods for RDL, including Graph Neural Networks (GNNs) along with their temporal and heterogeneous extensions and graph transformers (GTs), while reviewing current benchmark datasets in this domain. We then discuss recent advancements and frontier architectures in RDL. Finally, we include a hands-on session demonstrating how to build a complete RDL pipeline, covering data preparation, featurization, sampling, and GNN and GT training and evaluation. Our tutorial will serve as a comprehensive introduction to relational deep learning for the Learning on Graphs community.
Website: None
Setup requirements: Attendees are expected to have a development environment set up for a general graph machine learning workflow.
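
As a rough illustration of the relational-entity-graph formulation described in the abstract above (table rows as nodes, primary-foreign key links as edges), here is a minimal sketch using PyTorch Geometric's `HeteroData`. The table names, feature dimensions, and foreign-key column are purely illustrative assumptions, not material from the tutorial itself.

```python
import torch
from torch_geometric.data import HeteroData

data = HeteroData()

# One node per table row; table names and feature sizes are hypothetical.
data["users"].x = torch.randn(100, 16)   # 100 rows in an assumed "users" table
data["orders"].x = torch.randn(500, 8)   # 500 rows in an assumed "orders" table

# Primary-foreign key link: each order row references the user row that placed it.
fk_user = torch.randint(0, 100, (500,))   # hypothetical orders.user_id column
src = torch.arange(500)                   # order row indices
data["orders", "placed_by", "users"].edge_index = torch.stack([src, fk_user], dim=0)

# Per-row timestamps can be attached for temporal sampling, as the abstract notes.
data["orders"].time = torch.arange(500)

print(data)
```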
Organizers: Xingyue Huang, Ben Finkelshtein, Zifeng Ding
Date: December 11, 2025
| US West Coast | Arizona (MST) | US East Coast | Europe (London) | Asia (Beijing) |
|---|---|---|---|---|
| 14:30 | 15:30 | 17:30 | 22:30 | 06:30 (Dec 12) |
Length: 2.5 hours
Abstract:
Graph-structured data are ubiquitous across science and industry, yet today’s graph ML pipelines remain task- and dataset-specific, limiting robustness and transfer. This long-format tutorial surveys recent advances in graph foundation models (GFMs) through three modules: (1) GFMs for node and graph classification, (2) knowledge graph foundation models (KGFMs), and (3) LLMs as graph foundation models for temporal graphs. Across the modules, we will emphasize architectural choices and designs that respect group-equivariance (e.g., permutation groups of class-feature-label, product groups for multi-relational structure, and temporal symmetries), theoretical properties of recent GFMs (expressivity, equivariance characterizations, scaling behavior), and LLMs-as-GFMs with prompting and agentic search on graphs. Attendees will leave with practical recipes and a principled roadmap for building scalable, equivariant, and trustworthy GFMs, and for integrating LLM-driven agents that reason and act over graphs.
Website: https://github.com/HxyScotthuang/Graph-Foundation-Models-LoG-2025-Tutorial
Setup requirements: None
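
The abstract above emphasizes architectural designs that respect group equivariance, in particular permutation symmetry. As a hypothetical illustration (not part of the tutorial's material), the sketch below checks permutation equivariance of an off-the-shelf message-passing layer from PyTorch Geometric: permuting the node order, and relabeling edges accordingly, should permute the output rows in exactly the same way.

```python
import torch
from torch_geometric.nn import GCNConv

torch.manual_seed(0)
conv = GCNConv(in_channels=8, out_channels=8)

x = torch.randn(5, 8)                                    # 5 nodes, 8 features
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])  # a small path graph

perm = torch.randperm(5)    # a random relabeling of the nodes
inv = torch.argsort(perm)   # inverse permutation, used to relabel edge endpoints

out = conv(x, edge_index)
out_perm = conv(x[perm], inv[edge_index])  # same graph, nodes presented in a new order

# Equivariance: f(P x) == P f(x), up to floating-point tolerance.
assert torch.allclose(out[perm], out_perm, atol=1e-4)
```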
Organizers: Daniele Malitesta, Fragkiskos D. Malliaros
Date: December 12, 2025
| US West Coast | Arizona (MST) | US East Coast | Europe (London) | Asia (Beijing) |
|---|---|---|---|---|
| 10:30 | 11:30 | 13:30 | 18:30 | 02:30 (Dec 13) |
Length: 2.5 hours
Abstract:
Graph data with multimodal information is ubiquitous, from users posting content on social networks and customers buying products on e-commerce platforms to patients interconnected with diseases and drugs in electronic health records (EHRs). For these reasons, graph machine learning has gradually shifted from a unimodal to a multimodal paradigm over the past few years. Despite their effectiveness, these approaches may be greatly limited when such multimodal information is noisy or, even worse, missing, a quite common situation in real-world scenarios. This tutorial provides one of the first formal and practical overviews of established and recent techniques for imputing missing multimodal information in graph machine learning. It first introduces traditional graph-based approaches for handling missing information in unimodal settings, and then presents the current literature on imputation for multimodal data in graph machine learning. The tutorial also surveys popular application scenarios where the missing-information issue arises, such as the recommendation and healthcare domains, highlighting how graphs can be the source of missingness (in the former) or the tool for addressing missing multimodal information (in the latter). These application scenarios are further explored in a hands-on session that presents and tests the complete experimental pipelines of two recent solutions.
Website: https://log-centralesupelec.github.io/missing-multimod-gml-log2025/
Setup requirements: None
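
The abstract above mentions traditional graph approaches to missing information in unimodal settings. One classic idea along these lines is to propagate observed node features to nodes whose features are missing; the sketch below is a simplified, hypothetical illustration of that idea (not the tutorial's pipeline), with all names, shapes, and the iteration count assumed.

```python
import torch

def impute_by_propagation(x, edge_index, mask, num_iters=10):
    """Fill missing node features by iteratively averaging over neighbors.

    x: [N, F] node features; mask: [N] bool, True where features are observed;
    edge_index: [2, E] edges (for undirected graphs, include both directions).
    """
    row, col = edge_index            # aggregate into `row` from `col`
    x = x.clone()
    x[~mask] = 0.0                   # initialize missing rows
    deg = torch.zeros(x.size(0)).scatter_add_(0, row, torch.ones(row.size(0)))
    for _ in range(num_iters):
        agg = torch.zeros_like(x).index_add_(0, row, x[col])   # sum of neighbor features
        x_new = agg / deg.clamp(min=1).unsqueeze(-1)           # neighbor mean
        x = torch.where(mask.unsqueeze(-1), x, x_new)          # keep observed rows fixed
    return x

# Tiny usage example on a 4-node path graph with one missing row.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
x = torch.tensor([[1.0], [2.0], [0.0], [4.0]])
mask = torch.tensor([True, True, False, True])
print(impute_by_propagation(x, edge_index, mask))
```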