Tutorial Sessions

Learning on Graphs Conference, 2025

Tutorial Feedback Form

Did you attend one of the tutorials? Please provide feedback on the tutorial you attended by filling out the form below. Your feedback is important to us and will help both the conference and the tutorial organizers improve future events.



Day 1 – Dec 10, 2025

Relational Deep Learning: Foundations, Advanced Methods and Hands-on Development

Organizers: Vijay Prakash Dwivedi, Charilaos I. Kanatsoulis, Shenyang Huang, Michael Bronstein, Jure Leskovec

Date: December 10, 2025

US West Coast: 14:00
Arizona (MST): 15:00
US East Coast: 17:00
Europe (London): 22:00
Asia (Beijing): 06:00 (Dec 11)

Length: 3 hours

Abstract:
We introduce the emerging field of relational deep learning (RDL), where relational databases are represented as graph-structured data by treating each table row as a node and primary-foreign key relationships as edges. We begin by highlighting the key challenges in formulating relational entity graphs, as well as different aspects of data modeling such as temporality, heterogeneity and scale. Next, we discuss the core methods for RDL, including Graph Neural Networks (GNNs) alongside their temporal and heterogeneous extensions, and graph transformers (GT), while reviewing current benchmark datasets in this domain. We then discuss recent advancements and frontier architectures in RDL. Finally, we include a hands-on session that demonstrates how to build a complete RDL pipeline, including data preparation, featurization, and sampling, as well as GNN and GT training and evaluation. Our tutorial will serve as a comprehensive introduction to relational deep learning for the Learning on Graphs community.
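As a minimal illustration of the formulation above (rows as nodes, primary-foreign keys as edges), the sketch below builds a toy heterogeneous graph with PyTorch Geometric's HeteroData; the table names, columns, and sizes are hypothetical and not part of the tutorial material.

    # Minimal sketch, assuming PyTorch and PyTorch Geometric are installed.
    # Two toy tables, "user" and "order": each row becomes a node, and the
    # order -> user foreign key becomes a typed edge set.
    import torch
    from torch_geometric.data import HeteroData

    user_features = torch.randn(4, 8)                 # 4 user rows, 8 numeric features each
    order_features = torch.randn(6, 5)                # 6 order rows, 5 numeric features each
    order_user_fk = torch.tensor([0, 0, 1, 2, 3, 3])  # hypothetical foreign key column

    data = HeteroData()
    data['user'].x = user_features
    data['order'].x = order_features

    edge_index = torch.stack([torch.arange(6), order_user_fk])  # row 0: order ids, row 1: user ids
    data['order', 'placed_by', 'user'].edge_index = edge_index
    data['user', 'places', 'order'].edge_index = edge_index.flip(0)

    print(data)  # relational entity graph ready for a heterogeneous GNN or graph transformer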

Website: None

Setup requirements: Attendees are expected to have a development environment for a general graph machine learning workflow (a suggested sanity-check snippet follows the list below):

  1. Local Setup
  • Python – A Python installation with the PyTorch and PyTorch Geometric libraries, able to run general graph learning workflows.
  • GPU – A GPU with at least 40 GB of memory is recommended for the default batch sizes. (Batch sizes can be reduced to fit the available GPU memory.)
  • Framework requirements – Please configure the same environment as used in RelBench and RelGT.
  2. Browser Setup
  • Internet & Browser – A stable internet connection and a supported web browser are required.
  • Google Account – Ensure you have a Google account with the appropriate access permissions.
  • Google Colab – A Google Colab notebook will be provided that can be run interactively in the browser.
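The snippet below is a suggested sanity check for the local setup listed above, not official tutorial code: it assumes PyTorch and PyTorch Geometric are already installed, confirms they import, and reports whether a GPU is visible.

    # Suggested environment check for the local setup (assumption: torch and
    # torch_geometric are installed in the active environment).
    import torch
    import torch_geometric

    print("PyTorch:", torch.__version__)
    print("PyTorch Geometric:", torch_geometric.__version__)

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB")
    else:
        print("No GPU detected; reduce batch sizes to fit the available memory.")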

Day 2 – Dec 11, 2025

Graph Foundation Models

Organizers: Xingyue Huang, Ben Finkelshtein, Zifeng Ding

Date: December 11, 2025

US West Coast: 14:30
Arizona (MST): 15:30
US East Coast: 17:30
Europe (London): 22:30
Asia (Beijing): 06:30 (Dec 12)

Length: 2.5 hours

Abstract:
Graph-structured data are ubiquitous across science and industry, yet today’s graph ML pipelines remain task- and dataset-specific, limiting robustness and transfer. This long-format tutorial surveys recent advances in graph foundation models (GFMs) through three modules: (1) GFMs for node and graph classification, (2) knowledge graph foundation models (KGFMs), and (3) LLMs as graph foundation models for temporal graphs. Across the modules, we will emphasize architectural choices and designs that respect group-equivariance (e.g., permutation groups of class-feature-label, product groups for multi-relational structure, and temporal symmetries), theoretical properties of recent GFMs (expressivity, equivariance characterizations, scaling behavior), and LLMs-as-GFMs with prompting and agentic search on graphs. Attendees will leave with practical recipes and a principled roadmap for building scalable, equivariant, and trustworthy GFMs, and for integrating LLM-driven agents that reason and act over graphs.
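To make the group-equivariance theme concrete, here is a small illustrative check, not taken from the tutorial, that a mean-aggregation message-passing step is permutation equivariant: relabeling the nodes permutes the outputs in exactly the same way.

    # Illustrative sketch: permutation equivariance of mean-over-neighbours aggregation.
    import numpy as np

    def mean_aggregate(adj, x):
        # one message-passing round: average the features of each node's neighbours
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        return adj @ x / deg

    rng = np.random.default_rng(0)
    n, d = 5, 3
    adj = (rng.random((n, n)) < 0.4).astype(float)
    adj = np.maximum(adj, adj.T)                 # undirected toy graph
    x = rng.standard_normal((n, d))

    P = np.eye(n)[rng.permutation(n)]            # permutation matrix

    lhs = P @ mean_aggregate(adj, x)             # aggregate, then permute
    rhs = mean_aggregate(P @ adj @ P.T, P @ x)   # permute the graph, then aggregate
    print(np.allclose(lhs, rhs))                 # True: f(P.G) = P.f(G)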

Website: https://github.com/HxyScotthuang/Graph-Foundation-Models-LoG-2025-Tutorial

Setup requirements: None


Day 3 – Dec 12, 2025

Graph Machine Learning with Missing Multimodal Information

Organizers: Daniele Malitesta, Fragkiskos D. Malliaros

Date: December 12, 2025

US West Coast: 10:30
Arizona (MST): 11:30
US East Coast: 13:30
Europe (London): 18:30
Asia (Beijing): 02:30 (Dec 13)

Length: 2.5 hours

Abstract:
Graph data with multimodal information is ubiquitous, from users posting content on social networks and customers buying products on e-commerce platforms to patients interconnected with diseases and drugs in electronic health records (EHRs). For these reasons, graph machine learning has gradually shifted from a unimodal to a multimodal paradigm over the past few years. Despite their effectiveness, these approaches may be greatly limited if such multimodal information is noisy or (even worse) missing—a quite common situation in real-world scenarios. This tutorial provides one of the first formal and practical outlooks on established and recent techniques for imputing missing multimodal information in graph machine learning. It first introduces traditional graph approaches for tackling missing information in unimodal settings, and then presents the current literature on imputation for multimodal data in graph machine learning. Moreover, the tutorial offers an overview of popular application scenarios where the missing-information issue occurs, such as the recommendation and healthcare domains, highlighting how graphs can be the source of missingness (in the former) or the tool for addressing missing multimodal information (in the latter). These application scenarios are further explored during a hands-on session, which presents and tests the complete experimental pipeline of two recent solutions.
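As a simplified illustration of graph-based imputation of missing node features, in the spirit of feature-propagation-style baselines and not code from the hands-on session, the sketch below repeatedly averages over graph neighbours while keeping observed entries fixed.

    # Simplified illustration: impute missing node features by neighbourhood averaging.
    import numpy as np

    def propagate_missing_features(adj, x, mask, iters=50):
        # adj: (n, n) adjacency; x: (n, d) features; mask: True where a value is observed
        deg = adj.sum(axis=1, keepdims=True).clip(min=1)
        filled = np.where(mask, x, 0.0)              # start with zeros at missing entries
        for _ in range(iters):
            propagated = adj @ filled / deg          # neighbourhood mean
            filled = np.where(mask, x, propagated)   # re-impose the observed values
        return filled

    rng = np.random.default_rng(1)
    n, d = 6, 4
    adj = (rng.random((n, n)) < 0.5).astype(float)
    adj = np.maximum(adj, adj.T)
    x = rng.standard_normal((n, d))
    mask = rng.random((n, d)) > 0.3                  # roughly 30% of entries are "missing"

    x_imputed = propagate_missing_features(adj, x, mask)
    print(np.abs(x_imputed - x)[~mask].mean())       # error on the imputed (missing) entries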

Website: https://log-centralesupelec.github.io/missing-multimod-gml-log2025/

Setup requirements: None