Keynote Speakers

Learning on Graphs Conference, 2025

Day 1 – Dec 10, 2025

Title: TBA

Speaker: Aditi Krishnapriyan (UC Berkeley)

Abstract: TBA

Bio: Aditi Krishnapriyan is an Assistant Professor in the Department of Chemical Engineering and EECS at UC Berkeley. She is also a member of Berkeley AI Research (BAIR), part of the AI+Science group in EECS and the theory group in Chemical Engineering, and a faculty scientist in the Applied Mathematics and Computational Research Division at LBNL. She develops machine learning methods driven by the distinct challenges and opportunities of the natural sciences, with a particular interest in physics-inspired machine learning. Her areas of exploration include learning strategies that probe the relevance of physical inductive biases in ML models for scientific problems, the advantages ML can bring to classical physics-based numerical solvers (for example, through end-to-end differentiable frameworks and implicit layers), and better learning strategies for handling distribution shifts in the physical sciences. These methods are informed by and grounded in atomistic and continuum applications, including fluid mechanics, molecular dynamics, and related areas, and the work interfaces with fields such as numerical analysis, dynamical systems theory, quantum mechanics, computational geometry, optimization, and category theory.

Day 2 – Dec 11, 2025

Title: Geometric Deep Learning for Neural Artifacts: Symmetry-Aware Learning across Trained Model Weights, Internal Representations, and Gradients

Speaker: Haggai Maron (Technion / NVIDIA)

Abstract: The explosive growth of deep learning has created fundamentally new data modalities: the mathematical byproducts of neural network development. Practitioners generate vast quantities of valuable but underutilized data daily—trained model weights, internal representations during inference, and gradients during training. These byproducts, which we term neural artifacts, encode crucial information about model behavior, optimization dynamics, and internal computations. Applying machine learning directly to neural artifacts holds transformative potential: learning on weight spaces could revolutionize model selection, editing, and generation; learning on internal representations could enable efficient hallucination detection and reliability analysis in large language models; learning on gradient spaces could transform optimization and interpretability. Yet existing methods capture only a fraction of this potential because they fail to account for the fundamental symmetries inherent to these data types. We argue that this rich symmetry structure necessitates tools from geometric and equivariant deep learning, and that deploying these tools has the potential to transform how we learn from neural artifacts and significantly impact the deep learning community as a whole. In this talk, I will present the general paradigm and four works advancing it: (1) Equivariant Architectures for Learning in Deep Weight Spaces (ICML 2023), establishing theoretical foundations for symmetry-aware weight space learning; (2) Graph Metanetworks (ICLR 2024), enabling unified, architecture-agnostic processing through computational graph representations; (3) GradMetaNets (NeurIPS 2025), designing symmetry-aware architectures for gradient data; and (4) Neural Message-Passing on Attention Graphs for Hallucination Detection, demonstrating how symmetry-aware methods enhance model reliability and interpretability.

Bio: Haggai Maron is an Assistant Professor and the Robert J. Shillman Fellow at the Faculty of Electrical and Computer Engineering at the Technion, and a Senior Research Scientist at NVIDIA Research in Tel Aviv. His primary research interest is machine learning, with a focus on deep learning for structured data.

Day 3 – Dec 12, 2025

Title: TBA

Speaker: Michael Galkin (Google Research)

Abstract: TBA

Bio: Michael Galkin is a Research Scientist at Google Research in New York, working on GNNs, generalization, and structured representations for reasoning. His research includes work on graph transformers, geometric deep learning for the life sciences and chip design, and efficient kernels for standard and equivariant GNNs.

Title: TBA

Speaker: Melanie Weber (Harvard)

Abstract: TBA

Bio: Melanie Weber is an Assistant Professor of Applied Mathematics and of Computer Science at Harvard, where she leads the Geometric Machine Learning Group. Her research focuses on utilizing geometric structure in data to design efficient machine learning and optimization methods with provable guarantees. An AI Magazine article surveys Geometric Machine Learning, including her work in this area.