Keynote Speakers

Wednesday, 6th May 2026.

Leana Golubchik, University of Southern California, US

Talk title: Systems for AI: Predicting Performance of Machine Learning Workloads.

Deep learning has made substantial strides in many applications; new training techniques, larger datasets, increased computing power, and easy-to-use machine learning frameworks all contribute to this success. Accurate performance prediction (e.g., latency, throughput) of machine learning (ML) workloads is useful for a number of reasons, including resource management, neural architecture search, and efficient training and inference. In this talk, we will focus on our approaches to predicting performance of ML workloads. Our long-term goal is to broaden the population of users capable of developing deep learning models and applying them to novel applications as well as using these models on devices with more constrained resources.

Short Bio

Leana Golubchik is the Stephen and Etta Varra Professor of Electrical and Computer Engineering (with a joint appointment in Computer Science) at USC. She also serves as the Director of the Women in Science and Engineering (WiSE) program. Prior to that, she was on the faculty at the University of Maryland and Columbia University. Leana received her PhD from UCLA. Her research interests are broadly in the design and evaluation of large-scale distributed systems, including hybrid clouds and data centers, and their applications in data analytics, machine learning, and privacy. Leana is the Editor-in-Chief of the ACM Transactions on Modeling and Performance Evaluation of Computing Systems (ToMPECS) and the Chair of the IFIP WG 7.3. She has received several awards, including the IBM Faculty Award, the NSF CAREER Award, and the Okawa Foundation Award, and she is a Fellow of AAAS.


Thursday, 7th May 2026.

Didem Unat, Koç University, Turkey

Talk title: Illuminating Data Movement: Profiling Multi-GPU Communication Paths.

GPUs have become the accelerators of choice for HPC and machine learning applications, thanks to their massive parallelism and high memory bandwidth. However, as GPU counts per node and across clusters continue to grow, inter-GPU communication has emerged as a major scalability bottleneck. Advancing GPU-centric communication requires debugging and profiling tools that can detect fine-grained, device-native data transfers, both within and across nodes. In this talk, I will present an overview of GPU-centric communication, highlighting vendor mechanisms and tool support for profiling multi-GPU communication. I will also discuss the need for more user-friendly profiling tools to help developers understand and optimize data movement across networks.

Short Bio

Dr. Didem Unat received her B.Sc. degree in Computer Engineering from Boğaziçi University and her M.Sc. and Ph.D. degrees in Computer Engineering from the University of California, San Diego. Following her Ph.D., she was awarded the Luis Alvarez Postdoctoral Fellowship at Lawrence Berkeley National Laboratory. Since 2014, Dr. Unat has been leading her research group at Koç University, where she continues her scientific work on programming models, performance tools, and system software for emerging parallel architectures. She received the ACM SIGHPC Emerging Woman Leader in Technical Computing Award in 2021, becoming the first recipient of the award outside the United States. Dr. Unat has also been recognized with several other prestigious awards, including the Marie Skłodowska-Curie Individual Fellowship from the European Commission (2015), the BAGEP Award from the Turkish Academy of Sciences (2019), the Newton Advanced Fellowship from the British Royal Society (2020), and the Scientist of the Year – Young Scientist Award from Bilim Kahramanları Derneği (2021). She also brought to Türkiye its first ERC grant in Computer Science from the European Research Council. Most recently, she spent her 2024–2025 sabbatical at NVIDIA, collaborating with product teams on performance profiling tools.


Friday, 8th May 2026.

Jeff Hammond, NVIDIA

Talk title: State-of-the-Art Communication Software for Supercomputers and Its Applications.

In this talk, I will discuss high-performance communication software for GPU supercomputers. I will explain NCCL and NVSHMEM, including their historical context in MPI and SHMEM, and demonstrate their functionality and performance through an example from linear algebra. Real-world results from both scientific and commercial AI use cases will also be described.

Short Bio

Jeff Hammond is a Principal Engineer at NVIDIA in the data center software organization, focused on GPU communications (NCCL and NVSHMEM). He has extensive experience with the design and use of parallel programming models and scientific applications. Jeff’s most notable achievements include the MPI-5 Application Binary Interface standard, development of the MPI-3 one-sided communication software ecosystem, and contributions to the NWChem quantum chemistry project. He received a PhD in Chemistry from the University of Chicago in 2009.