LANS Seminar
Seminar Title: Going Beyond Diagonal Preconditioners for Training Neural Networks At-Scale
Speaker: Hao-Jun Michael Shi, Research Scientist, AI and Systems Co-Design team, Meta Platforms
Date/Time: April 10, 2024, 10:30-11:30 AM CDT
Location: See the meeting URL on the cels-seminars website (requires an Argonne login).
Description: Diagonal scaling-based adaptive subgradient methods such as AdaGrad and Adam(W) have dominated neural network training over the past decade due to their simplicity of implementation, linear memory and computational requirements, and robustness to hyperparameter tuning. While theoretically superior full-matrix preconditioned adaptive subgradient methods exist, their quadratic memory and cubic computational costs prevent them from being applied at scale. However, recent developments in block-diagonal Kronecker factorization-based preconditioned methods (i.e., Shampoo) that capture useful uncentered correlations within each layer have demonstrated in practice faster convergence than diagonal scaling methods while remaining tractable in terms of required memory and compute. In this talk, we provide an overview of the Shampoo algorithm and detail our state-of-the-art distributed PyTorch implementation, including: (1) the heuristics required to make the algorithm work in practice; (2) the performance optimizations necessary to make Shampoo competitive in terms of per-iteration wall-clock time against diagonal scaling methods; and (3) the developments necessary to scale Shampoo to train billion-parameter models. Experiments on ImageNet ResNet50 and the AlgoPerf benchmark demonstrate Shampoo's potential for training neural networks more efficiently across a broad range of applications.
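For readers unfamiliar with Shampoo, the sketch below illustrates the core idea of a Kronecker-factored preconditioner on a single 2D weight matrix: accumulate the uncentered gradient statistics L = sum G G^T and R = sum G^T G, then precondition the gradient with their inverse fourth roots. This is a minimal NumPy illustration under simplifying assumptions; the names (shampoo_step, L, R, step_size, eps) are hypothetical, and it omits the heuristics, blocking, grafting, and distributed optimizations discussed in the talk, so it should not be read as the speaker's PyTorch implementation.

import numpy as np

def shampoo_step(W, G, L, R, step_size=1e-2, eps=1e-6):
    """One illustrative preconditioned update for a 2D weight matrix W with gradient G.

    L and R accumulate uncentered second-moment statistics of the gradient along
    the rows and columns of the layer, respectively (the two Kronecker factors of
    a full-matrix preconditioner for this block).
    """
    # Accumulate Kronecker-factored statistics (uncentered correlations).
    L += G @ G.T
    R += G.T @ G

    # Inverse fourth root of a symmetric PSD matrix via eigendecomposition.
    def inv_root(M, p=4):
        vals, vecs = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
        return vecs @ np.diag(vals ** (-1.0 / p)) @ vecs.T

    # Precondition the gradient on both sides and take a step.
    G_pre = inv_root(L) @ G @ inv_root(R)
    W -= step_size * G_pre
    return W, L, R

# Toy usage: a 4x3 "layer" driven toward the all-ones matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
L = np.zeros((4, 4))
R = np.zeros((3, 3))
for _ in range(10):
    G = W - 1.0  # gradient of 0.5 * ||W - 1||_F^2
    W, L, R = shampoo_step(W, G, L, R)

In contrast, a diagonal method such as AdaGrad would keep only the elementwise accumulator G*G and rescale each entry of the gradient independently, which is what the factored L and R statistics above generalize.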
Bio: Hao-Jun Michael Shi is a Research Scientist in the AI and Systems Co-Design team at Meta Platforms. He obtained his B.S. degree in Applied Mathematics from the University of California, Los Angeles, and his Ph.D. in Industrial Engineering and Management Sciences from Northwestern University. He received the Walter P. Murphy Fellowship and the NSF Graduate Research Fellowship Honorable Mention in 2016 and 2017, and was a top ICML reviewer in 2019. His current research interests are in the design and implementation of scalable and distributed training algorithms and systems for deep learning. He has previously contributed to the areas of stochastic, noisy, and derivative-free optimization, as well as to recommender systems and embedding compression.
See all upcoming talks at https://www.anl.gov/mcs/lans-seminars