Seminars

PolarGrad: A Class of Matrix-Gradient Optimizers from a Unifying Preconditioning Perspective
【2025.07.08 15:00-16:00, N219】


2025.06.09

Colloquia Seminars

      
Speaker: Prof. Weijie Su, University of Pennsylvania
Title: PolarGrad: A Class of Matrix-Gradient Optimizers from a Unifying Preconditioning Perspective
Time: 2025.07.08, 15:00-16:00
Venue: N219
Abstract: The ever-growing scale of deep learning models and datasets underscores the critical importance of efficient optimization methods. While preconditioned gradient methods such as Adam and AdamW are the de facto optimizers for training neural networks and large language models, structure-aware preconditioned optimizers like Shampoo and Muon, which utilize the matrix structure of gradients, have demonstrated promising evidence of faster convergence. In this talk, we introduce a unifying framework for analyzing “matrix-aware” preconditioned methods, which not only sheds light on the effectiveness of Muon and related optimizers but also leads to a class of new structure-aware preconditioned methods. A key contribution of this framework is its precise distinction between preconditioning strategies that treat neural network weights as vectors (addressing curvature anisotropy) versus those that consider their matrix structure (addressing gradient anisotropy). This perspective provides new insights into several empirical phenomena in language model pre-training, including Adam's training instabilities, Muon's accelerated convergence, and the necessity of learning rate warmup for Adam. Building upon this framework, we introduce PolarGrad, a new class of preconditioned optimization methods based on the polar decomposition of matrix-valued gradients. As a special instance, PolarGrad includes Muon with updates scaled by the nuclear norm of the gradients. We provide numerical implementations of these methods, leveraging efficient numerical polar decomposition algorithms for enhanced convergence. Our extensive evaluations across diverse matrix optimization problems and language model pre-training tasks demonstrate that PolarGrad outperforms both Adam and Muon.
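For readers unfamiliar with polar-decomposition-based updates, the following is a minimal sketch of what a PolarGrad-style step might look like, inferred from the abstract alone: the matrix-valued gradient is replaced by its polar factor and, as in the Muon instance mentioned in the abstract, rescaled by the gradient's nuclear norm. The function name `polargrad_step`, the SVD-based polar computation, and the toy least-squares problem are illustrative assumptions, not the speaker's implementation (the talk discusses more efficient numerical polar decomposition algorithms).

```python
import numpy as np

def polargrad_step(W, G, lr=1e-3):
    """One illustrative PolarGrad-style update (a sketch, not the
    speaker's implementation). Writing G = U @ diag(S) @ Vt, the
    gradient is replaced by its polar factor U @ Vt and scaled by
    the nuclear norm sum(S), matching the Muon-with-nuclear-norm
    instance described in the abstract."""
    U, S, Vt = np.linalg.svd(G, full_matrices=False)
    return W - lr * S.sum() * (U @ Vt)

# Toy check: drive W toward a random target A under the Frobenius
# loss ||W - A||_F^2, whose gradient is 2 * (W - A).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
W = np.zeros_like(A)
for _ in range(500):
    W = polargrad_step(W, 2.0 * (W - A))
print(np.linalg.norm(W - A))  # residual should be small after the loop
```

Note that the polar factor U @ Vt is always a descent direction here, since its inner product with G equals the nuclear norm of G; in practice one would replace the full SVD with an iterative polar decomposition scheme for large weight matrices.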
Affiliation: Weijie Su is an Associate Professor in the Department of Mathematics and the Wharton Statistics and Data Science Department at the University of Pennsylvania, and Co-Director of Penn Research in Machine Learning. His research interests include the mathematical theory of deep learning and AI, statistical foundations of large language models, privacy-preserving machine learning, high-dimensional statistics, and mathematical optimization. He is a Fellow of the IMS and has received a Sloan Research Fellowship, the SIAM Early Career Prize in Data Science, and an NSF CAREER Award. He serves on the editorial boards of the Journal of Machine Learning Research, the Journal of the American Statistical Association, and Operations Research, among others.