arXiv:2511.01734

A Proof of Learning Rate Transfer under μP

Published on Nov 3, 2025

Abstract

Theoretical analysis shows that under the μP parametrization, the optimal learning rate of an MLP converges to a non-zero constant as width goes to infinity, explaining learning rate transfer; this convergence does not occur under other parametrizations.
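
As a hedged formalization of this claim (the notation below is illustrative and not taken from the paper): let the optimal learning rate of a width-n linear MLP be the minimizer of its training loss; learning rate transfer then corresponds to this minimizer approaching a finite, non-zero limit as width grows.

```latex
% Illustrative notation (not the paper's): \eta_n^* is the optimal learning
% rate of a width-n linear MLP trained under \mu P, and \mathcal{L}_n its loss.
\[
  \eta_n^{*} := \arg\min_{\eta > 0} \mathcal{L}_n(\eta),
  \qquad
  \lim_{n \to \infty} \eta_n^{*} = \eta_\infty^{*} \in (0, \infty).
\]
% Under SP or NTP this limit fails to be a non-zero constant, so a learning
% rate tuned at small width need not remain near-optimal as width grows.
```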

AI-generated summary

We provide the first proof of learning rate transfer with width in a linear multi-layer perceptron (MLP) parametrized with μP, a neural network parametrization designed to "maximize" feature learning in the infinite-width limit. We show that under μP, the optimal learning rate converges to a non-zero constant as width goes to infinity, providing a theoretical explanation of learning rate transfer. In contrast, we show that this property fails to hold under alternative parametrizations such as the Standard Parametrization (SP) and the Neural Tangent Parametrization (NTP). We provide intuitive proofs and support the theoretical findings with extensive empirical results.
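
To make the setup concrete, below is a minimal sketch of a linear MLP under the standard μP prescription for SGD (input weights: init variance 1/fan_in, learning rate η·n; hidden weights: init variance 1/n, learning rate η; output weights: init variance 1/n², learning rate η/n). The toy regression task, depth, and learning-rate grid are illustrative assumptions, not the paper's experimental setup; the point is that the loss-versus-η curve should stabilize as width grows, which is the transfer phenomenon the paper proves.

```python
# Sketch of muP scaling for a linear MLP trained with SGD. Per-layer init
# variances and learning rates follow the common muP tables; the teacher task
# and hyperparameters below are illustrative, not from the paper.
import torch
import torch.nn as nn

def make_mup_linear_mlp(d_in, width, depth, eta):
    """Build a linear MLP (no activations) with muP init and per-layer SGD LRs."""
    layers = [nn.Linear(d_in, width, bias=False)]
    layers += [nn.Linear(width, width, bias=False) for _ in range(depth - 2)]
    layers += [nn.Linear(width, 1, bias=False)]
    model = nn.Sequential(*layers)

    param_groups = []
    with torch.no_grad():
        # Input layer: init std 1/sqrt(d_in), learning rate eta * width.
        layers[0].weight.normal_(0.0, d_in ** -0.5)
        param_groups.append({"params": layers[0].parameters(), "lr": eta * width})
        # Hidden layers: init std 1/sqrt(width), learning rate eta.
        for lin in layers[1:-1]:
            lin.weight.normal_(0.0, width ** -0.5)
            param_groups.append({"params": lin.parameters(), "lr": eta})
        # Output layer: init std 1/width, learning rate eta / width.
        layers[-1].weight.normal_(0.0, 1.0 / width)
        param_groups.append({"params": layers[-1].parameters(), "lr": eta / width})

    return model, torch.optim.SGD(param_groups, lr=eta)

def train_loss(width, eta, steps=200, d_in=16, seed=0):
    """Train on a toy linear-regression task and return the final MSE."""
    torch.manual_seed(seed)
    x = torch.randn(512, d_in)
    y = x @ torch.randn(d_in, 1) / d_in ** 0.5  # linear teacher, O(1) targets
    model, opt = make_mup_linear_mlp(d_in, width, depth=3, eta=eta)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((model(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

if __name__ == "__main__":
    # Sweep eta at several widths; under muP the best eta should be roughly
    # width-independent, illustrating learning rate transfer.
    for width in (64, 256, 1024):
        losses = {eta: train_loss(width, eta) for eta in (0.01, 0.05, 0.2)}
        print(f"width={width}: " +
              ", ".join(f"eta={e}: {l:.4f}" for e, l in losses.items()))
```

Repeating the sweep with Standard Parametrization or NTP scalings in place of the per-layer rules above would, per the paper's contrast, show the best η drifting with width instead of transferring.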
