Weight Tying Biases Token Embeddings Towards the Output Space
2026-03-27 • Computation and Language
AI summary
The authors studied weight tying, a common practice in language models where the input and output word embeddings share the same parameters. They found that the shared embedding is tuned more for predicting outputs than for representing inputs, because output gradients dominate early in training. This imbalance reduces the effectiveness of the model's early processing layers. By rescaling input gradients during training to restore balance, they showed the problem can be mitigated. Their work helps explain why weight tying can sometimes hurt model performance, especially in smaller models, where embeddings account for a large share of the total parameters.
weight tying, language models, embedding matrices, unembedding, input representation, output prediction, gradients, tuned lens analysis, residual stream, parameter sharing
Authors
Antonio Lopardo, Avyukth Harish, Catherine Arnett, Akshat Gupta
Abstract
Weight tying, i.e. sharing parameters between input and output embedding matrices, is common practice in language model design, yet its impact on the learned embedding space remains poorly understood. In this paper, we show that tied embedding matrices align more closely with output (unembedding) matrices than with input embeddings of comparable untied models, indicating that the shared matrix is shaped primarily for output prediction rather than input representation. This unembedding bias arises because output gradients dominate early in training. Using tuned lens analysis, we show this negatively affects early-layer computations, which contribute less effectively to the residual stream. Scaling input gradients during training reduces this bias, providing causal evidence for the role of gradient imbalance. This is mechanistic evidence that weight tying optimizes the embedding matrix for output prediction, compromising its role in input representation. These results help explain why weight tying can harm performance at scale and have implications for training smaller LLMs, where the embedding matrix contributes substantially to total parameter count.
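To make the core mechanism concrete, here is a minimal sketch of weight tying in plain Python (this is an illustrative toy, not the authors' code; the matrix `E`, its values, and the helper names are invented for the example). A single matrix serves both roles: the input side looks up a token's row, and the output side computes logits as dot products of the hidden state with every row of the same matrix. Any gradient update arriving from the output side therefore also moves the input representation, which is the imbalance the paper attributes to stronger output gradients early in training.

```python
import math

VOCAB, DIM = 5, 3

# One shared parameter matrix E (VOCAB x DIM), with arbitrary toy values.
E = [[0.1 * (i + 1) * (j + 1) for j in range(DIM)] for i in range(VOCAB)]

def embed(token_id):
    """Input side: the token's representation is a row of E."""
    return E[token_id]

def unembed(hidden):
    """Output side: one logit per vocabulary item, via dot products with rows of E."""
    return [sum(h * w for h, w in zip(hidden, row)) for row in E]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Both sides read the very same object: updating E to improve output
# prediction necessarily reshapes the input embedding space as well.
h = embed(2)                    # input representation of token 2
probs = softmax(unembed(h))     # next-token distribution over VOCAB items
```

In an untied model these would be two independent matrices, so output-prediction gradients could not distort the input representation; the paper's intervention instead keeps the tie but scales the input-side gradient contribution during training.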