Anisotropy Is Inherent to Self-Attention in Transformers

UPDATE: This paper was accepted to EACL 2024! 🎉

Abstract
The representation degeneration problem is a phenomenon that is widely observed among self-supervised learning methods based on Transformers. In NLP, it takes the form of anisotropy, a singular property of hidden representations which makes them unexpectedly close to each other in terms of angular distance (cosine-similarity). Some recent works tend to show that anisotropy is a consequence of optimizing the cross-entropy loss on long-tailed distributions of tokens. We show in this paper that anisotropy can also be observed empirically in language models with specific objectives that should not suffer directly from the same consequences. We also show that the anisotropy problem extends to Transformers trained on other modalities. Our observations suggest that anisotropy is actually inherent to Transformers-based models.
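To make the phenomenon concrete, anisotropy is commonly quantified as the average pairwise cosine similarity between hidden representations: in an anisotropic model, this average stays far from zero even for unrelated tokens. The following minimal sketch (not taken from the paper) estimates this quantity with PyTorch and the Hugging Face transformers library, assuming the public "gpt2" checkpoint; the input sentences are arbitrary examples.

# Minimal sketch (not from the paper): estimate anisotropy as the average
# pairwise cosine similarity between token representations from a few sentences.
# Assumes the Hugging Face `transformers` library and the public "gpt2" checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

sentences = [
    "Anisotropy makes hidden states point in similar directions.",
    "The cat sat on the mat.",
    "Transformers are trained with self-attention.",
]

with torch.no_grad():
    reps = []
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
        reps.append(hidden)
    h = torch.cat(reps, dim=0)  # all token vectors stacked together

# Cosine similarity between every pair of token representations
h_norm = torch.nn.functional.normalize(h, dim=-1)
sim = h_norm @ h_norm.T
n = sim.size(0)
off_diag = sim[~torch.eye(n, dtype=torch.bool)]  # drop self-similarities
print(f"Average pairwise cosine similarity: {off_diag.mean():.3f}")

For GPT-2-like language models, prior work reports averages well above zero on this kind of measurement, which is precisely the degeneration discussed in the abstract.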

This paper was co-authored with my PhD supervisors, Éric Villemonte de la Clergerie and Benoît Sagot, from Inria's ALMAnaCH team.

The PDF version of the paper is available on the ACL Anthology, at the URL given in the citation below.

Please cite as:

@inproceedings{godey-etal-2024-anisotropy,
    title = "Anisotropy Is Inherent to Self-Attention in Transformers",
    author = "Godey, Nathan  and
      Clergerie, {\'E}ric  and
      Sagot, Beno{\^\i}t",
    editor = "Graham, Yvette  and
      Purver, Matthew",
    booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = mar,
    year = "2024",
    address = "St. Julian{'}s, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.eacl-long.3",
    pages = "35--48",
}

This work was funded by the PRAIRIE institute as part of a PhD contract at Inria Paris and Sorbonne Université.

This post is licensed under CC BY 4.0 by the author.