Nathan Godey

MANTa: Efficient Gradient-Based Tokenization for End-to-End Robust Language Modeling

Last year, my first paper was published in the Findings of EMNLP 2022! It was a joint effort with Roman Castagné and was co-authored by my PhD supervisors Eric Villemonte de la Clergerie and Benoît...

How word frequency affects language models

When I started my PhD a few months ago, I believed that the contextual embeddings produced by pre-trained language models aimed to represent words in a vector space as humans might in their thought pr...