Unlike “Likely”, “Unlike” is Unlikely: BPE-based Segmentation hurts Morphological Derivations in LLMs

Preprint / Working paper, 2024

Paul Lerner
François Yvon

Abstract

Large Language Models (LLMs) rely on subword vocabularies to process and generate text. However, because subwords are marked as either word-initial or intra-word, we find that LLMs perform poorly at handling some types of affixation, which hinders their ability to generate novel (unobserved) word forms. The largest models, trained on enough data, can mitigate this tendency because their initial- and intra-word embeddings are aligned; in-context learning also helps when all examples are selected in a consistent way; but only morphological segmentation achieves near-perfect accuracy.
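As a minimal illustration of the phenomenon the abstract describes (not an excerpt from the paper), the sketch below uses the Hugging Face transformers library and the GPT-2 BPE tokenizer, both chosen here only as assumptions for illustration. It prints how the same string is segmented when it occurs in word-initial position (tokens carrying the leading "Ġ" space marker) versus without that marker, which is the initial- vs intra-word distinction discussed in the abstract.

```python
# Illustrative sketch only: how a BPE tokenizer distinguishes word-initial
# from word-internal subwords. Assumes the `transformers` library is installed
# and uses the GPT-2 tokenizer as an example model choice.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for word in ["likely", "unlikely", "unlikeliest"]:
    # Word-initial position: a preceding space is encoded as a leading "Ġ"
    # on the first subword token.
    initial = tokenizer.tokenize(" " + word)
    # Same string without the preceding space: no "Ġ" marker, and the
    # resulting segmentation may differ from the word-initial one.
    internal = tokenizer.tokenize(word)
    print(f"{word!r}: initial={initial}  internal={internal}")
```

Comparing the two segmentations for derived forms such as "unlikely" gives a concrete sense of why affixation can interact badly with position-dependent BPE vocabularies.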
Main file: 2025_COLING_pvs(4).pdf (281.83 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04831106, version 1 (12-12-2024)

Identifiers

  • HAL Id: hal-04831106, version 1

Cite

Paul Lerner, François Yvon. Unlike “Likely”, “Unlike” is Unlikely: BPE-based Segmentation hurts Morphological Derivations in LLMs. 2024. ⟨hal-04831106⟩
