In this paper we argue that key, often sensational and misleading, claims
regarding the linguistic capabilities of Large Language Models (LLMs) are based
on at least two unfounded assumptions: the assumption of language completeness
and
the assumption of data completeness. Language completeness assumes that a
distinct and complete thing such as a 'natural language' exists, the essential
characteristics of which can be effectively and comprehensively modelled by an
LLM. The assumption of data completeness relies on the belief that a language
can be quantified and wholly captured by data. Work within the enactive
approach to cognitive science makes clear that, rather than a distinct and
complete thing, language is a means or way of acting. Languaging is not the
kind of thing that can admit of a complete or comprehensive modelling. From an
enactive perspective, we identify three key characteristics of enacted
language: embodiment, participation, and precariousness. These characteristics
are absent in LLMs and are likely incompatible in principle with current
architectures. We argue that
these absences imply that LLMs are not now, and cannot in their present form
be, linguistic agents in the way humans are. We illustrate this point through
the phenomenon of 'algospeak', a recently described pattern of high-stakes
human language activity in heavily controlled online environments. On
the basis of these points, we conclude that sensational and misleading claims
about LLM agency and capabilities emerge from a deep misconception of both what
human language is and what LLMs are.