In the first talk, Training to Exhaustion: Linguistic Labour and Model Collapse, Paolo Caffoni addresses how LLMs create a phenomenological suspension of scarcity: textual outputs proliferate without the immediately visible constraints that ordinarily accompany linguistic production, such as time, attention, fatigue, and other limits of embodied linguistic performance. A chatbot can sustain millions of simultaneous exchanges not because it overcomes these limits, but because it circulates sedimented linguistic labour without exposure to exhaustion. Framing language automation through the lens of financialization, Caffoni argues that "model collapse" names not a technical anomaly but a structural limit that emerges when linguistic value circulates independently of its conditions of production. Read this way, model collapse marks the point at which speculative abundance encounters its own finitude, expressed as energetic and semiotic exhaustion.
In the second talk, What 'Meaning' Means in LLMs' Research: An Interdisciplinary Conceptual Map, Claudia Montanaro addresses how debates over whether LLMs can "understand" or "produce" meaning now span multiple fields, including philosophy, cognitive science, and computational linguistics, where distinct claims are advanced about LLMs' capabilities. Yet comparing these accounts is difficult because the notion of 'meaning' is often left abstract and defined inconsistently. Based on a systematic review of 53 academic papers, this study identifies six clusters of theoretical approaches to meaning: "Reference and Truth", "Relation", "Cognitive Concept", "Generic Content", "Pragmatics", and "Plural or Beyond Language". The findings show that no single framework dominates and that divergent conceptions of meaning underpin varying claims about LLMs' abilities. Mapping this theoretical pluralism helps prevent the "singleton fallacy", or the tendency to treat 'meaning' as a single, unified phenomenon, and provides groundwork for assessing whether LLMs are prompting a genuinely new geometry of language.
Paolo Caffoni is a PhD candidate at Karlsruhe University of Arts and Design and a research associate with the KIM and AI Forensics research groups. He serves as an external expert for the ERC project AI Models at Ca' Foscari University of Venice. Trained in literature, semiotics, and curatorial studies in Milan, Caffoni is a faculty member at the New Academy of Fine Arts Milan and was part of the curatorial team of the second Yinchuan Biennale in 2018. From 2009 to 2021, he worked as an editor at the Berlin-based publishing house Archive Books and co-directed the exhibition and public program at Archive Kabinett. His essays and contributions are available at paolocaffoni.com.
Claudia Montanaro is a PhD candidate at the University of Amsterdam and a member of the ‘AI, Culture & Society’ research group of the ILLC. Drawing on philosophy and STS, she focuses on the language used to describe technology, with attention to the conceptual borrowing and linguistic ambiguity currently at play in efforts to make sense of large language models. Bringing philosophy into interdisciplinary settings, she uses empirical methods to examine how language acquires material force by shaping research and design practices.
Riccardo Molin will be moderating the PEPTalk.