“A world—nothing less—is the theme and postulate of the novel,” German philosopher Hans Blumenberg wrote in 1963. At that same moment, AI research, its early optimism already fading, turned to “world models” as a means of stabilizing its brittle systems. Today, these two conceptions of “world,” the literary and the computational, converge in large language models (LLMs), which use their latent spaces to generate not just plausible sentences but entire narratives, even novels, albeit with still uneven results. Yet in what sense are the “worlds” of novels and of AI analogous, and what can each illuminate about the other?
The talk proposes that both novels and LLMs operate within structured networks of relations—assemblages of events, inferences, and expectations—that can yield a form of coherence even when classical causality is weak or absent. Literary techniques from realism to modernism build patterned universes: realist and naturalist fiction through causal-social dynamics, genre fiction through explicit world-building, and modernism through fragmented but still intelligible world-logics. These traditions offer a vocabulary for assessing LLM-generated texts.
Where early systems like SHRDLU pursued explicit symbolic world models and failed outside narrow domains, contemporary LLMs rely on distributed vector spaces that encode statistical regularities without grounding. My own experiments with a fine-tuned German-language model yielded narratives with stylistic unity but little causal depth. Like certain experimental novels, they evoke meaning through a “weak force” of association rather than strong narrative causality. Following these ideas, the talk aims to resist both overhyping LLMs’ understanding and dismissing them as mere mimicry, placing AI-generated fiction, the meeting point of the two uses of “world,” within a broader theory of modeling and meaning.
About the Speaker
Hannes Bajohr is Assistant Professor of German at the University of California, Berkeley. His research focuses on media studies, political philosophy, philosophical anthropology, and theories of the digital. Recent publications include Thinking with AI: Machine Learning the Humanities (as editor, London: Open Humanities Press) and “Surface Reading LLMs: Synthetic Text and its Styles” (arXiv preprint, forthcoming in New German Critique). In 2027, the English-language translation of his LLM-co-generated novel (Berlin, Miami) will appear with MIT Press.
This event is generously co-sponsored by the Stanford Literary Lab, Stanford's Division of Literatures, Cultures, and Languages, and the Department of English.
Related Events
Ge Wang | What Do We (Really) Want From AI?