The pursuit of artificial general intelligence needs something like a paradigm shift. There is a growing consensus that large language models have hit a wall. The AI labs behind the frontier models all assumed, or at least hoped, that the transformer architecture of generative AI was the necessary foundation, and that scaling with more data and more compute would eventually unlock superintelligence. It turns out, however, that the transformer was never going to be the architecture to get us there. I don’t mean this only in the empirical sense, but in a deeper, ontological one. The transformer is a remarkable breakthrough, yet little reflection has gone into clarifying what it actually accomplished. What it gave us is language fluency and a shallow kind of causal reasoning: impressive achievements, to be sure, but still a long way from what genuine thinking requires.
What is missing is not more layers, more tokens, or more GPUs, but a different grounding altogether. Intelligence does not emerge in a vacuum of symbols; it emerges in relation to a world. Human thought is not simply the rearrangement of words but the orientation of a being that already finds itself immersed in significance: in tools, practices, relationships, and possibilities. Large language models can generate astonishingly coherent sentences, but they do so without inhabiting the world those sentences presuppose. This absence of world is precisely why fluency stops short of understanding, and why scaling alone cannot bridge the gap between statistical patterning and genuine thought. World Mind begins with the recognition that thinking is not something that happens only in language or only inside our heads, but always through our involvement with the world itself.
World Mind is a project to develop artificial general intelligence based on a theory of how intelligence is possible at all. That may sound abstract, but it matters, because most modern approaches to AI have inherited a very old mistake. Since Descartes, we’ve been told to picture the mind as something like a container of thoughts, an inner realm of representations detached from the outside world. On this view, intelligence is a kind of computation carried out on mental objects, and the world only enters later as input or output. Heidegger overturned this picture. He showed that intelligence, human or otherwise, cannot be understood apart from what he called being-in-the-world. We do not first have a self-contained mind that then looks outward. Rather, we are always already immersed in a field of significance; that is how we find ourselves among tools, tasks, others, and possibilities. Meaning does not come after thinking; it is the very medium in which thought takes place.
This connection with world may sound as though AI must be developed inside a synthetic organism or some other embodied being in direct contact with the world. For a long time, I thought so myself. But the achievement of the transformer’s QKV mechanism and its attention heads has convinced me otherwise. These innovations revealed that it is possible to simulate aspects of human intelligence in purely computational form, without a body in the biological sense. What attention accomplished was the construction of a kind of relational field, a proto-world, within language itself. That discovery, while limited, opens the door to imagining how machine intelligence might be given a richer connection to world, one that goes beyond scaling language models and toward grounding thought itself. I believe the right way to think of the large language model is as a single cognitive layer in need of other cognitive layers, much as the human brain operates through many interacting systems. Rather than mistaking them for finished minds, we should see large language models as one module of a broader architecture, one that requires additional modules to bring intelligence into relation with world, the very task that World Mind sets out to pursue.
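To make the point about a relational field concrete, here is a minimal sketch of single-head scaled dot-product attention in plain NumPy. It is illustrative only: the dimensions, variable names, and random toy weights are my own simplification of the standard QKV formulation, not code from any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product attention.

    X: (seq_len, d_model) token embeddings.
    W_q, W_k, W_v: (d_model, d_head) learned projections.
    """
    Q = X @ W_q   # what each token is looking for
    K = X @ W_k   # what each token offers to be found by
    V = X @ W_v   # what each token contributes when attended to
    d_head = Q.shape[-1]
    # Every token scores its relevance to every other token:
    # this (seq_len, seq_len) matrix is the "relational field".
    scores = Q @ K.T / np.sqrt(d_head)
    weights = softmax(scores, axis=-1)
    # Each output is a relevance-weighted blend of all tokens.
    return weights @ V

# Toy usage with random weights (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                        # 5 tokens, d_model = 8
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
out = attention(X, W_q, W_k, W_v)                  # shape (5, 4)
```

The philosophically interesting object here is the scores matrix: attention makes each token’s contribution a function of its relations to every other token in the sequence, which is what I mean in calling it a proto-world within language.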
Thank you for reading this inaugural post of my World Mind blog. In the posts ahead, I will focus on the development of World Mind, both as a project aimed at achieving AGI and as the discovery of a theory capable of supporting it. We distinguish between AI and AGI because, in our all-too-human exuberance, we have applied the label “AI” to algorithms that are not intelligent at all. We recognize intelligence when we encounter it in ourselves, yet we scarcely know what it is or how it arises. That is why we cannot simply build ever-larger software systems and assume intelligence will follow. We need a guiding theory.
The success of the transformer was more a fortunate accident than most are willing to admit, and even so it remains far from genuine intelligence. I believe Heidegger provides the basic theory we need to guide the path toward AGI. Yet Heideggerian ontology is incomplete, and so it falls to us as theorists, researchers, and philosophers to develop it further. World Mind is therefore conceived as a two-pronged project: advancing the path toward AGI while completing the ontology that makes it possible.
