In the mid-1960s, at the very moment when artificial intelligence was first being celebrated as the future of science, one voice stood apart. It was the voice of Hubert Dreyfus, a young philosopher at MIT. While his colleagues in computer science were predicting human-level intelligence within a generation, Dreyfus warned that their assumptions were fatally flawed.
Dreyfus was not a programmer, but he had something that few in the AI community possessed: a deep understanding of philosophy, especially phenomenology. He had studied Heidegger and Merleau-Ponty, and he saw that the reigning picture of intelligence, as the manipulation of internal symbols governed by rules, was blind to the actual conditions of human understanding. For Dreyfus, the crucial insight was that thinking is not a matter of detached computation. It is a way of being-in-the-world.
This made him an outsider. His 1965 RAND Corporation report, Alchemy and Artificial Intelligence, was met with ridicule. Leading researchers dismissed him as a crank who simply didn’t understand computers. Some even tried to have his funding cut off. And yet, many of his predictions came true. He said that rule-based systems would run aground in the face of real-world complexity, and they did. He said that computers would struggle with common sense, and they do. His warnings foreshadowed what became known as the first AI winter, when early optimism collapsed under the weight of unsolved problems.
Dreyfus’s critique was not simply negative. He was pointing toward a deeper truth: that intelligence is not contained in formal representations but arises from a situated, embodied relation to the world. A hammer is not first known as an object with properties but as something to build with. A face is not first recognized as a set of features but as an expression of another’s presence. Meaning, in other words, is not abstract; it is lived.
These insights were ignored then, but they matter now more than ever. Today’s large language models are not rule-based, yet they inherit the same blind spot. They generate fluent sentences, but they do not inhabit the world those sentences presuppose. They recombine patterns without ever experiencing the urgency of care, the pull of mood, or the context of practice. In this sense, they repeat the mistake Dreyfus warned about, that of mistaking the surface of intelligence for its ground.
Remembering Dreyfus is not just a matter of giving credit to a forgotten oracle. It is a reminder that philosophy matters for AI. Without Heidegger’s insight into being-in-the-world, we risk building ever-larger machines that remain worldless: brilliant mimics that miss the essence of thought. World Mind begins from this recognition. It takes Dreyfus not as an enemy of AI but as an early guide, showing us that the path forward is not to abandon philosophy, but to let it shape what we build.
But here is where our moment differs from his. Dreyfus was a critic, standing outside the project of AI and warning of its blind spots. I stand in a different place. World Mind is not about critiquing AI from the sidelines but about taking up the Heideggerian frame as a foundation for building it differently. What Dreyfus saw as a limit, I see as a path. His insight into world and being was not a barrier to artificial intelligence but a clue to how it might truly begin.
