Why Philosophy is Useless and Yet Matters for AI

Philosophy is often accused of being useless, and in a certain sense that’s true. Philosophy doesn’t build bridges, cure diseases, or put rockets on the moon. It doesn’t provide grounded methods for solving practical problems. In fact, whenever philosophy does discover a ground, it hands that ground over to science. Physics, chemistry, biology, history, ethics: these were all once philosophy, until they found stable ground and methods and became sciences in their own right. Philosophy is left with only the questions that resist such grounding. And this is exactly why it matters for artificial intelligence.

AI is one of those questions. We know how to make machines fluent in language, and we know how to scale models with data and compute, but we don’t know what thinking is, or what it would take to realize it in a machine. Human cognition itself is still poorly understood, let alone the deeper mystery of human being. Unlike space exploration, which extended an already solid science, AI is being built on shifting sands. That ungroundedness makes AI as much a philosophical problem as a technical one.

So the irony is that philosophy looks useless precisely because it cannot ground itself in fixed methods or guaranteed results. Yet in situations where no such ground exists, when we don’t even know what the right assumptions are, philosophy becomes the only mode of thinking capable of moving us forward. It doesn’t hand us a recipe, but it helps us recognize the stakes, clarify the hidden presuppositions, and ask the questions that no technical model can yet resolve.

This is where AI stands today. Transformers and large language models are dazzling achievements, but they have been built on borrowed assumptions: assumptions about meaning, about reasoning, about what it means to “know.” The models work spectacularly well, but no one can say with confidence what kind of intelligence they really embody, or whether scaling alone will ever amount to thinking. Here philosophy matters, because it helps surface the fact that AI is not simply an engineering project but a confrontation with the question of intelligence itself.

That is why World Mind takes philosophy seriously. It is not about importing abstract speculation into engineering, but about recognizing that AI development has been philosophical all along; it has simply gone unacknowledged. The choice of training objectives, the faith in scaling laws, the assumptions about language as a proxy for thought: these are philosophical positions, even if disguised as technical dogma. To move forward responsibly, we need philosophy not to give us answers but to remind us of the questions, so that the science of AI, when it finally emerges, will rest on something firmer than unexamined hope.